
fix(dlq): preserve parallel queue type when adding jobs to DLQ#9

Merged

ceejbot merged 1 commit into ceejbot:latest from rrogers-machinify:fix/dlq-parallel-queue-name on Jan 26, 2026

Conversation

@rrogers-machinify
Collaborator

Summary

  • Fix parallel jobs being converted to serial when requeued from DLQ
  • Use empty string instead of "default" for parallel jobs in DLQ

Problem

When parallel jobs (Queue::Parallel) failed and were added to the DLQ, they were stored with queue_name = "default". On requeue, the requeue logic saw a non-empty string and created Queue::Serial("default") instead of Queue::Parallel.

The requeue logic already correctly handles this distinction:

let queue = if dlq_job.queue_name.is_empty() {
    Queue::Parallel      // ← Empty string = parallel
} else {
    Queue::Serial(dlq_job.queue_name.clone())  // ← Non-empty = serial
};

But add_to_dlq() and process_failed_jobs() were defaulting to "default" instead of empty string for parallel jobs (which have no job_queue_id).

Fix

  • add_to_dlq(): Use String::new() when job_queue_id() is None (parallel job)
  • process_failed_jobs(): Use unwrap_or_default() when queue_name is NULL

This ensures parallel jobs stay parallel when requeued from DLQ.
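The two halves of the convention can be sketched as a self-contained round trip. The `Queue` enum mirrors the snippet above; `dlq_queue_name` and `requeue_queue` are hypothetical stand-ins for the crate's internals, not its actual API:

```rust
// Illustrative sketch of the DLQ queue-name convention described above.
// `Queue` mirrors the enum in the PR; both helper functions are
// hypothetical stand-ins, not the crate's real API.

#[derive(Debug, PartialEq, Clone)]
enum Queue {
    Parallel,
    Serial(String),
}

/// Encoding on the way into the DLQ: a parallel job has no queue id,
/// so store an empty string. (The bug was storing "default" here.)
fn dlq_queue_name(job_queue_id: Option<&str>) -> String {
    match job_queue_id {
        Some(name) => name.to_string(),
        None => String::new(), // parallel job
    }
}

/// Decoding on requeue: mirrors the requeue logic quoted above.
fn requeue_queue(queue_name: &str) -> Queue {
    if queue_name.is_empty() {
        Queue::Parallel
    } else {
        Queue::Serial(queue_name.to_string())
    }
}

fn main() {
    // Round trip: a parallel job stays parallel, a serial job keeps its name.
    assert_eq!(requeue_queue(&dlq_queue_name(None)), Queue::Parallel);
    assert_eq!(
        requeue_queue(&dlq_queue_name(Some("emails"))),
        Queue::Serial("emails".to_string())
    );
    println!("round trip ok");
}
```

With the old encoding, `dlq_queue_name(None)` returned "default", so the decoder could never reach the `Queue::Parallel` arm.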

Test plan

  • cargo check passes
  • just fmt - no changes needed

🤖 Generated with Claude Code

When parallel jobs (Queue::Parallel) fail and are added to the DLQ,
they were being stored with queue_name = "default". When requeued,
this caused them to be enqueued as Queue::Serial("default") instead
of Queue::Parallel.

The requeue logic already correctly handles this:
- Empty queue_name → Queue::Parallel
- Non-empty queue_name → Queue::Serial(name)

But the add_to_dlq and process_failed_jobs functions were defaulting
to "default" instead of empty string for parallel jobs.

This fix:
- Changes add_to_dlq to use String::new() for parallel jobs (no queue_id)
- Changes process_failed_jobs to use unwrap_or_default() for NULL queue_name
- Adds test_parallel_job_stays_parallel_through_dlq to prevent regression

This ensures parallel jobs stay parallel when requeued from DLQ.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@rrogers-machinify force-pushed the fix/dlq-parallel-queue-name branch from 27ebd28 to 594e3ca on January 26, 2026 at 19:36
@ceejbot (Owner) left a comment

That was a case I missed, sorry. Good catch.

@ceejbot ceejbot merged commit 0efae24 into ceejbot:latest Jan 26, 2026
1 check passed
ceejbot added a commit that referenced this pull request May 7, 2026
`RetryPolicy.{initial_delay, max_delay, backoff_multiplier, jitter_factor}`
were stored on the struct but never reached graphile_worker — only
`max_attempts` is forwarded by `From<JobSpec> for GraphileJobSpec`.
graphile_worker uses a hard-coded `exp(min(attempts, 10))` second SQL
formula for every retry. So `RetryPolicy::fast()` and
`RetryPolicy::conservative()` produced identical retry timing in practice
even though the docs promised "100ms-30s delays" vs. "1 min - 8 hour
delays".
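The hard-coded schedule can be reproduced in a few lines. This is a sketch of the formula as the commit message describes it, exp(min(attempts, 10)) seconds, not graphile_worker's actual SQL:

```rust
// Sketch (assumption): the fixed backoff formula described above,
// exp(min(attempts, 10)) seconds, reproduced in Rust to show that
// every RetryPolicy preset retries on the same schedule regardless
// of its delay fields.

fn retry_delay_secs(attempts: u32) -> f64 {
    f64::from(attempts.min(10)).exp()
}

fn main() {
    for attempts in [1u32, 2, 5, 10, 25] {
        // Caps at exp(10) ≈ 22026s ≈ 6.1 hours from attempt 10 onward.
        println!("attempt {:>2}: {:>8.1}s", attempts, retry_delay_secs(attempts));
    }
}
```

Nothing in this schedule consults `initial_delay`, `max_delay`, `backoff_multiplier`, or `jitter_factor`, which is why `fast()` and `conservative()` behaved identically.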

This commit makes the fact match the promise:

- Marks the unused math helpers (`RetryPolicy::new`, `with_jitter`,
  `calculate_delay`, `calculate_retry_time`, and `JobSpec::calculate_retry_time`)
  as `#[deprecated(since = "1.2.0")]` with notes pointing users at
  `RetryPolicy { max_attempts: n, ..Default::default() }` or the presets.
- Rewrites the rustdoc on `RetryPolicy`, on each preset, on the
  `with_*_retries` builders, and on the `enqueue_*_with_retries`
  convenience helpers to describe what actually happens (only
  `max_attempts` differs across presets, fixed exp-backoff timing).
- Updates the lib.rs module-level rustdoc.
- Migrates `examples/enqueue_jobs.rs` to the recommended pattern so it
  doesn't trigger the new deprecation warnings.
- Updates README.md to drop the false delay-range claims and replace
  the wrong "Pre-configured Fast/Bulk queues" / "Custom(name)" listing
  with the actual `Queue::Parallel` / `Queue::Serial(name)` enum.
- Updates docs/02-dlq.md to mark the post-#9 "queue_name shows as
  default" warning as resolved (it was the change in v1.1.1 that fixed
  this).

The struct fields themselves stay public for source-compatibility with
existing struct-literal construction. Per-job backoff customization needs
upstream graphile_worker support and is deferred.
