
Drop deprecated RetryPolicy and QueueConfig surface (2.0.0) #11

Merged
ceejbot merged 3 commits into latest from ceej/2.0-deprecation-cleanup
May 7, 2026
Conversation

@ceejbot ceejbot commented May 7, 2026

Summary

Removes the API surface marked #[deprecated(since = "1.2.0")] in the
previous release. No behavioural changes — every removed item was already
non-functional or duplicative; the deprecation warnings in 1.2.0 were the
upgrade signal, and this PR is the follow-through.

What's gone

RetryPolicy timing math — graphile_worker schedules retries via a
fixed exp(min(attempts, 10))-second SQL formula, so these never reached
the worker:

  • RetryPolicy::new(4-arg) constructor
  • RetryPolicy::with_jitter
  • RetryPolicy::calculate_delay / calculate_retry_time
  • JobSpec::calculate_retry_time

The RetryPolicy struct and all five fields are preserved. Only
max_attempts is honored at runtime; the others stay so a future
upstream that exposes per-job timing config can light up without an
API break. Presets (fast / aggressive / conservative) and the
JobSpec retry builders are unchanged.
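
A minimal sketch of the retained shape, under stated assumptions: the extra field name and the default values below are invented for illustration, and only `max_attempts` matters at runtime.

```rust
// Hypothetical stand-in mirroring the retained RetryPolicy shape; only
// max_attempts is honored today, the rest is forward-compat padding.
#[derive(Debug, Clone)]
struct RetryPolicy {
    max_attempts: u32,
    initial_delay_ms: u64, // illustrative; not honored at runtime
}

impl Default for RetryPolicy {
    fn default() -> Self {
        // Assumed defaults for the sketch; check the crate docs.
        RetryPolicy { max_attempts: 25, initial_delay_ms: 1_000 }
    }
}

fn main() {
    // The 2.0 replacement for the removed 4-arg constructor: struct-literal
    // init plus functional record update for everything you don't care about.
    let policy = RetryPolicy { max_attempts: 8, ..Default::default() };
    assert_eq!(policy.max_attempts, 8);
    assert_eq!(policy.initial_delay_ms, 1_000);
}
```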

QueueConfig multi-queue surface — graphile_worker doesn't expose
per-worker queue filtering:

  • QueueConfig struct and all its constructors
  • WorkerConfig::with_queues(Vec<QueueConfig>)
  • WorkerConfig.queue_configs field → replaced with
    WorkerConfig.concurrency: usize
  • WorkerOptionsBuilder.queue_name field and with_queue_name method

For per-job queue routing (the actual use case), Queue::serial(name)
and Queue::serial_for(entity, id) at enqueue time remain — that was
always the correct path; QueueConfig::named_queue was a misleading
second path that silently delivered serial-by-default behaviour.
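
A stand-in showing the enqueue-time routing idea. `Queue::serial` and `Queue::serial_for` are the retained API names, but the struct body and the `entity:id` naming convention below are assumptions for the sketch.

```rust
// Hypothetical stand-in: jobs placed on the same named queue run serially.
#[derive(Debug, PartialEq, Eq)]
struct Queue(String);

impl Queue {
    /// Route a job onto a named serial queue at enqueue time.
    fn serial(name: &str) -> Self {
        Queue(name.to_string())
    }

    /// Convenience: one serial queue per entity instance.
    fn serial_for(entity: &str, id: u64) -> Self {
        Queue(format!("{entity}:{id}"))
    }
}

fn main() {
    // Two jobs for the same user land on one serial queue and never overlap.
    assert_eq!(Queue::serial_for("user", 42), Queue::serial("user:42"));
}
```

The key point: routing is a per-job decision at enqueue time, not a per-worker configuration, which is why QueueConfig never had a real job to do.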

Migration

// RetryPolicy
RetryPolicy::new(8, ..., ..., ...)   →  RetryPolicy { max_attempts: 8, ..Default::default() }
policy.with_jitter(0.2)              →  drop the call (no runtime effect)
policy.calculate_delay(n)            →  drop the call
policy.calculate_retry_time(...)     →  drop the call
spec.calculate_retry_time(...)       →  drop the call

// QueueConfig
cfg.with_queues(vec![QueueConfig::default_queue(N)])    →  cfg.with_concurrency(N)
cfg.with_queues(vec![QueueConfig::named_queue(_, N)])   →  cfg.with_concurrency(N)
                                                            + use Queue::serial(name) at enqueue time
cfg.queue_configs[0].concurrency                        →  cfg.concurrency
cfg.queue_configs                                       →  removed; use cfg.concurrency

Verification

  • 108 tests passing (was 113 in 1.2.0). Five removed: three for the
    deleted QueueConfig constructors, one for the "multi-queue first-config
    wins" behaviour that no longer exists, one in-source unit test for
    the deleted RetryPolicy math.
  • cargo clippy --all-targets -F axum: clean.
  • cargo +nightly fmt --check: clean.
  • cargo test --doc -F axum: 11/11 (1 intentionally ignored).

Test plan

  • CI passes (clippy + nextest + doctests + security audit + nightly fmt)
  • Spot-check the migration table in the PR body against your own callsites
  • Bisect-friendly: each commit is a self-contained removal

ceejbot added 3 commits May 6, 2026 20:23

Removes the methods deprecated in 1.2.0 (since="1.2.0", removal scheduled
for 2.0.0):

- RetryPolicy::new(max, initial, max_delay, mult)
- RetryPolicy::with_jitter(f64)
- RetryPolicy::calculate_delay(attempt) -> Duration
- RetryPolicy::calculate_retry_time(attempt, base) -> DateTime<Utc>
- JobSpec::calculate_retry_time(attempt, failed_at) -> Option<DateTime<Utc>>

The math these computed was never reachable at runtime — graphile_worker
schedules retries via its own SQL formula (`exp(min(attempts, 10))`
seconds). Keeping the methods around as informational helpers risked
users reading them as configuration that mattered.

The struct itself is preserved with all five fields. Only `max_attempts`
is honored; the other fields are documented as not-honored. Keeping the
shape gives us a place to land per-job timing config if upstream
graphile_worker ever exposes it, without another API break.

The presets (fast / aggressive / conservative) and the JobSpec builders
(with_fast_retries / with_aggressive_retries / with_conservative_retries)
are unchanged — they're cheap convenience for setting max_attempts and
have no semantic baggage.

Tests: drop the three tests that exercised the deleted math
(retry_policy_calculate_delay, retry_policy_max_delay_cap,
job_spec_with_retry_policies). Add two cheap replacements covering
RetryPolicy::should_retry, total_attempts, and that the presets pin to
the documented attempt counts.
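
A hedged sketch of the counting semantics those replacement tests exercise; the method bodies and the preset attempt count here are assumptions, not the crate's source.

```rust
// Stand-in for the retained, behavior-bearing part of RetryPolicy.
struct RetryPolicy {
    max_attempts: u32,
}

impl RetryPolicy {
    /// Whether another attempt is allowed after `failed_attempts` failures.
    fn should_retry(&self, failed_attempts: u32) -> bool {
        failed_attempts < self.max_attempts
    }

    /// Total attempts a job may make, including the first run.
    fn total_attempts(&self) -> u32 {
        self.max_attempts
    }
}

fn main() {
    let fast = RetryPolicy { max_attempts: 3 }; // attempt count assumed
    assert!(fast.should_retry(2));
    assert!(!fast.should_retry(3));
    assert_eq!(fast.total_attempts(), 3);
}
```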

BREAKING CHANGE: callers of any of the listed methods will get compile
errors. Migrate by:
  RetryPolicy::new(8, ..., ..., ...) → RetryPolicy { max_attempts: 8, ..Default::default() }
  policy.with_jitter(0.2)            → drop the call (had no runtime effect)
  policy.calculate_delay(n)          → drop the call (no runtime effect)
  policy.calculate_retry_time(...)   → drop the call (no runtime effect)
  spec.calculate_retry_time(...)     → drop the call (no runtime effect)

Removes the entire QueueConfig surface that was deprecated in 1.2.0
(scheduled for removal in 2.0.0):

- pub struct QueueConfig
- QueueConfig::default_queue, named_queue, priority_queue
- WorkerConfig::with_queues(Vec<QueueConfig>)
- WorkerConfig.queue_configs field
- WorkerOptionsBuilder.queue_name field and with_queue_name() method
- The "Queue name configuration is not supported" WARN log in
  WorkerOptionsBuilder->WorkerOptions conversion (unreachable now)

The library never actually consumed any of this beyond the first config's
concurrency value — graphile_worker's WorkerOptions doesn't expose
per-worker queue filtering. Keeping the surface around in deprecated
form invited misuse: users who reasonably expected named_queue to filter
jobs got silent serialization-by-default instead, which is the opposite
of what they intended.

The new shape is one field on WorkerConfig:

    pub struct WorkerConfig {
        pub database_url: String,
        pub schema: String,
        pub concurrency: usize,        // was: queue_configs: Vec<QueueConfig>
        pub poll_interval: Duration,
        ...
    }

Set it via `WorkerConfig::with_concurrency(n)` (already added in 1.2.0)
or via struct-literal init. To run multiple specialized workers, spawn
multiple WorkerRunner instances yourself. Per-job queue routing remains
available at enqueue time via Queue::serial(name) / Queue::serial_for(...)
— that path was never tied to QueueConfig.
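
A minimal sketch of the new one-field shape, assuming the builder behaves as described; only `with_concurrency` is named in the text, and the field set below is trimmed for illustration.

```rust
// Stand-in mirroring the 2.0 WorkerConfig shape shown above.
#[derive(Clone, Debug)]
struct WorkerConfig {
    database_url: String,
    schema: String,
    concurrency: usize, // was: queue_configs: Vec<QueueConfig>
}

impl WorkerConfig {
    /// Builder-style setter; added in 1.2.0 per the text above.
    fn with_concurrency(mut self, n: usize) -> Self {
        self.concurrency = n;
        self
    }
}

fn main() {
    // Struct-literal init and the builder compose either way:
    let cfg = WorkerConfig {
        database_url: "postgres://localhost/app".into(),
        schema: "graphile_worker".into(),
        concurrency: 1,
    }
    .with_concurrency(10);
    assert_eq!(cfg.concurrency, 10);
}
```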

Tests:
- Drop test_queue_config_default_queue, test_queue_config_named_queue,
  test_queue_config_priority_queue (testing deleted constructors).
- Drop test_worker_runner_with_multiple_queues_only_first_honored (the
  whole "only first honored" concept is gone with the API).
- Drop test_queue_config_builders unit test in src/worker.rs.
- Update test_worker_config_default and test_worker_config_builder to
  assert config.concurrency directly instead of queue_configs[0].concurrency.
- Update three tests in integration_tests_clean.rs that used struct-literal
  init with `queue_configs: vec![],` — switch to `concurrency: 10,`.

BREAKING CHANGE: any code referencing QueueConfig, WorkerConfig.queue_configs,
or WorkerConfig::with_queues stops compiling. Migration:

    cfg.with_queues(vec![QueueConfig::default_queue(N)])  → cfg.with_concurrency(N)
    cfg.with_queues(vec![QueueConfig::named_queue(_, N)]) → cfg.with_concurrency(N)
    cfg.queue_configs[0].concurrency                       → cfg.concurrency
    cfg.queue_configs                                      → (gone; use cfg.concurrency)

For per-job queue routing (the named_queue use case), use Queue::serial(name)
at enqueue time — that's the correct mechanism and always was.
@ceejbot ceejbot merged commit b03f249 into latest May 7, 2026
1 check passed
@ceejbot ceejbot deleted the ceej/2.0-deprecation-cleanup branch May 7, 2026 04:31