
fix(migrations): make spans.task_id rollout safe for large tables#223

Merged
declan-scale merged 6 commits into main from
declan-scale/fix-spans-task-id-migration
May 6, 2026

Conversation

Collaborator

@declan-scale declan-scale commented May 6, 2026

Summary

Splits the broken 57c5ed4f59ae migration into a safe, idempotent column add plus a non-blocking finalize migration, and adds lock_timeout/statement_timeout defaults to the migration runner. Eliminates the in-band backfill by tolerating NULL task_id at read time.

Why

The original 57c5ed4f59ae combined three heavy operations on a large, write-heavy spans table in a single Alembic revision:

  • A single multi-million-row UPDATE backfill inside the migration transaction (held row locks, prevented autovacuum, bloated the table).
  • ADD CONSTRAINT … FOREIGN KEY (full-table validation under AccessExclusiveLock).
  • Non-concurrent CREATE INDEX (blocks writes during build).

On a sufficiently large table this combination exhausts the application connection pool while concurrent span writes pile up behind the lock. The fix splits the work into safe pieces.

Changes

Migration 57c5ed4f59ae — reduced to an idempotent column add only:

ALTER TABLE spans ADD COLUMN IF NOT EXISTS task_id VARCHAR;

Metadata-only on PG ≥ 11; runs in milliseconds. Idempotency makes it a no-op on environments where the original (heavier) version already completed.
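For illustration, the reduced upgrade step can be sketched as a standard Alembic revision (the injectable execute parameter is a testability device, not part of the real file):

```python
# Hypothetical sketch of the reduced 57c5ed4f59ae upgrade step; the real
# revision file lives under agentex/database/migrations/alembic/versions/.
# A nullable column with no default is metadata-only on PG >= 11, and
# IF NOT EXISTS makes re-runs (and already-migrated envs) a no-op.
ADD_COLUMN_SQL = "ALTER TABLE spans ADD COLUMN IF NOT EXISTS task_id VARCHAR"

def upgrade(execute=None):
    """Idempotent column add. `execute` defaults to Alembic's op.execute."""
    if execute is None:
        from alembic import op  # deferred so the sketch imports without Alembic
        execute = op.execute
    execute(ADD_COLUMN_SQL)
```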

New migration a9959ebcbe98 (tail after e9c4ff9e6542) — finalizes the FK + index using non-blocking operations:

  • ALTER TABLE … ADD CONSTRAINT … FOREIGN KEY … NOT VALID (skips full-table scan, brief lock only).
  • CREATE INDEX CONCURRENTLY IF NOT EXISTS ix_spans_task_id (does not block writes).
  • Both guarded by pg_constraint lookup / IF NOT EXISTS for idempotency.
  • Wrapped in op.get_context().autocommit_block() so they run outside the migration transaction.

VALIDATE CONSTRAINT is intentionally not run. The FK is enforced on all new inserts/updates and ON DELETE SET NULL still applies — NOT VALID only skips the historical scan. Acceptable tradeoff to avoid a multi-minute table scan on a large prod table.
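A minimal sketch of the finalize migration's upgrade step, assuming the constraint name from the sequence diagram (fk_spans_task_id_tasks) and a tasks(id) referent; the injectable parameters are illustration devices for testability:

```python
from contextlib import nullcontext

# Guarded FK add: the pg_constraint lookup keeps it idempotent, NOT VALID
# skips the historical full-table scan. The referenced column (tasks.id)
# is an assumption.
FK_SQL = """\
DO $$
BEGIN
    IF NOT EXISTS (
        SELECT 1 FROM pg_constraint WHERE conname = 'fk_spans_task_id_tasks'
    ) THEN
        ALTER TABLE spans
            ADD CONSTRAINT fk_spans_task_id_tasks
            FOREIGN KEY (task_id) REFERENCES tasks (id)
            ON DELETE SET NULL
            NOT VALID;
    END IF;
END $$;"""

INDEX_SQL = (
    "CREATE INDEX CONCURRENTLY IF NOT EXISTS ix_spans_task_id ON spans (task_id)"
)

def upgrade(execute=None, autocommit_block=None):
    """Finalize FK + index outside the migration transaction."""
    if execute is None:
        from alembic import op  # real migrations use the Alembic op proxy
        execute = op.execute
        autocommit_block = op.get_context().autocommit_block
    # CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
    # hence the autocommit_block wrapper.
    with (autocommit_block or nullcontext)():
        execute(FK_SQL)
        execute(INDEX_SQL)
```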

No in-band backfill. SpanRepository.list now ORs on trace_id when filtering by task_id, returning historical task-scoped spans correctly without populating task_id on every old row. Both columns are indexed. The full backfill (cleanup, optional) is a separate operator-driven runbook (companion PR).
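The read-time fallback can be illustrated with a hypothetical helper that renders the effective WHERE fragments as strings (the real repository builds SQLAlchemy expressions, not SQL text):

```python
def span_where_clauses(filters):
    """Sketch (hypothetical helper) of the OR fallback in SpanRepository.list.

    A non-null task_id filter is ORed with trace_id so historical rows
    (task_id still NULL, trace_id carrying the task scope) are returned;
    a None task_id falls through to normal IS NULL handling, avoiding the
    (task_id IS NULL OR trace_id IS NULL) full-table trap.
    """
    filters = dict(filters)
    clauses = []
    if filters.get("task_id") is not None:
        tid = filters.pop("task_id")
        clauses.append(f"(task_id = '{tid}' OR trace_id = '{tid}')")
    for column, value in filters.items():  # remaining filters: base-class behavior
        clauses.append(f"{column} IS NULL" if value is None else f"{column} = '{value}'")
    return clauses
```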

env.py:

  • transaction_per_migration=True so individual migrations can use autocommit_block().
  • SET LOCAL lock_timeout = '3s' and SET LOCAL statement_timeout = '30s' applied per migration via a SQLAlchemy begin listener. Future long-running migrations now abort cleanly instead of queueing behind active writes. autocommit_block() ops bypass these (no transaction = no listener trigger), which is the right behavior for CREATE INDEX CONCURRENTLY etc.

Behavior matrix

  • alembic_version at 4a9b7787ccd7: all pending migrations run; 57c5ed4f59ae adds the column fast, intermediate migrations run as before, and a9959ebcbe98 adds the NOT VALID FK + concurrent index.
  • alembic_version past 57c5ed4f59ae (FK + index already in place): 57c5ed4f59ae does not re-run; a9959ebcbe98 runs, but the pg_constraint and IF NOT EXISTS guards make every operation a no-op.

Follow-ups (separate PRs)

  • Backfill runbook + CLAUDE.md migration safety guidance (companion PR).
  • Move agentex migrations from pod-startup to a pre-deploy Job.
  • Once historical data ages out, drop the trace_id fallback in SpanRepository.list.

Test plan

  • make test FILE=tests/unit/repositories/test_span_repository.py passes (3 new tests cover the OR fallback)
  • Local make dev: pod starts, migrations apply, \d spans shows column + FK + index
  • Spot-check alembic upgrade head against a prod-shaped DB (20M+ rows on a perf clone) to confirm the new migrations don't block writes

Greptile Summary

  • Splits the original heavy spans migration into a fast column-add revision (57c5ed4f59ae, idempotent ADD COLUMN IF NOT EXISTS) and a new finalize migration (a9959ebcbe98) that adds the FK with NOT VALID and builds the index CONCURRENTLY inside autocommit_block(), avoiding write-blocking operations on large tables.
  • SpanRepository.list now ORs task_id and trace_id at read time for non-null filters, correctly tolerating pre-backfill rows without a mass in-migration UPDATE. The None guard is present and tested.
  • env.py applies session-level lock_timeout/statement_timeout defaults; the session-level statement_timeout = '30s' persists into autocommit_block() and will abort CREATE INDEX CONCURRENTLY on a large prod table — the PR description's claim that autocommit_block() bypasses it is incorrect.

Confidence Score: 3/5

Not safe to merge until the statement_timeout vs CREATE INDEX CONCURRENTLY conflict is resolved — the finalize migration will reliably fail on large prod tables.

One P1 remains: the session-level statement_timeout=30s set in env.py persists across autocommit_block() boundaries and will cancel CREATE INDEX CONCURRENTLY after 30s on a 20M+ row table. The env.py comment itself confirms the session-level persistence. Two P2s (offline mode missing transaction_per_migration, lock wait vs lock_timeout race) are low probability but worth fixing. The span repository fix and migration splitting are correct.

agentex/database/migrations/alembic/env.py — the timeout application strategy needs to distinguish between transactional statements and long-running concurrent operations.

Important Files Changed

  • agentex/database/migrations/alembic/env.py: Adds session-level migration timeouts and transaction_per_migration=True; session-level statement_timeout=30s will abort CREATE INDEX CONCURRENTLY in the finalize migration, and offline mode is missing transaction_per_migration=True.
  • agentex/database/migrations/alembic/versions/2026_04_14_1126_add_task_id_to_spans_57c5ed4f59ae.py: Reduced to a safe, idempotent ALTER TABLE spans ADD COLUMN IF NOT EXISTS task_id VARCHAR; removes the original heavy backfill, FK, and non-concurrent index creation.
  • agentex/database/migrations/alembic/versions/2026_05_06_1200_finalize_spans_task_id_a9959ebcbe98.py: New migration adds FK NOT VALID and CREATE INDEX CONCURRENTLY inside autocommit_block(); both are idempotent, but the session-level statement_timeout=30s from env.py will abort the index build on large tables.
  • agentex/src/domain/repositories/span_repository.py: Adds OR fallback for historical trace_id-as-task-id rows; correctly guards against None task_id to prevent accidental full-table scan, and delegates remaining filters to the base class.
  • agentex/tests/unit/database/test_alembic_env_timeouts.py: New unit tests verify timeout constants and SQL formatting helper; tests check text presence rather than runtime behavior, which is an acceptable tradeoff for migration runner tests.
  • agentex/tests/unit/repositories/test_span_repository.py: Adds 3 integration tests covering the OR fallback, None task_id guard, and combined filter behavior; test logic correctly exercises all branches of the new list() override.

Sequence Diagram

sequenceDiagram
    participant Pod as Migration Pod
    participant PG as PostgreSQL

    Note over Pod,PG: env.py run_migrations_online()
    Pod->>PG: SET lock_timeout = '3s' (session-level)
    Pod->>PG: SET statement_timeout = '30s' (session-level)
    Pod->>PG: SET idle_in_transaction_session_timeout = '10s' (session-level)
    Pod->>PG: COMMIT (persists session GUCs)

    Note over Pod,PG: Migration 57c5ed4f59ae
    Pod->>PG: BEGIN
    Pod->>PG: ALTER TABLE spans ADD COLUMN IF NOT EXISTS task_id VARCHAR
    Pod->>PG: COMMIT (fast, metadata-only on PG>=11)

    Note over Pod,PG: Migration a9959ebcbe98 (autocommit_block)
    Pod->>PG: autocommit=True on connection
    Pod->>PG: DO block - ADD CONSTRAINT fk_spans_task_id_tasks NOT VALID
    Pod->>PG: CREATE INDEX CONCURRENTLY IF NOT EXISTS ix_spans_task_id ON spans(task_id)
    Note over Pod,PG: statement_timeout=30s still active - CIC on 20M+ rows exceeds 30s and is aborted

    Note over Pod,PG: SpanRepository.list(filters={task_id: id})
    Pod->>PG: SELECT ... WHERE (task_id = :v OR trace_id = :v) AND remaining filters
    PG-->>Pod: rows (new-style + historical pre-backfill)

Comments Outside Diff (3)

  1. agentex/src/domain/repositories/span_repository.py, line 48

    P1 NULL task_id filter returns massive unintended result set

    When a caller passes filters={"task_id": None}, SQLAlchemy translates SpanORM.task_id == None to task_id IS NULL and SpanORM.trace_id == None to trace_id IS NULL. The generated predicate becomes WHERE (task_id IS NULL OR trace_id IS NULL). On the 24M-row spans table where virtually all pre-backfill rows have task_id NULL, this silently returns an enormous result set instead of the caller's expected empty/null-scoped result. The test_list_by_task_id_falls_back_to_trace_id test only exercises a non-null UUID, so the None path is untested.


  2. agentex/database/migrations/alembic/env.py, line 34-38

    P1 statement_timeout is session-level and will abort CREATE INDEX CONCURRENTLY on large tables

    SET statement_timeout = '30s' is applied once as a session-level GUC via connection.exec_driver_sql. Unlike SET LOCAL, this persists across all subsequent statements on that connection — including those executed inside autocommit_block(). CREATE INDEX CONCURRENTLY is a single statement in PostgreSQL's view and the timer runs for its entire duration; on a 20M+ row table it will reliably exceed 30 s and be cancelled.

    The PR description states that autocommit_block() ops "bypass" the timeout because "no transaction = no listener trigger", but no listener is used here — the settings are applied directly at session scope — so they are not bypassed. The finalize migration is specifically designed to handle the large-table case, yet the timeout introduced in this same PR will kill it.


  3. agentex/database/migrations/alembic/env.py, line 150-152

    P1 statement_timeout aborts CREATE INDEX CONCURRENTLY on large tables

    connection.exec_driver_sql("SET statement_timeout = '30s'") sets a session-level GUC. Unlike SET LOCAL, session-level GUCs persist across transaction boundaries — including the autocommit_block() entered by the finalize migration. The env.py comment itself confirms this: "These are session-level so they persist across…autocommit_block boundaries on the same connection." The PR description's claim that autocommit_block() "bypasses" the timeout because "no transaction = no listener trigger" is incorrect — there is no listener; the GUC is set directly on the connection.

    CREATE INDEX CONCURRENTLY on a 20 M+ row spans table will run well past 30 s, and PostgreSQL will abort it with ERROR: canceling statement due to statement timeout, leaving the index in an invalid state. The IF NOT EXISTS guard won't prevent a re-run from failing the same way.

    Fix: apply statement_timeout via SET LOCAL inside each transactional migration instead of at session level, or add SET statement_timeout = 0 before entering autocommit_block() inside the finalize migration.
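The second suggested fix can be sketched as follows (hypothetical helper name; execute stands in for Alembic's op.execute inside the finalize migration's autocommit block):

```python
def upgrade_index_step(execute):
    """Hypothetical shape of the suggested fix: lift the session-level
    statement_timeout for the duration of the concurrent index build."""
    execute("SET statement_timeout = 0")  # CIC may legitimately run for minutes
    try:
        execute(
            "CREATE INDEX CONCURRENTLY IF NOT EXISTS ix_spans_task_id ON spans (task_id)"
        )
    finally:
        # restore the runner default so later statements stay capped
        execute("SET statement_timeout = '30s'")
```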


Additional Review Comments (2)

  4. agentex/database/migrations/alembic/versions/2026_05_06_1200_finalize_spans_task_id_a9959ebcbe98.py, line 40-55

    P2 Lock wait on FK constraint may exceed lock_timeout = '3s'

    ALTER TABLE spans ADD CONSTRAINT … NOT VALID acquires a brief ShareRowExclusiveLock. On a busy write-heavy table this lock wait can exceed the session-level lock_timeout = '3s' set in env.py, aborting the statement before it even starts. The idempotency guard means a retry will attempt it again, but under sustained write load it may never succeed within the window. Ensure the lock_timeout value is aligned with the expected contention on this table, or document the retry behavior.

  5. agentex/database/migrations/alembic/env.py, line 107-117

    P2 Offline mode missing transaction_per_migration=True

    run_migrations_online() sets transaction_per_migration=True, which is required for autocommit_block() to function correctly (Alembic asserts self._transaction is not None internally). The offline-mode context.configure() call does not set this flag, so alembic upgrade head --sql would generate incorrect SQL when the finalize migration calls autocommit_block(). While offline mode is rarely used for direct execution, the generated SQL script would not reflect the actual execution semantics.
Reviews (8). Last reviewed commit: "fix(migrations): commit autobegun txn be..."

@declan-scale declan-scale requested a review from a team as a code owner May 6, 2026 14:55
@declan-scale declan-scale force-pushed the declan-scale/fix-spans-task-id-migration branch from 9d81691 to cec6afa Compare May 6, 2026 15:04

@mohammadatallah-scale mohammadatallah-scale left a comment


What would happen if a migration adds an operation that cannot run inside a transaction without an autocommit_block? Such as CREATE INDEX CONCURRENTLY

@declan-scale (Collaborator, Author)

What would happen if a migration adds an operation that cannot run inside a transaction without an autocommit_block? Such as CREATE INDEX CONCURRENTLY

Postgres rejects it immediately with SQLSTATE 25001: ERROR: CREATE INDEX CONCURRENTLY cannot run inside a transaction block. The check happens at the parse/plan stage before any index work begins, so:

  • No partial / INVALID index is left behind.
  • The migration's transaction goes into an error state.
  • Alembic rolls back, the migration fails, the pod's startup fails.
  • The next pod attempt sees alembic_version unchanged (because the failed migration was rolled back) and tries again — same failure.

So it fails loudly and fast, which is the right outcome — much better than a silent corrupt state. The author has to wrap the statement in op.get_context().autocommit_block() to make it work.

A few specifics worth noting:

  • The runtime timeouts don't help here. lock_timeout / statement_timeout / idle_in_transaction_session_timeout only fire while a statement is running or a transaction is open. The 25001 rejection happens before any of those clocks start.
  • transaction_per_migration=True doesn't change the outcome. It controls how many transactions wrap the migration set, not whether there is one at all. With it on, each migration gets its own BEGIN ... COMMIT; without it, all migrations share one outer transaction. Either way, a CREATE INDEX CONCURRENTLY at the top level hits an active transaction and is rejected.
  • Other "cannot run in a transaction" statements behave the same way: DROP INDEX CONCURRENTLY, REINDEX CONCURRENTLY, VACUUM, CREATE DATABASE, ALTER TYPE ... ADD VALUE (on PG < 12), CLUSTER. All hit the same 25001 path.
  • The migration linter (SGP-5785, going up as a separate PR shortly) catches this at PR time. Specifically the transaction-nesting rule — it flags migrations that mix CONCURRENTLY operations with same-transaction DDL. So the normal failure mode is a CI fail on the PR, not a deploy fail. The deploy-time 25001 rejection is the safety net.

Net: this is one of the better failure modes in the migration safety story. The wrong shape is mechanically impossible to deploy successfully — you find out at the latest at pod startup, with no DB damage to clean up.

declan-scale and others added 6 commits May 6, 2026 13:51
The original 57c5ed4f59ae migration combined a multi-million-row UPDATE
backfill, a foreign-key add (full-table validation under
AccessExclusiveLock), and a non-concurrent CREATE INDEX. On a sufficiently
large spans table this combination exhausts the application connection pool
while concurrent writes pile up behind the lock.

Changes:

* Reduce 57c5ed4f59ae to a fast, idempotent ADD COLUMN IF NOT EXISTS.
  Adding a nullable column with no default is metadata-only on PG >= 11
  and does not block writes. Idempotency makes it a no-op on environments
  where the original heavier version already completed.
* Add follow-up migration a9959ebcbe98 that finalizes the FK and index
  using non-blocking operations: ADD CONSTRAINT NOT VALID (skips the
  full-table scan) and CREATE INDEX CONCURRENTLY (does not block writes).
  Both are guarded with pg_constraint / IF NOT EXISTS so the migration is
  a no-op on environments that already finished the original migration.
* Skip the in-band backfill. The application reads tolerate NULL task_id
  by ORing on trace_id at query time (SpanRepository.list), which returns
  the same set of spans for task-scoped traces. Full backfill is now an
  operator-driven, batched runbook (separate PR).
* Set transaction_per_migration=True so individual migrations can use
  alembic's autocommit_block() for CREATE INDEX CONCURRENTLY etc.
* Apply default lock_timeout=3s and statement_timeout=30s per migration
  via SET LOCAL inside the transaction. This prevents future long-running
  migrations from queueing behind active writes and caps total runtime so
  they abort cleanly instead of blocking pod startup. autocommit_block()
  statements run outside the transaction and bypass these timeouts
  deliberately (they are inherently long but non-blocking).
A None task_id filter would expand to
(task_id IS NULL OR trace_id IS NULL), which on a partially-backfilled
spans table where most historical rows have task_id NULL would return
nearly every row instead of the caller's expected NULL-scoped subset.

Use filters.get("task_id") is not None to gate the OR fallback so a
None value falls through to the parent repository's normal IS NULL
handling. Adds a test that creates a row with task_id NULL and asserts
filtering by task_id=None does not pick up rows whose only NULL column
is task_id-via-trace_id-fallback.
…cape hatch

Two follow-ups to the runner-default-timeouts work:

* Add a per-migration SET LOCAL idle_in_transaction_session_timeout = '10s'
  alongside lock_timeout and statement_timeout. Without it, a stalled
  migration that has already acquired AccessExclusiveLock can hold the lock
  indefinitely until its connection drops — the other two timeouts do not
  catch the "open transaction making no progress" case. 10 s is short
  enough to recover quickly, long enough to absorb normal pauses between
  statements within a single migration.
* Document the escape-hatch convention (`# migration-unsafe-ack: <reason>`
  top-of-file directive paired with a `migration-unsafe-ack` PR label)
  that the planned migration linter will enforce. Until the linter ships,
  this comment is the canonical signal that a migration's author has
  consciously opted out of the runner defaults and expects review under
  maintenance-window assumptions.
Updates the env.py escape-hatch comment to reference the linter as
actually shipped rather than as planned work.
Switches the migration runner timeout setup to mirror the egp-api-backend
implementation so the two repos are aligned on outcome:

* Use a DEFAULT_MIGRATION_TIMEOUTS dict (lock_timeout, statement_timeout,
  idle_in_transaction_session_timeout) and a _format_set_statements
  helper, so the values are easy to keep in sync between repos.
* Apply session-level SET (no LOCAL) at connection start instead of
  per-transaction SET LOCAL via a SQLAlchemy begin event listener.
  Session-level values persist across per-migration transactions and
  across autocommit_block boundaries on the same connection — the
  latter matters because autocommit_block is exactly what
  CREATE INDEX CONCURRENTLY uses, and we want the timeouts to
  cover those statements too.
* Apply timeouts in both online and offline modes (offline emits the
  SET statements at the top of the generated SQL via context.execute).
* Update the docstring to reference scripts/ci_tools/migration_lint.py
  (the linter location aligned with sgp's layout) and the
  migration-unsafe-ack PR label as the documented escape hatch.

Adds tests/unit/database/test_alembic_env_timeouts.py with a focused
sanity-check on the constants, the helper shape, and the both-modes
wiring — same shape as the sgp test.
exec_driver_sql for the session SET statements autobegins a SQLAlchemy
transaction. context.configure() then sees the connection as already in
a transaction and sets _in_external_transaction=True, which silently
disables transaction_per_migration: begin_transaction() returns
nullcontext() and self._transaction is never assigned. Any migration
that opens an autocommit_block() then trips its
`assert self._transaction is not None` check (the connection IS in a
txn, but alembic's handle to it is None).

Committing immediately after the SETs closes the autobegun txn so
configure() sees a clean connection. Postgres SET is session-level, so
the timeouts persist past the commit.

Surfaced by finalize_spans_task_id_a9959ebcbe98, the first migration in
this tree to use autocommit_block.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@declan-scale declan-scale force-pushed the declan-scale/fix-spans-task-id-migration branch from fd14699 to acfea10 Compare May 6, 2026 17:51
@declan-scale declan-scale merged commit 4e41c46 into main May 6, 2026
31 checks passed
@declan-scale declan-scale deleted the declan-scale/fix-spans-task-id-migration branch May 6, 2026 18:42