
Support context managers #71

Open

martinmkhitaryan wants to merge 1 commit into Neoteroi:main from
martinmkhitaryan:feat/support-context-managers


@martinmkhitaryan

Add opt-in context manager management via manage_context flag

Summary

Adds opt-in support for rodi to enter and exit services that implement the
context manager protocol (__enter__/__exit__ and/or __aenter__/__aexit__).
Behavior is controlled by a new manage_context: bool = False parameter on the
registration methods. The default is False, so all existing behavior is
preserved exactly.

Motivation

The rodi documentation page on context managers states the core problem
directly:

There is no way to unambiguously know the intentions of the developer:
should a context be entered automatically and disposed automatically?

Today rodi resolves the ambiguity by doing nothing: classes that implement the
context manager protocol are instantiated but never entered or exited. Users
have to wrap every call site in with / async with blocks themselves, or wire
per-request middleware that mirrors what a DI container should already be
doing.

This PR resolves the ambiguity by giving the developer an explicit declaration
of intent. manage_context=True on registration tells rodi "yes, please enter
this on resolve and exit it when the owning scope ends." The default
(manage_context=False) preserves today's behavior verbatim, so no existing
app is affected.

A concrete case where this matters is SQLAlchemy's async session:

from sqlalchemy.ext.asyncio import (
    AsyncSession,
    async_sessionmaker,
    create_async_engine,
)

engine = create_async_engine("postgresql+asyncpg://...")  # placeholder URL
session_factory = async_sessionmaker(engine, expire_on_commit=False)

# Without this PR: every handler has to do `async with session_factory() as s:`
# itself, or the framework has to wire its own per-request middleware.

# With manage_context=True: rodi enters and exits the session per request scope.
container.add_scoped_by_factory(
    session_factory, AsyncSession, manage_context=True
)

@app.router.post("/users")
async def create_user(session: AsyncSession, payload: UserIn):
    session.add(User(**payload.dict()))
    # On scope close, rodi awaits __aexit__ in LIFO order across all managed
    # services in the request graph.

The same pattern applies to httpx clients, file handles, locks, tracing
spans, and anything else that wants entry on borrow / exit on return.
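
For example, here is a hypothetical registration of an httpx.AsyncClient under
the same flag (a sketch continuing the container above; httpx.AsyncClient is
genuinely an async context manager whose __aexit__ closes the client):

import httpx

# Sketch: with manage_context=True, rodi would await __aenter__ on first
# resolve in the scope and __aexit__ (closing the client) at scope end.
container.add_scoped_by_factory(
    lambda: httpx.AsyncClient(), httpx.AsyncClient, manage_context=True
)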

API

# Sync — managed by ActivationScope's ExitStack
container.add_scoped(MyResource, manage_context=True)
container.add_transient(MyResource, manage_context=True)
container.add_scoped_by_factory(my_factory, manage_context=True)
container.register(MyResource, manage_context=True)

with provider.create_scope() as scope:
    res = scope.get(MyResource)   # __enter__ called here
# __exit__ called here, in LIFO order across all managed instances


# Async — managed by AsyncActivationScope's AsyncExitStack
container.add_scoped(MyAsyncResource, manage_context=True)

async with provider.create_async_scope() as scope:
    res = await scope.aget(MyAsyncResource)   # __aenter__ awaited here
# __aexit__ awaited here, in LIFO order


# Mixed sync + async dependency graphs work in async scope
async with provider.create_async_scope() as scope:
    obj = await scope.aget(SomethingThatDependsOnBothKinds)


# Framework-facing async resolve via Container.aresolve
async with provider.create_async_scope() as scope:
    obj = await container.aresolve(MyAsyncResource, scope)
# `aresolve` requires an AsyncActivationScope and raises TypeError otherwise.
# It is the async counterpart to Container.resolve, declared by the new
# AsyncContainerProtocol so frameworks can opt into async resolution
# structurally without forcing every other DI implementation to follow.
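
A sketch of how a framework could type against the protocol structurally (the
Protocol body below is inferred from the usage above, and
resolve_handler_dependency is a hypothetical framework helper):

from typing import Any, Protocol

class AsyncContainerProtocol(Protocol):
    # signature inferred from `await container.aresolve(obj_type, scope)` above
    async def aresolve(self, obj_type: Any, scope: Any = None) -> Any: ...

async def resolve_handler_dependency(
    container: AsyncContainerProtocol, dependency_type: Any, scope: Any
) -> Any:
    # any container exposing a matching aresolve satisfies the Protocol
    # structurally; no inheritance from rodi types is required
    return await container.aresolve(dependency_type, scope)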

Behavior matrix

Scope                          Sync-only CM       Async-only CM      Class with both protocols
provider.create_scope()        entered & exited   TypeError          entered as sync
provider.create_async_scope()  entered as sync    entered as async   entered as async

  • Scoped + managed: entered once per scope on first resolve, exited on
    scope end.
  • Transient + managed: entered on every resolve, all instances exited on
    scope end in LIFO order (see the sketch after this list).
  • Singleton + managed: rejected at registration with
    InvalidContextManagerRegistration (rationale below).
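
A minimal, runnable sketch of the transient LIFO rule (assuming rodi's usual
Container() / build_provider() flow; Conn is a stand-in resource class):

from rodi import Container

class Conn:
    def __enter__(self):
        print(f"enter {id(self)}")
        return self

    def __exit__(self, *exc_info):
        print(f"exit {id(self)}")

container = Container()
container.add_transient(Conn, manage_context=True)
provider = container.build_provider()

with provider.create_scope() as scope:
    a = scope.get(Conn)   # first transient instance, entered on resolve
    b = scope.get(Conn)   # second, distinct instance, also entered
# scope end prints "exit" for b first, then for a (LIFO)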

Why singleton is not supported (yet)

Singletons live for the lifetime of the Services provider, but rodi has no
explicit teardown hook for the provider/container today. Supporting
manage_context=True for singletons would require introducing one (e.g.
Services.dispose() or making Container itself a context manager), which is
its own design discussion (atexit semantics, shutdown ordering, error
handling).

To keep this PR focused and reviewable, singleton support is deferred to a
follow-up issue. The current behavior is to raise
InvalidContextManagerRegistration at registration with a clear message
pointing the user to SCOPED/TRANSIENT or to manual management.
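
In code, the rejection described above looks roughly like this (MyResource is
a placeholder type and the message is paraphrased):

class MyResource:
    def __enter__(self): ...
    def __exit__(self, *exc_info): ...

container.add_singleton(MyResource, manage_context=True)
# raises InvalidContextManagerRegistration, pointing to SCOPED/TRANSIENT
# lifetimes or to manual context management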

Implementation notes

  • New ManagedScopedProvider and ManagedTransientProvider wrap any existing
    inner provider (TypeProvider, ArgsTypeProvider, FactoryTypeProvider).
    This keeps the existing 6 provider classes completely untouched, so users
    who don't opt in pay zero overhead on the hot resolve path.
  • ActivationScope gained a lazy _exit_stack: ExitStack | None, drained in
    LIFO order before dispose() in __exit__ using the standard swap-and-call
    idiom for single-shot consumption (see the sketch after this list).
  • AsyncActivationScope is a new subclass that adds a lazy
    _async_exit_stack: AsyncExitStack | None. Resolution stays synchronous —
    aget flips an _async_mode flag for the duration of Services.get,
    causing _enter_context to defer instances into a _pending_aenter
    list. Once sync resolution returns, aget drains the list using
    enter_async_context (for async CMs) or enter_context (for sync CMs)
    on the same AsyncExitStack. Sync get calls on the same async scope
    also route into that unified stack via an overridden
    _register_sync_context, so cross-protocol exit order across mixed
    get + aget calls is strictly LIFO regardless of which entry point
    added the instance.
  • AsyncContainerProtocol is a separate Protocol declaring aresolve.
    Container implements both ContainerProtocol and
    AsyncContainerProtocol. The existing ContainerProtocol is unchanged,
    so sync-only third-party DI containers remain compatible. Frameworks that
    want async resolution opt into it structurally by typing against
    AsyncContainerProtocol (or against Container directly).
    Container.aresolve is a thin wrapper around scope.aget that validates
    the scope type up front, so misuse fails with a clear TypeError instead
    of silently leaking managed CMs into an un-awaited throwaway scope.
  • TrackingActivationScope.__exit__ now calls super().__exit__(...) instead
    of self.dispose() directly, so the new exit-stack draining runs for users
    who opt into nested-scope tracking. Pre-existing tests confirm dispose
    semantics are unchanged.
  • Validation of "is this a context manager?" happens at resolution time inside
    _enter_context rather than at registration. Class-level checks at
    registration are an easy follow-up; deferred because the factory path can't
    reliably introspect the type the factory will return.
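
As a standalone illustration of the exit-stack bookkeeping above (names mirror
the PR, but resolution and dispose() are stubbed, so this is a sketch rather
than the actual implementation):

from contextlib import ExitStack

class SketchActivationScope:
    def __init__(self):
        # lazy: no stack is allocated unless a managed service is resolved
        self._exit_stack: "ExitStack | None" = None

    def _register_sync_context(self, instance):
        if self._exit_stack is None:
            self._exit_stack = ExitStack()
        return self._exit_stack.enter_context(instance)  # __enter__ runs here

    def dispose(self):
        pass  # stand-in for rodi's existing dispose()

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        # swap-and-call: detach the stack first so it can only drain once,
        # then close it (running __exit__ calls in LIFO order), then dispose
        stack, self._exit_stack = self._exit_stack, None
        if stack is not None:
            stack.close()
        self.dispose()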

Backward compatibility

manage_context defaults to False everywhere. All existing call sites,
fixtures, and tests work without changes. No public symbol was renamed or
removed. New public symbols added:

  • AsyncActivationScope
  • AsyncContainerProtocol
  • Container.aresolve
  • InvalidContextManagerRegistration
  • ManagedScopedProvider, ManagedTransientProvider (intended internal but
    exposed for parity with the other provider classes already in the public
    module surface)
  • Services.create_async_scope, Services.aget

Follow-up (not in this PR)

A separate issue/PR will discuss singleton + manage_context=True. The shape
of that work (provider-level dispose, atexit semantics, container-as-context-
manager) is large enough that bundling it here would make this PR much harder
to review.
