Feat/notification admin #387
Pull request overview
Adds an admin-facing notification configuration system backed by a new ccf_system_notification_destinations table, including provider catalog metadata, destination CRUD APIs, and digest dispatch updates to use stored destinations (with a legacy Slack digest channel migration path).
Changes:
- Introduces `SystemNotificationDestination` relational model + migrations/backfill from legacy Slack digest config.
- Adds notification provider catalog metadata + target configurator interfaces for admin/system configuration flows.
- Adds admin/public notification provider & destination management endpoints + updates digest dispatch to use configured destinations.
Reviewed changes
Copilot reviewed 34 out of 34 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| internal/tests/migrate.go | Includes SystemNotificationDestination in test migrator up/down entity lists. |
| internal/service/slack/service_test.go | Adds tests for Slack workspace configuration lookups and client interface fakes. |
| internal/service/slack/service.go | Refactors Slack client to an interface; adds workspace configuration retrieval helpers. |
| internal/service/relational/system_notification_destination_test.go | Verifies GORM indexes exist for the new destination table. |
| internal/service/relational/system_notification_destination.go | Adds new system destination table/model storing provider + JSON target. |
| internal/service/notification/transport_test.go | Extends provider stubs with metadata and tests provider catalog ordering. |
| internal/service/notification/transport.go | Adds provider catalog + metadata types and sorted provider listing. |
| internal/service/notification/target_configurator.go | Introduces TargetConfigurator interface + lookup helper for admin flows. |
| internal/service/notification/system_destination_repository_test.go | Adds tests for listing/normalizing/deduping configured system targets. |
| internal/service/notification/system_destination_repository.go | Adds GORM repository to list normalized configured system targets. |
| internal/service/notification/providers/slack/provider_test.go | Adds tests for Slack target building/display and provider metadata. |
| internal/service/notification/providers/slack/provider.go | Adds provider metadata + target configurator implementation for Slack destinations. |
| internal/service/notification/providers/lookup.go | Provides a centralized provider lookup/registry builder (email + slack). |
| internal/service/notification/providers/email/provider_test.go | Adds tests for email target building/validation and provider metadata. |
| internal/service/notification/providers/email/provider.go | Adds provider metadata + target configurator implementation for email destinations. |
| internal/service/notification/gorm_configured_destination_resolver_test.go | Adds tests for resolving legacy configured destinations from DB. |
| internal/service/notification/gorm_configured_destination_resolver.go | Adds DB-backed resolver for configured destinations. |
| internal/service/notification/constants_test.go | Adds test coverage for SystemNotificationTypes(). |
| internal/service/notification/constants.go | Adds SystemNotificationTypes() and baseline system notification list. |
| internal/service/migrator_test.go | Adds tests for backfilling system destinations from legacy Slack digest channel config. |
| internal/service/migrator.go | Adds MigrateUpWithConfig and legacy system destination backfill migration. |
| internal/service/digest/service_test.go | Updates digest tests to use system destinations; adds multi-destination dispatch test. |
| internal/service/digest/service.go | Switches digest optional-send gating from Slack config to configured system destinations. |
| internal/service/digest/notifications.go | Switches digest configured destination resolution to DB + expands to multiple destinations. |
| internal/config/slack.go | Deprecates Slack DigestChannel as a one-time migration source. |
| internal/api/handler/notifications_integration_test.go | Adds integration tests for provider listing and system destination CRUD. |
| internal/api/handler/notifications.go | Adds admin/public notification provider & system destination endpoints. |
| internal/api/handler/api.go | Registers new notification routes (admin + authenticated public). |
| docs/swagger.yaml | Documents new notifications endpoints and response/request schemas. |
| docs/swagger.json | Generated swagger JSON updates for new notifications endpoints. |
| docs/docs.go | Generated swagger Go template updates for new notifications endpoints. |
| cmd/run.go | Switches to MigrateUpWithConfig to enable legacy backfill. |
| cmd/migrate.go | Switches to MigrateUpWithConfig to enable legacy backfill. |
| .env.example | Marks CCF_SLACK_DIGEST_CHANNEL as deprecated legacy migration input. |
Comments suppressed due to low confidence (2)
internal/service/digest/notifications.go:255
`SendGlobalDigest` calls `hasGlobalDigestDestinations` (which queries configured digest targets), and then `dispatchEvidenceDigestNotifications` queries the same targets again when `includeConfiguredDestinations` is true. This results in duplicate DB work per digest run. Consider fetching the targets once and reusing them (e.g., return the targets from `hasGlobalDigestDestinations` or pass them into `dispatchEvidenceDigestNotifications`).
```go
	if includeConfiguredDestinations {
		targets, err := s.configuredDigestTargets(ctx)
		if err != nil {
			return err
		}
		audiences := audiencesForTargets(targets)
		if len(audiences) > 0 {
			request.Requests = append(request.Requests, notification.Request{
				Kind:      evidenceDigestKind,
				Audiences: audiences,
				Model:     newEvidenceDigestNotificationModel(summary, "", webBaseURL, generatedAt),
				Options:   dispatchOptions,
			})
		}
	}
	if includeSubscribedUsers {
		request.SubscribedUsers = append(request.SubscribedUsers, notification.SubscribedUsersRequest{
			Kind:    evidenceDigestKind,
			Options: dispatchOptions,
			BuildModel: func(_ context.Context, user notification.User) (any, error) {
				return newEvidenceDigestNotificationModel(summary, user.FirstName, webBaseURL, generatedAt), nil
			},
		})
	}
	if len(request.Requests) == 0 && len(request.SubscribedUsers) == 0 {
		return nil
	}
	return s.notifier.DispatchFanout(ctx, request)
}
```
internal/service/slack/service.go:45
`NewService` initializes the Slack client with `cfg.Token` untrimmed, but later logic compares/normalizes using `strings.TrimSpace`. If the configured token contains leading/trailing whitespace, `clientForToken` will keep returning the pre-initialized client (built with the untrimmed token), which can cause authentication failures. Initialize the client with the trimmed token (or avoid pre-initializing and always go through `clientForToken`).
```go
func (s *Service) GetConfigurationForToken(ctx context.Context, token string) (WorkspaceConfiguration, error) {
	if !s.config.Enabled {
		return WorkspaceConfiguration{}, fmt.Errorf("slack service is not enabled")
	}
```
`GetConfigurationForToken` dereferences `s.config` without guarding against `s == nil` or `s.config == nil`. Since this is a public method (not just called via `GetConfiguration`), it can panic if invoked on an unconfigured service. Add the same nil checks used in `GetConfiguration` (or return a "not configured" error) before accessing `s.config.Enabled`.
```go
func (s *Service) clientForToken(token string) apiClient {
	trimmedToken := strings.TrimSpace(token)
	if s != nil && s.client != nil && s.config != nil && trimmedToken == strings.TrimSpace(s.config.Token) {
		return s.client
	}

	api := slack.New(trimmedToken)
	if s != nil && s.config != nil && trimmedToken == strings.TrimSpace(s.config.Token) {
		s.client = api
	}
```
`clientForToken` reads and writes `s.client` without any synchronization. `Service` is likely used concurrently (e.g., multiple requests sending Slack messages), so this can introduce data races under the Go race detector. Protect `s.client` with a mutex/atomic value, or avoid mutating shared state by not caching the client on the struct.
```go
func (p *Provider) workspaceConfigurationDetails() slacksvc.WorkspaceConfiguration {
	if p == nil || p.workspaceConfigurationResolver == nil {
		return slacksvc.WorkspaceConfiguration{}
	}

	p.workspaceConfigurationOnce.Do(func() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		configuration, err := p.workspaceConfigurationResolver(ctx)
		if err != nil {
			return
		}
		p.workspaceConfiguration = configuration
	})
```
`workspaceConfigurationDetails` uses `context.Background()` (with a timeout) rather than a caller/request context. Since this can be triggered by an HTTP request (provider catalog endpoints), it will continue the Slack lookup even if the request is canceled, and the `sync.Once` will also cache a transient failure forever. Prefer injecting a context from the caller (or resolving/caching workspace metadata at startup), and consider allowing retries on failure instead of permanently caching the zero value.
Force-pushed 9094176 to adbdff0.
Pull request overview
Copilot reviewed 34 out of 34 changed files in this pull request and generated 3 comments.
Comments suppressed due to low confidence (1)
internal/service/slack/service.go:45
- `NewService` checks `strings.TrimSpace(cfg.Token)` to decide whether to initialize the Slack client, but then passes the untrimmed `cfg.Token` into `slack.New()`. If the token has leading/trailing whitespace, the client will be initialized with an invalid token. Consider passing the trimmed token to `slack.New()` to match the check.
```go
	p.workspaceConfigurationOnce.Do(func() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		configuration, err := p.workspaceConfigurationResolver(ctx)
		if err != nil {
			return
		}
		p.workspaceConfiguration = configuration
	})
```
`workspaceConfigurationDetails` uses `sync.Once`, but if `workspaceConfigurationResolver` returns an error, the `Once` still marks the call as done and the provider will never retry, leaving workspace metadata permanently empty for the process lifetime (even if the transient failure clears). Consider either caching the error and allowing retry (e.g., via `sync.Mutex` + TTL) or only marking the result cached when the resolver succeeds.
Suggested change:

```go
// before
p.workspaceConfigurationOnce.Do(func() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	configuration, err := p.workspaceConfigurationResolver(ctx)
	if err != nil {
		return
	}
	p.workspaceConfiguration = configuration
})
```

```go
// after
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
configuration, err := p.workspaceConfigurationResolver(ctx)
if err != nil {
	return p.workspaceConfiguration
}
p.workspaceConfiguration = configuration
```
```go
	configsByName := make(map[string][]configuredSystemDestinationResponse)
	seenDestinations := make(map[string]struct{})

	for i := range rows {
		name, ok := notification.NormalizeNotificationType(rows[i].NotificationType)
		if !ok {
			err := fmt.Errorf("unsupported notification type %q", rows[i].NotificationType)
			h.sugar.Errorw("Invalid configured system notification type", "error", err, "notificationType", rows[i].NotificationType)
			return ctx.JSON(http.StatusInternalServerError, api.NewError(err))
		}

		destination, err := h.destinationResponseForRecord(rows[i])
		if err != nil {
			h.sugar.Errorw(
				"Invalid configured system notification destination",
				"error", err,
				"notificationType", name,
				"provider", rows[i].Provider,
			)
			return ctx.JSON(http.StatusInternalServerError, api.NewError(err))
		}

		destinationKey := name + ":" + destination.ProviderType + ":" + destination.DestinationTarget
		if _, exists := seenDestinations[destinationKey]; exists {
			continue
		}
		seenDestinations[destinationKey] = struct{}{}
```
`ListSystemNotifications` deduplicates destinations using an exact-string key (name + provider + destinationTarget). Because destination targets are compared case-insensitively elsewhere (see `targetsMatch` using `strings.EqualFold`), this list endpoint can still return duplicate entries when targets differ only by case/whitespace normalization. Consider normalizing the dedupe key (e.g., `strings.ToLower(destination.DestinationTarget)`) or reusing `targetsMatch`/`destinationResponseForTarget` normalization semantics for deduping.
```go
func (s *Service) GetConfigurationForToken(ctx context.Context, token string) (WorkspaceConfiguration, error) {
	if !s.config.Enabled {
		return WorkspaceConfiguration{}, fmt.Errorf("slack service is not enabled")
	}

	trimmedToken := strings.TrimSpace(token)
	if trimmedToken == "" {
		return WorkspaceConfiguration{}, fmt.Errorf("slack token is required")
	}
```
`GetConfigurationForToken` dereferences `s.config` without checking for nil (it only checks `s.config` in `GetConfiguration`). If a caller invokes `GetConfigurationForToken` on a `Service` with nil config, this will panic. Add a guard at the start similar to `SendMessage`/`GetConfiguration` (e.g., return a "not configured" error when `s == nil || s.config == nil`).
Pull request overview
Copilot reviewed 37 out of 37 changed files in this pull request and generated 4 comments.
```go
		return slackEnabledFromConfig(cfg)
	}),
	WithWorkspaceConfigurationResolver(func(ctx context.Context) (slacksvc.WorkspaceConfiguration, error) {
		if cfg == nil || cfg.Slack == nil || strings.TrimSpace(cfg.Slack.Token) == "" {
```
`NewCatalogProvider`'s workspace configuration resolver only checks for a non-empty token; when Slack is configured but disabled, `service.GetConfiguration` will always return "slack service is not enabled" and `workspaceConfigurationDetails()` will retry on every `ProviderMetadata()` call. Consider short-circuiting here when Slack is disabled (return empty configuration with nil error) so metadata lookups don't repeatedly do work and repeatedly hit the error path.
Suggested change:

```go
// before
if cfg == nil || cfg.Slack == nil || strings.TrimSpace(cfg.Slack.Token) == "" {
```

```go
// after
if cfg == nil || cfg.Slack == nil || strings.TrimSpace(cfg.Slack.Token) == "" || !slackEnabledFromConfig(cfg) {
```
```go
	var record relational.SystemNotificationDestination
	result := r.db.WithContext(ctx).
		Where("notification_type = ? AND provider = ?", lookup.NotificationType, lookup.Provider).
		Limit(1).
		Find(&record)
	if result.Error != nil {
		return ConfiguredDestination{}, fmt.Errorf("failed to fetch configured destination %q: %w", trimmedKey, result.Error)
	}
	if result.RowsAffected == 0 {
		return ConfiguredDestination{}, fmt.Errorf("%w: %q", ErrConfiguredDestinationNotFound, key)
	}
```
The resolver query uses `Limit(1)` without an `ORDER BY`, so if multiple rows exist for the same `(notification_type, provider)` the selected destination is nondeterministic. Consider adding an explicit order (e.g., newest `created_at`/`updated_at`) or enforcing uniqueness/deduplication so resolution is stable and predictable.
Suggested change:

```go
// before
var record relational.SystemNotificationDestination
result := r.db.WithContext(ctx).
	Where("notification_type = ? AND provider = ?", lookup.NotificationType, lookup.Provider).
	Limit(1).
	Find(&record)
if result.Error != nil {
	return ConfiguredDestination{}, fmt.Errorf("failed to fetch configured destination %q: %w", trimmedKey, result.Error)
}
if result.RowsAffected == 0 {
	return ConfiguredDestination{}, fmt.Errorf("%w: %q", ErrConfiguredDestinationNotFound, key)
}
```

```go
// after
var records []relational.SystemNotificationDestination
result := r.db.WithContext(ctx).
	Where("notification_type = ? AND provider = ?", lookup.NotificationType, lookup.Provider).
	Limit(2).
	Find(&records)
if result.Error != nil {
	return ConfiguredDestination{}, fmt.Errorf("failed to fetch configured destination %q: %w", trimmedKey, result.Error)
}
if result.RowsAffected == 0 {
	return ConfiguredDestination{}, fmt.Errorf("%w: %q", ErrConfiguredDestinationNotFound, key)
}
if len(records) > 1 {
	return ConfiguredDestination{}, fmt.Errorf("configured destination %q is ambiguous: multiple destinations found for notification_type=%q provider=%q", trimmedKey, lookup.NotificationType, lookup.Provider)
}
record := records[0]
```
Pull request overview
Copilot reviewed 37 out of 37 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (1)
internal/service/slack/service.go:45
`NewService` initializes the cached Slack client with `cfg.Token` without trimming whitespace. Because `clientForToken` compares trimmed tokens and will return the cached client when they match, a config token like `" xoxb... "` will result in using a client created with the untrimmed token, which can cause Slack API calls (`AuthTest`/`SendMessage`) to fail unexpectedly. Consider trimming the token before creating/caching the client (and/or storing the trimmed token alongside the cached client).
No description provided.