OLS-2882: Add spec files to the projects for AI-assisted development #1536
joshuawilson wants to merge 2 commits into openshift:main from
Conversation
|
@joshuawilson: This pull request references OLS-2882, which is a valid Jira issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
|
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files: Approvers can indicate their approval by writing |
| 3. The operator is fully event-driven. It does not use periodic/timer-based reconciliation. All changes are detected via Kubernetes watches on owned resources and annotated external resources. | ||
| 4. The operator selects between two mutually exclusive backend implementations at startup via the `--use-lcore` flag: AppServer (legacy, direct LLM proxy) or LCore (new, agent-based with Llama Stack). Both implement the same Lightspeed API surface. | ||
|
|
||
| ### Component Inventory |
shouldn't we expand this to match the list of components from https://konflux-ui.apps.stone-prd-rh01.pg1f.p1.openshiftapps.com/ns/crt-nshift-lightspeed-tenant/applications/ols/components?
We can but the spec files are specific to the repo.
Could create a higher level set of specs that cover all repos and konflux.
The spec's Component Inventory currently lists the operands: the things the operator deploys and manages. Konflux components are the build artifacts: the container images that get built in the CI/CD pipeline.
Those are two different concerns. The build/image inventory is useful context, but it belongs somewhere other than the Component Inventory section, which describes runtime behavior. It would fit better as a reference in the how/project-structure.md spec, under something like "Container Images", mapping each logical component to its image name and build source. That mapping ("which Konflux component produces which image that the operator deploys") is a convenience reference, not a behavioral rule.
| 1. The Llama Stack database name is hardcoded by the Llama Stack project and must not be changed. | ||
| 2. Llama Stack Generic mode cannot be mixed with legacy provider-specific fields (deploymentName, projectID, url, apiVersion). | ||
| 3. The Lightspeed Stack always connects to Llama Stack via localhost, even in server mode (they share a pod). | ||
| 4. Vector database IDs are sanitized from RAG image names if indexID is not explicitly provided. |
This whole thing is going to be removed in this sprint
Not exactly fair to hold this up on something that didn't get merged, but I'll remove it since I'm updating it.
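The vector-DB ID sanitization mentioned in rule 4 of the hunk above could look something like the following sketch. The exact sanitization rules are not specified here, so the helper name, the character class, and the tag/digest handling are all illustrative assumptions, not the operator's actual code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// invalidChars matches anything that (we assume) is not valid in a
// vector database index ID.
var invalidChars = regexp.MustCompile(`[^a-zA-Z0-9_]`)

// sanitizeIndexID is a hypothetical illustration of deriving an index ID
// from a RAG image reference when indexID is not explicitly provided:
// drop any tag or digest suffix, then replace remaining invalid
// characters with underscores.
func sanitizeIndexID(imageRef string) string {
	if i := strings.IndexAny(imageRef, "@:"); i >= 0 {
		imageRef = imageRef[:i]
	}
	return invalidChars.ReplaceAllString(imageRef, "_")
}

func main() {
	fmt.Println(sanitizeIndexID("quay.io/org/rag-content:1.0"))
	// → quay_io_org_rag_content
}
```

Note this simple sketch would mishandle a registry host with a port (the `:` would be treated as a tag separator); a real implementation would need proper image-reference parsing.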
| |---|---|---|---| | ||
| | `--use-lcore` | bool | `false` | Select LCore backend instead of AppServer | | ||
| | `--lcore-server` | bool | `true` | LCore server mode (two containers) vs library mode (one container) | | ||
| | `--namespace` | string | `WATCH_NAMESPACE` env or `openshift-lightspeed` | Operator namespace | |
LCore is going away this sprint
| | OLS-2322 | Streamline OLSConfig CR deployment configuration | | ||
| | OLS-2323 | Extend OLSConfig CR to report specific deployment errors | | ||
| | OLS-2325 | Create type-safe log-level definition in the operator CR | | ||
| | OLS-2140 | Remove time-based operator reconciliation (completed -- now fully event-driven) | |
Maybe add a “delivery map” subsection here: one short table or bullet list that maps repo components → Konflux application names (or “see Konflux UI → ols app → components”), with a disclaimer that the operator repo spec describes operator-managed workloads and Konflux may list additional CI/catalog components. That answers JoaoFula’s question without duplicating Konflux in every spec.
That is something you can add to the spec as a separate PR.
|
|
||
| ### Operator Role | ||
|
|
||
| 1. The operator manages exactly one OLSConfig CR per cluster, named "cluster". CRs with any other name must be ignored. |
OLSConfig is treated as a singleton per cluster: the operator only reconciles the cluster-scoped instance named cluster. Any other OLSConfig objects are ignored. Reconciled workloads are created in the openshift-lightspeed namespace.
Do you want that added?
I replaced the line with that text. If you wanted something else, put it in quotes.
| ### Operator Role | ||
|
|
||
| 1. The operator manages exactly one OLSConfig CR per cluster, named "cluster". CRs with any other name must be ignored. | ||
| 2. The operator deploys and manages four components: an application backend (AppServer or LCore), a PostgreSQL database, a Console UI plugin, and operator-level monitoring/networking resources. |
mixing external resources with the operator's own infrastructure
how is this wrong?
What do you think it should say?
resource changes management is different. See below
| 1. The operator manages exactly one OLSConfig CR per cluster, named "cluster". CRs with any other name must be ignored. | ||
| 2. The operator deploys and manages four components: an application backend (AppServer or LCore), a PostgreSQL database, a Console UI plugin, and operator-level monitoring/networking resources. | ||
| 3. The operator is fully event-driven. It does not use periodic/timer-based reconciliation. All changes are detected via Kubernetes watches on owned resources and annotated external resources. | ||
| 4. The operator selects between two mutually exclusive backend implementations at startup via the `--use-lcore` flag: AppServer (legacy, direct LLM proxy) or LCore (new, agent-based with Llama Stack). Both implement the same Lightspeed API surface. |
this is going away in this sprint
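Rule 1 above (only the OLSConfig named "cluster" is reconciled) amounts to a name check, applied e.g. in an event predicate or at the top of Reconcile. A minimal sketch; the helper and constant names are illustrative, not the operator's actual identifiers:

```go
package main

import "fmt"

// singletonName is the only OLSConfig instance name the operator
// reconciles; CRs with any other name are ignored.
const singletonName = "cluster"

// isManagedOLSConfig reports whether a CR with the given name should
// be reconciled under the singleton rule.
func isManagedOLSConfig(name string) bool {
	return name == singletonName
}

func main() {
	for _, n := range []string{"cluster", "extra-config"} {
		fmt.Printf("%s -> reconcile=%v\n", n, isManagedOLSConfig(n))
	}
	// → cluster -> reconcile=true
	// → extra-config -> reconcile=false
}
```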
| 6. Console UI Plugin: OpenShift console extension that provides the Lightspeed chat interface. Integrates via ConsolePlugin CR and proxies requests to the backend. | ||
| 7. AppServer backend: Python/FastAPI application that handles LLM queries, RAG retrieval, conversation management, and tool execution. Talks to LLM providers directly. | ||
| 8. LCore backend: Dual-container deployment (Llama Stack + Lightspeed Stack) that provides the same API but routes through Llama Stack for LLM communication, enabling agent-based tool use and provider abstraction. | ||
| 9. Operator-level resources: ServiceMonitor for operator metrics, NetworkPolicy restricting operator pod access. |
I would suggest separating external from operator-level resources (observability support), and also adding a cross-reference here to the specific docs.
I don't understand what you want here. What is external?
What cross-reference are you looking for?
For change detection, the operator differentiates between owned (created by the operator) and external (provided by the user) resources and performs change detection differently for each.
- For owned resources (described above), change detection is based on the resource version
- For external resources, change detection is implemented using watchers described in external resources
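The owned-vs-external distinction described above could be sketched roughly as follows. The annotation key, struct, and function names are illustrative assumptions; the real operator wires this through its watch setup rather than a single helper:

```go
package main

import "fmt"

// watchAnnotation marks external (user-provided) resources the
// operator should watch; the key is an assumption for illustration.
const watchAnnotation = "ols.openshift.io/watch"

// resourceMeta is a stand-in for the metadata the operator inspects.
type resourceMeta struct {
	OwnedByOperator bool
	ResourceVersion string
	Annotations     map[string]string
}

// needsReconcile applies the two change-detection strategies:
// owned resources are compared by resource version; external
// resources are considered only if they carry the watch annotation.
func needsReconcile(prev, cur resourceMeta) bool {
	if cur.OwnedByOperator {
		return prev.ResourceVersion != cur.ResourceVersion
	}
	_, watched := cur.Annotations[watchAnnotation]
	return watched && prev.ResourceVersion != cur.ResourceVersion
}

func main() {
	changed := needsReconcile(
		resourceMeta{OwnedByOperator: true, ResourceVersion: "1"},
		resourceMeta{OwnedByOperator: true, ResourceVersion: "2"},
	)
	fmt.Println("owned changed:", changed)
	// → owned changed: true
}
```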
| 3. Compare content hashes (proxy CA cert hash) via annotations | ||
| 4. If any differ: update spec + annotations, call RestartX() function | ||
| - RestartX() sets `ols.openshift.io/force-reload` annotation to `time.Now().Format(time.RFC3339Nano)` | ||
| - This triggers a rolling restart by changing the pod template |
Two-layer spec structure under .ai/spec/:
- what/ (10 files): behavioral rules for system-overview, CRD API, reconciliation, app-server, postgres, console-ui, TLS, security, resource-lifecycle, and observability
- how/ (4 files): architecture specs for project-structure, reconciliation, deployment-generation, and config-generation

Includes a comprehensive testing section in project-structure covering unit tests (envtest + Ginkgo) and the E2E test suite (12 test areas, Makefile targets, required environment variables). Specs are optimized for AI agent consumption and document the operator thoroughly enough to enable a from-scratch rewrite.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
I have also expanded the reference to E2E tests. To be clear, the spec is about behavior and not implementation so it should not list the tests. |
|
@joshuawilson: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here. |
|
/lgtm |
Description
Initial set of spec files to enable Agentic SDLC.
Type of change
Related Tickets & Documents
Checklist before requesting a review
Testing