This guide covers development workflows for multiple personas in the Kagenti ecosystem. Depending on your role, different sections will be more relevant:
- Agent Developers → Focus on agent development and A2A protocol integration
- Tool Developers → Emphasize MCP tool creation and gateway integration
- Extensions Developers → Custom operators and platform extensions
- MCP Gateway Operators → Protocol routing and Envoy configuration
👥 Review Complete Personas Documentation to identify your primary role.
- Create a fork of kagenti
- Clone your fork (command shown for HTTPS; adjust the URL if you prefer SSH):

  ```shell
  git clone https://github.com/<your-username>/kagenti.git
  cd kagenti
  ```

- Add the upstream repository as a remote (adjust the URL if you prefer SSH):

  ```shell
  git remote add upstream https://github.com/kagenti/kagenti.git
  ```

- Fetch all tags from upstream:

  ```shell
  git fetch upstream --tags
  ```

This project leverages pre-commit to enforce consistency in code style and to run linter and formatter checks before commits.
Installation instructions are available here, or install via `brew install pre-commit` on macOS.
From the project base, install both the pre-commit and commit-msg hooks:
```shell
pre-commit install --hook-type pre-commit --hook-type commit-msg
```

The pre-commit hooks run linters and formatters before each commit. The commit-msg hook automatically rewrites AI `Co-authored-by` trailers to `Assisted-By` so that AI tools don't inflate GitHub contributor stats (see Commit Attribution Policy).
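As an illustration of the trailer rewrite (a minimal sketch only; the real hook's logic may differ), the same transformation can be reproduced with `sed`:

```shell
# Sketch only: rewrite a Co-authored-by trailer to Assisted-By,
# as the commit-msg hook does for AI co-author trailers.
msg='Co-authored-by: Claude <noreply@anthropic.com>'
echo "$msg" | sed 's/^Co-authored-by:/Assisted-By:/'
# prints: Assisted-By: Claude <noreply@anthropic.com>
```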
To run the pre-commit hooks against all files manually:
```shell
pre-commit run --all-files
```

VSCode extensions such as this pre-commit-helper can be configured to run the hooks automatically when files are saved in VSCode.
Work on your local repo cloned from your fork. Create a branch:
```shell
git checkout -b <name-of-your-branch>
```

When you are ready to make your PR, first rebase from upstream (things may have changed while you were working on the PR):

```shell
git checkout main; git fetch upstream; git merge --ff-only upstream/main
git checkout <name-of-your-branch>
git rebase main
```

Resolve any conflicts if needed, then you can make your PR by doing:

```shell
git commit -am "<your commit message>" -s
```

Note that all commits must be signed off to pass DCO checks. It is recommended (but not enforced) to follow best practices for commit messages such as conventional commits.
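For example, a commit subject can be checked against a simplified Conventional Commits pattern before pushing (a sketch only; the spec allows more types, an optional scope, and breaking-change markers):

```shell
# Sketch: validate a commit subject against a simplified
# Conventional Commits pattern. The type list is illustrative, not exhaustive.
subject="feat: add env var import to the agent UI"
if echo "$subject" | grep -Eq '^(feat|fix|docs|chore|refactor|test|ci)(\([^)]+\))?: .+'; then
  echo "conventional"
else
  echo "not conventional"
fi
# prints: conventional
```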
Push the PR:
```shell
git push --set-upstream origin <name-of-your-branch>
```

Open the URL printed by the git push command and complete the PR by entering all the required info. Pay attention to the type-of-PR indicator that goes at the start of the title, a meaningful description of what the PR does, and, where applicable, which issue is being fixed.
Note: this is only enabled for maintainers of the project.

Check out main and make sure it matches main in the upstream repo as follows:

```shell
# if working on a fork and "upstream" is the name of the upstream remote (common convention)
git checkout main; git fetch upstream; git merge --ff-only upstream/main

# if a maintainer using a branch upstream directly (not recommended)
git checkout main; git pull
```

Check existing tags, e.g.:

```shell
git tag
v0.0.1-alpha.1
v0.0.2-alpha.1
...
v0.0.4-alpha.9
```

Create a new tag, e.g.:

```shell
git tag v0.0.4-alpha.10
```

Push the tag upstream:

```shell
git push upstream v0.0.4-alpha.10
```

The Kagenti UI v2 is a modern web application consisting of two components:
- Frontend: React + TypeScript application with PatternFly components
- Backend: FastAPI REST API that interfaces with Kubernetes
- Frontend: Node.js 20+ and npm
- Backend: Python 3.11+ and uv (package manager)
- Access to a Kubernetes cluster with kubeconfig properly configured
- Navigate to the backend directory:

  ```shell
  cd kagenti/backend
  ```

- Create a virtual environment and install dependencies:

  ```shell
  uv venv
  source .venv/bin/activate
  uv pip install -e .
  ```

- Run the development server:

  ```shell
  uvicorn app.main:app --reload --port 8000
  ```

The backend API will be available at http://localhost:8000 with:

- Swagger UI docs: http://localhost:8000/api/docs
- ReDoc: http://localhost:8000/api/redoc
- Navigate to the frontend directory:

  ```shell
  cd kagenti/ui-v2
  ```

- Install dependencies:

  ```shell
  npm install
  ```

- Start the development server:

  ```shell
  npm run dev
  ```

The frontend will be available at http://localhost:3000. It automatically proxies API requests to the backend at http://localhost:8000.
Note: When running locally, you can explore UI features. To connect to agents or tools, you'll need to expose them via HTTPRoutes in your Kubernetes cluster.
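Such an HTTPRoute might look roughly like the following sketch. All names here (the route, the gateway reference, the backend service, and the hostname) are hypothetical placeholders; adjust them to match your cluster's actual Gateway and tool service:

```yaml
# Hypothetical example: expose a tool's Service through a cluster Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: weather-tool-route          # placeholder name
  namespace: default
spec:
  parentRefs:
    - name: kagenti-gateway         # your cluster's Gateway resource (assumption)
      namespace: kagenti-system
  hostnames:
    - weather-tool.localtest.me
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /mcp
      backendRefs:
        - name: weather-tool        # the tool's Service (assumption)
          port: 8080
```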
The project Makefile provides convenient targets for building and loading images into your Kind cluster for testing.
```shell
make build-load-ui
```

This command will:

- Build both frontend and backend Docker images with auto-generated tags
- Load them into your Kind cluster (default: `kagenti`)
- Display the Helm upgrade command to deploy your images

Build only the frontend:

```shell
make build-load-ui-frontend
```

Build only the backend:

```shell
make build-load-ui-backend
```

Override default values:

```shell
make build-load-ui UI_FRONTEND_TAG=my-feature UI_BACKEND_TAG=my-feature KIND_CLUSTER_NAME=my-cluster
```

After building and loading your images, update your Kagenti installation with the new image tags:
```shell
helm upgrade --install kagenti charts/kagenti \
  --namespace kagenti-system \
  --set openshift=false \
  --set ui.frontend.image=ghcr.io/kagenti/kagenti-ui-v2 \
  --set ui.frontend.tag=<your-frontend-tag> \
  --set ui.backend.image=ghcr.io/kagenti/kagenti-backend \
  --set ui.backend.tag=<your-backend-tag> \
  -f <your-values-file>
```

Tip: The make build-load-ui command displays the exact Helm command with your generated tags. Copy and paste it from the output.
Once the upgrade completes, access the UI at http://kagenti-ui.localtest.me:8080.
- Make changes to frontend or backend code
- Run `make build-load-ui` to build and load both images
- Copy the displayed Helm upgrade command and run it
- Wait for pods to restart with new images
- Test your changes at http://kagenti-ui.localtest.me:8080
The Kagenti UI supports importing environment variables from local .env files or remote URLs when creating agents. This feature simplifies agent configuration by allowing reuse of standardized environment variable definitions.
Standard .env Format:
```shell
MCP_URL=http://weather-tool:8080/mcp
LLM_MODEL=llama3.2
PORT=8000
```

Extended Format with Kubernetes References:
When referencing values from Kubernetes Secrets or ConfigMaps, use JSON format enclosed in single quotes:
```shell
# Standard direct values
PORT=8000
MCP_URL=http://weather-tool:8080/mcp
LOG_LEVEL=INFO

# Secret reference - JSON format in single quotes
OPENAI_API_KEY='{"valueFrom": {"secretKeyRef": {"name": "openai-secret", "key": "apikey"}}}'

# ConfigMap reference - JSON format in single quotes
APP_CONFIG='{"valueFrom": {"configMapKeyRef": {"name": "app-settings", "key": "config.json"}}}'
```

Format Requirements for JSON References:

- Entire JSON must be in single quotes (`'...'`)
- Use double quotes for JSON keys and values
- No spaces around the `=` sign
- Valid JSON structure: `{"valueFrom": {"secretKeyRef": {"name": "...", "key": "..."}}}`
- Or for ConfigMaps: `{"valueFrom": {"configMapKeyRef": {"name": "...", "key": "..."}}}`
Important:
- The Secret/ConfigMap must exist in the agent's namespace
- The agent needs permission to read the referenced resources
- You can mix standard values and references in the same file
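Before importing, a reference value can be sanity-checked locally, for example with `jq` (the variable value and resource names below are illustrative):

```shell
# Illustrative check: verify that a reference value parses as JSON
# with the expected valueFrom structure. jq exits non-zero if not.
ref='{"valueFrom": {"secretKeyRef": {"name": "openai-secret", "key": "apikey"}}}'
echo "$ref" | jq -er '.valueFrom.secretKeyRef.name'
# prints: openai-secret
```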
- Navigate to the Import New Agent page
- Expand the "Environment Variables" section
- Click "Import from File/URL" button
- Choose import method:
  - Upload File: Drag and drop or browse for a local `.env` file
  - From URL: Enter a URL to a remote `.env` file (e.g., from GitHub)
- Review the parsed variables in the preview
- Click "Import" to add variables to your agent configuration
- Edit or delete variables as needed before creating the agent
When adding or editing environment variables, you can choose from three types:
- Direct Value: Simple key-value pair (e.g., `PORT=8000`)
- Secret: Reference to a Kubernetes Secret (requires secret name and key)
- ConfigMap: Reference to a Kubernetes ConfigMap (requires configMap name and key)
The UI provides conditional form fields based on the selected type, making it easy to configure the appropriate values.
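For reference, these three types correspond to the standard Kubernetes container `env` forms; a sketch with illustrative names:

```yaml
# Illustrative mapping of the three UI types to a container's env entries.
env:
  - name: PORT                 # Direct Value
    value: "8000"
  - name: OPENAI_API_KEY       # Secret reference
    valueFrom:
      secretKeyRef:
        name: openai-secret
        key: apikey
  - name: APP_CONFIG           # ConfigMap reference
    valueFrom:
      configMapKeyRef:
        name: app-settings
        key: config.json
```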
Import environment variables directly from agent example repositories:
https://raw.githubusercontent.com/kagenti/agent-examples/main/a2a/git_issue_agent/.env.openai
https://raw.githubusercontent.com/kagenti/agent-examples/main/a2a/weather_service/.env
- Frontend README: `kagenti/ui-v2/README.md`
- Backend README: `kagenti/backend/README.md`
- Makefile UI targets: Run `make help-ui` for details
- Environment Variables Import Design: `docs/env-import-feature-design.md`
HyperShift enables rapid testing of Kagenti on OpenShift by creating ephemeral hosted clusters (sandboxes) where the control plane runs as pods on a management cluster and workers run in AWS. This approach is faster and more cost-effective than deploying full OpenShift clusters.
- Fast cluster creation: 10-15 minutes vs. 45 minutes for IPI
- Cost efficient: Ephemeral clusters for testing, destroyed after use
- OpenShift validation: Test platform features on real OpenShift (OCP 4.20, 4.21)
- Parallel testing: Multiple developers can create isolated test clusters
- CI/CD integration: Automated E2E testing on OpenShift via GitHub Actions
```
┌─────────────────────────────────────────────┐
│ HyperShift Management Cluster (OpenShift)   │
│ - Runs hosted cluster control planes        │
│ - MCE 2.10 + HyperShift operator            │
│ - Managed by platform team                  │
└─────────────────────────────────────────────┘
        │
        ├─► Hosted Cluster 1 (Your Test Cluster)
        │     - Control plane: Pods on mgmt cluster
        │     - Workers: EC2 instances in AWS
        │
        └─► Hosted Cluster 2 (Another Developer)
              - Isolated namespace and AWS resources
```
Before creating hosted clusters, you need:
- Credentials Package from the platform team containing:
  - AWS credentials (IAM user for cluster creation)
  - Management cluster kubeconfig (base64-encoded)
  - Base domain and OIDC S3 bucket details
- Required Tools:
  - AWS CLI configured
  - kubectl and oc CLI
  - jq for JSON processing
  - Ansible (installed automatically by setup script)
If you received a credential package (.tar.gz file) from the platform team:
```shell
# Extract the package
tar -xzf <package-name>.tar.gz
cd <package-name>

# Source the credentials
source .env.<tag>

# Verify setup
echo $MANAGED_BY_TAG
oc whoami --show-server  # Should show management cluster API
```

The .env file automatically creates the management cluster kubeconfig at `~/.kube/<tag>-mgmt.kubeconfig` if it doesn't exist.
If setting up from scratch, run the credential setup script:
```shell
cd kagenti

# Set your managed-by tag (identifies your resources)
export MANAGED_BY_TAG="your-name-dev"

# Run setup (requires being logged in to AWS and the management cluster)
./.github/scripts/hypershift/setup-hypershift-ci-credentials.sh

# Source the generated credentials
source .env.${MANAGED_BY_TAG}
```

This creates:

- AWS IAM users with scoped permissions
- OpenShift service account on management cluster
- Local `.env.${MANAGED_BY_TAG}` file with all credentials
Once credentials are configured, create a test cluster:
```shell
# Navigate to repository root
cd kagenti

# Create cluster with random suffix
./.github/scripts/hypershift/create-cluster.sh

# Or specify a custom suffix
./.github/scripts/hypershift/create-cluster.sh my-test

# Keep cluster after creation (skip auto-destroy)
./.github/scripts/hypershift/create-cluster.sh my-test --skip-cluster-destroy
```

What happens:

- Creates AWS infrastructure (VPC, subnets, security groups)
- Creates HostedCluster CR on management cluster
- Waits for control plane pods to be ready
- Creates worker NodePool (2x m5.xlarge by default)
- Waits for cluster to become accessible
- Outputs kubeconfig path

Cluster naming:

- Format: `${MANAGED_BY_TAG}-<suffix>`
- Example: `my-team-my-test`
- Must be ≤32 characters (AWS IAM role name limit)
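Since the 32-character limit is easy to trip over with longer tags, a quick local check before creating a cluster might look like this (the tag and suffix values are examples):

```shell
# Example values; substitute your own tag and suffix.
MANAGED_BY_TAG="my-team"
SUFFIX="my-test"
CLUSTER_NAME="${MANAGED_BY_TAG}-${SUFFIX}"
if [ "${#CLUSTER_NAME}" -le 32 ]; then
  echo "ok: ${CLUSTER_NAME} (${#CLUSTER_NAME} chars)"
else
  echo "too long: ${CLUSTER_NAME} (${#CLUSTER_NAME} chars)"
fi
# prints: ok: my-team-my-test (15 chars)
```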
Default configuration:
- OpenShift version: 4.20 (or `OCP_VERSION` env var)
- Workers: 2 nodes (m5.xlarge)
- Region: us-east-1
- Namespace: `${MANAGED_BY_TAG}` on management cluster
After creating a cluster, run the full E2E test suite:
```shell
# Run full test (deploy Kagenti + E2E tests)
./.github/scripts/local-setup/hypershift-full-test.sh <cluster-suffix> --skip-cluster-destroy

# Re-run tests on existing cluster (skip cluster creation)
./.github/scripts/local-setup/hypershift-full-test.sh <cluster-suffix> --skip-cluster-create --skip-cluster-destroy

# Deploy Kagenti only (no tests)
./.github/scripts/hypershift/deploy-kagenti.sh <cluster-suffix>
```

Test workflow:

- Sets up kubeconfig for hosted cluster
- Deploys Kagenti platform via Ansible
- Runs E2E test suite (`kagenti/tests/e2e/`)
- Optionally destroys cluster after tests
Working with kubeconfig:
```shell
# Kubeconfig is saved to standard location
export KUBECONFIG=~/hypershift-clusters/${MANAGED_BY_TAG}-<suffix>/kubeconfig

# Or use the helper to find it
source .env.${MANAGED_BY_TAG}
export KUBECONFIG=$(find ~/hypershift-clusters -name "kubeconfig" | grep ${MANAGED_BY_TAG}-<suffix>)

# Verify access
oc get nodes
oc get co  # Check cluster operators
```

When finished testing, clean up resources:
```shell
# Destroy specific cluster
./.github/scripts/hypershift/destroy-cluster.sh <cluster-suffix>

# Or use full-test script with destroy flag
./.github/scripts/local-setup/hypershift-full-test.sh <cluster-suffix> --include-cluster-destroy
```

What gets deleted:

- HostedCluster CR and control plane pods
- Worker NodePool and EC2 instances
- AWS VPC, subnets, NAT gateways, load balancers
- IAM roles and instance profiles
Important: Cluster destruction can take 10-20 minutes. Monitor progress:
```shell
# Watch HostedCluster deletion
oc get hostedcluster -n ${MANAGED_BY_TAG} -w

# Check for stuck resources
oc get hostedcluster -n ${MANAGED_BY_TAG} -o yaml | grep finalizers -A 5
```

Hosted clusters incur AWS costs while running:
Per cluster (running):
- 2x m5.xlarge workers: $0.38/hour ($277/month if left running)
- VPC/networking: ~$0.10/hour
- EBS volumes: ~$0.10/GB/month
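As a quick sanity check on the $277/month figure above (assuming roughly 730 hours in a month):

```shell
# 2x m5.xlarge at a combined $0.38/hour, left running for ~730 hours/month
awk 'BEGIN { printf "$%.0f/month\n", 0.38 * 730 }'
# prints: $277/month
```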
Best practices:
- Destroy clusters when not in use (evenings, weekends)
- Use `--skip-cluster-destroy` only during active development
- Monitor AWS costs via `aws ce get-cost-and-usage` or the AWS Console
- Tag all resources with `kagenti.io/managed-by` for tracking
```shell
# Check HostedCluster status
oc get hostedcluster -n ${MANAGED_BY_TAG}
oc describe hostedcluster ${MANAGED_BY_TAG}-<suffix> -n ${MANAGED_BY_TAG}

# Check control plane pods
oc get pods -n ${MANAGED_BY_TAG}-<suffix>

# Check HyperShift operator logs
oc logs -n hypershift deployment/operator -f

# Check NodePool status
oc get nodepool -n ${MANAGED_BY_TAG}
oc describe nodepool ${MANAGED_BY_TAG}-<suffix> -n ${MANAGED_BY_TAG}

# Check AWS EC2 instances (requires AWS credentials)
aws ec2 describe-instances --filters "Name=tag:kubernetes.io/cluster/${MANAGED_BY_TAG}-<suffix>,Values=owned"
```

If cluster destruction hangs, manually remove finalizers:

```shell
# Remove finalizers from HostedCluster
oc patch hostedcluster ${MANAGED_BY_TAG}-<suffix> -n ${MANAGED_BY_TAG} \
  --type=merge -p '{"metadata":{"finalizers":[]}}'

# Force delete if needed
oc delete hostedcluster ${MANAGED_BY_TAG}-<suffix> -n ${MANAGED_BY_TAG} --force --grace-period=0
```

See CLEANUP-TEST-RESULTS.md for detailed cleanup findings.
The repository includes GitHub Actions workflows for automated HyperShift testing:
- `.github/workflows/e2e-hypershift.yaml` - OpenShift 4.20 testing
- `.github/workflows/e2e-hypershift-4.21.yaml` - OpenShift 4.21 testing (manual trigger)
Workflow features:
- Automatic cluster creation and cleanup
- Slot-based parallelism (max 3 concurrent clusters)
- Full E2E test suite execution
- Artifact upload for debugging
For platform administrators deploying new management clusters, see:
- Terraform Management Cluster Guide - Complete deployment workflow
- Architecture Overview - Dual cluster strategy and design decisions
- Quick Start Guide - Step-by-step deployment (90 minutes)
Modify cluster configuration by setting environment variables before creation:
```shell
# Use different OpenShift version
export OCP_VERSION=4.21

# Change worker instance type and count
export INSTANCE_TYPE=m5.2xlarge
export WORKER_COUNT=3

# Use different AWS region
export AWS_REGION=us-west-2

# Create cluster with custom settings
./.github/scripts/hypershift/create-cluster.sh custom
```

You can run multiple hosted clusters simultaneously:
```shell
# Create first cluster
./.github/scripts/hypershift/create-cluster.sh dev1 --skip-cluster-destroy

# Create second cluster (different suffix)
./.github/scripts/hypershift/create-cluster.sh dev2 --skip-cluster-destroy

# List all your clusters
oc get hostedcluster -n ${MANAGED_BY_TAG}
```

Each cluster gets isolated AWS resources and namespace on the management cluster.
- HyperShift Documentation - Official HyperShift docs
- MCE Documentation - MultiCluster Engine overview
- Credential Setup Script - Detailed script documentation
- Cleanup Test Results - Known cleanup issues and workarounds