DCM is a microservice platform for managing service providers, catalogs, policies, and placement across infrastructure targets. This repo provides the canonical Ansible role for deploying all DCM services as individual quadlet containers with systemd integration — automatic restarts, dependency ordering, journal logging, and lifecycle management. It is the recommended production deployment path for running DCM on a RHEL 9 host without Kubernetes.
All services run as standalone containers on a shared bridge network (`dcm-network`). Each container is an independent systemd unit with explicit dependency ordering.

Container names match the upstream `compose.yaml` exactly (no prefix). Systemd unit names use a `dcm-` prefix. Services resolve each other by container name via Podman DNS.

Configuration files (Traefik routes, PostgreSQL init SQL) are sourced from the api-gateway repository, which is cloned at deploy time. This keeps api-gateway as the single source of truth.
## Prerequisites

- RHEL 9 or Fedora target host with Podman 4.4+ (quadlet support)
- Network access to pull container images from `docker.io` and `quay.io`
- A dedicated host (container names such as `postgres` and `nats` assume no collisions)
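The Podman version requirement can be checked up front. A minimal sketch — the `version_ok` helper is illustrative, not part of the role:

```shell
# version_ok MIN ACTUAL — succeeds when ACTUAL >= MIN, using version-aware sort
version_ok() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# On the target host you would compare against the real version, e.g.:
#   version_ok 4.4 "$(podman --version | awk '{print $3}')"
if version_ok 4.4 "4.9.3"; then
  echo "quadlet supported"
else
  echo "podman too old for quadlet" >&2
fi
```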
## Deployment Phases

The `dcm_deploy` role executes in six phases:

1. **Prerequisites** — installs container tools, firewalld, and git; opens the gateway and UI ports; creates config directories
2. **Generate configs** — clones the api-gateway repo, copies the Traefik config and PostgreSQL init SQL, templates the shared environment file, then cleans up the clone
3. **Deploy quadlet files** — places `.container`, `.network`, and `.volume` unit files into `/etc/containers/systemd/` and reloads systemd
4. **Initialize database** — starts PostgreSQL and creates all service databases if they don't exist
5. **Start services** — phased startup: NATS, then all four managers (with health checks), then the gateway, then the UI, then optional providers
6. **Validate** — checks the Traefik `/ping` endpoint, verifies all manager health endpoints through the gateway, and asserts all expected containers are running
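A minimal playbook applying the role might look like the following (the inventory group name is illustrative, not something this repo prescribes):

```yaml
# site.yml — hypothetical playbook applying the dcm_deploy role
- hosts: dcm_hosts
  become: true
  roles:
    - role: dcm_deploy
```

It would be run with something like `ansible-playbook -i inventory site.yml`; all six phases execute in order on the target host.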
| Variable | Default | Description |
| --- | --- | --- |
| `dcm_ui_port` | `7007` | DCM UI port published to host (also used in `APP_BASE_URL`) |
| `dcm_postgres_port` | `5432` | PostgreSQL port published to host |
| `dcm_nats_port` | `4222` | NATS client port published to host |
| `dcm_nats_monitor_port` | `8222` | NATS monitoring port published to host |
The DCM UI's `APP_BASE_URL` defaults to `http://<ansible_host>:<dcm_ui_port>`. If the UI is accessed via a different hostname or domain, or sits behind a reverse proxy, override `ansible_host` in your inventory or set `APP_BASE_URL` directly in a custom vars file.
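For example, a host_vars override for a proxied deployment might look like this (the file path and hostname are illustrative):

```yaml
# host_vars/<target>/dcm.yml — hypothetical override when the UI is served
# through a reverse proxy on its own domain
APP_BASE_URL: https://dcm.example.com
```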
> **Note:** `three-tier-demo` shares `dcm_k8s_container_sp_kubeconfig` — set `dcm_provider_k8s_container: true` alongside this provider.
## Firewall

| Variable | Default | Description |
| --- | --- | --- |
| `dcm_firewall_zone` | `public` | Firewalld zone for published ports |
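If the host's public interfaces live in a different firewalld zone, override the variable like any other role default (the zone name here is illustrative):

```yaml
# group_vars/dcm_hosts.yml — place the published ports in the "internal" zone
dcm_firewall_zone: internal
```

After deployment, `firewall-cmd --zone=internal --list-ports` on the host should show the gateway and UI ports.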
## Verification

After deployment, verify the stack is healthy:

```shell
# Traefik gateway responds
curl http://<host>:9080/ping

# Manager health endpoints (through the gateway)
curl http://<host>:9080/api/v1alpha1/health/providers
curl http://<host>:9080/api/v1alpha1/health/catalog
curl http://<host>:9080/api/v1alpha1/health/policies
curl http://<host>:9080/api/v1alpha1/health/placement

# DCM UI accessible
curl http://<host>:7007

# All systemd units active
systemctl status dcm-*.service

# All containers running
podman ps

# Databases created
podman exec postgres psql -U admin -l
```
## Compose Alignment

A standalone playbook, `verify_compose_alignment.yml`, checks that every service in the upstream `compose.yaml` has a corresponding quadlet template. Run it in CI to detect drift:
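As one sketch, a GitHub Actions job could run the playbook on every pull request (the workflow layout, file paths, and install step are assumptions, not part of this repo):

```yaml
# .github/workflows/compose-alignment.yml — hypothetical CI job
name: compose-alignment
on: [pull_request]
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install Ansible
        run: pip install ansible-core
      - name: Check quadlet templates against compose.yaml
        run: ansible-playbook verify_compose_alignment.yml
```

Because the play only compares files in the repository, it needs no target host and fails the build when a compose service lacks a quadlet template.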