---
layout: default
title: Getting Started
nav_order: 2
description: Create and run your first ColdBrew service in 5 minutes with cookiecutter or manual setup
permalink: /getting-started
---
# Getting Started
{: .no_toc }

## Table of contents
{: .no_toc .text-delta }

1. TOC
{:toc}
Before you begin, install:

- Go 1.25+ — `go version` should show 1.25 or later
- cookiecutter — `brew install cookiecutter` or `pip install cookiecutter`
- buf — for protobuf code generation
- Docker Compose — for the local dev stack (Step 7). Works with Docker Desktop, Colima, or any OCI runtime
```shell
cookiecutter gh:go-coldbrew/cookiecutter-coldbrew
```

Answer the prompts:

```text
source_path [github.com/ankurs]: github.com/yourname
name [MyApp]: EchoServer
grpc_package [com.github.ankurs]: com.github.yourname
service_name [MySvc]: EchoSvc
project_short_description [EchoServer is a Golang project.]:
goprivate []:
docker_image [alpine:latest]:
docker_build_image [golang]:
Select docker_build_image_version:
1 - 1.26
2 - 1.25
Choose from 1, 2 [1]: 1
include_docker_compose [y/n] (y):
local_services (postgres,mysql,...,adminer) [postgres,redis]:
```
{: .note }
The exact Go image versions listed in this menu may vary depending on the cookiecutter template version you are using. Follow the options shown when you run cookiecutter.
```shell
cd EchoServer/
```

Here's what was generated:

```text
EchoServer/
├── main.go # Entry point — wires ColdBrew framework
├── config/
│ └── config.go # Configuration via environment variables
├── service/
│ ├── service.go # Your business logic goes here
│ ├── service_test.go # Tests and benchmarks
│ ├── healthcheck.go # Kubernetes liveness/readiness probes
│ ├── healthcheck_test.go
│ └── metrics/ # Application metrics (counter, histogram)
│ ├── types.go # Metrics interface (mockable)
│ ├── metrics.go # Prometheus implementation (promauto)
│ ├── labels.go # Label constants
│ └── metrics_test.go
├── proto/
│ └── echoserver.proto # API definition (source of truth)
├── version/
│ └── version.go # Build-time version info
├── deploy/local/ # Local dev infrastructure
│ ├── prometheus.yml # Prometheus scrape config
│ └── grafana/ # Grafana provisioning + dashboard
├── misc/loadtest/
│ └── echo.json # ghz gRPC load test config
├── third_party/OpenAPI/ # Swagger UI assets (embedded)
├── .github/workflows/
│ └── go.yml # GitHub Actions CI pipeline
├── .gitlab-ci.yml # GitLab CI pipeline
├── docker-compose.local.yml # Local dev stack (per-service profiles: postgres, redis, kafka, obs, etc.)
├── Makefile # Build, test, lint, run, Docker, local-stack targets
├── Dockerfile # Multi-stage production build
├── .golangci.yml # Linter configuration
├── .mockery.yaml # Mock generation config
├── buf.yaml # Protobuf linting config
├── buf.gen.yaml # Code generation config
└── local.env.example # Environment variable template
```
Key insight: Your API is defined in `proto/echoserver.proto`. ColdBrew generates both gRPC handlers and REST endpoints from this single source.
```shell
make run
```

This compiles and starts your service. You should see log output indicating:

- gRPC server listening on `:9090`
- HTTP gateway listening on `:9091`
Open a new terminal and test each endpoint:
```shell
$ curl -s localhost:9091/healthcheck
{"git_commit":"f470560c0a361839763c2abdac8a01b495bfd908","version":"0.1.0","build_date":"2026-03-24-09:44:05","go_version":"go1.26.1","os_arch":"darwin arm64","app":"myapp","branch":"main"}
```

The healthcheck returns build and version information as JSON — useful for quickly identifying which version of your service is running in any environment.
```shell
$ curl -s localhost:9091/readycheck
{"git_commit":"f470560c0a361839763c2abdac8a01b495bfd908","version":"0.1.0","build_date":"2026-03-24-09:44:05","go_version":"go1.26.1","os_arch":"darwin arm64","app":"myapp","branch":"main"}
```

Returns the same version JSON when the service is ready to receive traffic. Returns an error if the service hasn't called `SetReady()` yet.
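These two endpoints map directly onto Kubernetes probes. An illustrative fragment using the default HTTP port — the Production Deployment guide covers complete manifests:

```yaml
# Illustrative probe config for a pod spec (port from the default HTTP_PORT)
livenessProbe:
  httpGet:
    path: /healthcheck
    port: 9091
readinessProbe:
  httpGet:
    path: /readycheck
    port: 9091
```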
```shell
curl -s -X POST localhost:9091/api/v1/example/echo \
  -H "Content-Type: application/json" \
  -d '{"msg": "hello coldbrew"}'
```

Expected: `{"msg":"hello coldbrew"}`
```shell
curl -s localhost:9091/metrics | head -20
```

You should see Prometheus metrics including `grpc_server_handled_total`, `grpc_server_handling_seconds`, and more.
Open http://localhost:9091/swagger/ in your browser. You'll see interactive API documentation with all your endpoints.
```shell
curl -s localhost:9091/debug/pprof/ | head -5
```

Go's built-in profiler is available for debugging performance issues.
If you have grpcurl installed:
```shell
grpcurl -plaintext localhost:9090 list
```

This lists all registered gRPC services.
The entry point (`main.go`) creates a ColdBrew instance and wires everything together:
```go
cb := core.New(cfg)        // Create ColdBrew with config
cb.SetOpenAPIHandler(...)  // Enable Swagger UI
cb.SetService(&cbSvc{})    // Register your service
cb.Run()                   // Start (blocks until shutdown signal)
```

Your service struct implements `core.CBService` with three methods:

- `InitGRPC()` — Registers gRPC handlers
- `InitHTTP()` — Registers HTTP gateway handlers (auto-generated from proto)
- `Stop()` — Cleanup on graceful shutdown
`service/service.go` is where you write your application logic. The template includes demo Echo and Error endpoints:
```go
func (s *SvcNameImpl) Echo(ctx context.Context, req *pb.EchoRequest) (*pb.EchoResponse, error) {
	return &pb.EchoResponse{Msg: req.GetMsg()}, nil
}
```

Your API is defined as a protobuf service. Each RPC method has an HTTP annotation that creates a REST endpoint automatically:
```protobuf
rpc Echo(EchoRequest) returns (EchoResponse) {
  option (google.api.http) = {
    post: "/api/v1/example/echo"
    body: "*"
  };
}
```

Let's add a Greet endpoint to your service.
Add to `proto/echoserver.proto`:
```protobuf
import "buf/validate/validate.proto";

message GreetRequest {
  string name = 1 [(buf.validate.field).string.min_len = 1];
}

message GreetResponse {
  string greeting = 1;
}
```

The `min_len = 1` annotation ensures the name is not empty. ColdBrew validates this automatically on both gRPC and HTTP requests — sending an empty name returns `InvalidArgument`.
Add the RPC method to your service block:
```protobuf
rpc Greet(GreetRequest) returns (GreetResponse) {
  option (google.api.http) = {
    get: "/api/v1/greet/{name}"
  };
}
```

```shell
make generate
```

This runs `buf generate` and creates the Go code for your new message types and service interface.
{: .note }
After regenerating, the Go compiler will report an error until you implement the new Greet method — this is by design. Your proto file is the contract, and the compiler enforces it. You can't forget an endpoint or deploy a half-implemented API.
Add to `service/service.go`:

```go
func (s *SvcNameImpl) Greet(ctx context.Context, req *pb.GreetRequest) (*pb.GreetResponse, error) {
	return &pb.GreetResponse{
		Greeting: "Hello, " + req.GetName() + "!",
	}, nil
}
```

```shell
make run
```

In another terminal:
```shell
# REST endpoint (auto-generated from proto)
curl -s localhost:9091/api/v1/greet/World
# => {"greeting":"Hello, World!"}
```

You defined the API once in protobuf and got both gRPC and REST for free.
During project generation, you chose which services to include (default: `postgres,redis`). Start them with docker-compose, then run your app locally:

```shell
# Start your selected services (default profiles from generation)
make local-stack

# Add observability (Prometheus, Grafana, Jaeger)
make local-stack-obs

# Override profiles for a specific run
make local-stack PROFILES="postgres kafka"

# Run the app (fast native build, no Docker)
make run
```

Available profiles: `postgres`, `mysql`, `cockroachdb`, `mongodb`, `redis`, `valkey`, `memcached`, `kafka`, `nats`, `elasticsearch`, `ministack`, `dynamodb`, `spanner`, `pubsub`, `bigtable`, `firestore`, `alloydb`, `adminer`, `obs`
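These profiles use Docker Compose's native `profiles` key: each service in `docker-compose.local.yml` is tagged so it only starts when its profile is requested. An illustrative fragment — service definitions and image tags here are examples, not the template's exact file:

```yaml
# Illustrative shape of per-service profiles in docker-compose.local.yml
services:
  postgres:
    image: postgres:16        # image tag illustrative
    profiles: ["postgres"]
    ports:
      - "5432:5432"
  redis:
    image: redis:7            # image tag illustrative
    profiles: ["redis"]
    ports:
      - "6379:6379"
```

With this shape, `docker compose --profile postgres up` starts only the postgres service, which is what the `PROFILES` variable above selects.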
The cookiecutter template generates `local.env` from `local.env.example` automatically (with `OTLP_ENDPOINT=localhost:4317` pre-configured). With the `obs` profile, you get a pre-built Grafana dashboard showing request rate, error rate, latency percentiles, and Go runtime metrics. Traces flow to Jaeger automatically.
```shell
make loadtest                                        # Run a 10s gRPC load test to generate traffic
make local-exec SVC=postgres CMD="psql -U postgres"  # Exec into any service
```

Open http://localhost:3000 (Grafana, admin/admin) and http://localhost:16686 (Jaeger) to see metrics and traces in real time.
{: .note }
The local stack is infra-only — your app runs natively via `make run` for fast iteration. Use `make local-stack-down` to stop everything. See the Local Development How-To for all available profiles, connection strings, Grafana dashboard customization, and troubleshooting.
```shell
# Build the Docker image
make build-docker

# Run the container
make run-docker
```

The Dockerfile uses a multi-stage build: it compiles a static Go binary in the builder stage, then copies it into a minimal Alpine image. Ports 9090 (gRPC) and 9091 (HTTP) are exposed.
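A simplified sketch of that multi-stage shape — the generated Dockerfile differs in detail, and the base images come from your cookiecutter answers (`golang` builder, `alpine:latest` runtime by default):

```dockerfile
# Build stage: compile a static binary (CGO disabled so it runs on Alpine)
FROM golang:1.25 AS builder
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Runtime stage: minimal Alpine image with just the binary
FROM alpine:latest
COPY --from=builder /bin/app /app
EXPOSE 9090 9091
ENTRYPOINT ["/app"]
```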
```shell
make test  # Tests with race detector + coverage
make lint  # golangci-lint + govulncheck
make mock  # Generate mocks for interfaces (via mockery)
```

Both `make test` and `make lint` should pass out of the box. See the Testing How-To for details on mocks, benchmarks, and coverage reports.
Your project includes ready-to-use CI pipelines for both GitHub and GitLab. Delete whichever you don't need.
GitHub Actions (`.github/workflows/go.yml`) runs on push to main/master and on pull requests. Four parallel jobs:
| Job | What it does |
|---|---|
| build | Compiles with make build |
| test | Runs make test (race detector + coverage) |
| benchmark | Runs make bench |
| lint | Runs govulncheck + golangci-lint v2 |
Each job has concurrency control so duplicate runs on the same branch are cancelled automatically.
GitLab CI (`.gitlab-ci.yml`) defines three jobs in a single `test` stage:
| Job | What it does |
|---|---|
| unit-test | Runs make test, generates Cobertura coverage report |
| lint | Runs make lint (golangci-lint + govulncheck) |
| benchmark | Runs make bench |
Go module caching is enabled for faster builds.
ColdBrew uses environment variables for configuration. Common settings:
| Variable | Default | Description |
|---|---|---|
| `GRPC_PORT` | `9090` | gRPC server port |
| `HTTP_PORT` | `9091` | HTTP gateway port |
| `LOG_LEVEL` | `info` | Log level (debug, info, warn, error) |
| `JSON_LOGS` | `true` | JSON formatted logs |
| `ENVIRONMENT` | `""` | Environment name |
| `TRACE_HEADER_NAME` | `x-trace-id` | Header name for trace propagation |
| `NEW_RELIC_APPNAME` | `""` | New Relic application name |
| `NEW_RELIC_LICENSE_KEY` | `""` | New Relic license key |
| `SENTRY_DSN` | `""` | Sentry DSN for error tracking |
See the Configuration Reference for the complete list of 40+ environment variables including gRPC keepalive, TLS, OpenTelemetry OTLP, Prometheus histogram buckets, and graceful shutdown tuning.
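To override these locally, set them in `local.env` (the template generates one from `local.env.example`). An illustrative fragment — the values are examples, not requirements:

```shell
# local.env — example local overrides (variables from the table above)
GRPC_PORT=9090
HTTP_PORT=9091
LOG_LEVEL=debug   # more verbose than the default "info" while developing
JSON_LOGS=false   # human-readable logs instead of JSON
ENVIRONMENT=local
```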
ColdBrew comes with a comprehensive set of interceptors pre-configured. To add custom interceptors:
```go
import "github.com/go-coldbrew/interceptors"

func init() {
	interceptors.AddUnaryServerInterceptor(myCustomInterceptor)
}
```

{: .warning }
Interceptor configuration functions must be called during `init()` — they are not safe for concurrent use.
If your service depends on private Go modules, `GOPRIVATE` is pre-configured in the Makefile, Dockerfile, and CI workflows from the `goprivate` value you set during project creation. You just need to set up authentication for your platform. See the Private Modules guide for GitHub, GitLab, Docker, and CI setup.
Everything below was set up automatically by ColdBrew:
- Structured JSON logging with trace ID propagation
- Distributed tracing support (OpenTelemetry, Jaeger, New Relic)
- Prometheus metrics for every gRPC method (latency, error rate, in-flight)
- gRPC interceptors for logging, tracing, metrics, error notification, panic recovery
- Request validation via protovalidate — define rules in proto, enforced automatically on both gRPC and HTTP requests
- Health checks for Kubernetes liveness/readiness probes
- Graceful shutdown on SIGTERM/SIGINT (Kubernetes pod termination)
- pprof profiling endpoints for debugging
- Swagger UI for interactive API exploration
- Race-detected tests via `make test`
- Vulnerability scanning via `make lint` (includes govulncheck)
- CI/CD pipelines for GitHub Actions and GitLab CI (build, test, lint, benchmark)
- Local dev stack — docker-compose with 20+ services (databases, caches, message brokers, AWS/GCP emulators, observability) selectable via per-service profiles
- Application metrics pattern — interface-based `service/metrics/` package with counter and histogram examples
- Load testing — ghz gRPC load test config with `make loadtest`
If you prefer to set up a project manually without cookiecutter, here's the minimal path:
```shell
mkdir myservice && cd myservice
go mod init github.com/yourname/myservice
go get github.com/go-coldbrew/core
```

Create `proto/myservice.proto`:
```protobuf
syntax = "proto3";

package myservice;

option go_package = "github.com/yourname/myservice/proto";

import "google/api/annotations.proto";

service MyService {
  rpc Echo(EchoRequest) returns (EchoResponse) {
    option (google.api.http) = {
      post: "/api/v1/echo"
      body: "*"
    };
  }
}

message EchoRequest {
  string msg = 1;
}

message EchoResponse {
  string msg = 1;
}
```

Create `buf.yaml`:
```yaml
version: v2
modules:
  - path: proto
deps:
  - buf.build/googleapis/googleapis
```

Create `buf.gen.yaml`:
```yaml
version: v2
plugins:
  - remote: buf.build/protocolbuffers/go
    out: proto
    opt: paths=source_relative
  - remote: buf.build/grpc/go
    out: proto
    opt: paths=source_relative
  - remote: buf.build/grpc-ecosystem/gateway
    out: proto
    opt: paths=source_relative
```

Then generate:
```shell
buf dep update
buf generate
```

Create `main.go`:

```go
package main

import (
	"context"

	"github.com/go-coldbrew/core"
	"github.com/go-coldbrew/core/config"
	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"google.golang.org/grpc"

	pb "github.com/yourname/myservice/proto"
)

type myService struct {
	pb.UnimplementedMyServiceServer
}

func (s *myService) Echo(ctx context.Context, req *pb.EchoRequest) (*pb.EchoResponse, error) {
	return &pb.EchoResponse{Msg: req.GetMsg()}, nil
}

func (s *myService) InitGRPC(ctx context.Context, server *grpc.Server) error {
	pb.RegisterMyServiceServer(server, s)
	return nil
}

func (s *myService) InitHTTP(ctx context.Context, mux *runtime.ServeMux, endpoint string, opts []grpc.DialOption) error {
	return pb.RegisterMyServiceHandlerFromEndpoint(ctx, mux, endpoint, opts)
}

func main() {
	cfg := config.GetColdBrewConfig()
	cb := core.New(cfg)
	cb.SetService(&myService{})
	cb.Run()
}
```

```shell
go mod tidy
go run .
```

Your service starts on `:9090` (gRPC) and `:9091` (HTTP) with metrics, health checks, and profiling endpoints — all wired automatically.
- How-To Guides — Tracing, logging, metrics, error handling, and more
- Production Deployment — Kubernetes manifests, health probes, tracing, and graceful shutdown
- Integrations — Connect New Relic, Prometheus, Sentry, Jaeger
- FAQ — Common questions and gotchas