
How to Integrate Chi Router with OpenTelemetry in Go for Production-Ready Observability and Distributed Tracing

Learn how to integrate Chi Router with OpenTelemetry in Go for powerful observability. Build traceable microservices with minimal code changes. Start monitoring today.


Lately, I’ve been thinking a lot about what happens after I deploy my Go services. The code works, the tests pass, but once it’s running in production, it can feel like I’ve sent it into a quiet room and closed the door. How are my APIs performing? Where are the slow parts? When a user reports an issue, where do I even start looking? This need for clarity, for insight into the living system, is what brought me to combine two powerful tools: the Chi router and OpenTelemetry.

If you build web services in Go, you probably know Chi. It’s that reliable, fast router that stays out of your way. OpenTelemetry, on the other hand, is the open-source standard for generating telemetry data—traces, metrics, and logs. By itself, Chi handles requests. By itself, OpenTelemetry creates data. But together, they give you a map of your application’s behavior in real time.

The magic happens through middleware. Instead of manually adding instrumentation to every single route handler—a tedious and error-prone task—you wrap your Chi router with OpenTelemetry’s HTTP instrumentation. This automatically creates spans for incoming requests. A span is simply a record of an operation, with a start time, an end time, and useful metadata. Think of it as a breadcrumb for every request that comes through your door.

Setting this up is straightforward. First, you initialize a tracer provider, which is the engine that creates and manages your traces. You’ll typically configure where the traces should be sent, like to a Jaeger or Zipkin backend.

package main

import (
    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/exporters/jaeger"
    "go.opentelemetry.io/otel/sdk/resource"
    "go.opentelemetry.io/otel/sdk/trace"
    semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
)

func initTracer() *trace.TracerProvider {
    // Send spans to a local Jaeger collector. (Newer OpenTelemetry
    // releases favor the OTLP exporter; this Jaeger exporter matches
    // older SDK versions.)
    exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint("http://localhost:14268/api/traces")))
    if err != nil {
        panic(err)
    }
    tp := trace.NewTracerProvider(
        trace.WithBatcher(exp),
        trace.WithResource(resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceNameKey.String("my-chi-service"),
        )),
    )
    otel.SetTracerProvider(tp)
    return tp
}

Next, you integrate it with Chi using the otelchi middleware (github.com/riandyrn/otelchi). This middleware attaches the tracing context to the request and starts a new span automatically.

package main

import (
    "context"
    "net/http"

    "github.com/go-chi/chi/v5"
    "github.com/go-chi/chi/v5/middleware"
    "github.com/riandyrn/otelchi"
)

func main() {
    tp := initTracer()
    defer tp.Shutdown(context.Background())

    r := chi.NewRouter()
    // Register the OpenTelemetry middleware before other middleware
    // so every request gets a span first. WithChiRoutes lets the
    // middleware name spans after the route pattern (/users/{id}).
    r.Use(otelchi.Middleware("my-service", otelchi.WithChiRoutes(r)))
    r.Use(middleware.Logger)

    r.Get("/users/{id}", func(w http.ResponseWriter, r *http.Request) {
        // Your handler logic here.
        // The span for this request is already active in r.Context().
        userID := chi.URLParam(r, "id")
        w.Write([]byte("Fetching user: " + userID))
    })

    http.ListenAndServe(":8080", r)
}

Just like that, every request to /users/123 is now traced. But what about calls this handler makes to a database or another service? This is where context becomes crucial. The trace context is stored in the request’s context.Context. You pass this context to any downstream operation, and OpenTelemetry can link those child spans to the initial request span, creating a complete story.

This is the powerful part. You aren’t just timing your HTTP handler; you’re creating a connected chain of events. When a single user request triggers a database query and a call to a payment service, you can see the entire flow in your tracing backend. Suddenly, finding a bottleneck isn’t about guessing—it’s about looking at the span that took 2 seconds and seeing it was a particular SQL query.

Does this add overhead? The middleware is designed to be lightweight. The sampling and batching of spans for export are highly configurable, letting you balance detail with performance. For most applications, the value of the insight far outweighs the minimal cost.

The result is code that remains clean and idiomatic. Your business logic isn’t cluttered with observability boilerplate. You use standard middleware patterns, and you get a professional-grade observability setup. It respects the Go ethos: simple, composable, and effective. You can start small, on a single service, and expand as your system grows.

For anyone building services that need to be understood, not just run, this combination is a practical necessity. It turns the quiet room of production into a well-instrumented stage, where you can see the performance of every part of your application. You stop wondering what’s happening and start knowing.

I hope this walkthrough helps you add this powerful visibility to your own projects. If you’ve tried similar integrations or have questions about specific configurations, share your thoughts in the comments below. Don’t forget to like and share this if you found it useful for your next Go build.

Keywords: Chi Router OpenTelemetry integration, Go HTTP middleware observability, distributed tracing Go applications, OpenTelemetry Go instrumentation, Chi Router middleware setup, microservices monitoring Golang, Jaeger Zipkin integration Go, context propagation distributed systems, lightweight Go web framework observability, API gateway telemetry implementation


