
Build Production-Ready Event-Driven Microservices with Go, NATS JetStream, and OpenTelemetry

Learn to build scalable event-driven microservices with Go, NATS JetStream & OpenTelemetry. Master async messaging, observability & production deployment.


I’ve been thinking a lot about how modern systems handle scale and resilience. Recently, while designing an e-commerce platform that needed real-time inventory updates and instant notifications, I realized traditional request-reply architectures wouldn’t cut it. That’s when event-driven microservices became the clear solution. Today, I’ll show you how I built production-ready services using Go, NATS JetStream, and OpenTelemetry – tools that handle massive scale while keeping things observable.

Let’s start with our event definition. Clear event schemas prevent so many downstream issues. Here’s how I structure events:

type Event struct {
    ID          string                 `json:"id"`
    Type        string                 `json:"type"` // e.g., "order.created"
    AggregateID string                 `json:"aggregate_id"`
    Data        map[string]interface{} `json:"data"`
    Metadata    struct {
        CorrelationID string            `json:"correlation_id"`
        Source        string            `json:"source"`
    } `json:"metadata"`
    Timestamp time.Time `json:"timestamp"`
}
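To keep those fields consistent across services, I wrap construction in a small helper so callers can't forget the ID or timestamp. Here's a minimal sketch — the `NewEvent` helper and the hex ID generation are my own convention, not part of any library; a UUID package works just as well:

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"time"
)

// Event mirrors the struct above.
type Event struct {
	ID          string                 `json:"id"`
	Type        string                 `json:"type"`
	AggregateID string                 `json:"aggregate_id"`
	Data        map[string]interface{} `json:"data"`
	Metadata    struct {
		CorrelationID string `json:"correlation_id"`
		Source        string `json:"source"`
	} `json:"metadata"`
	Timestamp time.Time `json:"timestamp"`
}

// NewEvent fills in the ID and timestamp so every event is stamped the same way.
func NewEvent(eventType, aggregateID, correlationID, source string, data map[string]interface{}) Event {
	b := make([]byte, 16)
	rand.Read(b) // random 128-bit ID

	e := Event{
		ID:          hex.EncodeToString(b),
		Type:        eventType,
		AggregateID: aggregateID,
		Data:        data,
		Timestamp:   time.Now().UTC(),
	}
	e.Metadata.CorrelationID = correlationID
	e.Metadata.Source = source
	return e
}

func main() {
	e := NewEvent("order.created", "order-123", "corr-1", "order-service",
		map[string]interface{}{"items": []string{"sku-1"}})
	fmt.Println(e.Type, e.AggregateID)
}
```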

Notice the correlation_id? That’s our golden thread for tracing distributed workflows. What happens when services process events at different speeds? That’s where JetStream shines. Setting up our publisher:

func NewJetStreamPublisher(nc *nats.Conn) (Publisher, error) {
    js, err := nc.JetStream()
    if err != nil {
        return nil, err
    }
    
    // Ensure the stream exists before we publish to it
    _, err = js.AddStream(&nats.StreamConfig{
        Name:     "EVENTS",
        Subjects: []string{"events.>"},
    })
    if err != nil {
        return nil, err
    }
    
    return &jetStreamPublisher{js: js}, nil
}

func (p *jetStreamPublisher) Publish(ctx context.Context, event Event) error {
    data, err := json.Marshal(event)
    if err != nil {
        return err
    }
    _, err = p.js.Publish(event.Subject(), data)
    return err
}
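`Publish` relies on an `event.Subject()` helper that the snippet doesn't show. My convention is to derive the subject directly from the event type, so it always lands under the `events.>` subject space the stream is configured for. A minimal sketch of that mapping (the method is my own helper, not a library API):

```go
package main

import "fmt"

// Event carries the type used for subject routing, e.g. "order.created".
type Event struct {
	Type string
}

// Subject maps an event type onto the stream's subject space,
// so "order.created" is published to "events.order.created".
func (e Event) Subject() string {
	return "events." + e.Type
}

func main() {
	fmt.Println(Event{Type: "order.created"}.Subject()) // events.order.created
}
```

Deriving the subject from the type keeps routing deterministic: consumers can subscribe to `events.order.>` and receive every order event without coordinating subject names out of band.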

For consumers, we need durability. JetStream’s pull subscribers handle backpressure gracefully:

sub, err := js.PullSubscribe("events.orders.>", "inventory-group")
if err != nil {
    log.Fatal(err)
}

for {
    msgs, err := sub.Fetch(10, nats.MaxWait(5*time.Second)) // batch fetch
    if err != nil {
        continue // nats.ErrTimeout here just means no messages yet
    }
    for _, msg := range msgs {
        var event Event
        if err := json.Unmarshal(msg.Data, &event); err != nil {
            msg.Term() // malformed payload: don't redeliver
            continue
        }
        
        ctx := otel.GetTextMapPropagator().Extract(
            context.Background(),
            propagation.HeaderCarrier(msg.Header),
        )
        
        // Process the event with ctx, then acknowledge
        msg.Ack()
    }
}

See how we’re extracting tracing headers? That links spans across service boundaries. Now, what about failures? I use circuit breakers to prevent cascading issues:

cb := gobreaker.NewCircuitBreaker(gobreaker.Settings{
    Name: "inventory-service",
    ReadyToTrip: func(counts gobreaker.Counts) bool {
        return counts.ConsecutiveFailures > 5
    },
})

_, err := cb.Execute(func() (interface{}, error) {
    return inventory.Reserve(ctx, event)
})

For tracing, OpenTelemetry stitches everything together. Initialization looks like this:

func InitTracing() func() {
    exporter, err := jaeger.New(jaeger.WithCollectorEndpoint(
        jaeger.WithEndpoint("http://jaeger:14268/api/traces"),
    ))
    if err != nil {
        log.Fatalf("failed to create exporter: %v", err)
    }
    
    tp := trace.NewTracerProvider(
        trace.WithBatcher(exporter),
        trace.WithResource(resource.NewWithAttributes(
            semconv.SchemaURL,
            semconv.ServiceNameKey.String("order-service"),
        )),
    )
    
    otel.SetTracerProvider(tp)
    return func() { _ = tp.Shutdown(context.Background()) }
}

In handlers, we create spans that follow our events:

func HandleOrder(ctx context.Context, event Event) error {
    ctx, span := tracer.Start(ctx, "HandleOrder")
    defer span.End()
    
    items, _ := event.Data["items"].([]interface{}) // Data values are untyped JSON
    span.SetAttributes(
        attribute.String("order.id", event.AggregateID),
        attribute.Int("item.count", len(items)),
    )
    
    // Business logic
    return nil
}

Schema evolution is crucial. I version events using subject naming: events.order.v1.created. When introducing changes:

js.AddStream(&nats.StreamConfig{
    Name: "ORDERS",
    Subjects: []string{
        "events.order.v1.>",
        "events.order.v2.>", // New version
    },
})
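During a migration, the same consumer often has to read both versions from that stream. My approach is to split the version token out of the subject, then upconvert v1 payloads before the shared handler runs. A sketch of the parsing step — the subject layout matches the `events.order.v1.created` convention above, but the helper itself is my own:

```go
package main

import (
	"fmt"
	"strings"
)

// subjectVersion extracts the schema version from a subject like
// "events.order.v1.created". It returns the version token and the
// event name, or ok=false if the subject doesn't follow the convention.
func subjectVersion(subject string) (version, name string, ok bool) {
	parts := strings.Split(subject, ".")
	if len(parts) != 4 || !strings.HasPrefix(parts[2], "v") {
		return "", "", false
	}
	return parts[2], parts[3], true
}

func main() {
	v, name, ok := subjectVersion("events.order.v1.created")
	fmt.Println(v, name, ok) // v1 created true
}
```

With the version in hand, the consumer can branch once at the edge (translate v1 into the v2 shape) and keep all business logic on the newest schema.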

For deployment, I run everything in Docker with health checks:

# Order service
FROM golang:1.19
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o order-service ./cmd/order

HEALTHCHECK --interval=10s CMD curl --fail http://localhost:8080/health || exit 1
CMD ["./order-service"]

What separates production-ready from proof-of-concept? Three things:

  1. Message deduplication using event IDs
  2. Dead-letter queues for poison pills
  3. Automated schema compatibility checks
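For point 1, JetStream can deduplicate server-side if you publish with the `Nats-Msg-Id` header set to the event ID (within the stream's duplicate window), but I also keep a consumer-side guard for redeliveries. A minimal in-memory sketch — production would back this with Redis or a database and expire old entries:

```go
package main

import (
	"fmt"
	"sync"
)

// Deduper remembers event IDs it has processed so a redelivered
// message is handled at most once per process.
type Deduper struct {
	mu   sync.Mutex
	seen map[string]struct{}
}

func NewDeduper() *Deduper {
	return &Deduper{seen: make(map[string]struct{})}
}

// Seen reports whether the ID was already recorded, recording it if not.
func (d *Deduper) Seen(id string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if _, ok := d.seen[id]; ok {
		return true
	}
	d.seen[id] = struct{}{}
	return false
}

func main() {
	d := NewDeduper()
	fmt.Println(d.Seen("evt-1")) // false: first delivery, process it
	fmt.Println(d.Seen("evt-1")) // true: redelivery, ack and skip
}
```

The event ID from our schema is the natural dedup key, which is another reason to generate it at construction time rather than leaving it optional.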

I’ve seen systems handle 50K events/sec with this setup. The real magic? When inventory updates and notifications happen without blocking orders.

This approach transformed how we build systems at scale. Give it a try in your next project. Have questions about specific implementation details? Share your thoughts below – I’d love to hear what challenges you’re facing with event-driven systems. If this helped you, consider sharing it with others facing similar architecture decisions.



