
Build Production-Ready Event-Driven Microservices with Go, NATS JetStream, and OpenTelemetry


Have you ever wondered how modern systems handle thousands of transactions without collapsing? I recently faced this challenge while designing an order processing system, leading me to explore event-driven microservices with Go. Today, I’ll share practical insights on building resilient systems using NATS JetStream and OpenTelemetry.

Getting started requires careful foundation work. We initialize our Go module and key dependencies:

go mod init event-driven-microservices
go get github.com/nats-io/nats.go@v1.28.0
go get go.opentelemetry.io/otel@v1.16.0
go get github.com/sony/gobreaker@v0.5.0

Our event definitions establish a shared language between services. Notice how each event contains essential metadata:

// internal/common/events/events.go
package events

import "time"

// EventType names the kind of event, e.g. "order.created".
type EventType string

type BaseEvent struct {
    ID            string    `json:"id"`
    Type          EventType `json:"type"`
    Timestamp     time.Time `json:"timestamp"`
    Source        string    `json:"source"`
    Version       string    `json:"version"`
    CorrelationID string    `json:"correlation_id"`
}

// OrderData (order ID, items, amount, and so on) is defined elsewhere
// in this package.
type OrderCreatedEvent struct {
    BaseEvent
    Data OrderData `json:"data"`
}

Why is the correlation ID crucial? It connects related events across services, which is a lifesaver when debugging distributed transactions.

The NATS JetStream implementation forms our messaging backbone. Here’s our connection handler with built-in resilience:

// pkg/eventbus/nats_eventbus.go
func NewNATSEventBus(natsURL string) (*NATSEventBus, error) {
    opts := []nats.Option{
        nats.ReconnectWait(2 * time.Second),
        nats.MaxReconnects(-1), // keep retrying indefinitely
        nats.DisconnectErrHandler(func(conn *nats.Conn, err error) {
            log.Printf("Disconnected: %v", err)
        }),
        nats.ReconnectHandler(func(conn *nats.Conn) {
            log.Println("Reconnected to NATS")
        }),
    }
    
    conn, err := nats.Connect(natsURL, opts...)
    if err != nil {
        return nil, err
    }
    
    js, err := conn.JetStream()
    if err != nil {
        conn.Close() // don't leak the connection if JetStream setup fails
        return nil, err
    }
    
    return &NATSEventBus{conn: conn, js: js}, nil
}

For the Order Service, we wire OpenTelemetry tracing into our Gin middleware:

// internal/order/api.go
func StartOrderAPI(tracer trace.Tracer) *gin.Engine {
    r := gin.Default()
    r.Use(otelgin.Middleware("order-service"))
    
    r.POST("/orders", func(c *gin.Context) {
        // otelgin has already started a server span; this child span
        // inherits it through the request context.
        ctx, span := tracer.Start(c.Request.Context(), "CreateOrder")
        defer span.End()
        
        // Order processing logic -- pass ctx down so downstream calls
        // (and published events) stay on the same trace.
        _ = ctx
    })
    
    return r
}

What happens when payments fail? Our circuit breaker prevents cascading failures:

// internal/payment/processor.go

// One breaker shared across calls: constructing a new breaker per
// request would reset its failure counts and it would never trip.
var paymentBreaker = gobreaker.NewCircuitBreaker(gobreaker.Settings{
    Name:    "payment-processor",
    Timeout: 30 * time.Second, // how long the breaker stays open
})

func ProcessPayment(order events.OrderData) error {
    _, err := paymentBreaker.Execute(func() (interface{}, error) {
        // Payment processing logic
        return nil, processPayment(order)
    })
    
    return err
}

For dead letter handling, we configure JetStream consumers to manage failures:

// pkg/eventbus/nats_eventbus.go
func (n *NATSEventBus) SetupDeadLetterQueue(stream, subject, durable string) error {
    // After MaxDeliver failed deliveries, JetStream stops redelivering
    // and publishes a MAX_DELIVERIES advisory a DLQ worker can act on.
    _, err := n.js.AddConsumer(stream, &nats.ConsumerConfig{
        Durable:        durable,
        DeliverSubject: subject,
        AckPolicy:      nats.AckExplicitPolicy,
        MaxDeliver:     3,
        BackOff:        []time.Duration{1 * time.Second, 5 * time.Second, 10 * time.Second},
    })
    return err
}

The Docker Compose configuration ties our ecosystem together:

# deployments/docker-compose.yml
services:
  nats:
    image: nats:latest
    command: ["-js"]   # JetStream must be enabled explicitly
    ports:
      - "4222:4222"
      
  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "16686:16686"

  order-service:
    build: ./cmd/order-service
    environment:
      NATS_URL: "nats://nats:4222"

Can you visualize how these components interact during order processing? When a customer places an order:

  1. Order Service publishes OrderCreated event
  2. Inventory Service reserves stock
  3. Payment Service processes payment
  4. Notification Service sends confirmation
  5. Analytics Service records transaction

Each step generates traces visible in Jaeger, while JetStream ensures no event gets lost. If payment keeps failing, the circuit breaker trips to fail fast, and after three failed deliveries JetStream stops redelivering the message so it can be routed to a dead letter queue for investigation.

Building this taught me valuable lessons. Always version events: schema changes are inevitable. Use correlation IDs religiously. Implement backoff strategies in consumers. And never underestimate the power of proper tracing.

What challenges have you faced with microservices? Try this approach with your next project. Share your experiences below - I’d love to hear what works for you. If this helped, consider sharing it with others facing similar architecture decisions.



