
Build Production-Ready Event-Driven Microservices with Go, NATS JetStream, and OpenTelemetry Tutorial


I’ve been building distributed systems for years, and one truth stands out: without careful design, microservices quickly become tangled and opaque. Recently, I tackled an order-processing system where events needed reliable delivery across services: a perfect case for Go, NATS JetStream, and OpenTelemetry. Why these tools? Go’s efficiency handles high throughput, JetStream ensures no event gets lost, and OpenTelemetry provides the visibility we desperately need in distributed systems. Stick with me to see how these pieces fit together.

Setting up our foundation starts with structure. Our project organizes services clearly:

event-driven-microservices/
├── cmd/
│   ├── order-service/
│   ├── payment-service/
│   └── inventory-service/
├── pkg/events/      # Core messaging logic
├── deployments/     # Docker configs
└── go.mod           # Dependencies

The real magic begins with our event bus. Notice how we embed tracing directly into events:

// pkg/events/event.go

// WithTracing copies the active span's identifiers onto the event so
// downstream consumers can link their spans back to this trace.
func (e *Event) WithTracing(ctx context.Context) {
    span := trace.SpanFromContext(ctx)
    if sc := span.SpanContext(); sc.IsValid() {
        e.TraceID = sc.TraceID().String()
        e.SpanID = sc.SpanID().String()
    }
}

Why does this matter? Because when a payment fails hours later, we can trace its entire journey. Our NATS integration handles persistence elegantly:

// pkg/events/nats_eventbus.go
func (b *NATSEventBus) Publish(ctx context.Context, event *Event) error {
    data, err := json.Marshal(event)
    if err != nil {
        return fmt.Errorf("marshal event: %w", err)
    }
    // MsgId enables JetStream's server-side deduplication window
    _, err = b.js.Publish(event.Type, data, nats.MsgId(event.ID))
    return err
}

Publishing with a message ID lets JetStream deduplicate retried publishes within its deduplication window. That gives us effectively-once delivery rather than true exactly-once, but paired with idempotent consumers it is what financial operations need. And what happens when services disconnect? JetStream’s durable streams retain messages until consumers reconnect.
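The consumer side of that bargain is idempotency: a redelivered message must not be applied twice. Here is a minimal in-memory sketch of the idea; a production system would persist processed IDs or lean on JetStream's deduplication instead of a plain map:

```go
package main

import (
	"fmt"
	"sync"
)

// Deduper tracks processed event IDs so redelivered messages are applied once.
type Deduper struct {
	mu   sync.Mutex
	seen map[string]bool
}

func NewDeduper() *Deduper {
	return &Deduper{seen: make(map[string]bool)}
}

// Process runs handler only the first time eventID is seen.
// It reports whether the handler actually ran.
func (d *Deduper) Process(eventID string, handler func()) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.seen[eventID] {
		return false // duplicate delivery: skip
	}
	d.seen[eventID] = true
	handler()
	return true
}

func main() {
	d := NewDeduper()
	count := 0
	d.Process("evt-1", func() { count++ })
	d.Process("evt-1", func() { count++ }) // redelivery is ignored
	fmt.Println(count)                     // 1
}
```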

For the order service, event sourcing captures every state change:

// internal/order-service/service.go
func (s *OrderService) CreateOrder(ctx context.Context, userID string, items []Item) (*Order, error) {
    order := NewOrder(userID, items)
    event, err := events.NewEvent("OrderCreated", order.ID, 1, order)
    if err != nil {
        return nil, err
    }
    event.WithTracing(ctx) // Attach trace IDs before publishing
    if err := s.eventBus.Publish(ctx, event); err != nil {
        return nil, err
    }
    return order, nil
}
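The payoff of event sourcing is that current state is always derivable by replaying the log. A toy fold over order events shows the mechanic; the event names and fields here are illustrative, not the article's actual types:

```go
package main

import "fmt"

// OrderEvent is a simplified event record for the replay demo.
type OrderEvent struct {
	Type string
	Qty  int
}

// ReplayOrder folds events into the current item count: the core idea of
// event sourcing is that state is derived from the log, never stored directly.
func ReplayOrder(events []OrderEvent) int {
	total := 0
	for _, e := range events {
		switch e.Type {
		case "ItemAdded":
			total += e.Qty
		case "ItemRemoved":
			total -= e.Qty
		}
	}
	return total
}

func main() {
	events := []OrderEvent{
		{"ItemAdded", 3},
		{"ItemAdded", 2},
		{"ItemRemoved", 1},
	}
	fmt.Println(ReplayOrder(events)) // 4
}
```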

Notice how we attach tracing context before publishing. When the payment service processes this:

// internal/payment-service/service.go
func (s *PaymentService) handleOrderCreated(ctx context.Context, event *events.Event) error {
    var order Order
    if err := json.Unmarshal(event.Data, &order); err != nil {
        return fmt.Errorf("decode order: %w", err)
    }

    // Link this span to the producer's trace via the IDs carried in the event
    link := trace.Link{SpanContext: s.extractTraceContext(event)}
    ctx, span := s.tracer.Start(ctx, "ProcessPayment", trace.WithLinks(link))
    defer span.End()

    // Payment logic...
}

This links the consumer’s span back to the producer’s trace, so Jaeger renders the whole journey as one connected graph. But what about failures? We implement layered resilience:

// Payment service with circuit breaker (github.com/sony/gobreaker)
breaker := gobreaker.NewCircuitBreaker(gobreaker.Settings{
    Name:    "PaymentProcessor",
    Timeout: 10 * time.Second, // how long the breaker stays open before probing
    ReadyToTrip: func(counts gobreaker.Counts) bool {
        return counts.ConsecutiveFailures > 5
    },
})

_, err := breaker.Execute(func() (interface{}, error) {
    return s.processPayment(order)
})

When failures exceed the threshold, the breaker opens, preventing cascading failures. Failed messages go to a dead-letter stream for analysis.

For deployment, Docker Compose orchestrates everything:

# deployments/docker-compose.yml
services:
  nats:
    image: nats:2.10
    command: ["-js"]   # enable JetStream
    ports: ["4222:4222"]
  jaeger:
    image: jaegertracing/all-in-one:1.48
    ports: ["16686:16686"]

Health checks ensure services restart cleanly. I’ve found that adding latency injection in testing catches many race conditions early.
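A health check for NATS can poll its HTTP monitoring endpoint. The snippet below is a sketch: it assumes the alpine image variant (which ships `wget`) and enables the monitoring port with `-m 8222`:

```yaml
services:
  nats:
    image: nats:2.10-alpine
    command: ["-js", "-m", "8222"]
    ports: ["4222:4222"]
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:8222/healthz"]
      interval: 10s
      timeout: 3s
      retries: 5
```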

Building this changed how I view distributed systems. The combination of Go’s simplicity, JetStream’s reliability, and OpenTelemetry’s clarity creates truly maintainable microservices. What patterns have you found effective? Share your experiences below - and if this helped, pass it along to others facing similar challenges!



