Go Best Practices
Go CLI Architecture Standards - Complete Guide
Version: 2.0 (July 2025)
Go Version: 1.21+ (with Go 1.18+ generics patterns)
Target: Production-ready Go CLI applications
This is the complete, definitive guide to Go CLI architecture standards, combining all best practices, modern patterns, and production wisdom into a single comprehensive document.
🎯 What This Guide Provides
Core Architecture Standards
- ✅ Error Handling: Domain-specific error types from day one
- ✅ Service Architecture: Interface-first design with dependency injection
- ✅ Generic Patterns: Modern Go 1.18+ generics for reusable components
- ✅ Database Patterns: Repository pattern with caching strategies
- ✅ Concurrency: Worker pools, pipelines, and errgroup patterns
- ✅ Testing: Table-driven tests, mocking, and fuzz testing
- ✅ CLI Design: Interactive commands with modern TUI libraries
Production-Ready Features
- ✅ Battle-tested patterns from real Go applications
- ✅ Complete examples with CLIFoundation skeleton
- ✅ Performance optimization strategies
- ✅ Security best practices built-in
- ✅ Observability with structured logging and metrics
Modern Go Features (1.18+)
- ✅ Generics: Services, repositories, validators, event buses
- ✅ Error Joining: Modern error handling with errors.Join
- ✅ Fuzz Testing: Built-in fuzzing for robust validation
- ✅ Type Safety: Compile-time guarantees with generics
📋 Quick Start Checklist
# 1. Initialize project structure
mkdir -p cmd/myapp internal/{domain,service,storage,transport}
# 2. Set up error handling (CRITICAL)
# Create internal/errors/types.go with domain errors - NO fmt.Errorf!
# 3. Configure structured logging
# Set up slog with proper sanitization from day one
# 4. Design service interfaces
# Define interfaces in service layer, not storage
# 5. Implement configuration
# Use Viper with clear precedence order
# 6. Add testing infrastructure
# Table-driven tests with builders and mocks
# 7. Set up generic patterns (Go 1.18+)
# Create reusable CRUD services and repositories
🚀 Key Principles
🧠 For AI Assistants
When generating Go code using this guide:
CRITICAL Rules
- Never use fmt.Errorf without %w: always wrap with fmt.Errorf("context: %w", err)
- Always use typed domain errors for expected failure modes
- Never use printf/println for output: always use structured logging (slog)
Modern Patterns Priority
Architecture Checklist
- internal/errors/ with domain error types defined from day one
- internal/service/interfaces.go with interfaces owned by the service layer
📖 Complete Guide Contents
1. Core Principles & Error Handling
Table of Contents
- Core Go Principles
- The Go Way
- Architecture Principles
- Error Handling Architecture
CRITICAL: Start With Error Architecture From Day One
The Problem We're Solving: fmt.Errorf("operation failed") without error wrapping loses the original cause, so callers cannot classify, retry, or debug the failure.
The Solution: Structured Error Wrapping + Domain Types From The Start
Base Error Architecture
// internal/errors/types.go
package errors
import (
"fmt"
"time"
)
// ErrorCategory defines how errors should be handled
type ErrorCategory int
const (
CategoryValidation ErrorCategory = iota // 400-class, don't retry
CategoryNotFound // 404-class, don't retry
CategoryPermission // 403-class, don't retry
CategoryTemporary // 503-class, retry with backoff
CategoryRateLimit // 429-class, retry after delay
CategoryInternal // 500-class, investigate
CategoryCancelled // Context cancelled, clean exit
CategoryTimeout // Operation timeout
)
// DomainError is our base error type
type DomainError struct {
Code string // Machine-readable code
Message string // Human-readable message
Operation string // What operation failed
Category ErrorCategory // For handling decisions
Cause error // Wrapped error (use fmt.Errorf with %w for chaining)
Context map[string]interface{} // Debugging context
Retryable bool // Can this be retried?
RetryAfter time.Duration // When to retry (for rate limits)
}
// Error implements the error interface
func (e *DomainError) Error() string {
if e.Cause != nil {
return fmt.Sprintf("%s: %s: %v", e.Operation, e.Message, e.Cause)
}
return fmt.Sprintf("%s: %s", e.Operation, e.Message)
}
// Unwrap supports errors.Is/As
func (e *DomainError) Unwrap() error {
return e.Cause
}
// Builder methods for fluent API
func (e *DomainError) WithContext(key string, value interface{}) *DomainError {
if e.Context == nil {
e.Context = make(map[string]interface{})
}
e.Context[key] = value
return e
}
func (e *DomainError) WithRetryable(retryable bool) *DomainError {
e.Retryable = retryable
return e
}
func (e *DomainError) WithRetryAfter(duration time.Duration) *DomainError {
e.RetryAfter = duration
e.Retryable = true
return e
}
// Is returns true if this error matches the target error type
func (e *DomainError) Is(target error) bool {
if te, ok := target.(*DomainError); ok {
return e.Code == te.Code
}
return false
}
Domain-Specific Error Constructors
// internal/errors/constructors.go
package errors
import (
"database/sql"
"errors"
"fmt"
"log/slog"
)
// Validation errors
func NewValidationError(field, reason string) *DomainError {
return &DomainError{
Code: "VALIDATION_FAILED",
Message: fmt.Sprintf("%s validation failed: %s", field, reason),
Operation: "validation",
Category: CategoryValidation,
Context: map[string]interface{}{"field": field, "reason": reason},
Retryable: false,
}
}
// Not found errors
func NewNotFoundError(resource, identifier string) *DomainError {
return &DomainError{
Code: "RESOURCE_NOT_FOUND",
Message: fmt.Sprintf("%s not found: %s", resource, identifier),
Operation: "lookup",
Category: CategoryNotFound,
Context: map[string]interface{}{"resource": resource, "identifier": identifier},
Retryable: false,
}
}
// Database errors - use standard error wrapping with %w
func NewDatabaseError(operation string, err error) *DomainError {
// Wrap the error with context using fmt.Errorf and %w
wrappedErr := fmt.Errorf("database %s failed: %w", operation, err)
return &DomainError{
Code: "DATABASE_ERROR",
Message: fmt.Sprintf("database %s failed", operation),
Operation: operation,
Category: CategoryInternal,
Cause: wrappedErr, // Properly wrapped for errors.Is/As
Retryable: isRetryableDBError(err),
}
}
// Usage example showing standard error wrapping
func ExampleWithErrorWrapping() error {
// Simulate a database error
dbErr := sql.ErrNoRows
// Create domain error with proper wrapping
domainErr := NewDatabaseError("user lookup", dbErr)
// Standard error unwrapping works
if errors.Is(domainErr.Cause, sql.ErrNoRows) {
fmt.Println("Can detect original error type")
}
return domainErr
}
// Logging structured errors
func LogStructuredError(logger *slog.Logger, err error) {
if domainErr, ok := err.(*DomainError); ok {
logger.Error("operation failed",
"error", err.Error(),
"code", domainErr.Code,
"operation", domainErr.Operation,
"category", domainErr.Category,
"retryable", domainErr.Retryable,
"context", domainErr.Context,
)
} else {
logger.Error("operation failed",
"error", err.Error(),
)
}
}
Per-Package Error Types
// internal/storage/errors.go
package storage
import (
"time"
"myapp/internal/errors"
)
type StorageError struct {
*errors.DomainError
Query string
TableName string
Duration time.Duration
}
func NewStorageError(op string, err error) *StorageError {
return &StorageError{
DomainError: errors.NewDatabaseError(op, err),
}
}
func (e *StorageError) WithQuery(query string) *StorageError {
e.Query = query
e.WithContext("query", query)
return e
}
Error Handling Patterns
// internal/errors/handler.go
package errors
import (
"context"
"errors"
"fmt"
"log/slog"
)
// ErrorHandler centralizes error handling logic
type ErrorHandler struct {
logger Logger
metrics Metrics
notifier Notifier
}
// Handle processes errors consistently
func (h *ErrorHandler) Handle(ctx context.Context, err error) ErrorResponse {
if err == nil {
return ErrorResponse{OK: true}
}
// Check for context cancellation
if errors.Is(err, context.Canceled) {
h.logger.Info("operation cancelled")
return ErrorResponse{
Code: "CANCELLED",
Message: "Operation cancelled",
}
}
// Extract domain error
var domainErr *DomainError
if !errors.As(err, &domainErr) {
// Unexpected error - log with full stack
h.logger.Error("unexpected error",
slog.Any("error", err),
slog.String("type", fmt.Sprintf("%T", err)))
return ErrorResponse{
Code: "INTERNAL_ERROR",
Message: "An unexpected error occurred",
}
}
// Log with appropriate level
switch domainErr.Category {
case CategoryValidation, CategoryNotFound:
h.logger.Info("client error",
slog.String("code", domainErr.Code),
slog.String("operation", domainErr.Operation),
slog.Any("context", domainErr.Context))
default:
h.logger.Error("operation failed",
slog.String("code", domainErr.Code),
slog.String("operation", domainErr.Operation),
slog.Any("context", domainErr.Context),
slog.Any("error", domainErr.Cause))
}
// Build response
return ErrorResponse{
Code: domainErr.Code,
Message: domainErr.Message,
Retryable: domainErr.Retryable,
RetryAfter: domainErr.RetryAfter,
}
}
Testing Error Paths
func TestDatabaseError_Retryable(t *testing.T) {
tests := []struct {
name string
err error
wantRetry bool
}{
{
name: "connection error",
err: errors.New("connection refused"),
wantRetry: true,
},
{
name: "syntax error",
err: errors.New("syntax error at position 42"),
wantRetry: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := NewDatabaseError("query", tt.err)
assert.Equal(t, tt.wantRetry, err.Retryable)
})
}
}
Linter Configuration
# .golangci.yml
linters-settings:
forbidigo:
forbid:
# Ban fmt.Errorf without %w - force proper error wrapping
- pattern: 'fmt\.Errorf\([^,]*[^w]"[^,]*\)'
msg: "Use fmt.Errorf with %w verb for error wrapping: fmt.Errorf(\"context: %w\", err)"
# Allow errors.New only for package-level sentinel errors
- pattern: 'errors\.New'
msg: "Use typed domain errors instead of ad-hoc errors.New. Only use errors.New for package-level sentinel errors (var ErrFoo = errors.New(...))"
exclude_godoc_examples: false
exclude_files:
# Allow in test files for mocking
- ".*_test.go"
Best Practices
- Always wrap with fmt.Errorf("context: %w", err) to preserve error chains
- Reserve errors.New for package-level sentinel errors (var ErrFoo = errors.New(...)); for all other errors, use typed domain errors
Error Handling Decision Tree
When to Use Which Error Pattern
Use this decision tree to determine the appropriate error handling approach for your situation.
graph TD
A[Error Occurred] --> B{Is it expected?}
B -->|Yes| C{Client or Server error?}
B -->|No| D[Log as ERROR with stack trace]
C -->|Client| E{What type?}
C -->|Server| F{Is it transient?}
E -->|Validation| G[Return ValidationError<br/>Log as INFO<br/>HTTP 400]
E -->|Not Found| H[Return NotFoundError<br/>Log as INFO<br/>HTTP 404]
E -->|Auth/Permission| I[Return PermissionError<br/>Log as WARN<br/>HTTP 401/403]
F -->|Yes| J[Return with Retryable=true<br/>Log as WARN<br/>HTTP 503]
F -->|No| K[Return InternalError<br/>Log as ERROR<br/>HTTP 500]
D --> L[Return InternalError<br/>Alert on-call<br/>HTTP 500]
Error Pattern Selection Guide
| Scenario | Error Type | Log Level | HTTP Status | Retryable | Example |
|----------|------------|-----------|-------------|-----------|---------|
| Invalid input | ValidationError | INFO | 400 | No | Email format wrong |
| Resource missing | NotFoundError | INFO | 404 | No | User doesn't exist |
| No permission | PermissionError | WARN | 403 | No | Can't access resource |
| Rate limited | RateLimitError | INFO | 429 | Yes (with delay) | Too many requests |
| DB connection lost | DatabaseError | ERROR | 503 | Yes | Connection refused |
| External API down | ExternalServiceError | WARN | 502 | Yes | Timeout to payment API |
| Bug in code | InternalError | ERROR | 500 | No | Nil pointer panic |
| Context cancelled | CancelledError | INFO | 499 | No | Client disconnected |
Error Wrapping Decision Tree
// When to wrap vs return new error
func ProcessOrder(ctx context.Context, orderID string) error {
// Scenario 1: Add context when crossing boundaries
order, err := repo.GetOrder(ctx, orderID)
if err != nil {
// Wrap to add business context
return fmt.Errorf("process order %s: %w", orderID, err)
}
// Scenario 2: Transform technical errors to domain errors
if err := validator.Validate(order); err != nil {
// Don't wrap - create domain error
return NewValidationError("order", err.Error())
}
// Scenario 3: Preserve error type for handling
result, err := externalAPI.Process(order)
if err != nil {
// Check if we need to preserve the error type
var rateLimitErr *RateLimitError
if errors.As(err, &rateLimitErr) {
// Preserve the original error for retry logic
return err
}
// Otherwise wrap with context
return fmt.Errorf("external processing failed: %w", err)
}
return nil
}
Error Handling by Layer
┌─────────────────────────────────────────────────────────┐
│ HTTP Handler │
│ • Catch all errors │
│ • Convert to HTTP status codes │
│ • Log with request context │
│ • Return standardized error response │
└─────────────────────────────────────────────────────────┘
↑
┌─────────────────────────────────────────────────────────┐
│ Service Layer │
│ • Create domain errors │
│ • Add business context │
│ • Decide on retryability │
│ • Orchestrate error recovery │
└─────────────────────────────────────────────────────────┘
↑
┌─────────────────────────────────────────────────────────┐
│ Repository Layer │
│ • Wrap database errors │
│ • Convert to domain errors (NotFound) │
│ • Add query context for debugging │
│ • Handle connection errors │
└─────────────────────────────────────────────────────────┘
↑
┌─────────────────────────────────────────────────────────┐
│ External Services │
│ • Wrap with operation context │
│ • Preserve error types for handling │
│ • Add timeout/retry information │
│ • Include request/response data │
└─────────────────────────────────────────────────────────┘
Common Error Handling Patterns
1. Sentinel Errors Pattern
// Define at package level
var (
ErrUserNotFound = errors.New("user not found")
ErrDuplicateEmail = errors.New("email already exists")
ErrInvalidToken = errors.New("invalid token")
)
// Usage: compare with errors.Is, not ==, so wrapped sentinels still match
if errors.Is(err, ErrUserNotFound) {
return NewNotFoundError("user", userID)
}
2. Error Type Pattern
type ValidationError struct {
Field string
Value interface{}
Rule string
}
func (e *ValidationError) Error() string {
return fmt.Sprintf("validation failed for %s: %s", e.Field, e.Rule)
}
// Check type
var valErr *ValidationError
if errors.As(err, &valErr) {
// Handle validation specifically
}
3. Multi-Error Pattern
Classic Approach (Pre-Go 1.20)
type ValidationErrors []error
func (e ValidationErrors) Error() string {
var msgs []string
for _, err := range e {
msgs = append(msgs, err.Error())
}
return strings.Join(msgs, "; ")
}
// Accumulate errors
var errs ValidationErrors
if user.Email == "" {
errs = append(errs, NewValidationError("email", "required"))
}
if user.Age < 0 {
errs = append(errs, NewValidationError("age", "must be positive"))
}
if len(errs) > 0 {
return errs
}
Modern Approach: errors.Join (Go 1.20+)
Go 1.20 introduced errors.Join for combining multiple errors into a single error that implements the Unwrap() []error method.
import "errors"
// Validation with errors.Join
func ValidateUser(user *User) error {
var errs []error
if user.Email == "" {
errs = append(errs, NewValidationError("email", "required"))
}
if !isValidEmail(user.Email) {
errs = append(errs, NewValidationError("email", "invalid format"))
}
if user.Age < 0 {
errs = append(errs, NewValidationError("age", "must be positive"))
}
if user.Age > 150 {
errs = append(errs, NewValidationError("age", "unrealistic value"))
}
// errors.Join returns nil if all errors are nil
return errors.Join(errs...)
}
// Processing multiple operations
func ProcessBatch(items []Item) error {
var errs []error
for i, item := range items {
if err := processItem(item); err != nil {
// Wrap with context
errs = append(errs, fmt.Errorf("item %d: %w", i, err))
}
}
return errors.Join(errs...)
}
Working with Joined Errors
// Check if any error matches
err := ProcessBatch(items)
if err != nil {
// Check for specific error type in joined errors
var validationErr *ValidationError
if errors.As(err, &validationErr) {
// At least one validation error occurred
fmt.Printf("Validation failed: %v\n", validationErr)
}
// Check for specific sentinel error
if errors.Is(err, ErrRateLimit) {
// At least one rate limit error
fmt.Println("Rate limit hit during batch processing")
}
}
// Extract all errors
func extractAllErrors(err error) []error {
if err == nil {
return nil
}
// Check if it's a joined error
var joinedErr interface{ Unwrap() []error }
if errors.As(err, &joinedErr) {
return joinedErr.Unwrap()
}
// Single error
return []error{err}
}
// Custom formatting for joined errors
func formatJoinedErrors(err error) string {
errs := extractAllErrors(err)
if len(errs) == 1 {
return errs[0].Error()
}
var sb strings.Builder
sb.WriteString(fmt.Sprintf("%d errors occurred:\n", len(errs)))
for i, e := range errs {
sb.WriteString(fmt.Sprintf(" %d. %v\n", i+1, e))
}
return sb.String()
}
Comparison: Custom vs errors.Join
// Custom multi-error (more control)
type CustomErrors struct {
Errors []error
Fatal bool // Custom field
}
func (e *CustomErrors) Error() string {
// Custom formatting
if e.Fatal {
return fmt.Sprintf("FATAL: %d errors", len(e.Errors))
}
return fmt.Sprintf("%d errors occurred", len(e.Errors))
}
// vs errors.Join (standard, simpler)
errs := []error{err1, err2, err3}
return errors.Join(errs...) // Standard, works with errors.Is/As
Best Practices with errors.Join
// Good: Independent validation errors
return errors.Join(
validateName(name),
validateEmail(email),
validateAge(age),
)
// Bad: Dependent operations (use early return)
if err := connectDB(); err != nil {
return err
}
if err := authenticate(); err != nil {
return err
}
var errs []error
for _, file := range files {
if err := processFile(file); err != nil {
// Add context before joining
errs = append(errs, fmt.Errorf("file %s: %w", file, err))
}
}
return errors.Join(errs...)
func ProcessWithCategories(items []Item) error {
var (
validationErrs []error
processingErrs []error
)
for _, item := range items {
if err := validate(item); err != nil {
validationErrs = append(validationErrs, err)
} else if err := process(item); err != nil {
processingErrs = append(processingErrs, err)
}
}
// Join by category
if len(validationErrs) > 0 {
return fmt.Errorf("validation failed: %w",
errors.Join(validationErrs...))
}
if len(processingErrs) > 0 {
return fmt.Errorf("processing failed: %w",
errors.Join(processingErrs...))
}
return nil
}
4. Retry Decision Pattern
func shouldRetry(err error) (bool, time.Duration) {
// Check for specific error types
var rateLimitErr *RateLimitError
if errors.As(err, &rateLimitErr) {
return true, rateLimitErr.RetryAfter
}
var tempErr *TemporaryError
if errors.As(err, &tempErr) {
return true, tempErr.BackoffDuration()
}
// Check for network errors
var netErr net.Error
if errors.As(err, &netErr) && netErr.Temporary() {
return true, time.Second
}
// Check error messages (last resort)
msg := err.Error()
if strings.Contains(msg, "connection refused") ||
strings.Contains(msg, "i/o timeout") {
return true, 5 * time.Second
}
return false, 0
}
Error Recovery Strategies
| Strategy | When to Use | Example |
|----------|-------------|---------|
| Retry with backoff | Transient failures | Network timeouts |
| Circuit breaker | Protect failing service | External API errors |
| Fallback | Degraded service acceptable | Use cache if DB down |
| Queue for later | Can be async | Email sending failed |
| Fail fast | Critical path | Payment processing |
| Log and continue | Non-critical | Metrics collection |
Testing Error Paths
func TestServiceErrorHandling(t *testing.T) {
tests := []struct {
name string
setupMock func(*MockRepo)
wantErr bool
wantErrType error
wantRetryable bool
}{
{
name: "database connection error",
setupMock: func(m *MockRepo) {
m.GetFunc = func(ctx context.Context, id string) (*User, error) {
return nil, errors.New("connection refused")
}
},
wantErr: true,
wantErrType: &DatabaseError{},
wantRetryable: true,
},
{
name: "not found error",
setupMock: func(m *MockRepo) {
m.GetFunc = func(ctx context.Context, id string) (*User, error) {
return nil, sql.ErrNoRows
}
},
wantErr: true,
wantErrType: &NotFoundError{},
wantRetryable: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Test implementation
})
}
}
Structured Logging
CRITICAL: No Printf/Println Allowed - Ever
The Problem We're Solving: fmt.Println() calls scattered everywhere produce unstructured output that cannot be filtered, queried, or sanitized.
The Solution: Structured Logging Architecture
Logger Interface and Implementation
// internal/logging/logger.go
package logging
import (
"context"
"io"
"log/slog"
"os"
)
// Logger defines our logging interface
type Logger interface {
Debug(msg string, fields ...slog.Attr)
Info(msg string, fields ...slog.Attr)
Warn(msg string, fields ...slog.Attr)
Error(msg string, fields ...slog.Attr)
With(fields ...slog.Attr) Logger
WithContext(ctx context.Context) Logger
WithError(err error) Logger
}
// Config for logger initialization
type LogConfig struct {
Level slog.Level `env:"LOG_LEVEL" default:"info"`
Format string `env:"LOG_FORMAT" default:"json"`
Output io.Writer `env:"-"`
AddSource bool `env:"LOG_SOURCE" default:"false"`
SampleRate float64 `env:"LOG_SAMPLE_RATE" default:"1.0"`
// Feature flags
HideSensitive bool `env:"LOG_HIDE_SENSITIVE" default:"true"`
AddStackTrace bool `env:"LOG_STACK_TRACE" default:"false"`
}
// NewLogger creates a configured logger
func NewLogger(cfg LogConfig) Logger {
opts := &slog.HandlerOptions{
Level: cfg.Level,
AddSource: cfg.AddSource,
ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
if cfg.HideSensitive {
a = sanitizeAttr(groups, a)
}
return a
},
}
if cfg.Output == nil {
cfg.Output = os.Stdout
}
var handler slog.Handler
switch cfg.Format {
case "json":
handler = slog.NewJSONHandler(cfg.Output, opts)
case "text":
handler = slog.NewTextHandler(cfg.Output, opts)
default:
handler = slog.NewJSONHandler(cfg.Output, opts)
}
return &wrappedLogger{
slog: slog.New(handler),
config: cfg,
}
}
Sensitive Data Protection
// internal/logging/sanitizer.go
package logging
import (
"log/slog"
"strings"
)
// Sensitive field names to redact
var sensitiveFields = map[string]bool{
"password": true,
"token": true,
"api_key": true,
"secret": true,
"credit_card": true,
"ssn": true,
}
func sanitizeAttr(groups []string, a slog.Attr) slog.Attr {
key := strings.ToLower(a.Key)
if sensitiveFields[key] {
return slog.String(a.Key, "[REDACTED]")
}
// Partial masking for emails
if key == "email" && a.Value.Kind() == slog.KindString {
email := a.Value.String()
parts := strings.Split(email, "@")
if len(parts) == 2 && len(parts[0]) > 2 {
masked := parts[0][:2] + "***@" + parts[1]
return slog.String(a.Key, masked)
}
}
return a
}
Advanced slog Usage: Stack Traces
The standard slog library doesn't capture stack traces automatically. For error-level logs where stack traces are crucial for debugging, you need to implement custom handling.
// internal/logging/stacktrace.go
package logging
import (
"fmt"
"log/slog"
"runtime"
"strings"
)
// WithStack adds a stack trace to the logger
func WithStack() slog.Attr {
return slog.String("stack", captureStack(3)) // Skip 3 frames
}
// captureStack captures the current stack trace
func captureStack(skip int) string {
var sb strings.Builder
// Capture up to 10 frames
pcs := make([]uintptr, 10)
n := runtime.Callers(skip, pcs)
if n == 0 {
return "no stack available"
}
frames := runtime.CallersFrames(pcs[:n])
for {
frame, more := frames.Next()
// Skip runtime internals
if strings.Contains(frame.Function, "runtime.") {
if !more {
break
}
continue
}
sb.WriteString(fmt.Sprintf("%s\n\t%s:%d\n",
frame.Function,
frame.File,
frame.Line))
if !more {
break
}
}
return sb.String()
}
// ErrorWithStack logs an error with its stack trace
func (l *wrappedLogger) ErrorWithStack(msg string, err error, fields ...slog.Attr) {
attrs := append(fields,
slog.Any("error", err),
WithStack(),
)
l.Error(msg, attrs...)
}
Error Wrapping in Production: Use Standard Library
For production applications, use Go's built-in error wrapping with fmt.Errorf and %w: it's native, well-supported, and integrates cleanly with errors.Is and errors.As:
import (
"errors"
"fmt"
)
// PRODUCTION RECOMMENDED: Use standard library wrapping
func ProcessData(data []byte) error {
if err := validate(data); err != nil {
// Standard error wrapping with context
return fmt.Errorf("validation failed: %w", err)
}
result, err := transform(data)
if err != nil {
// Preserve error chain for errors.Is/As
return fmt.Errorf("transform failed: %w", err)
}
if err := store(result); err != nil {
// Formatted wrapping with data
return fmt.Errorf("failed to store %d bytes: %w", len(result), err)
}
return nil
}
// Extract stack trace information. Note: the standard library does not
// attach stack traces; this interface is only satisfied by errors created
// with a stack-capturing library such as github.com/pkg/errors.
func handleError(err error) {
type stackTracer interface {
StackTrace() pkgerrors.StackTrace // from github.com/pkg/errors
}
if st, ok := err.(stackTracer); ok {
slog.Error("operation failed with stack trace",
"error", err.Error(),
"stack", fmt.Sprintf("%+v", st.StackTrace()),
)
} else {
slog.Error("operation failed", "error", err.Error())
}
}
Alternative: Detailed Stack Traces (Advanced)
For applications requiring detailed stack traces beyond Go's standard error wrapping, you can implement custom tracing. Most production applications don't need this complexity:
// internal/errors/traced.go (EDUCATIONAL EXAMPLE ONLY)
package errors
import (
"fmt"
"runtime"
"strings"
)
// TracedError captures stack trace at error creation
type TracedError struct {
*DomainError
Stack []Frame
}
type Frame struct {
Function string
File string
Line int
}
// NewTracedError creates an error with captured stack trace
func NewTracedError(code, message, operation string) *TracedError {
return &TracedError{
DomainError: &DomainError{
Code: code,
Message: message,
Operation: operation,
Category: CategoryInternal,
},
Stack: captureFrames(2), // Skip this function and caller
}
}
func captureFrames(skip int) []Frame {
const maxFrames = 32
pcs := make([]uintptr, maxFrames)
n := runtime.Callers(skip+1, pcs)
if n == 0 {
return nil
}
frames := runtime.CallersFrames(pcs[:n])
var result []Frame
for {
frame, more := frames.Next()
// Skip runtime and testing frames
if strings.Contains(frame.Function, "runtime.") ||
strings.Contains(frame.Function, "testing.") {
if !more {
break
}
continue
}
result = append(result, Frame{
Function: frame.Function,
File: frame.File,
Line: frame.Line,
})
if !more || len(result) >= 10 {
break
}
}
return result
}
// StackTrace returns formatted stack trace
func (e *TracedError) StackTrace() string {
var sb strings.Builder
for _, frame := range e.Stack {
sb.WriteString(fmt.Sprintf("%s\n\t%s:%d\n",
frame.Function,
frame.File,
frame.Line))
}
return sb.String()
}
Integration with Error Handler
// internal/errors/handler.go - Enhanced version
func (h *ErrorHandler) Handle(ctx context.Context, err error) ErrorResponse {
if err == nil {
return ErrorResponse{OK: true}
}
// Extract traced error for stack information
var tracedErr *TracedError
hasStack := errors.As(err, &tracedErr)
// Extract domain error
var domainErr *DomainError
if !errors.As(err, &domainErr) {
// Unexpected error - create traced error
tracedErr = NewTracedError(
"INTERNAL_ERROR",
"An unexpected error occurred",
"unknown",
)
tracedErr.Cause = err
domainErr = tracedErr.DomainError
hasStack = true
}
// Build log attributes
attrs := []slog.Attr{
slog.String("code", domainErr.Code),
slog.String("operation", domainErr.Operation),
slog.Any("context", domainErr.Context),
}
// Add stack trace for errors
if hasStack && h.config.AddStackTrace {
attrs = append(attrs, slog.String("stack", tracedErr.StackTrace()))
}
// Log with appropriate level
switch domainErr.Category {
case CategoryValidation, CategoryNotFound:
h.logger.Info("client error", attrs...)
case CategoryInternal:
// Always include stack for internal errors
if !hasStack {
attrs = append(attrs, WithStack())
}
h.logger.Error("internal error", attrs...)
default:
h.logger.Error("operation failed", attrs...)
}
return buildResponse(domainErr)
}
Implementation Comparison
| Aspect | Custom TracedError (Educational) | Standard Library (Recommended) |
|--------|----------------------------------|--------------------------------|
| Purpose | Learning how stacks work | Production-ready, native solution |
| Setup | ~100 lines of custom code | Built into Go |
| Usage | NewTracedError(code, msg, op) | fmt.Errorf("context: %w", err) |
| Maintenance | You maintain the code | Maintained by Go team |
| Performance | Unoptimized implementation | Optimized native implementation |
| Ecosystem | Custom integration needed | Works with all error tooling |
Quick Migration
// EDUCATIONAL: Custom implementation
func doWork() error {
if err := someOperation(); err != nil {
return NewTracedError("OP_FAILED", "operation failed", "doWork")
}
return nil
}
// PRODUCTION: Use standard library instead
func doWork() error {
if err := someOperation(); err != nil {
return fmt.Errorf("operation failed: %w", err)
}
return nil
}
Production Considerations
// Only capture for unexpected errors
if isExpectedError(err) {
return NewDomainError(...) // No stack
}
return NewTracedError(...) // With stack
// Sample stack traces in high-volume scenarios
if shouldSample(0.1) { // 10% sampling
logger.ErrorWithStack("database query failed", err)
} else {
logger.Error("database query failed", slog.Any("error", err))
}
// Sanitize stack traces for external consumption
func sanitizeStack(stack string) string {
sanitized := stack
// Strip internal package paths and sensitive file paths here
// (implementation omitted; depends on your module layout)
return sanitized
}
Testing Stack Trace Capture
func TestErrorStackTrace(t *testing.T) {
err := NewTracedError("TEST_ERROR", "test error", "test_op")
// Verify stack was captured
require.NotEmpty(t, err.Stack)
// Verify it includes this test function
found := false
for _, frame := range err.Stack {
if strings.Contains(frame.Function, "TestErrorStackTrace") {
found = true
break
}
}
require.True(t, found, "Stack should include test function")
// Verify stack trace formatting
stackStr := err.StackTrace()
require.Contains(t, stackStr, "TestErrorStackTrace")
require.Contains(t, stackStr, "errors_test.go")
}
Logging Patterns by Component
// Service layer logging
func (s *UserService) UpdateUser(ctx context.Context, id string, update UserUpdate) error {
logger := s.logger.With(
slog.String("operation", "update_user"),
slog.String("user_id", id),
)
logger.Info("starting user update")
if err := s.validate(update); err != nil {
logger.Error("validation failed",
slog.Any("error", err))
return err
}
start := time.Now()
if err := s.db.Update(ctx, id, update); err != nil {
logger.Error("database update failed",
slog.Duration("duration", time.Since(start)),
slog.Any("error", err))
return err
}
logger.Info("user updated successfully",
slog.Duration("duration", time.Since(start)))
return nil
}
Standard Field Names
const (
FieldUserID = "user_id"
FieldRequestID = "request_id"
FieldTraceID = "trace_id"
FieldOperation = "operation"
FieldDuration = "duration"
FieldError = "error"
FieldComponent = "component"
)
Linter Configuration
# .golangci.yml
linters:
enable:
- forbidigo
linters-settings:
forbidigo:
forbid:
- p: 'fmt\.Print.*'
msg: "Use structured logging (slog) instead of fmt.Print"
- p: 'log\.Print.*'
msg: "Use structured logging (slog) instead of log.Print"
Context Guidelines
What Goes in Context
Context should ONLY be used for:
- Request ID for distributed tracing
- Trace ID for correlation
// GOOD: Request ID in context
type contextKey string
const (
requestIDKey contextKey = "request-id"
traceIDKey contextKey = "trace-id"
)
func WithRequestID(ctx context.Context, id string) context.Context {
return context.WithValue(ctx, requestIDKey, id)
}
// BAD: Business data in context
ctx = context.WithValue(ctx, "userID", userID) // ❌ Never do this
ctx = context.WithValue(ctx, "tenantID", tenantID) // ❌ Use parameters
Context Best Practices
// GOOD: Context flows down
func GoodService(ctx context.Context, id string) error {
user, err := getUser(ctx, id) // Context first parameter
if err != nil {
return err
}
return processUser(ctx, user)
}
// BAD: Creating context at wrong level
func BadService(id string) error {
ctx := context.Background() // Don't create here!
return process(ctx, id)
}
// GOOD: Check context in loops
func ProcessItems(ctx context.Context, items []Item) error {
for i, item := range items {
select {
case <-ctx.Done():
return ctx.Err()
default:
}
if err := process(ctx, item); err != nil {
return err
}
}
return nil
}
Context Testing
// Test context cancellation
func TestProcessWithCancellation(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
// Start processing in goroutine
errChan := make(chan error)
go func() {
errChan <- ProcessLongOperation(ctx, generateTestData(1000))
}()
// Cancel after short delay
time.Sleep(10 * time.Millisecond)
cancel()
// Verify cancellation is respected
err := <-errChan
assert.Error(t, err)
assert.True(t, errors.Is(err, context.Canceled))
}
// Test timeout behavior
func TestProcessWithTimeout(t *testing.T) {
ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
defer cancel()
err := SlowOperation(ctx) // Takes 100ms
assert.Error(t, err)
assert.True(t, errors.Is(err, context.DeadlineExceeded))
}
Context Performance Tips
Related Sections
Quick Reference Checklist
Error Handling Checklist
- Use sentinel errors (var ErrFoo = errors.New(...)), never ad-hoc errors
Context Checklist
Logging Checklist
Context Usage Checklist
2. Service Architecture & Design
Table of Contents
Service Layer Design
Clean Architecture for Business Logic
The Problem We're Solving:
The Solution: Interface-First Service Design
Core Design Principles
// internal/service/interfaces.go
package service
import (
"context"
"time"
)
// GOLDEN RULE: Accept interfaces, return concrete types
// Storage interfaces - defined by service, not storage package
type UserRepository interface {
GetByID(ctx context.Context, id string) (*domain.User, error)
GetByEmail(ctx context.Context, email string) (*domain.User, error)
Create(ctx context.Context, user *domain.User) error
Update(ctx context.Context, user *domain.User) error
Delete(ctx context.Context, id string) error
}
type DocumentRepository interface {
Store(ctx context.Context, doc *domain.Document) error
Retrieve(ctx context.Context, id string) (*domain.Document, error)
List(ctx context.Context, filter DocumentFilter) ([]*domain.Document, error)
}
// External service interfaces
type EmailSender interface {
Send(ctx context.Context, email Email) error
}
type EventPublisher interface {
Publish(ctx context.Context, event Event) error
}
// Cache interface
type Cache interface {
Get(ctx context.Context, key string, value interface{}) error
Set(ctx context.Context, key string, value interface{}, ttl time.Duration) error
Delete(ctx context.Context, key string) error
}
Service Implementation
// internal/service/user_service.go
package service
import (
"fmt"
"strings"
)
// UserService handles user business logic
// Note: Returns concrete type, not interface
type UserService struct {
repo UserRepository
email EmailSender
events EventPublisher
cache Cache
logger Logger
}
// NewUserService creates a new user service
// IMPORTANT: Returns *UserService, not an interface
func NewUserService(
repo UserRepository,
email EmailSender,
events EventPublisher,
cache Cache,
logger Logger,
) *UserService {
return &UserService{
repo: repo,
email: email,
events: events,
cache: cache,
logger: logger,
}
}
// Service-level [error types](go-practices-error-logging.md#error-handling-architecture)
type ServiceError struct {
Code string
Message string
Operation string
Cause error
}
func (e *ServiceError) Error() string {
if e.Cause != nil {
return fmt.Sprintf("%s: %s: %v", e.Code, e.Message, e.Cause)
}
return fmt.Sprintf("%s: %s", e.Code, e.Message)
}
func (e *ServiceError) Unwrap() error {
return e.Cause
}
type ProcessingError struct {
Code string
Message string
Operation string
ItemID string // ID of the item being processed
Stage string // Processing stage where error occurred
Attempt int // Which attempt failed (for retries)
Cause error
}
func (e *ProcessingError) Error() string {
var details []string
if e.ItemID != "" {
details = append(details, fmt.Sprintf("item=%s", e.ItemID))
}
if e.Stage != "" {
details = append(details, fmt.Sprintf("stage=%s", e.Stage))
}
if e.Attempt > 0 {
details = append(details, fmt.Sprintf("attempt=%d", e.Attempt))
}
detailStr := ""
if len(details) > 0 {
detailStr = fmt.Sprintf(" [%s]", strings.Join(details, " "))
}
if e.Cause != nil {
return fmt.Sprintf("%s: %s%s: %v", e.Code, e.Message, detailStr, e.Cause)
}
return fmt.Sprintf("%s: %s%s", e.Code, e.Message, detailStr)
}
func (e *ProcessingError) Unwrap() error {
return e.Cause
}
// Common service errors
var (
ErrNoStrategyAvailable = &ServiceError{
Code: "NO_STRATEGY_AVAILABLE",
Message: "no processing strategy available",
}
)
// CreateUser implements user creation business logic
func (s *UserService) CreateUser(ctx context.Context, input CreateUserInput) (*domain.User, error) {
logger := s.logger.With(
slog.String("operation", "create_user"),
slog.String("email", input.Email),
)
logger.Info("creating user")
// Validate input
if err := s.validate(input); err != nil {
return nil, errors.NewValidationError("input", err.Error())
}
// Check if user exists
existing, err := s.repo.GetByEmail(ctx, input.Email)
if err != nil && !errors.Is(err, ErrNotFound) {
return nil, &ServiceError{
Code: "USER_LOOKUP_FAILED",
Message: "failed to check if user exists",
Operation: "create_user",
Cause: err,
}
}
if existing != nil {
return nil, errors.NewValidationError("email", "already registered")
}
// Create domain object
user := &domain.User{
ID: GenerateID(),
Email: input.Email,
Name: input.Name,
Status: domain.UserStatusPending,
CreatedAt: time.Now(),
UpdatedAt: time.Now(),
}
// Apply business rules
if err := user.SetPassword(input.Password); err != nil {
return nil, errors.NewValidationError("password", err.Error())
}
// Store user
if err := s.repo.Create(ctx, user); err != nil {
return nil, &ServiceError{
Code: "USER_CREATE_FAILED",
Message: "failed to create user in repository",
Operation: "create_user",
Cause: err,
}
}
// Send welcome email (async)
s.sendWelcomeEmail(ctx, user)
// Publish event
event := &UserCreatedEvent{
UserID: user.ID,
Email: user.Email,
Timestamp: time.Now(),
}
if err := s.events.Publish(ctx, event); err != nil {
// Log but don't fail - events are eventually consistent
logger.Error("failed to publish event", slog.Any("error", err))
}
logger.Info("user created successfully",
slog.String("user_id", user.ID))
return user, nil
}
Interface Design Principles
Accept Interfaces, Return Concrete Types
// ✅ RIGHT: Return concrete type
func NewService(store Storage, client HTTPClient) *Service {
return &Service{store: store, client: client}
}
// ❌ WRONG: Returning interface
func NewService(store Storage, client HTTPClient) ServiceInterface {
return &Service{store: store, client: client}
}
Why: Returning concrete types gives callers maximum flexibility. They can use every exported method, define their own narrow interfaces around just the methods they need, and absorb new methods without a breaking interface change.
Interface Segregation
// ✅ RIGHT: Small, focused interfaces
type Validator interface {
Validate(ctx context.Context, data []byte) error
}
type Processor interface {
Process(ctx context.Context, input Input) (Output, error)
}
// ❌ WRONG: Large, monolithic interface
type Service interface {
Validate(...)
Process(...)
Store(...)
Retrieve(...)
// 20 more methods...
}
Consumer-Defined Interfaces
// ✅ RIGHT: Interface defined where it's used
// internal/service/interfaces.go
package service
type UserRepository interface {
GetByID(ctx context.Context, id string) (*domain.User, error)
}
// internal/storage/postgres/user_repo.go
package postgres
// Implements the interface defined by service
type UserRepository struct {
db *sql.DB
}
// ❌ WRONG: Interface defined by implementer
// internal/storage/interfaces.go
package storage
type UserRepository interface {
// Service package would import storage - wrong direction!
}
Dependency Injection
Container Pattern
// internal/app/container.go
package app
// Container holds all services and their dependencies
type Container struct {
Config *config.Config
Logger Logger
// Repositories
UserRepo UserRepository
DocRepo DocumentRepository
// External services
EmailSender EmailSender
Cache Cache
Events EventPublisher
// Business services
UserService *UserService
DocService *DocumentService
AuthService *AuthService
}
// New creates fully wired container
func New(cfg *config.Config) (*Container, error) {
logger := NewLogger(cfg.Logging)
// Initialize database
db, err := NewDB(cfg.Database)
if err != nil {
return nil, &ServiceError{
Code: "DATABASE_INIT_FAILED",
Message: "failed to initialize database",
Operation: "container_init",
Cause: err,
}
}
// Initialize repositories
userRepo := postgres.NewUserRepository(db, logger)
docRepo := postgres.NewDocumentRepository(db, logger)
// Initialize external services
emailSender := email.NewSender(cfg.Email, logger)
cache := redis.NewCache(cfg.Cache.Redis, logger)
events := NewEventPublisher(cfg.Events, logger)
// Initialize business services
userService := service.NewUserService(userRepo, emailSender, events, cache, logger)
docService := service.NewDocumentService(docRepo, cache, logger)
authService := service.NewAuthService(userRepo, cache, logger)
return &Container{
Config: cfg,
Logger: logger,
UserRepo: userRepo,
DocRepo: docRepo,
EmailSender: emailSender,
Cache: cache,
Events: events,
UserService: userService,
DocService: docService,
AuthService: authService,
}, nil
}
Functional Options Pattern
// ServiceOption configures a service
type ServiceOption func(*ServiceConfig)
// ServiceConfig holds service configuration
type ServiceConfig struct {
Timeout time.Duration
MaxRetries int
CacheEnabled bool
CacheTTL time.Duration
RateLimit int
}
// Default configuration
func defaultServiceConfig() *ServiceConfig {
return &ServiceConfig{
Timeout: 30 * time.Second,
MaxRetries: 3,
CacheEnabled: true,
CacheTTL: 5 * time.Minute,
RateLimit: 100,
}
}
// Option constructors
func WithTimeout(d time.Duration) ServiceOption {
return func(c *ServiceConfig) {
c.Timeout = d
}
}
func WithMaxRetries(n int) ServiceOption {
return func(c *ServiceConfig) {
c.MaxRetries = n
}
}
func WithCache(enabled bool, ttl time.Duration) ServiceOption {
return func(c *ServiceConfig) {
c.CacheEnabled = enabled
c.CacheTTL = ttl
}
}
// Service using options
type APIService struct {
config *ServiceConfig
client HTTPClient
logger Logger
}
func NewAPIService(client HTTPClient, logger Logger, opts ...ServiceOption) *APIService {
config := defaultServiceConfig()
// Apply options
for _, opt := range opts {
opt(config)
}
return &APIService{
config: config,
client: client,
logger: logger,
}
}
// Usage
service := NewAPIService(
httpClient,
logger,
WithTimeout(60*time.Second),
WithMaxRetries(5),
WithCache(true, 10*time.Minute),
)
Compile-Time Dependency Injection with Wire
While the manual dependency injection pattern shown above is clear and easy to understand, it can become boilerplate-heavy as applications grow. Google's Wire provides compile-time dependency injection, generating the wiring code automatically.
When to Use Wire
Use manual DI when:
Consider Wire when:
Wire Example: Before and After
Before: Manual Wiring (Current Approach)
// cmd/api/main.go
func main() {
// Load config
cfg, err := config.Load()
if err != nil {
log.Fatal(err)
}
// Create database
db, err := database.NewDB(cfg.Database)
if err != nil {
log.Fatal(err)
}
// Create repositories
userRepo := postgres.NewUserRepository(db, logger)
orderRepo := postgres.NewOrderRepository(db, logger)
// Create services
userSvc := service.NewUserService(userRepo, logger)
orderSvc := service.NewOrderService(orderRepo, userRepo, logger)
// Create handlers
userHandler := handlers.NewUserHandler(userSvc, logger)
orderHandler := handlers.NewOrderHandler(orderSvc, logger)
// Setup server
server := api.NewServer(cfg.Server, userHandler, orderHandler, logger)
// Run
if err := server.Run(); err != nil {
log.Fatal(err)
}
}
After: Wire-based Injection
// wire.go
//go:build wireinject
package main
import (
"github.com/google/wire"
"myapp/internal/config"
"myapp/internal/database"
"myapp/internal/storage/postgres"
"myapp/internal/service"
"myapp/internal/api/handlers"
"myapp/internal/api"
)
// InitializeServer creates a fully wired server
func InitializeServer(configPath string) (*api.Server, error) {
wire.Build(
// Config
config.Load,
// Infrastructure
database.NewDB,
newLogger,
// Repositories
postgres.NewUserRepository,
postgres.NewOrderRepository,
// Services
service.NewUserService,
service.NewOrderService,
// Handlers
handlers.NewUserHandler,
handlers.NewOrderHandler,
// Server
api.NewServer,
)
return nil, nil // Wire will generate this
}
// cmd/api/main.go
func main() {
server, err := InitializeServer("config.yaml")
if err != nil {
log.Fatal(err)
}
if err := server.Run(); err != nil {
log.Fatal(err)
}
}
Wire Provider Sets
Organize providers into logical groups:
// internal/providers/database.go
package providers
import (
"github.com/google/wire"
"myapp/internal/storage/postgres"
)
// DatabaseSet provides all database-related dependencies
var DatabaseSet = wire.NewSet(
database.NewDB,
postgres.NewUserRepository,
postgres.NewOrderRepository,
postgres.NewProductRepository,
)
// internal/providers/service.go
package providers
import (
"github.com/google/wire"
"myapp/internal/service"
)
// ServiceSet provides all business services
var ServiceSet = wire.NewSet(
service.NewUserService,
service.NewOrderService,
service.NewPaymentService,
service.NewNotificationService,
)
// wire.go
func InitializeServer(configPath string) (*api.Server, error) {
wire.Build(
config.Load,
providers.DatabaseSet,
providers.ServiceSet,
providers.HandlerSet,
api.NewServer,
)
return nil, nil
}
Wire with Interfaces
Wire automatically binds implementations to interfaces:
// internal/service/interfaces.go
type UserRepository interface {
GetByID(ctx context.Context, id string) (*User, error)
Create(ctx context.Context, user *User) error
}
// internal/storage/postgres/user_repository.go
type userRepository struct {
db *sql.DB
}
// Wire knows this implements UserRepository
func NewUserRepository(db *sql.DB) UserRepository {
return &userRepository{db: db}
}
// internal/service/user_service.go
type UserService struct {
repo UserRepository // Wire injects the implementation
}
func NewUserService(repo UserRepository) *UserService {
return &UserService{repo: repo}
}
Wire Best Practices
1. Provider Functions
// Good: Simple provider
func NewUserService(repo UserRepository, logger *slog.Logger) *UserService {
return &UserService{
repo: repo,
logger: logger,
}
}
// Good: Provider with error
func NewDatabase(cfg DatabaseConfig) (*sql.DB, error) {
db, err := sql.Open(cfg.Driver, cfg.DSN)
if err != nil {
return nil, err
}
return db, nil
}
// Good: Provider with cleanup
func NewRedisClient(cfg RedisConfig) (*redis.Client, func(), error) {
client := redis.NewClient(&redis.Options{
Addr: cfg.Addr,
})
cleanup := func() {
client.Close()
}
return client, cleanup, nil
}
2. Struct Providers
// For simple configs, provide struct fields directly
type Config struct {
Database DatabaseConfig
Redis RedisConfig
Server ServerConfig
}
var ConfigSet = wire.NewSet(
LoadConfig,
wire.FieldsOf(new(Config), "Database", "Redis", "Server"),
)
3. Interface Binding
// Explicit binding when needed
var RepositorySet = wire.NewSet(
NewUserRepository,
wire.Bind(new(UserRepository), new(*userRepository)),
)
Testing with Wire
// wire_test.go
//go:build wireinject
func initTestServer(t *testing.T) *Server {
wire.Build(
newTestConfig,
newTestDB,
providers.RepositorySet,
providers.ServiceSet,
newTestServer,
)
return nil
}
// server_test.go
func TestServer(t *testing.T) {
server := initTestServer(t)
// Wire generates test wiring
}
Wire vs Manual DI Decision Matrix
| Factor | Manual DI | Wire |
|--------|-----------|------|
| Setup Complexity | Simple | Requires setup |
| Debugging | Trivial | Check generated code |
| Refactoring | Manual updates | Regenerate |
| Type Safety | Compile-time | Compile-time |
| Boilerplate | Grows with app | Minimal |
| Learning Curve | None | Moderate |
| Team Size | Any | Larger teams |
| Project Size | Small-Medium | Medium-Large |
Migration Strategy
Common Wire Pitfalls
// PITFALL: Circular dependencies
// Wire will detect and report these at compile time
// PITFALL: Missing providers
// Wire generates clear error messages
// PITFALL: Multiple providers for same type
// Use provider sets to organize
// PITFALL: Forgetting to regenerate
// Add to your build process:
//go:generate wire
Summary
Wire is powerful for large applications but adds complexity. Start with manual DI and consider Wire when boilerplate becomes painful. The manual approach shown earlier in this guide remains the recommended starting point for most Go applications.
Processing Patterns
Stream Processing
// internal/processing/stream.go
package processing
// StreamProcessor handles large data streams efficiently
type StreamProcessor struct {
bufferSize int
workers int
logger Logger
}
// ProcessStream handles data without loading all into memory
func (p *StreamProcessor) ProcessStream(ctx context.Context, input io.Reader, output io.Writer) error {
// Create processing pipeline
chunks := make(chan []byte, p.bufferSize)
results := make(chan ProcessedData, p.bufferSize)
errors := make(chan error, p.workers)
// Start workers
var wg sync.WaitGroup
for i := 0; i < p.workers; i++ {
wg.Add(1)
go func(workerID int) {
defer wg.Done()
p.worker(ctx, workerID, chunks, results, errors)
}(i)
}
// Start result writer
wg.Add(1)
go func() {
defer wg.Done()
p.writeResults(ctx, output, results, errors)
}()
// Read and chunk input
if err := p.readInput(ctx, input, chunks); err != nil {
return &ProcessingError{
Code: "INPUT_READ_FAILED",
Message: "failed to read processing input",
Operation: "batch_process",
Stage: "input_reading",
Cause: err,
}
}
close(chunks)
wg.Wait()
return nil
}
Pipeline Pattern
// Stage represents a pipeline stage
type Stage func(ctx context.Context, in <-chan interface{}) <-chan interface{}
// Pipeline chains multiple processing stages
type Pipeline struct {
stages []Stage
}
// NewPipeline creates an empty pipeline
func NewPipeline() *Pipeline {
return &Pipeline{}
}
// AddStage adds a processing stage
func (p *Pipeline) AddStage(stage Stage) *Pipeline {
p.stages = append(p.stages, stage)
return p
}
// Run executes the pipeline
func (p *Pipeline) Run(ctx context.Context, input <-chan interface{}) <-chan interface{} {
current := input
for _, stage := range p.stages {
current = stage(ctx, current)
}
return current
}
// Example stages
func ValidationStage(validator Validator) Stage {
return func(ctx context.Context, in <-chan interface{}) <-chan interface{} {
out := make(chan interface{}, 100)
go func() {
defer close(out)
for item := range in {
if err := validator.Validate(item); err != nil {
logger.Warn("validation failed", slog.Any("error", err))
continue
}
select {
case out <- item:
case <-ctx.Done():
return
}
}
}()
return out
}
}
// Usage
pipeline := NewPipeline().
AddStage(ValidationStage(validator)).
AddStage(TransformStage(transformer)).
AddStage(BatchStage(100, 5*time.Second))
output := pipeline.Run(ctx, input)
Progress Tracking
// ProgressTracker tracks processing progress
type ProgressTracker struct {
mu sync.RWMutex
total int64
processed int64
failed int64
startTime time.Time
updateCallbacks []ProgressCallback
}
type ProgressUpdate struct {
Total int64
Processed int64
Failed int64
PercentComplete float64
Rate float64 // items per second
ETA time.Duration
}
func (p *ProgressTracker) IncrementProcessed(delta int64) {
p.mu.Lock()
defer p.mu.Unlock()
p.processed += delta
p.maybeNotify()
}
func (p *ProgressTracker) getUpdate() ProgressUpdate {
elapsed := time.Since(p.startTime)
rate := float64(p.processed) / elapsed.Seconds()
remaining := p.total - p.processed - p.failed
eta := time.Duration(0)
if rate > 0 {
eta = time.Duration(float64(remaining) / rate * float64(time.Second))
}
return ProgressUpdate{
Total: p.total,
Processed: p.processed,
Failed: p.failed,
PercentComplete: float64(p.processed+p.failed) / float64(p.total) * 100,
Rate: rate,
ETA: eta,
}
}
Generic Service Patterns (Go 1.18+)
Modern Go applications can leverage generics to reduce boilerplate and create more reusable service components. However, generics should be used judiciously - they're most beneficial for data structures, collections, and service layers where type safety and reusability provide clear value.

### When to Use Generics

✅ Generics add clear value for:
- Repository and service patterns with CRUD operations
- Data structures and collections
- Type-safe pipeline processing
- Reusable validation and transformation logic

❓ Consider alternatives for:
- Simple, single-purpose helper functions
- Functions that don't benefit from type parameterization
- Cases where interface{} with type assertions might be clearer

Here's how to apply generics effectively in service architecture.
Generic Repository Interface
// internal/service/generic.go
package service
import (
"context"
)
// Entity represents any domain entity with an ID
type Entity interface {
GetID() string
}
// Repository provides CRUD operations for any entity type
type Repository[T Entity] interface {
Create(ctx context.Context, entity T) error
GetByID(ctx context.Context, id string) (T, error)
Update(ctx context.Context, entity T) error
Delete(ctx context.Context, id string) error
List(ctx context.Context, filter map[string]interface{}) ([]T, error)
}
// CacheRepository adds caching to any repository
type CacheRepository[T Entity] interface {
Repository[T]
InvalidateCache(ctx context.Context, id string) error
WarmCache(ctx context.Context, ids []string) error
}
Cached Repository Decorator (Recommended Pattern)
// internal/repository/cached.go
package repository
import (
"context"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"fmt"
"sort"
"time"
"github.com/yourapp/internal/service"
)
// CachedRepository implements cache-aside pattern for any repository
type CachedRepository[T service.Entity] struct {
repo service.Repository[T]
cache Cache
ttl time.Duration
prefix string
}
// Cache interface for dependency injection
type Cache interface {
Get(ctx context.Context, key string) ([]byte, error)
Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
Delete(ctx context.Context, key string) error
DeletePattern(ctx context.Context, pattern string) error
}
func NewCachedRepository[T service.Entity](
repo service.Repository[T],
cache Cache,
ttl time.Duration,
) *CachedRepository[T] {
return &CachedRepository[T]{
repo: repo,
cache: cache,
ttl: ttl,
prefix: fmt.Sprintf("%T:", *new(T)),
}
}
func (r *CachedRepository[T]) Create(ctx context.Context, entity T) error {
// Create in database
if err := r.repo.Create(ctx, entity); err != nil {
return err
}
// Cache the created entity
key := r.cacheKey(entity.GetID())
if data, err := json.Marshal(entity); err == nil {
r.cache.Set(ctx, key, data, r.ttl) // Best effort
}
// PRODUCTION WARNING: Simple pattern invalidation has limitations
// Consider event-driven invalidation for complex systems
r.cache.DeletePattern(ctx, r.prefix+"list:*")
return nil
}
func (r *CachedRepository[T]) GetByID(ctx context.Context, id string) (T, error) {
var zero T
key := r.cacheKey(id)
// Try cache first
if data, err := r.cache.Get(ctx, key); err == nil {
var entity T
if json.Unmarshal(data, &entity) == nil {
return entity, nil
}
// Cache corruption - delete the bad entry
r.cache.Delete(ctx, key)
}
// Cache miss - get from repository
entity, err := r.repo.GetByID(ctx, id)
if err != nil {
return zero, err
}
// Store in cache for next time
if data, err := json.Marshal(entity); err == nil {
r.cache.Set(ctx, key, data, r.ttl) // Best effort
}
return entity, nil
}
func (r *CachedRepository[T]) Update(ctx context.Context, entity T) error {
// Update in database
if err := r.repo.Update(ctx, entity); err != nil {
return err
}
// Update cache
key := r.cacheKey(entity.GetID())
if data, err := json.Marshal(entity); err == nil {
r.cache.Set(ctx, key, data, r.ttl) // Best effort
}
// Invalidate list caches
r.cache.DeletePattern(ctx, r.prefix+"list:*")
return nil
}
func (r *CachedRepository[T]) Delete(ctx context.Context, id string) error {
// Delete from database
if err := r.repo.Delete(ctx, id); err != nil {
return err
}
// Remove from cache
key := r.cacheKey(id)
r.cache.Delete(ctx, key) // Best effort
// Invalidate list caches
r.cache.DeletePattern(ctx, r.prefix+"list:*")
return nil
}
func (r *CachedRepository[T]) List(ctx context.Context, filter map[string]interface{}) ([]T, error) {
// Create cache key from filter
listKey := r.listCacheKey(filter)
// Try cache first
if data, err := r.cache.Get(ctx, listKey); err == nil {
var entities []T
if json.Unmarshal(data, &entities) == nil {
return entities, nil
}
// Cache corruption - delete
r.cache.Delete(ctx, listKey)
}
// Cache miss - get from repository
entities, err := r.repo.List(ctx, filter)
if err != nil {
return nil, err
}
// Store in cache (shorter TTL for lists)
if data, err := json.Marshal(entities); err == nil {
r.cache.Set(ctx, listKey, data, r.ttl/2) // Lists expire faster
}
return entities, nil
}
// Cache management methods
func (r *CachedRepository[T]) InvalidateCache(ctx context.Context, id string) error {
key := r.cacheKey(id)
return r.cache.Delete(ctx, key)
}
func (r *CachedRepository[T]) WarmCache(ctx context.Context, ids []string) error {
for _, id := range ids {
// This will cache the entity if not already cached
_, err := r.GetByID(ctx, id)
if err != nil {
return fmt.Errorf("failed to warm cache for %s: %w", id, err)
}
}
return nil
}
// Private helper methods
func (r *CachedRepository[T]) cacheKey(id string) string {
return fmt.Sprintf("%s%s", r.prefix, id)
}
func (r *CachedRepository[T]) listCacheKey(filter map[string]interface{}) string {
// Create deterministic key from filter
if len(filter) == 0 {
return r.prefix + "list:all"
}
// IMPORTANT: This implementation works for primitive values and simple structures,
// but may produce non-deterministic keys if filter values contain nested maps.
// For production use with complex nested structures, consider restricting
// filter values to primitives or implement a canonical JSON serializer.
// Create deterministic hash by sorting keys and hashing consistently
keys := make([]string, 0, len(filter))
for k := range filter {
keys = append(keys, k)
}
sort.Strings(keys)
var toHash []byte
for _, k := range keys {
// Marshal key and value to handle different types consistently
keyData, _ := json.Marshal(k)
valData, _ := json.Marshal(filter[k])
toHash = append(toHash, keyData...)
toHash = append(toHash, valData...)
}
hasher := sha256.New()
hasher.Write(toHash)
return fmt.Sprintf("%slist:%s", r.prefix, hex.EncodeToString(hasher.Sum(nil))[:16]) // Use first 16 chars for shorter keys
}
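The sorted-key hashing in `listCacheKey` is what makes the key deterministic despite Go's randomized map iteration order. A self-contained check of that property (with a standalone `listKey` helper mirroring the method above):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"sort"
)

// listKey hashes a filter map with sorted keys so the same filter always
// yields the same cache key regardless of map iteration order.
func listKey(prefix string, filter map[string]interface{}) string {
	keys := make([]string, 0, len(filter))
	for k := range filter {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var toHash []byte
	for _, k := range keys {
		keyData, _ := json.Marshal(k)
		valData, _ := json.Marshal(filter[k])
		toHash = append(toHash, keyData...)
		toHash = append(toHash, valData...)
	}
	sum := sha256.Sum256(toHash)
	return fmt.Sprintf("%slist:%s", prefix, hex.EncodeToString(sum[:])[:16])
}

func main() {
	a := listKey("user:", map[string]interface{}{"status": "active", "role": "admin"})
	b := listKey("user:", map[string]interface{}{"role": "admin", "status": "active"})
	fmt.Println(a == b) // true: map construction order doesn't matter
}
```

Without the sort, two logically identical filters could hash to different keys and silently defeat the list cache.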
Usage Example
// Wire up cached repository
func setupRepositories(db *sql.DB, cache Cache) *UserService {
// Base repository (talks to database)
baseRepo := repository.NewUserRepository(db)
// Wrap with caching decorator
cachedRepo := repository.NewCachedRepository[*User](
baseRepo,
cache,
15*time.Minute, // Cache TTL
)
// Service uses cached repository transparently
return service.NewUserService(cachedRepo, logger)
}
// Service layer doesn't know about caching
type UserService struct {
repo service.Repository[*User] // Could be cached or not
}
func (s *UserService) GetUser(ctx context.Context, id string) (*User, error) {
// This might hit cache or database - service doesn't care
return s.repo.GetByID(ctx, id)
}
Cache Invalidation Complexity
PRODUCTION REALITY: Cache invalidation is one of the hardest problems in distributed systems. The simple DeletePattern
approach shown above has significant limitations:
Limitations of Pattern-Based Invalidation
// ❌ SIMPLE BUT FLAWED: Pattern deletion doesn't solve complex dependencies
r.cache.DeletePattern(ctx, r.prefix+"list:*")
// Problems:
// 1. What if list depends on other entities too?
// 2. What about derived caches (computed values)?
// 3. Cross-service cache dependencies?
// 4. Race conditions during concurrent updates?
Better Production Strategies
// ✅ EVENT-DRIVEN INVALIDATION: React to domain events
type CacheInvalidationHandler struct {
cache Cache
}
func (h *CacheInvalidationHandler) HandleUserUpdated(ctx context.Context, event UserUpdatedEvent) error {
// Specific invalidation based on what actually changed
userKey := fmt.Sprintf("user:%s", event.UserID)
h.cache.Delete(ctx, userKey)
// Only invalidate relevant list caches based on event details
if event.FieldsChanged.Contains("status") {
h.cache.Delete(ctx, "user:list:active")
h.cache.Delete(ctx, "user:list:inactive")
}
return nil
}
// ✅ WRITE-THROUGH CACHE: Update cache synchronously with database
func (r *CachedRepository[T]) Update(ctx context.Context, entity T) error {
// Start transaction
tx, err := r.db.Begin(ctx)
if err != nil {
return err
}
defer tx.Rollback()
// Update database
if err := r.repo.UpdateTx(ctx, tx, entity); err != nil {
return err
}
// Update cache within transaction semantics
key := r.cacheKey(entity.GetID())
if data, err := json.Marshal(entity); err == nil {
if err := r.cache.Set(ctx, key, data, r.ttl); err != nil {
return fmt.Errorf("cache write failed: %w", err)
}
}
return tx.Commit()
}
// ✅ TTL-BASED STRATEGY: Accept stale data for complexity reduction
type SmartTTLCache struct {
shortTTL time.Duration // 5 minutes for frequently changing data
longTTL time.Duration // 1 hour for stable data
}
func (c *SmartTTLCache) SetWithSmartTTL(key string, value []byte, dataType string) error {
ttl := c.shortTTL
if dataType == "reference_data" { // Rarely changes
ttl = c.longTTL
}
return c.Set(key, value, ttl)
}
Production Cache Invalidation Decision Tree
| Consistency Need | Strategy | Trade-offs |
|------------------|----------|------------|
| Strong | Write-through + Transactions | High latency, complex |
| Eventual | Event-driven invalidation | Temporary stale data |
| Weak | TTL-based expiration | Simple, predictable staleness |
| Manual | Explicit cache warming | Full control, operational overhead |
### Cache Implementation Examples
// Redis cache implementation
type RedisCache struct {
client *redis.Client
}
func (c *RedisCache) Get(ctx context.Context, key string) ([]byte, error) {
return c.client.Get(ctx, key).Bytes()
}
func (c *RedisCache) Set(ctx context.Context, key string, value []byte, ttl time.Duration) error {
return c.client.Set(ctx, key, value, ttl).Err()
}
func (c *RedisCache) Delete(ctx context.Context, key string) error {
return c.client.Del(ctx, key).Err()
}
func (c *RedisCache) DeletePattern(ctx context.Context, pattern string) error {
// NOTE: KEYS scans the whole keyspace and blocks Redis; prefer SCAN in production
keys, err := c.client.Keys(ctx, pattern).Result()
if err != nil || len(keys) == 0 {
return err
}
return c.client.Del(ctx, keys...).Err()
}
// In-memory cache for testing
type MemoryCache struct {
data map[string]cacheEntry
mu   sync.RWMutex
}
type cacheEntry struct {
value  []byte
expiry time.Time
}
func NewMemoryCache() *MemoryCache {
return &MemoryCache{
data: make(map[string]cacheEntry),
}
}
func (c *MemoryCache) Get(ctx context.Context, key string) ([]byte, error) {
c.mu.RLock()
defer c.mu.RUnlock()
entry, exists := c.data[key]
if !exists || time.Now().After(entry.expiry) {
return nil, errors.New("cache miss")
}
return entry.value, nil
}
func (c *MemoryCache) Set(ctx context.Context, key string, value []byte, ttl time.Duration) error {
c.mu.Lock()
defer c.mu.Unlock()
c.data[key] = cacheEntry{
value:  value,
expiry: time.Now().Add(ttl),
}
return nil
}
func (c *MemoryCache) Delete(ctx context.Context, key string) error {
c.mu.Lock()
defer c.mu.Unlock()
delete(c.data, key)
return nil
}
func (c *MemoryCache) DeletePattern(ctx context.Context, pattern string) error {
c.mu.Lock()
defer c.mu.Unlock()
// Simple prefix matching (in production, use proper pattern matching)
prefix := strings.TrimSuffix(pattern, "*")
for key := range c.data {
if strings.HasPrefix(key, prefix) {
delete(c.data, key)
}
}
return nil
}
**Key Benefits of this Pattern:**
- ✅ **Transparent**: Service layer doesn't know about caching
- ✅ **Composable**: Can stack multiple decorators (metrics, tracing, etc.)
- ✅ **Testable**: Easy to test with in-memory cache
- ✅ **Type-safe**: Full generic type safety
- ✅ **Cache-aside**: Handles cache failures gracefully
- ✅ **Invalidation**: Proper cache invalidation on mutations
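The composability bullet above can be made concrete. Here is a minimal, self-contained sketch of stacking two decorators around a service interface; the `ItemStore` interface and all names in it are hypothetical, chosen only for illustration:

```go
package main

import "fmt"

// ItemStore is a hypothetical service interface used to illustrate
// decorator stacking; it is not part of this guide's API.
type ItemStore interface {
	Get(id string) (string, error)
}

type baseStore struct{}

func (baseStore) Get(id string) (string, error) { return "value-" + id, nil }

// loggingStore and cachingStore each wrap another ItemStore,
// so they can be stacked in any order.
type loggingStore struct {
	next ItemStore
	log  *[]string
}

func (s loggingStore) Get(id string) (string, error) {
	*s.log = append(*s.log, "get "+id)
	return s.next.Get(id)
}

type cachingStore struct {
	next  ItemStore
	cache map[string]string
}

func (s cachingStore) Get(id string) (string, error) {
	if v, ok := s.cache[id]; ok {
		return v, nil // cache hit, inner store never called
	}
	v, err := s.next.Get(id)
	if err == nil {
		s.cache[id] = v
	}
	return v, err
}

func main() {
	var log []string
	// Stack decorators: logging -> caching -> base.
	store := loggingStore{next: cachingStore{next: baseStore{}, cache: map[string]string{}}, log: &log}
	v, _ := store.Get("42")
	v2, _ := store.Get("42") // second call is served from the cache
	fmt.Println(v, v2, len(log))
}
```

Because each decorator only depends on the interface, swapping the in-memory cache for Redis, or inserting a metrics decorator between the two, changes no service code.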
### Generic CRUD Service
```go
// CRUDService provides basic CRUD operations for any entity
type CRUDService[T Entity] struct {
	repo   Repository[T]
	cache  Cache
	logger Logger
}

func NewCRUDService[T Entity](repo Repository[T], cache Cache, logger Logger) *CRUDService[T] {
	return &CRUDService[T]{repo: repo, cache: cache, logger: logger}
}

func (s *CRUDService[T]) Create(ctx context.Context, entity T) error {
	if err := s.validateEntity(entity); err != nil {
		return NewValidationError("entity validation failed", err)
	}

	if err := s.repo.Create(ctx, entity); err != nil {
		s.logger.Error("failed to create entity", "entity_id", entity.GetID(), "error", err)
		return NewServiceError("create failed", err)
	}

	s.logger.Info("entity created", "entity_id", entity.GetID(), "type", fmt.Sprintf("%T", entity))
	return nil
}

func (s *CRUDService[T]) GetByID(ctx context.Context, id string) (T, error) {
	var zero T

	entity, err := s.repo.GetByID(ctx, id)
	if err != nil {
		s.logger.Error("failed to get entity", "entity_id", id, "error", err)
		return zero, NewServiceError("get failed", err)
	}

	return entity, nil
}

func (s *CRUDService[T]) Update(ctx context.Context, entity T) error {
	if err := s.validateEntity(entity); err != nil {
		return NewValidationError("entity validation failed", err)
	}

	if err := s.repo.Update(ctx, entity); err != nil {
		s.logger.Error("failed to update entity", "entity_id", entity.GetID(), "error", err)
		return NewServiceError("update failed", err)
	}

	s.logger.Info("entity updated", "entity_id", entity.GetID())
	return nil
}

func (s *CRUDService[T]) validateEntity(entity T) error {
	if entity.GetID() == "" {
		return errors.New("entity ID is required")
	}
	return nil
}
```
### Generic Validator Pattern
```go
// Validator validates any type T
type Validator[T any] interface {
	Validate(ctx context.Context, value T) error
}

// ValidationRule represents a single validation rule
type ValidationRule[T any] struct {
	Name string
	Rule func(T) error
}

// CompositeValidator combines multiple validation rules
type CompositeValidator[T any] struct {
	rules []ValidationRule[T]
}

func NewCompositeValidator[T any](rules ...ValidationRule[T]) *CompositeValidator[T] {
	return &CompositeValidator[T]{rules: rules}
}

func (v *CompositeValidator[T]) Validate(ctx context.Context, value T) error {
	var errs []error

	for _, rule := range v.rules {
		if err := rule.Rule(value); err != nil {
			errs = append(errs, fmt.Errorf("%s: %w", rule.Name, err))
		}
	}

	if len(errs) > 0 {
		return NewValidationError("validation failed", errors.Join(errs...))
	}

	return nil
}

// Example usage with the User entity
func NewUserValidator() *CompositeValidator[*domain.User] {
	return NewCompositeValidator(
		ValidationRule[*domain.User]{
			Name: "email_required",
			Rule: func(u *domain.User) error {
				if u.Email == "" {
					return errors.New("email is required")
				}
				return nil
			},
		},
		ValidationRule[*domain.User]{
			Name: "email_format",
			Rule: func(u *domain.User) error {
				if !isValidEmail(u.Email) {
					return errors.New("invalid email format")
				}
				return nil
			},
		},
	)
}
```
### Generic Event Bus
```go
// Event represents any event that can be published
type Event interface {
	GetType() string
	GetTimestamp() time.Time
}

// EventHandler handles events of type T
type EventHandler[T Event] interface {
	Handle(ctx context.Context, event T) error
}

// EventBus manages events of any type
type EventBus[T Event] struct {
	handlers map[string][]EventHandler[T]
	logger   Logger
	mu       sync.RWMutex
}

func NewEventBus[T Event](logger Logger) *EventBus[T] {
	return &EventBus[T]{
		handlers: make(map[string][]EventHandler[T]),
		logger:   logger,
	}
}

func (eb *EventBus[T]) Subscribe(eventType string, handler EventHandler[T]) {
	eb.mu.Lock()
	defer eb.mu.Unlock()

	eb.handlers[eventType] = append(eb.handlers[eventType], handler)
}

func (eb *EventBus[T]) Publish(ctx context.Context, event T) error {
	eb.mu.RLock()
	handlers := eb.handlers[event.GetType()]
	eb.mu.RUnlock()

	var errs []error
	for _, handler := range handlers {
		if err := handler.Handle(ctx, event); err != nil {
			eb.logger.Error("event handler failed", "event_type", event.GetType(), "error", err)
			errs = append(errs, err)
		}
	}

	if len(errs) > 0 {
		return errors.Join(errs...)
	}

	return nil
}
```
### Generic Observer Pattern
```go
// Observer observes changes to entities of type T
type Observer[T any] interface {
	OnChanged(ctx context.Context, old, new T) error
}

// Observable manages observers for entity changes
type Observable[T any] struct {
	observers []Observer[T]
	mu        sync.RWMutex
}

func NewObservable[T any]() *Observable[T] {
	return &Observable[T]{}
}

func (o *Observable[T]) AddObserver(observer Observer[T]) {
	o.mu.Lock()
	defer o.mu.Unlock()
	o.observers = append(o.observers, observer)
}

func (o *Observable[T]) NotifyObservers(ctx context.Context, old, new T) error {
	o.mu.RLock()
	observers := make([]Observer[T], len(o.observers))
	copy(observers, o.observers)
	o.mu.RUnlock()

	var errs []error
	for _, observer := range observers {
		if err := observer.OnChanged(ctx, old, new); err != nil {
			errs = append(errs, err)
		}
	}

	return errors.Join(errs...) // nil when errs is empty
}

// ObservableService combines CRUD with observation
type ObservableService[T Entity] struct {
	*CRUDService[T]
	*Observable[T]
}

func NewObservableService[T Entity](repo Repository[T], cache Cache, logger Logger) *ObservableService[T] {
	return &ObservableService[T]{
		CRUDService: NewCRUDService(repo, cache, logger),
		Observable:  NewObservable[T](),
	}
}

func (s *ObservableService[T]) Update(ctx context.Context, entity T) error {
	// Get the old version
	old, err := s.GetByID(ctx, entity.GetID())
	if err != nil {
		return err
	}

	// Update the entity
	if err := s.CRUDService.Update(ctx, entity); err != nil {
		return err
	}

	// Notify observers
	return s.NotifyObservers(ctx, old, entity)
}
```
### Generic Factory Pattern
```go
// Factory creates instances of type T from configuration
type Factory[T any, Config any] interface {
	Create(config Config) (T, error)
	Validate(config Config) error
}

// ServiceFactory creates services that share common dependencies.
// The factory itself needs no type parameter; each Create method
// names the concrete entity type it produces.
type ServiceFactory struct {
	db     *sql.DB
	cache  Cache
	logger Logger
}

func NewServiceFactory(db *sql.DB, cache Cache, logger Logger) *ServiceFactory {
	return &ServiceFactory{db: db, cache: cache, logger: logger}
}

// Usage example for different service types
func (f *ServiceFactory) CreateUserService() (*CRUDService[*domain.User], error) {
	repo := postgres.NewUserRepository(f.db, f.logger)
	return NewCRUDService[*domain.User](repo, f.cache, f.logger), nil
}

func (f *ServiceFactory) CreateOrderService() (*CRUDService[*domain.Order], error) {
	repo := postgres.NewOrderRepository(f.db, f.logger)
	return NewCRUDService[*domain.Order](repo, f.cache, f.logger), nil
}
```
### Best Practices for Generic Services
1. **Use Type Constraints Wisely**
```go
// Good: specific constraint
type Entity interface {
	GetID() string
}

// Avoid: too generic to be useful
type Entity interface {
	any
}
```
2. **Prefer Composition Over Generic Inheritance**
```go
// Good: composition
type UserService struct {
	CRUDService[*domain.User]
	emailService EmailSender
}

// Less flexible: a generic base only
type UserService CRUDService[*domain.User]
```
3. **Generic Functions Over Generic Types When Appropriate**
```go
// Go methods cannot declare their own type parameters, so when only
// one operation needs to be generic, use a package-level generic
// function instead of making the whole type generic:
func Process[T Entity](s *Service, items []T) error {
	// process any entity type
	return nil
}
```
4. **Document Generic Constraints**
```go
// Repository provides CRUD operations for entities.
// T must implement the Entity interface with a GetID() method.
type Repository[T Entity] interface {
	Create(ctx context.Context, entity T) error
}
```
---
## Common Patterns
### Strategy Pattern
```go
// ProcessingStrategy defines how items are processed
type ProcessingStrategy interface {
	Name() string
	CanProcess(item *domain.Item) bool
	Process(ctx context.Context, item *domain.Item) (*domain.Result, error)
}

// ProcessingService with pluggable strategies
type ProcessingService struct {
	strategies map[string]ProcessingStrategy
	repo       ItemRepository
	logger     Logger
}

func (s *ProcessingService) RegisterStrategy(strategy ProcessingStrategy) {
	s.strategies[strategy.Name()] = strategy
}

func (s *ProcessingService) ProcessItem(ctx context.Context, itemID string) (*domain.Result, error) {
	item, err := s.repo.GetItem(ctx, itemID)
	if err != nil {
		return nil, err
	}

	// Find a matching strategy
	for _, strategy := range s.strategies {
		if strategy.CanProcess(item) {
			return strategy.Process(ctx, item)
		}
	}

	return nil, ErrNoStrategyAvailable
}
```
### Builder Pattern
```go
// RequestBuilder builds complex requests
type RequestBuilder struct {
	method  string
	url     string
	headers map[string]string
	params  map[string]string
	body    interface{}
}

func NewRequest() *RequestBuilder {
	return &RequestBuilder{
		headers: make(map[string]string),
		params:  make(map[string]string),
	}
}

func (b *RequestBuilder) Method(method string) *RequestBuilder {
	b.method = method
	return b
}

func (b *RequestBuilder) URL(url string) *RequestBuilder {
	b.url = url
	return b
}

func (b *RequestBuilder) Header(key, value string) *RequestBuilder {
	b.headers[key] = value
	return b
}

func (b *RequestBuilder) Build() (*http.Request, error) {
	var body io.Reader
	if b.body != nil {
		data, err := json.Marshal(b.body)
		if err != nil {
			return nil, err
		}
		body = bytes.NewReader(data)
	}

	req, err := http.NewRequest(b.method, b.url, body)
	if err != nil {
		return nil, err
	}
	for k, v := range b.headers {
		req.Header.Set(k, v)
	}

	q := req.URL.Query()
	for k, v := range b.params {
		q.Set(k, v)
	}
	req.URL.RawQuery = q.Encode()

	return req, nil
}

// Usage
req, err := NewRequest().
	Method("POST").
	URL("https://api.example.com/users").
	Header("Authorization", "Bearer token").
	Header("Content-Type", "application/json").
	Build()
```
### Token-Aware Chunking for LLMs
```go
// TokenAwareChunker splits text for LLM processing
type TokenAwareChunker struct {
	tokenizer Tokenizer
	overlap   int
}

func (c *TokenAwareChunker) Chunk(text string, maxTokens int) []Chunk {
	effectiveMax := maxTokens - c.overlap

	var chunks []Chunk
	sentences := c.splitIntoSentences(text)

	currentChunk := strings.Builder{}
	currentTokens := 0

	for _, sentence := range sentences {
		sentenceTokens := c.tokenizer.CountTokens(sentence)

		// Check if adding the sentence would exceed the limit
		if currentTokens+sentenceTokens > effectiveMax && currentChunk.Len() > 0 {
			// Save the current chunk
			chunks = append(chunks, Chunk{
				Content:    currentChunk.String(),
				TokenCount: currentTokens,
			})

			// Start a new chunk with overlap
			currentChunk.Reset()
			currentTokens = 0
		}

		currentChunk.WriteString(sentence)
		currentChunk.WriteString(" ")
		currentTokens += sentenceTokens
	}

	// Add the final chunk
	if currentChunk.Len() > 0 {
		chunks = append(chunks, Chunk{
			Content:    currentChunk.String(),
			TokenCount: currentTokens,
		})
	}

	return chunks
}
```
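The example above depends on a `Tokenizer` and sentence splitter defined elsewhere. A self-contained sketch of the same accumulate-and-flush idea, using a naive whitespace tokenizer as a stand-in (the tokenizer and `chunkSentences` name are assumptions for illustration, not the guide's API):

```go
package main

import (
	"fmt"
	"strings"
)

// Chunk mirrors the struct used above.
type Chunk struct {
	Content    string
	TokenCount int
}

// countTokens is a naive stand-in: one token per whitespace-separated word.
func countTokens(s string) int { return len(strings.Fields(s)) }

// chunkSentences accumulates sentences until the token budget would be
// exceeded, then flushes the current chunk and starts a new one.
func chunkSentences(sentences []string, maxTokens int) []Chunk {
	var chunks []Chunk
	var b strings.Builder
	tokens := 0
	for _, s := range sentences {
		n := countTokens(s)
		if tokens+n > maxTokens && b.Len() > 0 {
			chunks = append(chunks, Chunk{Content: strings.TrimSpace(b.String()), TokenCount: tokens})
			b.Reset()
			tokens = 0
		}
		b.WriteString(s)
		b.WriteString(" ")
		tokens += n
	}
	if b.Len() > 0 {
		chunks = append(chunks, Chunk{Content: strings.TrimSpace(b.String()), TokenCount: tokens})
	}
	return chunks
}

func main() {
	sentences := []string{"one two three.", "four five.", "six seven eight nine."}
	for _, c := range chunkSentences(sentences, 5) {
		fmt.Println(c.TokenCount, c.Content)
	}
}
```

A real implementation would use the model's actual tokenizer (token counts differ sharply from word counts) and carry the configured overlap forward into each new chunk.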
---
## Quick Reference Checklist
### Service Layer Design Checklist
- [ ] Define interfaces in the service package (consumer)
- [ ] Accept interfaces, return concrete types
- [ ] Create domain-specific error types
- [ ] Use constructor dependency injection
- [ ] **CRITICAL**: Inject Logger interface via constructor - never create loggers inside services
- [ ] No business logic in handlers or repositories
- [ ] Keep services focused on single responsibility
- [ ] Mock boundaries for testing
- [ ] Use context for cancellation only
- [ ] Implement graceful shutdown hooks
- [ ] Document service contracts clearly
### Interface Design Checklist
- [ ] Interfaces defined by consumer, not provider
- [ ] Small, focused interfaces (3-5 methods max)
- [ ] No generic "Service" interfaces
- [ ] Interface segregation over large contracts
- [ ] Mock interfaces for testing
- [ ] Version interfaces when breaking changes needed
- [ ] Document expected behavior in comments
- [ ] Use interface{} sparingly
- [ ] Prefer multiple small interfaces
- [ ] Name interfaces by what they do (Reader, not IReader)
### Dependency Injection Checklist
- [ ] Constructor injection only (no setter injection)
- [ ] Required dependencies in NewService()
- [ ] Optional dependencies via functional options
- [ ] No global state or singletons
- [ ] Validate dependencies in constructor
- [ ] Return error from constructor if invalid
- [ ] Wire dependencies in main() or setup
- [ ] Use interfaces for all external dependencies
- [ ] Keep dependency count low (max 5-7)
- [ ] Document each dependency's purpose
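The "optional dependencies via functional options" item above is the one pattern in this list that benefits from a concrete shape. A minimal sketch, where `Service` and its fields are illustrative names rather than a real API:

```go
package main

import "fmt"

// Service has one required dependency (name) and defaulted optionals.
type Service struct {
	name    string // required, passed to the constructor
	retries int    // optional, has a sensible default
	verbose bool   // optional
}

// Option mutates the service during construction.
type Option func(*Service)

func WithRetries(n int) Option { return func(s *Service) { s.retries = n } }
func WithVerbose() Option      { return func(s *Service) { s.verbose = true } }

// NewService takes required dependencies as plain arguments and
// optional ones as variadic options, validating before returning.
func NewService(name string, opts ...Option) (*Service, error) {
	if name == "" {
		return nil, fmt.Errorf("name is required")
	}
	s := &Service{name: name, retries: 3} // defaults first
	for _, opt := range opts {
		opt(s)
	}
	return s, nil
}

func main() {
	s, err := NewService("indexer", WithRetries(5))
	fmt.Println(s.name, s.retries, err)
}
```

Callers that need no options pass none; callers that do get a self-documenting call site, and the constructor still validates everything in one place.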
### Processing Patterns Checklist
- [ ] Use channels for data pipelines
- [ ] Implement backpressure with buffered channels
- [ ] Handle context cancellation in workers
- [ ] Clean shutdown with sync.WaitGroup
- [ ] Error handling without stopping pipeline
- [ ] Monitor goroutine leaks
- [ ] Batch processing for efficiency
- [ ] Stream large data sets
- [ ] Rate limit external calls
- [ ] Add observability (metrics, traces)
---
## Related Sections
- **[Error Handling](go-practices-error-logging.md#error-handling-architecture)** - Domain error types and handling patterns
- **[Code Organization](go-practices-code-organization.md)** - Project structure and interface placement
- **[Testing](go-practices-testing.md#mocking-strategies)** - Mocking service dependencies
- **[Database Patterns](go-practices-database.md#repository-pattern)** - Repository interface design
- **[Common Patterns](go-practices-patterns.md#factory-pattern)** - Factory pattern for service creation
---
---
# 3. Code Organization & Project Structure
## Table of Contents
1. [Standard Go Project Layout](#standard-go-project-layout)
2. [Package Design Principles](#package-design-principles)
3. [Dependency Rules](#dependency-rules)
4. [Import Organization](#import-organization)
5. [Module Management](#module-management)
---
## Standard Go Project Layout
### Directory Structure
myapp/
├── cmd/ # Application entrypoints
│ └── myapp/ # Main application
│ ├── main.go # Entry point, minimal logic
│ └── commands/ # CLI command implementations
│ ├── root.go # Root command setup
│ ├── server.go # Server command
│ ├── migrate.go # Database migrations
│ └── worker.go # Background worker
│
├── internal/ # Private application code
│ ├── domain/ # Core business entities (no dependencies)
│ │ ├── user.go # User entity and methods
│ │ ├── document.go # Document entity
│ │ └── errors.go # Domain-specific errors
│ │
│ ├── service/ # Business logic layer
│ │ ├── interfaces.go # Service interfaces
│ │ ├── user_service.go # User business logic
│ │ ├── auth_service.go # Authentication logic
│ │ └── doc_service.go # Document processing
│ │
│ ├── storage/ # Data persistence layer
│ │ ├── postgres/ # PostgreSQL implementation
│ │ │ ├── user_repo.go
│ │ │ ├── migrations/ # SQL migrations
│ │ │ └── queries/ # SQL queries
│ │ ├── redis/ # Redis implementation
│ │ └── memory/ # In-memory for testing
│ │
│ ├── transport/ # API/RPC layer
│ │ ├── http/ # HTTP handlers
│ │ │ ├── server.go
│ │ │ ├── routes.go
│ │ │ ├── middleware/
│ │ │ └── handlers/
│ │ ├── grpc/ # gRPC services
│ │ └── graphql/ # GraphQL resolvers
│ │
│ ├── config/ # Configuration
│ │ ├── config.go # Config structures
│ │ └── loader.go # Config loading logic
│ │
│ ├── logging/ # Logging setup
│ ├── metrics/ # Metrics collection
│ └── errors/ # Error handling
│
├── pkg/ # Public packages (if any)
│ └── client/ # Client library for your service
│ ├── client.go
│ └── types.go
│
├── migrations/ # Database migrations
│ ├── 001_initial.up.sql
│ └── 001_initial.down.sql
│
├── scripts/ # Build and maintenance scripts
│ ├── build.sh
│ ├── test.sh
│ └── generate.sh
│
├── deployments/ # Deployment configurations
│ ├── docker/
│ │ └── Dockerfile
│ ├── kubernetes/
│ │ ├── deployment.yaml
│ │ └── service.yaml
│ └── terraform/
│
├── docs/ # Documentation
│ ├── api.md # API documentation
│ ├── architecture.md # Architecture decisions
│ └── development.md # Development guide
│
├── test/ # Integration tests
│ ├── integration/
│ └── e2e/
│
├── .github/ # GitHub specific
│ └── workflows/ # GitHub Actions
│
├── go.mod
├── go.sum
├── Makefile
├── README.md
└── .gitignore
### Key Directories Explained
#### `/cmd`
- Contains application entry points
- Each subdirectory is a main package
- Minimal code - just wiring and startup
```go
// cmd/myapp/main.go
package main

import (
	"context"
	"os"
	"os/signal"
	"syscall"

	"github.com/myorg/myapp/internal/cli"
)

func main() {
	ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer cancel()

	if err := cli.Execute(ctx); err != nil {
		os.Exit(1)
	}
}
```
#### `/internal`
- Private application code
- Not importable by other projects
- Contains all business logic
#### `/pkg`
- Public libraries (use sparingly)
- Only if you want to expose APIs
- Most apps don't need this
---
## Package Design Principles
### Domain Layer - Zero Dependencies
```go
// internal/domain/user.go
// The domain layer has NO dependencies on other internal packages.
package domain

import (
	"errors"
	"time"
	"unicode"
)

// User represents a user in our domain
type User struct {
	ID           string
	Email        string
	Name         string
	PasswordHash string
	Status       UserStatus
	CreatedAt    time.Time
	UpdatedAt    time.Time
}

type UserStatus string

const (
	UserStatusActive   UserStatus = "active"
	UserStatusPending  UserStatus = "pending"
	UserStatusDisabled UserStatus = "disabled"
)

// Business rules as methods
func (u *User) CanLogin() bool {
	return u.Status == UserStatusActive
}

func (u *User) SetPassword(plain string) error {
	if err := u.validatePassword(plain); err != nil {
		return err
	}

	hash, err := hashPassword(plain) // defined elsewhere in the package
	if err != nil {
		return err
	}

	u.PasswordHash = hash
	return nil
}

func (u *User) validatePassword(password string) error {
	if len(password) < 8 {
		return errors.New("password must be at least 8 characters")
	}

	var hasUpper, hasLower, hasDigit bool
	for _, r := range password {
		switch {
		case unicode.IsUpper(r):
			hasUpper = true
		case unicode.IsLower(r):
			hasLower = true
		case unicode.IsDigit(r):
			hasDigit = true
		}
	}

	if !hasUpper || !hasLower || !hasDigit {
		return errors.New("password must contain uppercase, lowercase, and digit")
	}

	return nil
}
```
### Service Layer - Defines Interfaces
```go
// internal/service/interfaces.go
// The service layer defines its own interfaces.
package service

import (
	"context"

	"myapp/internal/domain"
)

// UserRepository is defined by the service layer, not by storage
type UserRepository interface {
	Create(ctx context.Context, user *domain.User) error
	GetByID(ctx context.Context, id string) (*domain.User, error)
	GetByEmail(ctx context.Context, email string) (*domain.User, error)
	Update(ctx context.Context, user *domain.User) error
	Delete(ctx context.Context, id string) error
}

// EmailSender is the interface for notifications
type EmailSender interface {
	Send(ctx context.Context, to, subject, body string) error
}
```
### Storage Layer - Implements Service Interfaces
```go
// internal/storage/postgres/user_repo.go
// The storage layer implements service interfaces.
package postgres

import (
	"context"
	"database/sql"

	"myapp/internal/domain"
	"myapp/internal/service"
)

// Ensure we implement the interface
var _ service.UserRepository = (*UserRepository)(nil)

type UserRepository struct {
	db *sql.DB
}

func NewUserRepository(db *sql.DB) *UserRepository {
	return &UserRepository{db: db}
}

func (r *UserRepository) Create(ctx context.Context, user *domain.User) error {
	query := `
		INSERT INTO users (id, email, name, password_hash, status, created_at, updated_at)
		VALUES ($1, $2, $3, $4, $5, $6, $7)`

	_, err := r.db.ExecContext(ctx, query,
		user.ID, user.Email, user.Name, user.PasswordHash,
		user.Status, user.CreatedAt, user.UpdatedAt,
	)
	return err
}
```
### Transport Layer - Handles API Concerns
```go
// internal/transport/http/handlers/user_handler.go
// The transport layer handles HTTP concerns only.
package handlers

import (
	"encoding/json"
	"net/http"

	"myapp/internal/service"
)

type UserHandler struct {
	userService *service.UserService
}

func NewUserHandler(userService *service.UserService) *UserHandler {
	return &UserHandler{userService: userService}
}

func (h *UserHandler) CreateUser(w http.ResponseWriter, r *http.Request) {
	var req CreateUserRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		respondError(w, http.StatusBadRequest, "invalid request")
		return
	}

	// Convert the HTTP request to service input
	input := service.CreateUserInput{
		Email:    req.Email,
		Name:     req.Name,
		Password: req.Password,
	}

	// Call the service
	user, err := h.userService.CreateUser(r.Context(), input)
	if err != nil {
		handleServiceError(w, err)
		return
	}

	// Convert the domain object to an HTTP response
	resp := UserResponse{
		ID:        user.ID,
		Email:     user.Email,
		Name:      user.Name,
		Status:    string(user.Status),
		CreatedAt: user.CreatedAt,
	}

	respondJSON(w, http.StatusCreated, resp)
}
```
---
## Dependency Rules
### Dependency Flow
┌─────────────┐
│ main.go │ ─── Wires everything
└──────┬──────┘
│
┌──────▼──────┐
│ transport │ ─── HTTP/gRPC handlers
└──────┬──────┘
│
┌──────▼──────┐
│ service │ ─── Business logic
└──────┬──────┘
│
┌──────▼──────┐
│ domain │ ─── Zero dependencies
└─────────────┘
### Import Rules
```go
// internal/service/user_service.go
package service

import (
	// Standard library
	"context"
	"fmt"
	"time"

	// Internal - only domain-level imports allowed
	"myapp/internal/domain"
	"myapp/internal/errors"

	// External dependencies
	"github.com/google/uuid"
)

// NEVER import from these packages:
// - "myapp/internal/storage/*"   - use interfaces instead
// - "myapp/internal/transport/*" - the service doesn't know about transport
// - "myapp/internal/config"      - pass config values, not the entire config
```
### Package Cohesion
Each package should have a single, clear purpose:
- **config**: Only configuration loading/validation
- **storage**: Only data persistence
- **service**: Only business logic orchestration
- **transport**: Only API protocol handling
- **domain**: Only business entities and rules
---
## Import Organization
### Standard Import Order
```go
package service

import (
	// 1. Standard library packages
	"context"
	"encoding/json"
	"fmt"
	"time"

	// 2. External packages
	"github.com/google/uuid"
	"github.com/lib/pq"
	"golang.org/x/sync/errgroup"

	// 3. Internal packages
	"myapp/internal/domain"
	"myapp/internal/errors"
)
```
### Import Alias Guidelines
```go
import (
	// Use aliases for clarity when needed
	grpctransport "myapp/internal/transport/grpc"
	httptransport "myapp/internal/transport/http"

	// Avoid dot imports
	// . "myapp/internal/utils" // Never do this

	// Underscore only for side effects
	_ "github.com/lib/pq" // Register SQL driver
)
```
---
## Module Management
### go.mod Best Practices
```go
// go.mod
module github.com/myorg/myapp

go 1.21

require (
	// Direct dependencies only
	github.com/lib/pq v1.10.9
	github.com/spf13/cobra v1.7.0
	github.com/spf13/viper v1.16.0
	modernc.org/sqlite v1.27.0
)

// Indirect dependencies live in a separate require block managed by Go.

// For local development with multiple modules
replace github.com/myorg/common => ../common

// Exclude known-broken versions
exclude github.com/problem/package v1.0.0
```
### Dependency Hygiene
```bash
# Check for updates
go list -m -u all

# Analyze the dependency tree
go mod graph | sort | uniq

# Verify and clean
go mod verify
go mod tidy

# Check licenses
go-licenses check ./...
```
### Minimal Dependencies
Each dependency increases:
- Attack surface
- Build time
- Binary size
- Maintenance burden
Ask before adding:
1. Do we really need this?
2. Can we implement it simply ourselves?
3. Is it well-maintained?
4. What's the license?
---
## Application Wiring
### Dependency Injection at Main
```go
// internal/app/app.go
// Application wiring in one place.
package app

import (
	"context"
	"database/sql"
	"fmt"

	"myapp/internal/config"
	"myapp/internal/service"
	"myapp/internal/storage/postgres"
	"myapp/internal/transport/http"
)

// App holds all application components
type App struct {
	Config     *config.Config
	DB         *sql.DB
	HTTPServer *http.Server
	Services   *Services
}

// Services holds all business services
type Services struct {
	User *service.UserService
	Auth *service.AuthService
	Doc  *service.DocumentService
}

// New creates a fully wired application
func New(cfg *config.Config) (*App, error) {
	// Initialize the database
	db, err := initDB(cfg.Database)
	if err != nil {
		return nil, fmt.Errorf("init db: %w", err)
	}

	// Initialize repositories
	userRepo := postgres.NewUserRepository(db)
	docRepo := postgres.NewDocumentRepository(db)

	// Initialize external clients
	emailClient := initEmailClient(cfg.Email)

	// Initialize services
	services := &Services{
		User: service.NewUserService(userRepo, emailClient),
		Auth: service.NewAuthService(userRepo),
		Doc:  service.NewDocumentService(docRepo),
	}

	// Initialize the HTTP server
	httpServer := http.NewServer(cfg.HTTP, services)

	return &App{
		Config:     cfg,
		DB:         db,
		HTTPServer: httpServer,
		Services:   services,
	}, nil
}

// Run starts the application
func (a *App) Run() error {
	// Run migrations
	if err := a.migrate(); err != nil {
		return fmt.Errorf("migrate: %w", err)
	}

	// Start the HTTP server
	return a.HTTPServer.ListenAndServe()
}

// Shutdown gracefully shuts down the application
func (a *App) Shutdown(ctx context.Context) error {
	// Shut down the HTTP server
	if err := a.HTTPServer.Shutdown(ctx); err != nil {
		return fmt.Errorf("shutdown http: %w", err)
	}

	// Close the database
	if err := a.DB.Close(); err != nil {
		return fmt.Errorf("close db: %w", err)
	}

	return nil
}
```
### Common Mistakes to Avoid
1. **❌ Putting everything in main.go**
```go
// BAD: 1000-line main.go with all logic
```
2. **❌ Business logic in the cmd/ package**
```go
// BAD: cmd/process.go with database queries
```
3. **❌ Circular dependencies between packages**
```go
// BAD: service imports storage, storage imports service
```
4. **❌ Mixing concerns**
```go
// BAD: HTTP handler doing database queries directly
```
5. **❌ Using init() for setup**
```go
// BAD: func init() { connectDB() }
```
6. **❌ Global state/singletons**
```go
// BAD: var db *sql.DB at package level
```
## Quick Reference Checklist
### Project Structure & Layout
- [ ] Use standard Go project layout with `/cmd`, `/internal`, `/pkg`
- [ ] Place main applications in `/cmd/<appname>/`
- [ ] Keep all private code in `/internal/` to prevent external imports
- [ ] Organize code by functional boundaries, not technical layers
- [ ] Use meaningful package names that describe functionality
- [ ] Avoid deeply nested directory structures (max 3-4 levels)
### Package Design Principles
- [ ] Keep domain layer at `/internal/domain/` with zero dependencies
- [ ] Define repository interfaces in service layer (not storage layer)
- [ ] Implement storage interfaces in `/internal/storage/`
- [ ] Place transport concerns in `/internal/transport/`
- [ ] Group related functionality in cohesive packages
- [ ] Follow single responsibility principle for packages
### Dependency Management
- [ ] Enforce dependency flow: transport → service → domain
- [ ] Never import storage packages from service layer
- [ ] Define interfaces where they're used (consumer defines interface)
- [ ] Use dependency injection, avoid global variables
- [ ] Keep external dependencies to a minimum
- [ ] Document architectural decision records (ADRs)
### Import Organization & Standards
- [ ] Group imports: standard library, external, internal
- [ ] Use clear aliases for ambiguous package names
- [ ] Avoid dot imports (except for testing utilities)
- [ ] Use underscore imports only for side effects
- [ ] Keep import groups separated by blank lines
- [ ] Sort imports within each group alphabetically
### Module & Dependency Hygiene
- [ ] Keep go.mod clean with only direct dependencies
- [ ] Use `go mod tidy` regularly to clean unused dependencies
- [ ] Pin problematic dependencies or use replace directives
- [ ] Regularly audit dependencies for security issues
- [ ] Use minimal dependencies for core functionality
- [ ] Document dependency choices and alternatives considered
### Application Wiring & Initialization
- [ ] Wire all dependencies in main() or dedicated app package
- [ ] Use constructor functions for all components
- [ ] Avoid init() functions with side effects
- [ ] Implement graceful startup and shutdown sequences
- [ ] Handle initialization errors explicitly
- [ ] Use explicit dependency injection over service locators
### Interface Design Guidelines
- [ ] Keep interfaces small and focused (1-3 methods ideal)
- [ ] Define interfaces at the point of use (consumer package)
- [ ] Use composition to build larger interfaces from smaller ones
- [ ] Avoid god interfaces with too many methods
- [ ] Name interfaces by what they do, not what they are
- [ ] Use interface segregation principle
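The composition and segregation items above follow the standard library's `io.Reader`/`io.Writer`/`io.ReadWriter` shape. A self-contained sketch with illustrative names (`Fetcher`, `Storer`, `memStore` are assumptions, not part of any real API):

```go
package main

import "fmt"

// Two small, focused interfaces, each named for what it does.
type Fetcher interface {
	Fetch(id string) (string, error)
}

type Storer interface {
	Store(id, value string) error
}

// FetchStorer is built by composition, not declared as one god interface.
type FetchStorer interface {
	Fetcher
	Storer
}

// memStore satisfies FetchStorer implicitly; no declaration is needed.
type memStore map[string]string

func (m memStore) Fetch(id string) (string, error) {
	v, ok := m[id]
	if !ok {
		return "", fmt.Errorf("not found: %s", id)
	}
	return v, nil
}

func (m memStore) Store(id, value string) error {
	m[id] = value
	return nil
}

func main() {
	var fs FetchStorer = memStore{}
	fs.Store("a", "1")
	v, _ := fs.Fetch("a")
	fmt.Println(v)
}
```

Consumers that only read can depend on `Fetcher` alone, which keeps their mocks to a single method.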
### Code Organization Anti-Patterns
- [ ] Avoid putting business logic in main.go
- [ ] Don't create circular dependencies between packages
- [ ] Avoid mixing transport concerns with business logic
- [ ] Don't use global variables for application state
- [ ] Avoid package-level init() functions with side effects
- [ ] Don't create god objects or packages
### Testing Organization
- [ ] Place unit tests next to the code they test
- [ ] Put integration tests in separate `/test/` directory
- [ ] Use separate package for black-box testing (_test suffix)
- [ ] Create test helpers and builders in test packages
- [ ] Mock at service boundaries, not internal components
- [ ] Organize test fixtures and data logically
### Documentation & Maintainability
- [ ] Document package purpose and main types
- [ ] Include usage examples in package documentation
- [ ] Keep README.md updated with build and run instructions
- [ ] Document architectural decisions and trade-offs
- [ ] Use godoc conventions for public APIs
- [ ] Maintain CHANGELOG.md for releases
---
---
# 4. Testing & Quality Assurance
## Table of Contents
1. [Table-Driven Tests](#table-driven-tests)
2. [Test Organization](#test-organization)
3. [Mocking Strategies](#mocking-strategies)
4. [Integration Testing](#integration-testing)
5. [Testing Patterns](#testing-patterns)
6. [Fuzz Testing](#fuzz-testing)
7. [Test Coverage & Quality](#test-coverage--quality)
---
## Table-Driven Tests
Table-driven tests should be your default approach for testing functions with multiple scenarios. However, for functions with a single, trivial test case, a direct test can sometimes be more readable.
### When to Use Table-Driven Tests
✅ **Use table-driven tests for:**
- Functions with multiple input/output scenarios
- Testing edge cases and error conditions
- Validating different configurations or parameters
- When you have 3 or more test cases
✅ **Consider direct tests for:**
- Single, trivial test cases where the table adds boilerplate
- Complex setup that doesn't fit well in a table structure
- Tests that require significantly different mocking per case
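For contrast, here is what the direct form looks like for a single scenario. The `ValidateEmail` body below is a simplified stand-in so the snippet is self-contained; the real implementation tested later in this section may differ:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Simplified stand-in for ValidateEmail: requires a non-empty local
// part and domain around a single "@".
func ValidateEmail(email string) error {
	at := strings.Index(email, "@")
	if at <= 0 || at == len(email)-1 {
		return errors.New("invalid email format")
	}
	return nil
}

// In a test file, the direct (non-table) form is just:
//
//	func TestValidateEmail_Valid(t *testing.T) {
//		if err := ValidateEmail("user@example.com"); err != nil {
//			t.Fatal(err)
//		}
//	}
func main() {
	fmt.Println(ValidateEmail("user@example.com") == nil)
}
```

Once a second or third scenario appears, converting the direct test to a table is a small mechanical change.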
### The Standard Pattern
```go
func TestValidateEmail(t *testing.T) {
	tests := []struct {
		name    string
		email   string
		wantErr bool
	}{
		{
			name:    "valid email",
			email:   "user@example.com",
			wantErr: false,
		},
		{
			name:    "missing @",
			email:   "userexample.com",
			wantErr: true,
		},
		{
			name:    "missing domain",
			email:   "user@",
			wantErr: true,
		},
		{
			name:    "missing local part",
			email:   "@example.com",
			wantErr: true,
		},
		{
			name:    "valid with subdomain",
			email:   "user@mail.example.com",
			wantErr: false,
		},
		{
			name:    "valid with plus",
			email:   "user+tag@example.com",
			wantErr: false,
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			err := ValidateEmail(tt.email)
			if (err != nil) != tt.wantErr {
				t.Errorf("ValidateEmail() error = %v, wantErr %v", err, tt.wantErr)
			}
		})
	}
}
```
### Complex Scenarios with Setup
go
func TestUserService_UpdateUser(t *testing.T) {
tests := []struct {
name string
userID string
input UpdateUserInput
setup func(*MockUserRepository)
wantErr bool
checkErr func(t *testing.T, err error)
checkFn func(t testing.T, user domain.User)
}{
{
name: "successful update",
userID: "user-123",
input: UpdateUserInput{
Name: ptr("New Name"),
},
setup: func(m *MockUserRepository) {
m.GetByIDFunc = func(ctx context.Context, id string) (*domain.User, error) {
return &domain.User{
ID: id,
Name: "Old Name",
}, nil
}
m.UpdateFunc = func(ctx context.Context, user *domain.User) error {
return nil
}
},
wantErr: false,
checkFn: func(t testing.T, user domain.User) {
assert.Equal(t, "New Name", user.Name)
},
},
{
name: "user not found",
userID: "nonexistent",
input: UpdateUserInput{},
setup: func(m *MockUserRepository) {
m.GetByIDFunc = func(ctx context.Context, id string) (*domain.User, error) {
return nil, ErrNotFound
}
},
wantErr: true,
checkErr: func(t *testing.T, err error) {
assert.ErrorIs(t, err, ErrNotFound)
},
},
}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Arrange
			mockRepo := &MockUserRepository{}
			if tt.setup != nil {
				tt.setup(mockRepo)
			}

			svc := NewUserService(mockRepo, nil, nil, nil, logger)

			// Act
			user, err := svc.UpdateUser(context.Background(), tt.userID, tt.input)

			// Assert
			if tt.wantErr {
				assert.Error(t, err)
				if tt.checkErr != nil {
					tt.checkErr(t, err)
				}
			} else {
				assert.NoError(t, err)
				if tt.checkFn != nil {
					tt.checkFn(t, user)
				}
			}
		})
	}
}

// Helper for string pointers
func ptr(s string) *string { return &s }
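With Go 1.18+ generics, the string-only `ptr` helper generalizes to any type, avoiding one helper per pointer field in test inputs (a small sketch):

```go
package main

import "fmt"

// ptr returns a pointer to any value - handy for optional
// struct fields in table-driven test inputs.
func ptr[T any](v T) *T { return &v }

type UpdateInput struct {
	Name *string
	Age  *int
}

func main() {
	in := UpdateInput{Name: ptr("New Name"), Age: ptr(42)}
	fmt.Println(*in.Name, *in.Age) // New Name 42
}
```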
---
## Test Organization
### Test File Placement
go
// Unit tests next to code
internal/
├── service/
│ ├── user_service.go
│ └── user_service_test.go # Unit tests
│
├── storage/
│ ├── postgres/
│ │ ├── user_repo.go
│ │ └── user_repo_test.go # Unit tests
// Integration tests separate
test/
├── integration/
│   ├── user_integration_test.go
│   └── api_integration_test.go
└── e2e/
    └── user_flow_test.go
### Test Package Naming
go
// Same package for white-box testing (access to internals)
package service
func TestInternalHelper(t *testing.T) {
	// Can test unexported functions
}

// Separate package for black-box testing (API only)
package service_test

import (
	"testing"

	"myapp/internal/service"
)

func TestPublicAPI(t *testing.T) {
	// Only tests exported API
}
### Test Helpers
go
// internal/test/builders/user_builder.go
package builders
import (
	"time"

	"myapp/internal/domain"
)

// UserBuilder builds test users
type UserBuilder struct {
	user domain.User
}

func NewUser() *UserBuilder {
	return &UserBuilder{
		user: domain.User{
			ID:        "test-user-123",
			Email:     "test@example.com",
			Name:      "Test User",
			Status:    domain.UserStatusActive,
			CreatedAt: time.Now(),
			UpdatedAt: time.Now(),
		},
	}
}

// Pointer receivers so chained With* calls mutate the same builder
func (b *UserBuilder) WithID(id string) *UserBuilder {
	b.user.ID = id
	return b
}

func (b *UserBuilder) WithEmail(email string) *UserBuilder {
	b.user.Email = email
	return b
}

func (b *UserBuilder) WithStatus(status domain.UserStatus) *UserBuilder {
	b.user.Status = status
	return b
}

func (b *UserBuilder) Build() *domain.User {
	return &b.user
}

// Usage
func TestExample(t *testing.T) {
	user := builders.NewUser().
		WithEmail("custom@example.com").
		WithStatus(domain.UserStatusPending).
		Build()

	// Use user in test
}
---
## Mocking Strategies
### Hand-Written Mocks
go
// internal/service/mocks/user_repository.go
package mocks
import ( "context" "sync"
"myapp/internal/domain" )
type MockUserRepository struct {
	mu sync.RWMutex

	// Function fields for easy stubbing
	CreateFunc     func(ctx context.Context, user *domain.User) error
	GetByIDFunc    func(ctx context.Context, id string) (*domain.User, error)
	GetByEmailFunc func(ctx context.Context, email string) (*domain.User, error)
	UpdateFunc     func(ctx context.Context, user *domain.User) error
	DeleteFunc     func(ctx context.Context, id string) error

	// Call tracking
	calls []Call
}

type Call struct {
	Method string
	Args   []interface{}
}

func (m *MockUserRepository) Create(ctx context.Context, user *domain.User) error {
	m.recordCall("Create", ctx, user)
	if m.CreateFunc != nil {
		return m.CreateFunc(ctx, user)
	}
	return nil
}

func (m *MockUserRepository) GetByID(ctx context.Context, id string) (*domain.User, error) {
	m.recordCall("GetByID", ctx, id)
	if m.GetByIDFunc != nil {
		return m.GetByIDFunc(ctx, id)
	}
	return nil, nil
}

func (m *MockUserRepository) recordCall(method string, args ...interface{}) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.calls = append(m.calls, Call{Method: method, Args: args})
}

func (m *MockUserRepository) CallsTo(method string) int {
	m.mu.RLock()
	defer m.mu.RUnlock()

	count := 0
	for _, call := range m.calls {
		if call.Method == method {
			count++
		}
	}
	return count
}
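A cheap safeguard for hand-written mocks is a compile-time assertion that the mock still satisfies the interface, so drift fails the build instead of a test run. A self-contained sketch (the `Greeter` interface here is illustrative, not the guide's `UserRepository`):

```go
package main

import "fmt"

type Greeter interface {
	Greet(name string) (string, error)
}

type MockGreeter struct {
	GreetFunc func(name string) (string, error)
}

func (m *MockGreeter) Greet(name string) (string, error) {
	if m.GreetFunc != nil {
		return m.GreetFunc(name)
	}
	return "", nil
}

// Fails to compile the moment MockGreeter drifts from the interface.
var _ Greeter = (*MockGreeter)(nil)

func main() {
	m := &MockGreeter{GreetFunc: func(n string) (string, error) { return "hello " + n, nil }}
	out, _ := m.Greet("world")
	fmt.Println(out) // hello world
}
```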
### Interface Test Doubles
go
// Test-specific implementations
type StubEmailSender struct {
SentEmails []Email
SendError error
}
func (s *StubEmailSender) Send(ctx context.Context, email Email) error {
	if s.SendError != nil {
		return s.SendError
	}
	s.SentEmails = append(s.SentEmails, email)
	return nil
}

// Usage in tests
func TestSendWelcomeEmail(t *testing.T) {
	emailStub := &StubEmailSender{}
	service := NewUserService(repo, emailStub, events, cache, logger)

	_, err := service.CreateUser(ctx, input)
	require.NoError(t, err)

	// Verify email was sent
	assert.Len(t, emailStub.SentEmails, 1)
	assert.Equal(t, "Welcome!", emailStub.SentEmails[0].Subject)
}
### Mocking Libraries Comparison
While hand-written mocks provide full control and transparency, mocking libraries can reduce boilerplate for large test suites. Here's a comprehensive comparison:
#### Popular Go Mocking Libraries
##### 1. testify/mock
The most popular mocking library, part of the testify suite.
go
import (
"testing"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/assert"
)
// Define mock using testify
type MockUserRepository struct {
	mock.Mock
}

func (m *MockUserRepository) GetByID(ctx context.Context, id string) (*domain.User, error) {
	args := m.Called(ctx, id)
	if args.Get(0) == nil {
		return nil, args.Error(1)
	}
	return args.Get(0).(*domain.User), args.Error(1)
}

func (m *MockUserRepository) Create(ctx context.Context, user *domain.User) error {
	args := m.Called(ctx, user)
	return args.Error(0)
}
// Usage in tests
func TestUserService_CreateUser(t *testing.T) {
	mockRepo := new(MockUserRepository)

	// Set expectations
	mockRepo.On("GetByEmail", mock.Anything, "test@example.com").
		Return(nil, sql.ErrNoRows).
		Once()

	mockRepo.On("Create", mock.Anything, mock.MatchedBy(func(u *domain.User) bool {
		return u.Email == "test@example.com"
	})).Return(nil).Once()

	service := NewUserService(mockRepo)

	// Execute test
	user, err := service.CreateUser(ctx, CreateUserInput{
		Email: "test@example.com",
		Name:  "Test User",
	})

	assert.NoError(t, err)
	assert.NotNil(t, user)

	// Verify all expectations were met
	mockRepo.AssertExpectations(t)
}
##### 2. golang/mock (gomock)
Google's official mocking framework with code generation, now maintained by Uber as go.uber.org/mock (the original golang/mock repository is archived).
bash
# Install mockgen
go install go.uber.org/mock/mockgen@latest

# Generate mocks
mockgen -source=internal/service/interfaces.go -destination=mocks/mock_repository.go -package=mocks
Generated mock usage:
go
import (
"testing"
"go.uber.org/mock/gomock"
"myapp/mocks"
)
func TestWithGoMock(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	mockRepo := mocks.NewMockUserRepository(ctrl)

	// Set expectations
	mockRepo.EXPECT().
		GetByEmail(gomock.Any(), "test@example.com").
		Return(nil, sql.ErrNoRows).
		Times(1)

	mockRepo.EXPECT().
		Create(gomock.Any(), gomock.Any()).
		DoAndReturn(func(ctx context.Context, user *domain.User) error {
			assert.Equal(t, "test@example.com", user.Email)
			return nil
		}).
		Times(1)

	service := NewUserService(mockRepo)

	// Execute test
	_, err := service.CreateUser(ctx, CreateUserInput{
		Email: "test@example.com",
		Name:  "Test User",
	})

	assert.NoError(t, err)
}
##### 3. vektra/mockery
Generates testify/mock compatible mocks with more features.
bash
# Install
go install github.com/vektra/mockery/v2@latest

# Generate with config file (.mockery.yaml):
with-expecter: true
filename: "mock_{{.InterfaceName}}.go"
dir: "{{.InterfaceDir}}/mocks"
mockname: "Mock{{.InterfaceName}}"
outpkg: "mocks"
Usage with expecter pattern:
go
func TestWithMockery(t *testing.T) {
mockRepo := mocks.NewMockUserRepository(t)
	// Type-safe expectations
	mockRepo.EXPECT().
		GetByEmail(mock.Anything, "test@example.com").
		Return(nil, sql.ErrNoRows).
		Once()

	mockRepo.EXPECT().
		Create(mock.Anything, mock.AnythingOfType("*domain.User")).
		Return(nil).
		Once()

	service := NewUserService(mockRepo)

	// Test execution...
}
#### Comparison Matrix
| Feature | Hand-Written | testify/mock | gomock | mockery |
|---------|--------------|--------------|---------|----------|
| **Setup Complexity** | Low | Low | Medium | Low |
| **Boilerplate** | High | Medium | Low (generated) | Low (generated) |
| **Type Safety** | ✅ Full | ⚠️ Runtime | ✅ Compile-time | ⚠️ Runtime |
| **IDE Support** | ✅ Excellent | ✅ Good | ✅ Excellent | ✅ Good |
| **Debugging** | ✅ Easy | 🔶 Medium | 🔶 Medium | 🔶 Medium |
| **Flexibility** | ✅ Maximum | ✅ High | 🔶 Medium | ✅ High |
| **Learning Curve** | ✅ Minimal | 🔶 Low | ⚠️ Medium | 🔶 Low |
| **Maintenance** | ⚠️ Manual | ✅ Low | ✅ Generated | ✅ Generated |
| **Test Readability** | ✅ Clear | 🔶 Good | 🔶 Good | 🔶 Good |
| **Magic Strings** | ✅ None | ⚠️ Some | ✅ None | ⚠️ Some |
#### Decision Guide
##### Use Hand-Written Mocks When:
- You have a small number of interfaces
- You want full control and transparency
- You prefer no magic or reflection
- Your team is new to Go
- You need custom behavior in mocks
##### Use testify/mock When:
- You're already using testify for assertions
- You want a balance of control and convenience
- You don't mind runtime type checking
- You need powerful matchers
##### Use gomock When:
- You want compile-time type safety
- You have many interfaces to mock
- You prefer generated code
- You want strict expectation ordering
##### Use mockery When:
- You want the best of testify with code generation
- You need advanced features (expecter pattern)
- You want configuration-driven generation
- You're migrating from hand-written to generated
#### Best Practices for Mock Libraries
1. **Generate into separate package**
internal/
├── service/
│ ├── interfaces.go
│ └── mocks/
│ └── mock_repository.go
2. **Use go:generate directives**
go
//go:generate mockgen -source=interfaces.go -destination=mocks/mock_repository.go -package=mocks
package service
3. **Version control generated mocks**
- Pros: No generation step in CI
- Cons: Merge conflicts, large diffs
4. **Or generate in CI/build**
makefile
.PHONY: mocks
mocks:
	mockery --all --dir internal/service --output internal/service/mocks

test: mocks
	go test ./...
5. **Combine approaches**
go
// Hand-written mock with testify helpers
type MockCache struct {
mock.Mock
// Custom fields for complex behavior
data map[string]interface{}
}
func (m *MockCache) Get(key string) (interface{}, error) {
	// Custom logic
	if m.data != nil {
		if val, ok := m.data[key]; ok {
			return val, nil
		}
	}

	// Fall back to mock expectations
	args := m.Called(key)
	return args.Get(0), args.Error(1)
}
#### Common Pitfalls
1. **Over-mocking**
go
// Bad: Mocking standard library
type MockWriter struct {
mock.Mock
}
// Good: Use bytes.Buffer or real implementation
2. **Brittle test expectations**
go
// Bad: Too specific
mockRepo.On("Create", user).Return(nil)
// Good: Focus on important parts
mockRepo.On("Create", mock.MatchedBy(func(u *User) bool {
	return u.Email == expectedEmail
})).Return(nil)
3. **Not cleaning up**
go
// With gomock
ctrl := gomock.NewController(t)
defer ctrl.Finish() // Always cleanup
// With testify
defer mockRepo.AssertExpectations(t)
---
## Integration Testing
### Using Testcontainers
go
// test/integration/setup_test.go
package integration_test
import ( "context" "testing"
"github.com/testcontainers/testcontainers-go" "github.com/testcontainers/testcontainers-go/wait" )
func setupPostgres(t *testing.T) (testcontainers.Container, string) {
	ctx := context.Background()

	req := testcontainers.ContainerRequest{
		Image:        "postgres:15-alpine",
		ExposedPorts: []string{"5432/tcp"},
		Env: map[string]string{
			"POSTGRES_USER":     "test",
			"POSTGRES_PASSWORD": "test",
			"POSTGRES_DB":       "testdb",
		},
		WaitingFor: wait.ForListeningPort("5432/tcp"),
	}

	postgres, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	require.NoError(t, err)

	host, err := postgres.Host(ctx)
	require.NoError(t, err)

	port, err := postgres.MappedPort(ctx, "5432")
	require.NoError(t, err)

	dsn := fmt.Sprintf("postgres://test:test@%s:%s/testdb?sslmode=disable", host, port.Port())

	return postgres, dsn
}
### Integration Test Example
go
// test/integration/user_integration_test.go
package integration_test
import ( "context" "testing"
"github.com/stretchr/testify/require" "myapp/internal/app" )
func TestUserLifecycle(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test in short mode")
	}

	// Setup database
	postgres, dsn := setupPostgres(t)
	defer postgres.Terminate(context.Background())

	// Initialize app
	cfg := testConfig()
	cfg.Database.DSN = dsn

	app, err := app.New(cfg)
	require.NoError(t, err)
	defer app.Shutdown(context.Background())

	// Run migrations
	require.NoError(t, app.Migrate())

	ctx := context.Background()

	// Captured in the outer scope so later subtests can reference it
	var userID string

	t.Run("create user", func(t *testing.T) {
		input := CreateUserInput{
			Email:    "test@example.com",
			Name:     "Test User",
			Password: "SecurePass123!",
		}

		user, err := app.Services.User.CreateUser(ctx, input)
		require.NoError(t, err)
		require.NotEmpty(t, user.ID)
		assert.Equal(t, input.Email, user.Email)
		userID = user.ID
	})

	t.Run("get user", func(t *testing.T) {
		user, err := app.Services.User.GetUser(ctx, userID)
		require.NoError(t, err)
		assert.Equal(t, "test@example.com", user.Email)
	})
}
---
## Testing Patterns
### Testing Error Paths
go
func TestDatabaseErrors(t *testing.T) {
tests := []struct {
name string
mockError error
expectedCode string
shouldRetry bool
}{
{
name: "connection error",
mockError: errors.New("connection refused"),
expectedCode: "DATABASE_ERROR",
shouldRetry: true,
},
{
name: "unique constraint",
mockError: errors.New("duplicate key value"),
expectedCode: "VALIDATION_FAILED",
shouldRetry: false,
},
}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			repo := &MockRepository{
				CreateFunc: func(ctx context.Context, item interface{}) error {
					return tt.mockError
				},
			}

			service := NewService(repo)
			err := service.Create(ctx, item)

			var domainErr *DomainError
			require.ErrorAs(t, err, &domainErr)
			assert.Equal(t, tt.expectedCode, domainErr.Code)
			assert.Equal(t, tt.shouldRetry, domainErr.IsRetryable())
		})
	}
}
### Testing Concurrent Operations
go
func TestConcurrentAccess(t *testing.T) {
service := NewService()
ctx := context.Background()
	// Run operations concurrently
	var wg sync.WaitGroup
	errCh := make(chan error, 10) // avoid shadowing the errors package

	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()

			if err := service.Process(ctx, fmt.Sprintf("item-%d", id)); err != nil {
				errCh <- err
			}
		}(i)
	}

	wg.Wait()
	close(errCh)

	// Check for errors
	for err := range errCh {
		t.Errorf("concurrent operation failed: %v", err)
	}
}
### Testing with Time
go
// internal/test/helpers/time.go
package helpers
import "time"
// Clock interface for time operations
type Clock interface {
	Now() time.Time
}

// RealClock uses actual time
type RealClock struct{}

func (RealClock) Now() time.Time { return time.Now() }

// MockClock for testing
type MockClock struct {
	CurrentTime time.Time
}

func (m *MockClock) Now() time.Time { return m.CurrentTime }

func (m *MockClock) Advance(d time.Duration) {
	m.CurrentTime = m.CurrentTime.Add(d)
}

// Usage in service
type Service struct {
	clock Clock
}

func (s *Service) CreateToken() string {
	return fmt.Sprintf("token_%d", s.clock.Now().Unix())
}

// Usage in test
func TestCreateToken(t *testing.T) {
	clock := &MockClock{
		CurrentTime: time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC),
	}

	service := &Service{clock: clock}
	token := service.CreateToken()

	assert.Equal(t, "token_1704067200", token)
}
### Golden File Testing
go
// Define the update flag
var update = flag.Bool("update", false, "update golden files")
// TestMain is required to parse custom flags
func TestMain(m *testing.M) {
	flag.Parse()
	os.Exit(m.Run())
}

func TestTemplateGeneration(t *testing.T) {
	tests := []struct {
		name   string
		input  TemplateInput
		golden string
	}{
		{
			name: "basic template",
			input: TemplateInput{
				Name: "test",
				Type: "basic",
			},
			golden: "basic_output.golden",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			output := GenerateTemplate(tt.input)

			goldenPath := filepath.Join("testdata", tt.golden)

			if *update {
				require.NoError(t, os.WriteFile(goldenPath, output, 0644))
			}

			expected, err := os.ReadFile(goldenPath)
			require.NoError(t, err)

			assert.Equal(t, string(expected), string(output))
		})
	}
}
// Run with: go test -update to update golden files
---
## Fuzz Testing
### Built-in Go Fuzzing (Go 1.18+)
Go's native fuzzing automatically generates test inputs to find edge cases, panics, and bugs that traditional tests might miss.
### Basic Fuzz Test
go
// parser_test.go
package parser
import ( "testing" "unicode/utf8" )
// FuzzParseQuery finds inputs that cause panics or invalid behavior
func FuzzParseQuery(f *testing.F) {
	// Seed corpus with interesting test cases
	f.Add("SELECT * FROM users")
	f.Add("SELECT id, name FROM users WHERE age > 18")
	f.Add("DROP TABLE users; --")
	f.Add("SELECT * FROM users WHERE name = 'O''Brien'")
	f.Add("")
	f.Add("SELECT \x00 FROM users")

	f.Fuzz(func(t *testing.T, input string) {
		// Skip invalid UTF-8
		if !utf8.ValidString(input) {
			t.Skip()
		}

		// Function should not panic
		result, err := ParseQuery(input)

		// Property-based assertions
		if err == nil {
			// Valid query should produce non-nil result
			if result == nil {
				t.Error("ParseQuery returned nil result with nil error")
				return // avoid nil dereference below
			}

			// Parsed query should be serializable
			serialized := result.String()
			if serialized == "" {
				t.Error("Valid query produced empty string")
			}

			// Re-parsing should produce same result
			reparsed, err := ParseQuery(serialized)
			if err != nil {
				t.Errorf("Failed to reparse serialized query: %v", err)
			}
			if !result.Equal(reparsed) {
				t.Error("Reparsed query doesn't match original")
			}
		}
	})
}
### Fuzzing Complex Structures
go
// config_test.go
func FuzzConfigParser(f *testing.F) {
	// Seed with various config formats
	f.Add(`{"port": 8080, "host": "localhost"}`)
	f.Add("port: 8080\nhost: localhost")
	f.Add("[server]\nport = 8080\nhost = \"localhost\"")
	f.Add(`{"nested": {"deep": {"value": 42}}}`)
	f.Add(`{}`)

	f.Fuzz(func(t *testing.T, data string) {
		config, err := ParseConfig([]byte(data))

		if err != nil {
			// Error cases should not panic
			return
		}

		// Valid config should have sensible defaults
		if config.Port < 1 || config.Port > 65535 {
			t.Errorf("Invalid port: %d", config.Port)
		}

		// Config should be serializable
		serialized, err := config.Marshal()
		if err != nil {
			t.Errorf("Failed to marshal config: %v", err)
		}

		// Round-trip test
		config2, err := ParseConfig(serialized)
		if err != nil {
			t.Errorf("Failed to parse marshaled config: %v", err)
		}

		if !config.Equal(config2) {
			t.Error("Config doesn't survive round-trip")
		}
	})
}
### Fuzzing Binary Protocols
go
// protocol_test.go
func FuzzProtocolDecoder(f *testing.F) {
// Seed with valid protocol messages
f.Add([]byte{0x01, 0x00, 0x00, 0x00, 0x04, 'p', 'i', 'n', 'g'})
f.Add([]byte{0x02, 0x00, 0x00, 0x00, 0x08, 'r', 'e', 's', 'p', 'o', 'n', 's', 'e'})
f.Add([]byte{0xFF}) // Invalid message
f.Add([]byte{}) // Empty input
	f.Fuzz(func(t *testing.T, data []byte) {
		decoder := NewDecoder(bytes.NewReader(data))

		msg, err := decoder.Decode()
		if err != nil {
			// Decoder should handle invalid input gracefully
			if err == ErrInvalidMessage || err == io.EOF {
				return // Expected errors
			}
			// Unexpected error types might indicate a bug
			t.Logf("Unexpected error type: %T: %v", err, err)
			return
		}

		// Valid message invariants
		if msg.Type < 1 || msg.Type > 10 {
			t.Errorf("Invalid message type: %d", msg.Type)
		}

		if len(msg.Payload) != int(msg.Length) {
			t.Errorf("Payload length mismatch: got %d, header says %d", len(msg.Payload), msg.Length)
		}

		// Message should be re-encodable
		encoded := msg.Encode()
		msg2, err := NewDecoder(bytes.NewReader(encoded)).Decode()
		if err != nil {
			t.Errorf("Failed to decode re-encoded message: %v", err)
		}

		if !msg.Equal(msg2) {
			t.Error("Message doesn't survive encode/decode cycle")
		}
	})
}
### Fuzzing Security-Critical Functions
go
// auth_test.go
func FuzzPasswordValidation(f *testing.F) {
// Seed with edge cases
f.Add("password123")
f.Add("correct horse battery staple")
f.Add("пароль") // Unicode
f.Add("")
f.Add(strings.Repeat("a", 1000)) // Long password
f.Add("password\x00null")
f.Add("admin' OR '1'='1") // SQL injection attempt
	f.Fuzz(func(t *testing.T, password string) {
		// Hash should never panic
		hash, err := HashPassword(password)
		if err != nil {
			// Some passwords might be rejected
			if err == ErrPasswordTooLong || err == ErrPasswordTooShort {
				return
			}
			t.Errorf("Unexpected error: %v", err)
			return
		}

		// Hash should be verifiable
		valid := VerifyPassword(password, hash)
		if !valid {
			t.Error("Failed to verify hashed password")
		}

		// Different password should not verify
		if password != "different" {
			wrongValid := VerifyPassword("different", hash)
			if wrongValid {
				t.Error("Different password verified against hash")
			}
		}

		// Hash format validation
		if !strings.HasPrefix(hash, "$argon2id$") {
			t.Errorf("Invalid hash format: %s", hash)
		}
	})
}
### Fuzzing with Multiple Inputs
go
// calculator_test.go
func FuzzCalculator(f *testing.F) {
// Seed with various operations
f.Add("2 + 2", 4.0)
f.Add("10 / 2", 5.0)
f.Add("3.14 * 2", 6.28)
f.Add("10 / 0", 0.0) // Division by zero
	f.Fuzz(func(t *testing.T, expression string, expectedHint float64) {
		_ = expectedHint // seed hint only; not asserted against

		result, err := Calculate(expression)

		if err != nil {
			// Some errors are expected
			if err == ErrDivisionByZero || err == ErrInvalidExpression {
				return
			}
			t.Logf("Unexpected error for %q: %v", expression, err)
			return
		}

		// Check for NaN or Inf
		if math.IsNaN(result) || math.IsInf(result, 0) {
			t.Errorf("Invalid result for %q: %v", expression, result)
		}

		// Property: parsing and evaluating again gives same result
		result2, err2 := Calculate(expression)
		if err2 != nil {
			t.Errorf("Inconsistent error on reparse: %v", err2)
		}
		if result != result2 {
			t.Errorf("Inconsistent results: %v vs %v", result, result2)
		}
	})
}
### Running Fuzz Tests
bash
# Run fuzzing for a specific test (runs indefinitely)
go test -fuzz=FuzzParseQuery

# Run for limited time
go test -fuzz=FuzzParseQuery -fuzztime=30s

# Run with more workers
go test -fuzz=FuzzParseQuery -parallel=8

# Just run the seed corpus (no fuzzing)
go test -run=FuzzParseQuery

# Run fuzzing and save interesting inputs
go test -fuzz=FuzzParseQuery -fuzzminimizetime=10s
### Corpus Management
bash
# Failing inputs are stored in:
# testdata/fuzz/FuzzTestName/

# Structure:
testdata/
└── fuzz/
    └── FuzzParseQuery/
        ├── 0a7f3b2d4e6f8a9c   # Automatically found test case
        └── 1b8a4c3e5f7d9a0c   # Another interesting input
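Each corpus entry is a small text file: a version header followed by one Go-literal value per fuzz argument. For FuzzParseQuery's single string argument, an entry looks like:

```
go test fuzz v1
string("SELECT * FROM users WHERE name = 'O''Brien'")
```

These files can be written by hand to check in a regression input alongside the fuzz target.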
### Best Practices for Fuzz Testing
1. **Always Check Properties, Not Specific Values**
go
// Bad: Checking specific output
if result != 42 {
t.Error("Expected 42")
}
// Good: Checking properties
if result < 0 && input > 0 {
	t.Error("Positive input produced negative result")
}
2. **Add Regression Tests from Fuzz Findings**
go
// When fuzzer finds a bug, add it as a regular test
func TestRegressionFuzzBug1(t *testing.T) {
// This input was found by fuzzer to cause a panic
input := "\x00\x01\x02\x03"
_, err := Parse(input)
if err == nil {
t.Error("Expected error for malformed input")
}
}
3. **Use Type-Specific Fuzzing**
go
func FuzzWithTypes(f *testing.F) {
f.Add(10, "hello", true)
f.Add(-5, "", false)
	f.Fuzz(func(t *testing.T, n int, s string, b bool) {
		result := ProcessInputs(n, s, b)
		_ = result // Test properties of result across the typed inputs
	})
}
4. **Combine with Property-Based Testing**
go
func FuzzPropertyBased(f *testing.F) {
f.Fuzz(func(t *testing.T, data []byte) {
// Decode-encode roundtrip
decoded, err := Decode(data)
if err != nil {
return
}
		encoded := Encode(decoded)
		decoded2, err := Decode(encoded)
		if err != nil {
			t.Fatalf("Failed to decode encoded data: %v", err)
		}

		if !reflect.DeepEqual(decoded, decoded2) {
			t.Error("Data doesn't survive roundtrip")
		}
	})
}
5. **Fuzz State Machines**
go
func FuzzStateMachine(f *testing.F) {
f.Add([]byte{1, 2, 3, 1, 2})
	f.Fuzz(func(t *testing.T, actions []byte) {
		sm := NewStateMachine()

		for _, action := range actions {
			oldState := sm.State()
			err := sm.Process(Action(action % 4))

			if err != nil {
				// Some transitions might be invalid
				continue
			}

			// Verify state machine invariants
			if sm.State() == StateError && oldState != StateError {
				// Error state should be terminal
				if sm.CanRecover() {
					t.Error("Error state claims to be recoverable")
				}
			}
		}
	})
}
### Integration with CI/CD
yaml
# .github/workflows/fuzz.yml
name: Fuzz Tests

on:
  schedule:
    - cron: '0 2 * * *'  # Run nightly at 02:00 UTC
  workflow_dispatch:

jobs:
  fuzz:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        test: [FuzzParseQuery, FuzzConfigParser, FuzzProtocolDecoder]

    steps:
      - uses: actions/checkout@v3

      - uses: actions/setup-go@v4
        with:
          go-version: '1.21'

      - name: Run Fuzz Test
        run: |
          go test -fuzz=${{ matrix.test }} -fuzztime=10m ./...

      - name: Upload crash artifacts
        if: failure()
        uses: actions/upload-artifact@v3
        with:
          name: fuzz-crashes
          path: testdata/fuzz/
---
## Test Coverage & Quality
### Makefile Targets
makefile
.PHONY: test
test:
go test -race -coverprofile=coverage.out ./...
.PHONY: test-unit
test-unit:
	go test -short -race ./...

.PHONY: test-integration
test-integration:
	go test -race -tags=integration ./test/integration/...

.PHONY: coverage
coverage: test
	go tool cover -html=coverage.out -o coverage.html
	open coverage.html

.PHONY: coverage-report
coverage-report: test
	go tool cover -func=coverage.out

.PHONY: bench
bench:
	go test -bench=. -benchmem ./...
### Coverage Guidelines
- **Business Logic**: 80%+ coverage required
- **Error Paths**: Must be tested
- **Edge Cases**: Cover boundary conditions
- **Skip Coverage** for:
- Generated code
- Simple getters/setters
- Wire-up code in main.go
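The 80% guideline can be enforced mechanically in CI by parsing the `total:` line of `go tool cover -func` output. A sketch, fed a canned summary line so it is self-contained; in CI you would pipe the real command instead of `printf`:

```shell
# Simulated summary line; in CI: go tool cover -func=coverage.out | grep '^total:'
printf 'total:\t(statements)\t83.4%%\n' |
awk '/^total:/ {
    pct = $3; sub(/%/, "", pct)
    if (pct + 0 < 80) { print "FAIL: coverage " pct "% < 80%"; exit 1 }
    print "OK: coverage " pct "%"
}'
```

The non-zero exit code from `awk` fails the CI step when coverage drops below the threshold.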
### Benchmarking
go
func BenchmarkUserService_CreateUser(b *testing.B) {
service := setupBenchmarkService(b)
ctx := context.Background()
b.ResetTimer()
	for i := 0; i < b.N; i++ {
		input := CreateUserInput{
			Email:    fmt.Sprintf("user%d@example.com", i),
			Name:     fmt.Sprintf("User %d", i),
			Password: "BenchPass123!",
		}

		_, err := service.CreateUser(ctx, input)
		if err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkParallelProcessing(b *testing.B) {
	service := setupBenchmarkService(b)

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			service.Process(context.Background(), generateInput())
		}
	})
}
### Testing Best Practices
1. **Test behavior, not implementation**
2. **Use descriptive test names**
3. **Follow AAA pattern**: Arrange, Act, Assert
4. **One assertion per test** (when practical)
5. **Test edge cases** and error conditions
6. **Use t.Parallel()** for independent tests
7. **Mock at boundaries**, not internally
8. **Prefer real implementations** when fast
9. **Test the public API**
10. **Keep tests maintainable** and readable
---
## Related Sections
- **[Error Handling](go-practices-error-logging.md#testing-error-paths)** - Testing error scenarios and domain errors
- **[Service Architecture](go-practices-service-architecture.md#service-layer-design)** - Testing service layer components
- **[Database Patterns](go-practices-database.md#testing-database-code)** - Database testing with testcontainers
- **[HTTP Patterns](go-practices-http.md#testing-http-handlers)** - Testing HTTP handlers and middleware
- **[Concurrency](go-practices-concurrency.md#testing-concurrent-code)** - Testing concurrent patterns safely
## Quick Reference Checklist
### Test Structure & Organization
- [ ] Use table-driven tests for multiple scenarios
- [ ] Place unit tests in same package (`service_test.go`)
- [ ] Place integration tests in separate directories (`test/integration/`)
- [ ] Use black-box testing for public API validation
- [ ] Create test builders for complex domain objects
- [ ] Follow AAA pattern: Arrange, Act, Assert
### Test Implementation
- [ ] Write descriptive test names explaining what's being tested
- [ ] Use `t.Run()` for subtests in table-driven tests
- [ ] Check both success and error conditions
- [ ] Test edge cases and boundary conditions
- [ ] Use `testify/assert` for readable assertions
- [ ] Clean up resources with `defer` or test cleanup
### Mocking & Dependencies
- [ ] Mock at service boundaries, not internal components
- [ ] Create hand-written mocks with function fields
- [ ] Track method calls and arguments in mocks
- [ ] Use interfaces for testable dependencies
- [ ] Stub external services (email, HTTP clients)
- [ ] Use test doubles for databases when appropriate
### Integration Testing
- [ ] Use testcontainers for real database testing
- [ ] Test complete workflows end-to-end
- [ ] Set up clean test data for each test
- [ ] Test with realistic production-like data
- [ ] Verify side effects (emails sent, events published)
- [ ] Use `testing.Short()` to skip slow tests
### Advanced Testing Patterns
- [ ] Test concurrent operations with proper synchronization
- [ ] Mock time for time-dependent logic
- [ ] Use golden files for complex output validation
- [ ] Test panic recovery and error handling
- [ ] Implement property-based testing for critical algorithms
- [ ] Test resource cleanup and graceful shutdown
### Fuzz Testing
- [ ] Add fuzz tests for parsers and input validation
- [ ] Test with invalid UTF-8 and edge case inputs
- [ ] Verify roundtrip properties (encode/decode)
- [ ] Test invariants rather than specific outputs
- [ ] Add regression tests for fuzz findings
- [ ] Use structured fuzzing for complex types
### Test Quality & Coverage
- [ ] Achieve 80%+ coverage for business logic
- [ ] Test all error paths and edge cases
- [ ] Use `go test -race` for concurrency issues
- [ ] Run benchmarks for performance-critical code
- [ ] Use `t.Parallel()` for independent tests
- [ ] Keep tests fast and focused
### Interactive CLI Testing
Testing interactive CLI components (TUIs, prompts, forms) requires special approaches since they involve user input simulation and terminal output validation.
#### Testing Bubble Tea Applications
go
// Testing a Bubble Tea model
func TestBubbleTeaModel(t *testing.T) {
tests := []struct {
name string
model tea.Model
msgs []tea.Msg
wantView string
}{
{
name: "initial state",
model: NewUserFormModel(),
msgs: []tea.Msg{},
wantView: "Enter your name:",
},
{
name: "typing input",
model: NewUserFormModel(),
msgs: []tea.Msg{
tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune("John")},
},
wantView: "Enter your name: John",
},
{
name: "form submission",
model: NewUserFormModel(),
msgs: []tea.Msg{
tea.KeyMsg{Type: tea.KeyRunes, Runes: []rune("John")},
tea.KeyMsg{Type: tea.KeyEnter},
},
wantView: "Thank you, John!",
},
}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			model := tt.model

			// Send messages to model
			for _, msg := range tt.msgs {
				model, _ = model.Update(msg)
			}

			// Verify the view output
			got := model.View()
			if !strings.Contains(got, tt.wantView) {
				t.Errorf("View() = %q, want to contain %q", got, tt.wantView)
			}
		})
	}
}
#### Testing CLI Interactions
go
// Test CLI interactions by simulating stdin/stdout
func TestCLIInteraction(t *testing.T) {
tests := []struct {
name string
input string
args []string
wantOutput []string
wantExitCode int
}{
{
name: "successful user creation",
input: "John Doe\njohn@example.com\ny\n",
args: []string{"user", "create", "--interactive"},
wantOutput: []string{
"Enter name:",
"User created successfully",
},
wantExitCode: 0,
},
}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			cmd := exec.Command("./myapp", tt.args...)
			cmd.Stdin = strings.NewReader(tt.input)

			output, err := cmd.CombinedOutput()

			// Check exit code
			if exitErr, ok := err.(*exec.ExitError); ok {
				assert.Equal(t, tt.wantExitCode, exitErr.ExitCode())
			} else if err == nil {
				assert.Equal(t, 0, tt.wantExitCode)
			}

			// Verify output contains expected strings
			outputStr := string(output)
			for _, want := range tt.wantOutput {
				assert.Contains(t, outputStr, want)
			}
		})
	}
}
#### Testing Best Practices for Interactive CLIs
1. **Separate UI from Logic**: Test business logic separately from UI components
2. **Mock External Dependencies**: Use dependency injection for testable components
3. **Test State Transitions**: Verify TUI models handle state changes correctly
4. **Use Build Tags**: Separate interactive tests from unit tests
go
//go:build integration
// Interactive tests that require a TTY
func TestInteractiveFeatures(t *testing.T) {
	// Tests that need real terminal interaction
}
### CI/CD Integration
- [ ] Run tests on all supported Go versions
- [ ] Include race detection in CI pipeline
- [ ] Generate and publish coverage reports
- [ ] Run fuzz tests in nightly builds
- [ ] Fail builds on test failures or coverage drops
- [ ] Cache test dependencies for faster builds
---
# 5. Database & Storage Patterns
## Table of Contents
1. [Connection Management](#connection-management)
2. [Query Patterns](#query-patterns)
3. [ORM vs Query Builder Trade-offs](#orm-vs-query-builder-trade-offs)
4. [Migration Management](#migration-management)
5. [Repository Pattern](#repository-pattern)
6. [Transaction Handling](#transaction-handling)
7. [Performance Optimization](#performance-optimization)
---
## Connection Management
### Database Configuration
go
// internal/database/connection.go
package database
import ( "context" "database/sql" "fmt" "time"
_ "github.com/lib/pq" // PostgreSQL _ "modernc.org/sqlite" // SQLite - pure Go, no CGO )
// Config for database connections type Config struct { Driver string DSN string MaxOpenConns int MaxIdleConns int ConnMaxLifetime time.Duration ConnMaxIdleTime time.Duration }
// NewDB creates a properly configured database connection func NewDB(cfg Config) (*sql.DB, error) { db, err := sql.Open(cfg.Driver, cfg.DSN) if err != nil { return nil, fmt.Errorf("open database: %w", err) }
// Configure connection pool - database specific configureConnectionPool(db, cfg)
// Verify connection ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel()
if err := db.PingContext(ctx); err != nil { db.Close() return nil, fmt.Errorf("ping database: %w", err) }
return db, nil }
// configureConnectionPool applies database-specific connection settings func configureConnectionPool(db *sql.DB, cfg Config) { switch cfg.Driver { case "sqlite", "sqlite3": // SQLite: single writer, single connection db.SetMaxOpenConns(1) db.SetMaxIdleConns(1) db.SetConnMaxLifetime(0) // Don't close connections db.SetConnMaxIdleTime(0)
case "postgres": // PostgreSQL: can handle many connections efficiently db.SetMaxOpenConns(max(cfg.MaxOpenConns, 25)) db.SetMaxIdleConns(max(cfg.MaxIdleConns, 5)) db.SetConnMaxLifetime(cfg.ConnMaxLifetime) db.SetConnMaxIdleTime(cfg.ConnMaxIdleTime)
case "mysql": // MySQL: moderate connection pooling db.SetMaxOpenConns(max(cfg.MaxOpenConns, 20)) db.SetMaxIdleConns(max(cfg.MaxIdleConns, 4)) db.SetConnMaxLifetime(cfg.ConnMaxLifetime) db.SetConnMaxIdleTime(cfg.ConnMaxIdleTime)
default: // Generic settings for unknown drivers db.SetMaxOpenConns(cfg.MaxOpenConns) db.SetMaxIdleConns(cfg.MaxIdleConns) db.SetConnMaxLifetime(cfg.ConnMaxLifetime) db.SetConnMaxIdleTime(cfg.ConnMaxIdleTime) } }
func max(a, b int) int { if a > b { return a } return b }
### Database-Specific Configuration Examples
**NOTE**: The `NewDB()` function above automatically handles database-specific configuration. The examples below show manual configuration if needed.
go
// internal/database/sqlite.go
package database
import ( "database/sql" "fmt"
_ "modernc.org/sqlite" )
// NewSQLiteDB creates SQLite connection with manual settings // (Use this if you need custom SQLite configuration beyond NewDB()) func NewSQLiteDB(path string) (*sql.DB, error) { // Enable WAL mode for better concurrency dsn := fmt.Sprintf("%s?journalmode=WAL&busytimeout=5000", path)
db, err := sql.Open("sqlite", dsn) if err != nil { return nil, fmt.Errorf("open sqlite: %w", err) }
// SQLite only allows one writer db.SetMaxOpenConns(1) db.SetMaxIdleConns(1) db.SetConnMaxLifetime(0) // Don't close connections
// Enable foreign keys if , err := db.Exec("PRAGMA foreignkeys = ON"); err != nil { db.Close() return nil, fmt.Errorf("enable foreign keys: %w", err) }
return db, nil }
### Database Configuration Decision Matrix
| Database | Max Open Conns | Max Idle Conns | Conn Lifetime | Use Case |
|----------|---------------|---------------|---------------|-----------|
| **SQLite** | 1 | 1 | 0 (forever) | Single writer, embedded apps |
| **PostgreSQL** | 25+ | 5+ | 1 hour | High concurrency, web apps |
| **MySQL** | 20+ | 4+ | 1 hour | Moderate concurrency |
| **Testing** | 1 | 1 | Short | Avoid test interference |
**Key Rules:**
- **SQLite**: Always use single connection (handled automatically by `NewDB()`)
- **PostgreSQL**: Can handle many connections efficiently
- **MySQL**: Moderate connection pooling prevents server overload
- **Development**: Lower connection counts for easier debugging
- **Production**: Higher connection counts for better performance
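The matrix above can be encoded as a small lookup helper. This is a sketch using the illustrative numbers from the table — `PoolSettings` and `poolFor` are hypothetical names, not part of the `Config`/`NewDB` API above:

```go
package main

import (
	"fmt"
	"time"
)

// PoolSettings mirrors the tunable pool fields from the Config struct above.
type PoolSettings struct {
	MaxOpenConns    int
	MaxIdleConns    int
	ConnMaxLifetime time.Duration
}

// poolFor returns illustrative per-driver pool settings following the
// decision matrix: SQLite single-writer, PostgreSQL high concurrency,
// MySQL moderate pooling, and conservative defaults otherwise.
func poolFor(driver string) PoolSettings {
	switch driver {
	case "sqlite", "sqlite3":
		return PoolSettings{MaxOpenConns: 1, MaxIdleConns: 1, ConnMaxLifetime: 0}
	case "postgres":
		return PoolSettings{MaxOpenConns: 25, MaxIdleConns: 5, ConnMaxLifetime: time.Hour}
	case "mysql":
		return PoolSettings{MaxOpenConns: 20, MaxIdleConns: 4, ConnMaxLifetime: time.Hour}
	default:
		return PoolSettings{MaxOpenConns: 10, MaxIdleConns: 2, ConnMaxLifetime: time.Hour}
	}
}

func main() {
	fmt.Printf("%+v\n", poolFor("sqlite"))
	fmt.Printf("%+v\n", poolFor("postgres"))
}
```

In practice you would feed these values into `Config` before calling `NewDB`, lowering them for development as the rules above suggest.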
### Health Checks
go
// internal/database/health.go
package database
import (
    "context"
    "database/sql"
    "fmt"
    "log/slog"
    "time"
)

type HealthChecker struct {
    db      *sql.DB
    logger  Logger
    metrics Metrics
}

func (h *HealthChecker) Check(ctx context.Context) error {
    start := time.Now()

    if err := h.db.PingContext(ctx); err != nil {
        h.metrics.Counter("db.health.failures", 1)
        return fmt.Errorf("database ping failed: %w", err)
    }

    // Check pool stats
    stats := h.db.Stats()

    h.metrics.Gauge("db.connections.open", float64(stats.OpenConnections))
    h.metrics.Gauge("db.connections.idle", float64(stats.Idle))
    h.metrics.Gauge("db.connections.in_use", float64(stats.InUse))
    h.metrics.Gauge("db.connections.wait_count", float64(stats.WaitCount))

    h.logger.Debug("database health check", slog.Duration("duration", time.Since(start)))

    return nil
}
---
## Query Patterns
### Prepared Statements
go
// internal/storage/postgres/statements.go
package postgres
// Define queries as constants
const (
    getUserByID = `
        SELECT id, email, name, password_hash, status, created_at, updated_at
        FROM users
        WHERE id = $1 AND deleted_at IS NULL`

    createUser = `
        INSERT INTO users (id, email, name, password_hash, status, created_at, updated_at)
        VALUES ($1, $2, $3, $4, $5, $6, $7)
        RETURNING id`

    updateUser = `
        UPDATE users
        SET email = $2, name = $3, updated_at = $4
        WHERE id = $1 AND deleted_at IS NULL`
)
### Query Builder Pattern
go
// internal/database/query/builder.go
package query
import ( "fmt" "strings" )
// Builder constructs SQL queries safely type Builder struct { parts []string args []interface{} }
func Select(columns ...string) *Builder { return &Builder{ parts: []string{"SELECT", strings.Join(columns, ", ")}, } }
func (b Builder) From(table string) Builder { b.parts = append(b.parts, "FROM", table) return b }
func (b Builder) Where(condition string, args ...interface{}) Builder { if strings.Contains(strings.Join(b.parts, " "), "WHERE") { b.parts = append(b.parts, "AND") } else { b.parts = append(b.parts, "WHERE") } b.parts = append(b.parts, condition) b.args = append(b.args, args...) return b }
func (b Builder) OrderBy(column string, desc bool) Builder { dir := "ASC" if desc { dir = "DESC" } b.parts = append(b.parts, "ORDER BY", column, dir) return b }
func (b Builder) Limit(n int) Builder { b.parts = append(b.parts, "LIMIT", fmt.Sprintf("%d", n)) return b }
func (b *Builder) Build() (string, []interface{}) { return strings.Join(b.parts, " "), b.args }
// Usage query, args := Select("id", "name", "email"). From("users"). Where("status = ?", "active"). Where("created_at > ?", time.Now().Add(-3024time.Hour)). OrderBy("created_at", true). Limit(10). Build()
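The builder above can be exercised as a self-contained program; this copy is trimmed to `Select`/`From`/`Where`/`Build` so the generated SQL can be checked directly (note how the second `Where` becomes an `AND`):

```go
package main

import (
	"fmt"
	"strings"
)

// Builder is a trimmed-down copy of the query builder above, kept
// self-contained so the output can be verified standalone.
type Builder struct {
	parts []string
	args  []interface{}
}

func Select(columns ...string) *Builder {
	return &Builder{parts: []string{"SELECT", strings.Join(columns, ", ")}}
}

func (b *Builder) From(table string) *Builder {
	b.parts = append(b.parts, "FROM", table)
	return b
}

func (b *Builder) Where(condition string, args ...interface{}) *Builder {
	// First condition gets WHERE; subsequent ones get AND.
	if strings.Contains(strings.Join(b.parts, " "), "WHERE") {
		b.parts = append(b.parts, "AND")
	} else {
		b.parts = append(b.parts, "WHERE")
	}
	b.parts = append(b.parts, condition)
	b.args = append(b.args, args...)
	return b
}

func (b *Builder) Build() (string, []interface{}) {
	return strings.Join(b.parts, " "), b.args
}

func main() {
	query, args := Select("id", "name").
		From("users").
		Where("status = ?", "active").
		Where("age > ?", 21).
		Build()
	fmt.Println(query) // SELECT id, name FROM users WHERE status = ? AND age > ?
	fmt.Println(len(args))
}
```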
### Batch Operations
go
// internal/storage/postgres/batch.go
package postgres
// BatchInsert efficiently inserts multiple records
func (r *Repository) BatchInsert(ctx context.Context, users []*domain.User) error {
    if len(users) == 0 {
        return nil
    }

    // Build query with placeholders
    valueStrings := make([]string, 0, len(users))
    valueArgs := make([]interface{}, 0, len(users)*7)

    for i, user := range users {
        valueStrings = append(valueStrings, fmt.Sprintf(
            "($%d, $%d, $%d, $%d, $%d, $%d, $%d)",
            i*7+1, i*7+2, i*7+3, i*7+4, i*7+5, i*7+6, i*7+7,
        ))

        valueArgs = append(valueArgs,
            user.ID, user.Email, user.Name, user.PasswordHash,
            user.Status, user.CreatedAt, user.UpdatedAt,
        )
    }

    query := fmt.Sprintf(
        "INSERT INTO users (id, email, name, password_hash, status, created_at, updated_at) VALUES %s",
        strings.Join(valueStrings, ","),
    )

    _, err := r.db.ExecContext(ctx, query, valueArgs...)
    return err
}
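PostgreSQL caps a single statement at 65535 bind parameters, so very large batches must be split before calling `BatchInsert` (with 7 columns that is roughly 9000 rows per statement). A generic chunking helper — hypothetical, not part of the repository above — might look like:

```go
package main

import "fmt"

// chunk splits items into slices of at most size elements, so each
// batch INSERT stays under the database's bind-parameter limit.
func chunk[T any](items []T, size int) [][]T {
	if size <= 0 {
		return nil
	}
	var out [][]T
	for size < len(items) {
		out = append(out, items[:size])
		items = items[size:]
	}
	if len(items) > 0 {
		out = append(out, items)
	}
	return out
}

func main() {
	ids := []int{1, 2, 3, 4, 5}
	for _, batch := range chunk(ids, 2) {
		fmt.Println(batch)
	}
}
```

The caller would then invoke `BatchInsert` once per chunk, ideally inside a single transaction if atomicity matters.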
---
## ORM vs Query Builder Trade-offs
### The Go Database Access Spectrum
Go's database ecosystem offers multiple approaches, each with distinct trade-offs for production systems.
Raw SQL ←→ Query Builders ←→ Light ORMs ←→ Full ORMs
### Approach Comparison
| Approach | Type Safety | Performance | Complexity | Flexibility |
|----------|-------------|-------------|------------|-------------|
| **database/sql** | None | Excellent | Low | Maximum |
| **sqlc** | Compile-time | Excellent | Low | High |
| **Squirrel** | Runtime | Excellent | Medium | High |
| **sqlx** | Struct tags | Excellent | Low | High |
| **Ent** | Compile-time | Good | High | Medium |
| **GORM** | Runtime | Fair | High | Medium |
### Raw SQL with database/sql
**Best for**: Maximum control, performance-critical paths, complex queries
go
// Pros: Complete control, no magic, excellent performance
// Cons: Boilerplate, no compile-time safety, manual mapping
type UserRepo struct { db *sql.DB }
func (r *UserRepo) FindActiveUsers(ctx context.Context, limit int) ([]*User, error) {
    query := `
        SELECT u.id, u.email, u.name, u.created_at,
               COUNT(o.id) as order_count,
               COALESCE(SUM(o.total), 0) as total_spent
        FROM users u
        LEFT JOIN orders o ON o.user_id = u.id AND o.status = 'completed'
        WHERE u.status = 'active'
        GROUP BY u.id
        ORDER BY u.created_at DESC
        LIMIT $1`

    rows, err := r.db.QueryContext(ctx, query, limit)
    if err != nil {
        return nil, fmt.Errorf("query users: %w", err)
    }
    defer rows.Close()

    var users []*User
    for rows.Next() {
        var u User
        err := rows.Scan(
            &u.ID, &u.Email, &u.Name, &u.CreatedAt,
            &u.OrderCount, &u.TotalSpent,
        )
        if err != nil {
            return nil, fmt.Errorf("scan user: %w", err)
        }
        users = append(users, &u)
    }

    return users, rows.Err()
}
### sqlc - Compile-time SQL
**Best for**: Type safety with SQL control, CI/CD integration
sql
-- queries/users.sql
-- name: GetActiveUsers :many
SELECT
u.id, u.email, u.name, u.created_at,
COUNT(o.id)::int as order_count,
COALESCE(SUM(o.total), 0)::decimal as total_spent
FROM users u
LEFT JOIN orders o ON o.user_id = u.id AND o.status = 'completed'
WHERE u.status = 'active'
GROUP BY u.id
ORDER BY u.created_at DESC
LIMIT $1;
go
// Generated code provides type-safe queries
users, err := queries.GetActiveUsers(ctx, limit)
// Compile-time checked, no runtime surprises
### Squirrel - Programmatic Query Builder
**Best for**: Dynamic queries, maintaining SQL readability
go
import "github.com/Masterminds/squirrel"
type UserQueryBuilder struct { db *sql.DB }
func (q *UserQueryBuilder) FindUsers(ctx context.Context, filter UserFilter) ([]*User, error) {
    // Build query dynamically
    query := squirrel.Select(
        "u.id", "u.email", "u.name", "u.created_at",
        "COUNT(o.id) as order_count",
        "COALESCE(SUM(o.total), 0) as total_spent",
    ).
        From("users u").
        LeftJoin("orders o ON o.user_id = u.id AND o.status = ?", "completed").
        GroupBy("u.id").
        OrderBy("u.created_at DESC")

    // Add conditions dynamically
    if filter.Status != "" {
        query = query.Where(squirrel.Eq{"u.status": filter.Status})
    }

    if filter.MinOrders > 0 {
        query = query.Having("COUNT(o.id) >= ?", filter.MinOrders)
    }

    if filter.Limit > 0 {
        query = query.Limit(uint64(filter.Limit))
    }

    // Execute with proper placeholder format
    sql, args, err := query.PlaceholderFormat(squirrel.Dollar).ToSql()
    if err != nil {
        return nil, fmt.Errorf("build query: %w", err)
    }

    rows, err := q.db.QueryContext(ctx, sql, args...)
    // ... scanning logic
}
### sqlx - Enhanced database/sql
**Best for**: Reducing boilerplate while keeping SQL control
go
import "github.com/jmoiron/sqlx"
type UserRepo struct { db *sqlx.DB }
// Struct tags for automatic scanning
type User struct {
    ID         string    `db:"id"`
    Email      string    `db:"email"`
    Name       string    `db:"name"`
    CreatedAt  time.Time `db:"created_at"`
    OrderCount int       `db:"order_count"`
    TotalSpent float64   `db:"total_spent"`
}
func (r *UserRepo) FindActiveUsers(ctx context.Context, limit int) ([]*User, error) {
    query := `
        SELECT u.id, u.email, u.name, u.created_at,
               COUNT(o.id) as order_count,
               COALESCE(SUM(o.total), 0) as total_spent
        FROM users u
        LEFT JOIN orders o ON o.user_id = u.id AND o.status = 'completed'
        WHERE u.status = 'active'
        GROUP BY u.id
        ORDER BY u.created_at DESC
        LIMIT $1`

    var users []*User
    // Automatic struct scanning
    err := r.db.SelectContext(ctx, &users, query, limit)
    return users, err
}
// Named parameters for clarity
func (r *UserRepo) UpdateUser(ctx context.Context, id string, updates map[string]interface{}) error {
    query := `
        UPDATE users
        SET email = :email, name = :name, updated_at = :updated_at
        WHERE id = :id`

    updates["id"] = id
    updates["updated_at"] = time.Now()

    _, err := r.db.NamedExecContext(ctx, query, updates)
    return err
}
### Ent - Type-safe ORM with Code Generation
**Best for**: Graph-based data models, compile-time safety
go
// Schema definition
package schema
import ( "entgo.io/ent" "entgo.io/ent/schema/field" "entgo.io/ent/schema/edge" )
type User struct { ent.Schema }
func (User) Fields() []ent.Field { return []ent.Field{ field.String("email").Unique(), field.String("name"), field.Enum("status").Values("active", "inactive"), field.Time("created_at").Default(time.Now), } }
func (User) Edges() []ent.Edge { return []ent.Edge{ edge.To("orders", Order.Type), } }
// Usage - compile-time safe users, err := client.User. Query(). Where(user.StatusEQ(user.StatusActive)). WithOrders(func(q *ent.OrderQuery) { q.Where(order.StatusEQ("completed")) }). Order(ent.Desc(user.FieldCreatedAt)). Limit(limit). All(ctx)
### GORM - Full-featured ORM
**Best for**: Rapid prototyping, CRUD-heavy applications
go
import "gorm.io/gorm"
type User struct {
    ID        string `gorm:"primaryKey"`
    Email     string `gorm:"uniqueIndex"`
    Name      string
    Status    string
    CreatedAt time.Time
    Orders    []Order `gorm:"foreignKey:UserID"`

    // Computed fields
    OrderCount int     `gorm:"->;-:migration"`
    TotalSpent float64 `gorm:"->;-:migration"`
}

func FindActiveUsersWithStats(db *gorm.DB, limit int) ([]*User, error) {
    var users []*User

    err := db.
        Select(`users.*,
            COUNT(orders.id) as order_count,
            COALESCE(SUM(orders.total), 0) as total_spent`).
        Joins("LEFT JOIN orders ON orders.user_id = users.id AND orders.status = ?", "completed").
        Where("users.status = ?", "active").
        Group("users.id").
        Order("users.created_at DESC").
        Limit(limit).
        Find(&users).Error

    return users, err
}
// GORM's magic can be problematic
db.Where(&User{Status: "active"}).Find(&users) // Works but hides SQL
db.Preload("Orders").Find(&users)              // N+1 query risk
### Production Recommendations
#### 1. **RECOMMENDED: sqlc for Most Applications**
**The best choice for production Go applications** - combines type safety with SQL control:
bash
# Simple setup
go install github.com/kyleconroy/sqlc/cmd/sqlc@latest
sqlc generate # Generates type-safe Go from SQL
**Why sqlc is the gold standard:**
- ✅ **Compile-time type safety** - catch query errors at build time
- ✅ **Zero runtime overhead** - generates native Go code, no reflection
- ✅ **Real SQL** - use full database features, no ORM abstraction layer
- ✅ **CI/CD integration** - fails builds on invalid queries
- ✅ **Multiple database support** - PostgreSQL, MySQL, SQLite
- ✅ **Perfect for AI code generation** - clear, predictable patterns
#### 2. **Alternative: Raw database/sql**
Use when sqlc doesn't fit your specific needs:
- Maximum control and visibility (but more boilerplate)
- Excellent performance
- No tooling dependencies
- Manual type handling required
#### 3. **Add Query Builders for Dynamic Queries**
Use Squirrel on top of sqlc/raw SQL for complex filtering:
- Programmatic query building
- Maintains SQL readability
- Perfect for admin interfaces and search endpoints
#### 4. **ORMs: Specific Use Cases Only**
Reserve ORMs for rapid prototyping or non-critical paths:
- CRUD-heavy admin panels (GORM)
- Graph-like data models (Ent)
- **Not recommended for production business logic**
### Migration Strategy
go
// Start simple, evolve as needed
type UserRepo struct {
db *sql.DB // Start here
sqlx *sqlx.DB // Reduce boilerplate
builder squirrel.StatementBuilderType // Add for dynamic queries
}
// Critical path: raw SQL
func (r *UserRepo) GetUserForAuth(ctx context.Context, email string) (*User, error) {
    // Performance critical - use raw SQL
    query := `SELECT id, email, password_hash FROM users WHERE email = $1`
// ...
}
// Dynamic search: query builder
func (r *UserRepo) SearchUsers(ctx context.Context, filter SearchFilter) ([]*User, error) {
    // Complex dynamic query - use builder
    query := r.builder.Select("*").From("users")
    // ...
}

// Admin panel: ORM acceptable
func (r *UserRepo) AdminGetUserWithRelations(ctx context.Context, id string) (*User, error) {
    // Non-critical path, convenience matters
    return r.orm.Preload("Orders").Preload("Profile").First(&User{ID: id})
}
### Performance Comparison
go
// Benchmark results (typical)
// BenchmarkRawSQL-8      50000    30µs/op     896 B/op    12 allocs/op
// BenchmarkSqlx-8        45000    33µs/op    1024 B/op    14 allocs/op
// BenchmarkSquirrel-8    40000    38µs/op    1280 B/op    18 allocs/op
// BenchmarkEnt-8         30000    48µs/op    1920 B/op    28 allocs/op
// BenchmarkGORM-8        20000    75µs/op    3840 B/op    62 allocs/op
### Decision Matrix
| Factor | **sqlc (RECOMMENDED)** | Raw SQL | Squirrel | sqlx | Ent | GORM |
|--------|------------------------|---------|----------|------|-----|------|
| **Type Safety** | ★★★★★ | ★ | ★★ | ★★ | ★★★★★ | ★★★ |
| **Performance** | ★★★★★ | ★★★★★ | ★★★★★ | ★★★★★ | ★★★★ | ★★★ |
| **AI Friendliness** | ★★★★★ | ★★★ | ★★★★ | ★★★ | ★★ | ★★ |
| **CI/CD Integration** | ★★★★★ | ★★★ | ★★★ | ★★★ | ★★★★ | ★★★ |
| **Debugging** | ★★★★★ | ★★★★★ | ★★★★ | ★★★★★ | ★★★ | ★★ |
| **Maintenance** | ★★★★★ | ★★★ | ★★★★ | ★★★★ | ★★★★ | ★★ |
| **Learning Curve** | ★★★★ | ★★★★★ | ★★★★ | ★★★★★ | ★★ | ★★★ |
**Key Insight**: sqlc provides the best balance of type safety, performance, and maintainability for production Go applications.
---
## Migration Management
### Embedded Migrations
go
// internal/database/migrate/migrate.go
package migrate
import ( "database/sql" "embed" "fmt" "sort" "strings" )
//go:embed migrations/*.sql var migrations embed.FS
type Migration struct {
    Version    int
    Name       string
    UpScript   string
    DownScript string
}

type Migrator struct {
    db         *sql.DB
    migrations []Migration
}

func NewMigrator(db *sql.DB) (*Migrator, error) {
    m := &Migrator{db: db}

    // Ensure migrations table exists
    if err := m.createMigrationsTable(); err != nil {
        return nil, err
    }

    // Load migrations
    if err := m.loadMigrations(); err != nil {
        return nil, err
    }

    return m, nil
}
func (m *Migrator) createMigrationsTable() error {
    query := `
        CREATE TABLE IF NOT EXISTS schema_migrations (
            version INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        )`

    _, err := m.db.Exec(query)
    return err
}
func (m *Migrator) Up() error {
    applied, err := m.appliedMigrations()
    if err != nil {
        return err
    }

    for _, migration := range m.migrations {
        if applied[migration.Version] {
            continue
        }

        fmt.Printf("Applying migration %d: %s\n", migration.Version, migration.Name)

        if err := m.applyMigration(migration); err != nil {
            return fmt.Errorf("migration %d failed: %w", migration.Version, err)
        }
    }

    return nil
}

func (m *Migrator) applyMigration(migration Migration) error {
    tx, err := m.db.Begin()
    if err != nil {
        return err
    }
    defer tx.Rollback()

    // Execute migration
    if _, err := tx.Exec(migration.UpScript); err != nil {
        return err
    }

    // Record migration
    if _, err := tx.Exec(
        "INSERT INTO schema_migrations (version, name) VALUES (?, ?)",
        migration.Version, migration.Name,
    ); err != nil {
        return err
    }

    return tx.Commit()
}
### Migration Files
sql
-- migrations/001_create_users.up.sql
CREATE TABLE users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email VARCHAR(255) UNIQUE NOT NULL,
    name VARCHAR(255) NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    status VARCHAR(50) NOT NULL DEFAULT 'pending',
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    deleted_at TIMESTAMP
);

CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_status ON users(status);
CREATE INDEX idx_users_deleted_at ON users(deleted_at);

-- migrations/001_create_users.down.sql
DROP TABLE IF EXISTS users;
### Third-Party Migration Libraries
While the embedded migration system above is educational and provides complete control, most production teams use established migration libraries that provide:
- Battle-tested migration logic
- CLI tooling for creating and managing migrations
- Support for multiple database engines
- Rollback and versioning features
- Team collaboration features
#### Popular Migration Libraries
##### 1. golang-migrate/migrate
The most popular and comprehensive migration library.
bash
# Install CLI
brew install golang-migrate

# Create migration
migrate create -ext sql -dir db/migrations -seq create_users_table

# Run migrations
migrate -path db/migrations -database "postgresql://user:pass@localhost/dbname?sslmode=disable" up

# Rollback
migrate -path db/migrations -database "postgresql://user:pass@localhost/dbname?sslmode=disable" down 1
Integration in Go code:
go
import (
"github.com/golang-migrate/migrate/v4"
_ "github.com/golang-migrate/migrate/v4/database/postgres"
_ "github.com/golang-migrate/migrate/v4/source/file"
)
func RunMigrations(databaseURL string) error {
    m, err := migrate.New(
        "file://db/migrations",
        databaseURL,
    )
    if err != nil {
        return err
    }

    if err := m.Up(); err != nil && err != migrate.ErrNoChange {
        return err
    }

    return nil
}
##### 2. pressly/goose
Simple and straightforward migration tool with Go and SQL support.
bash
# Install
go install github.com/pressly/goose/v3/cmd/goose@latest

# Create migration
goose -dir db/migrations create add_users_table sql

# Run migrations
goose -dir db/migrations postgres "user=postgres dbname=mydb sslmode=disable" up

# Status
goose -dir db/migrations postgres "user=postgres dbname=mydb sslmode=disable" status
Embedding in Go:
go
import (
"embed"
"github.com/pressly/goose/v3"
)
//go:embed migrations/*.sql
var embedMigrations embed.FS

func RunGooseMigrations(db *sql.DB) error {
    goose.SetBaseFS(embedMigrations)

    if err := goose.SetDialect("postgres"); err != nil {
        return err
    }

    if err := goose.Up(db, "migrations"); err != nil {
        return err
    }

    return nil
}
##### 3. sqlc migrations
If you're already using sqlc, it integrates well with golang-migrate.
yaml
# sqlc.yaml
version: "2"
sql:
  - engine: "postgresql"
    queries: "queries.sql"
    schema: "db/migrations" # Points to migration files
    gen:
      go:
        package: "db"
        out: "internal/db"
#### Migration Library Comparison
| Feature | golang-migrate | goose | DIY/Embedded |
|---------|----------------|-------|--------------|
| **CLI Tools** | ✅ Comprehensive | ✅ Simple | ❌ Build your own |
| **Database Support** | 20+ databases | 10+ databases | Manual per DB |
| **Rollback** | ✅ Version-based | ✅ Up/Down | ✅ Manual |
| **Go Integration** | Library + CLI | Library + CLI | Code only |
| **Transactions** | ✅ Per migration | ✅ Configurable | ✅ Full control |
| **Team Features** | ✅ Locking | Basic | Build yourself |
| **Embedding** | ✅ Multiple sources | ✅ embed.FS | ✅ Native |
#### Production Recommendations
1. **Use golang-migrate for complex projects**
- Most features and database support
- Active community and maintenance
- Good CI/CD integration
2. **Use goose for simpler projects**
- Easier to understand and debug
- Good embed.FS support
- Sufficient for most applications
3. **Build custom only when**
- You need specific migration behavior
- Compliance requires full control
- Learning purposes
#### Migration Best Practices
go
// cmd/migrate/main.go
package main
import ( "flag" "log" "os"
"github.com/golang-migrate/migrate/v4" _ "github.com/golang-migrate/migrate/v4/database/postgres" _ "github.com/golang-migrate/migrate/v4/source/file" )
func main() { var ( dir = flag.String("dir", "db/migrations", "migrations directory") dbURL = flag.String("db", os.Getenv("DATABASE_URL"), "database URL") verbose = flag.Bool("v", false, "verbose logging") ) flag.Parse()
if *dbURL == "" { log.Fatal("database URL required") }
m, err := migrate.New( "file://"+*dir, *dbURL, ) if err != nil { log.Fatal(err) }
if *verbose { m.Log = &logger{} }
cmd := flag.Arg(0) switch cmd { case "up": if err := m.Up(); err != nil && err != migrate.ErrNoChange { log.Fatal(err) } case "down": if err := m.Down(); err != nil && err != migrate.ErrNoChange { log.Fatal(err) } case "version": v, dirty, err := m.Version() if err != nil { log.Fatal(err) } log.Printf("version: %d, dirty: %v", v, dirty) default: log.Fatal("usage: migrate [up|down|version]") } }
#### CI/CD Integration
yaml
# .github/workflows/migrate.yml
name: Database Migrations

on:
  push:
    paths:
      - 'db/migrations/**'

jobs:
  validate:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
        options: >-
          --health-cmd pg_isready
          --health-interval 10s

    steps:
      - uses: actions/checkout@v3

      - name: Install migrate
        run: |
          curl -L https://github.com/golang-migrate/migrate/releases/download/v4.16.2/migrate.linux-amd64.tar.gz | tar xvz
          sudo mv migrate /usr/local/bin/

      - name: Run migrations
        run: |
          migrate -path db/migrations -database "postgresql://postgres:postgres@localhost/postgres?sslmode=disable" up

      - name: Validate rollback
        run: |
          migrate -path db/migrations -database "postgresql://postgres:postgres@localhost/postgres?sslmode=disable" down 1
          migrate -path db/migrations -database "postgresql://postgres:postgres@localhost/postgres?sslmode=disable" up
---
## Repository Pattern
### Decision Guide: sqlc vs Generic Repository
**Choose based on your application characteristics:**
#### Use sqlc (Recommended for Most Applications)
✅ **Best for**:
- Static, well-defined queries known at compile time
- Teams comfortable writing SQL
- Performance-critical applications
- CI/CD pipelines with build-time validation
- Microservices with focused data access patterns
sql
-- sqlc generates type-safe Go from SQL
-- name: GetUserWithOrders :many
SELECT u.id, u.email, u.name,
       COALESCE(o.total, 0) as order_total
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
WHERE u.status = $1 AND u.created_at > $2;
go
// Generated Go code (zero runtime overhead)
func (q *Queries) GetUserWithOrders(ctx context.Context, arg GetUserWithOrdersParams) ([]GetUserWithOrdersRow, error) {
    // Type-safe, generated implementation
}
#### Use Generic Repository (Advanced Pattern)
✅ **Best for**:
- Rapid prototyping and early development phases
- Admin panels with extensive CRUD operations
- Applications requiring true database-agnostic layers
- Teams preferring ORM-style abstractions
- Dynamic query requirements
go
// Generic repository abstracts common operations
userRepo := storage.NewGenericRepository[*User](db)
productRepo := storage.NewGenericRepository[*Product](db)

// Consistent interface across all entities
user, err := userRepo.GetByID(ctx, "123")
products, err := productRepo.List(ctx, storage.Filter{})
#### Production Decision Matrix
| Factor | sqlc | Generic Repository |
|--------|------|-------------------|
| **Type Safety** | Compile-time | Runtime |
| **Performance** | Zero overhead | Minimal reflection |
| **SQL Control** | Full SQL features | Limited to generic patterns |
| **Learning Curve** | SQL knowledge required | Go patterns familiar |
| **Build Process** | Code generation step | Standard go build |
| **Debugging** | SQL traces directly | Abstract layer traces |
| **Team Preference** | SQL-first developers | ORM/abstraction-first |
### Generic Repository Pattern (Go 1.18+)
**For teams choosing the generic approach**, modern Go applications can use generics to create reusable repository implementations while maintaining type safety.
#### Generic Repository Interface
go
// internal/storage/generic.go
package storage
import ( "context" "database/sql" )
// Entity represents any domain entity with basic fields type Entity interface { GetID() string GetCreatedAt() time.Time GetUpdatedAt() time.Time SetCreatedAt(time.Time) SetUpdatedAt(time.Time) }
// Repository provides generic CRUD operations type Repository[T Entity] interface { Create(ctx context.Context, entity T) error GetByID(ctx context.Context, id string) (T, error) Update(ctx context.Context, entity T) error Delete(ctx context.Context, id string) error List(ctx context.Context, filter ListFilter) ([]T, error) Count(ctx context.Context, filter ListFilter) (int64, error) }
// QueryBuilder builds dynamic queries for entity type T type QueryBuilder[T Entity] interface { Where(field string, operator string, value interface{}) QueryBuilder[T] OrderBy(field string, direction string) QueryBuilder[T] Limit(limit int) QueryBuilder[T] Offset(offset int) QueryBuilder[T] Build() (string, []interface{}) }
#### Generic Repository Implementation
go
// BaseRepository provides common CRUD operations for any entity
type BaseRepository[T Entity] struct {
db *sql.DB
logger Logger
tableName string
columns []string
scanner EntityScanner[T]
}
// EntityScanner handles database row scanning for entity type T
type EntityScanner[T Entity] interface {
    Scan(rows *sql.Rows) (T, error)
    ScanArgs(entity T) []interface{}
}
func NewBaseRepository[T Entity](db *sql.DB, logger Logger, tableName string, columns []string, scanner EntityScanner[T]) *BaseRepository[T] {
    return &BaseRepository[T]{
        db:        db,
        logger:    logger,
        tableName: tableName,
        columns:   columns,
        scanner:   scanner,
    }
}
func (r *BaseRepository[T]) Create(ctx context.Context, entity T) error {
    now := time.Now()
    entity.SetCreatedAt(now)
    entity.SetUpdatedAt(now)

    query := r.buildInsertQuery()
    args := r.scanner.ScanArgs(entity)

    if _, err := r.db.ExecContext(ctx, query, args...); err != nil {
        r.logger.Error("failed to create entity",
            "table", r.tableName,
            "entity_id", entity.GetID(),
            "error", err)
        return fmt.Errorf("create %s: %w", r.tableName, err)
    }

    return nil
}
func (r *BaseRepository[T]) GetByID(ctx context.Context, id string) (T, error) {
    var zero T

    query := fmt.Sprintf("SELECT %s FROM %s WHERE id = $1",
        strings.Join(r.columns, ", "), r.tableName)

    rows, err := r.db.QueryContext(ctx, query, id)
    if err != nil {
        r.logger.Error("failed to query entity",
            "table", r.tableName,
            "entity_id", id,
            "error", err)
        return zero, fmt.Errorf("query %s: %w", r.tableName, err)
    }
    defer rows.Close()

    if !rows.Next() {
        return zero, ErrNotFound
    }

    entity, err := r.scanner.Scan(rows)
    if err != nil {
        r.logger.Error("failed to scan entity",
            "table", r.tableName,
            "entity_id", id,
            "error", err)
        return zero, fmt.Errorf("scan %s: %w", r.tableName, err)
    }

    return entity, nil
}
func (r *BaseRepository[T]) List(ctx context.Context, filter ListFilter) ([]T, error) {
    query := fmt.Sprintf("SELECT %s FROM %s",
        strings.Join(r.columns, ", "), r.tableName)

    var args []interface{}
    if filter.Status != "" {
        query += " WHERE status = $1"
        args = append(args, filter.Status)
    }

    if filter.Limit > 0 {
        query += fmt.Sprintf(" LIMIT %d", filter.Limit)
    }

    if filter.Offset > 0 {
        query += fmt.Sprintf(" OFFSET %d", filter.Offset)
    }

    rows, err := r.db.QueryContext(ctx, query, args...)
    if err != nil {
        return nil, fmt.Errorf("query %s list: %w", r.tableName, err)
    }
    defer rows.Close()

    var entities []T
    for rows.Next() {
        entity, err := r.scanner.Scan(rows)
        if err != nil {
            r.logger.Error("failed to scan entity in list",
                "table", r.tableName,
                "error", err)
            continue
        }
        entities = append(entities, entity)
    }

    return entities, nil
}
func (r *BaseRepository[T]) buildInsertQuery() string {
    placeholders := make([]string, len(r.columns))
    for i := range r.columns {
        placeholders[i] = fmt.Sprintf("$%d", i+1)
    }

    return fmt.Sprintf("INSERT INTO %s (%s) VALUES (%s)",
        r.tableName,
        strings.Join(r.columns, ", "),
        strings.Join(placeholders, ", "))
}
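The placeholder generation in `buildInsertQuery` is easy to verify in isolation. Here it is lifted into a free function (same logic, table and columns passed as arguments) so the generated SQL can be checked directly:

```go
package main

import (
	"fmt"
	"strings"
)

// buildInsertQuery mirrors the BaseRepository method above as a free
// function: one $n placeholder per column, numbered from 1.
func buildInsertQuery(table string, columns []string) string {
	placeholders := make([]string, len(columns))
	for i := range columns {
		placeholders[i] = fmt.Sprintf("$%d", i+1)
	}

	return fmt.Sprintf("INSERT INTO %s (%s) VALUES (%s)",
		table,
		strings.Join(columns, ", "),
		strings.Join(placeholders, ", "))
}

func main() {
	fmt.Println(buildInsertQuery("users", []string{"id", "email", "name"}))
	// INSERT INTO users (id, email, name) VALUES ($1, $2, $3)
}
```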
#### Concrete Repository Implementation
go
// internal/storage/postgres/user_repository.go
package postgres
import ( "database/sql" "myapp/internal/domain" "myapp/internal/storage" )
// UserRepository extends BaseRepository with user-specific methods
type UserRepository struct {
    *storage.BaseRepository[*domain.User]
}
// UserScanner implements EntityScanner for User entities type UserScanner struct{}
func (s UserScanner) Scan(rows *sql.Rows) (*domain.User, error) {
    var user domain.User

    err := rows.Scan(
        &user.ID, &user.Email, &user.Name, &user.PasswordHash,
        &user.Status, &user.CreatedAt, &user.UpdatedAt,
    )

    return &user, err
}

func (s UserScanner) ScanArgs(user *domain.User) []interface{} {
    return []interface{}{
        user.ID, user.Email, user.Name, user.PasswordHash,
        user.Status, user.CreatedAt, user.UpdatedAt,
    }
}
func NewUserRepository(db *sql.DB, logger Logger) *UserRepository {
    columns := []string{"id", "email", "name", "password_hash", "status", "created_at", "updated_at"}
    scanner := &UserScanner{}

    baseRepo := storage.NewBaseRepository[*domain.User](db, logger, "users", columns, scanner)

    return &UserRepository{
        BaseRepository: baseRepo,
    }
}
// User-specific methods
func (r *UserRepository) GetByEmail(ctx context.Context, email string) (*domain.User, error) {
    query := "SELECT id, email, name, password_hash, status, created_at, updated_at FROM users WHERE email = $1"

    rows, err := r.db.QueryContext(ctx, query, email)
    if err != nil {
        return nil, fmt.Errorf("query user by email: %w", err)
    }
    defer rows.Close()

    if !rows.Next() {
        return nil, ErrNotFound
    }

    return r.scanner.Scan(rows)
}

func (r *UserRepository) GetActiveUsers(ctx context.Context) ([]*domain.User, error) {
    filter := storage.ListFilter{Status: "active"}
    return r.List(ctx, filter)
}
#### Generic Query Builder
```go
// PostgreSQLQueryBuilder provides dynamic query building for PostgreSQL
type PostgreSQLQueryBuilder[T Entity] struct {
	tableName  string
	columns    []string
	conditions []condition
	orderBy    []orderClause
	limit      int
	offset     int
}

type condition struct {
	field    string
	operator string
	value    interface{}
}

type orderClause struct {
	field     string
	direction string
}

func NewQueryBuilder[T Entity](tableName string, columns []string) *PostgreSQLQueryBuilder[T] {
	return &PostgreSQLQueryBuilder[T]{
		tableName: tableName,
		columns:   columns,
	}
}

func (qb *PostgreSQLQueryBuilder[T]) Where(field, operator string, value interface{}) storage.QueryBuilder[T] {
	qb.conditions = append(qb.conditions, condition{
		field:    field,
		operator: operator,
		value:    value,
	})
	return qb
}

func (qb *PostgreSQLQueryBuilder[T]) OrderBy(field, direction string) storage.QueryBuilder[T] {
	qb.orderBy = append(qb.orderBy, orderClause{
		field:     field,
		direction: direction,
	})
	return qb
}

func (qb *PostgreSQLQueryBuilder[T]) Limit(limit int) storage.QueryBuilder[T] {
	qb.limit = limit
	return qb
}

func (qb *PostgreSQLQueryBuilder[T]) Offset(offset int) storage.QueryBuilder[T] {
	qb.offset = offset
	return qb
}

func (qb *PostgreSQLQueryBuilder[T]) Build() (string, []interface{}) {
	query := fmt.Sprintf("SELECT %s FROM %s", strings.Join(qb.columns, ", "), qb.tableName)

	var args []interface{}
	paramIndex := 1

	// WHERE clause
	if len(qb.conditions) > 0 {
		query += " WHERE "
		var whereClauses []string

		for _, cond := range qb.conditions {
			whereClauses = append(whereClauses, fmt.Sprintf("%s %s $%d", cond.field, cond.operator, paramIndex))
			args = append(args, cond.value)
			paramIndex++
		}

		query += strings.Join(whereClauses, " AND ")
	}

	// ORDER BY clause
	if len(qb.orderBy) > 0 {
		query += " ORDER BY "
		var orderClauses []string

		for _, order := range qb.orderBy {
			orderClauses = append(orderClauses, fmt.Sprintf("%s %s", order.field, order.direction))
		}

		query += strings.Join(orderClauses, ", ")
	}

	// LIMIT and OFFSET
	if qb.limit > 0 {
		query += fmt.Sprintf(" LIMIT %d", qb.limit)
	}

	if qb.offset > 0 {
		query += fmt.Sprintf(" OFFSET %d", qb.offset)
	}

	return query, args
}

// Usage example (userColumns: the column list defined for this repository)
func (r *UserRepository) FindUsers(ctx context.Context, status string, minAge int) ([]*domain.User, error) {
	qb := NewQueryBuilder[domain.User]("users", userColumns).
		Where("status", "=", status).
		Where("age", ">=", minAge).
		OrderBy("created_at", "DESC").
		Limit(100)

	query, args := qb.Build()

	rows, err := r.db.QueryContext(ctx, query, args...)
	if err != nil {
		return nil, fmt.Errorf("find users: %w", err)
	}
	defer rows.Close()

	var users []*domain.User
	for rows.Next() {
		user, err := r.scanner.Scan(rows)
		if err != nil {
			continue
		}
		users = append(users, user)
	}

	return users, rows.Err()
}
```
#### Benefits of Generic Repository Pattern
1. **Reduced Boilerplate**: Common CRUD operations implemented once
2. **Type Safety**: Compile-time checking of entity types
3. **Consistency**: Same patterns across all repositories
4. **Extensibility**: Easy to add domain-specific methods
5. **Testing**: Generic test patterns can be reused
#### When to Use Generic vs Specific Repositories
```go
// Use Generic Repository for:
// - Simple CRUD operations
// - Standard entity patterns
// - Rapid prototyping
type ProductRepository struct {
	storage.BaseRepository[domain.Product]
}

// Use Specific Repository for:
// - Complex queries with joins
// - Domain-specific business logic
// - Performance-critical paths
func (r *UserRepository) GetUserWithOrders(ctx context.Context, userID string) (*UserWithOrders, error) {
	// Complex query that doesn't fit the generic pattern
	query := `
		SELECT u.id, u.email, u.name,
		       COALESCE(json_agg(o.*) FILTER (WHERE o.id IS NOT NULL), '[]') AS orders
		FROM users u
		LEFT JOIN orders o ON u.id = o.user_id
		WHERE u.id = $1
		GROUP BY u.id, u.email, u.name`

	// Custom implementation...
}
```
---
## Repository Pattern
### Interface Definition
```go
// internal/service/interfaces.go
package service

// Repository interfaces defined by service layer
type UserRepository interface {
	Create(ctx context.Context, user *domain.User) error
	GetByID(ctx context.Context, id string) (*domain.User, error)
	GetByEmail(ctx context.Context, email string) (*domain.User, error)
	Update(ctx context.Context, user *domain.User) error
	Delete(ctx context.Context, id string) error
	List(ctx context.Context, filter ListFilter) ([]*domain.User, error)
}

type ListFilter struct {
	Status string
	Limit  int
	Offset int
}
```
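Before a `ListFilter` reaches the query builder it is worth clamping it to safe defaults, so no caller can issue an unbounded or negative-offset query. A minimal sketch (the helper name and limits are illustrative, not part of the guide's API):

```go
package main

import "fmt"

// ListFilter mirrors the filter type defined in the service interfaces above.
type ListFilter struct {
	Status string
	Limit  int
	Offset int
}

// NormalizeFilter clamps pagination: zero or negative limits become a sane
// default, oversized limits are capped, and negative offsets are reset.
func NormalizeFilter(f ListFilter) ListFilter {
	const defaultLimit, maxLimit = 50, 500
	if f.Limit <= 0 {
		f.Limit = defaultLimit
	}
	if f.Limit > maxLimit {
		f.Limit = maxLimit
	}
	if f.Offset < 0 {
		f.Offset = 0
	}
	return f
}

func main() {
	fmt.Println(NormalizeFilter(ListFilter{Limit: 10000, Offset: -5}))
}
```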
### Repository Implementation
```go
// internal/storage/postgres/user_repository.go
package postgres

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"log/slog"

	"github.com/lib/pq"

	"myapp/internal/domain"
	"myapp/internal/service"
)

// Ensure interface compliance
var _ service.UserRepository = (*UserRepository)(nil)

type UserRepository struct {
	db     *sql.DB
	logger Logger
}

func NewUserRepository(db *sql.DB, logger Logger) *UserRepository {
	return &UserRepository{
		db:     db,
		logger: logger,
	}
}

func (r *UserRepository) GetByID(ctx context.Context, id string) (*domain.User, error) {
	var user domain.User

	err := r.db.QueryRowContext(ctx, getUserByID, id).Scan(
		&user.ID,
		&user.Email,
		&user.Name,
		&user.PasswordHash,
		&user.Status,
		&user.CreatedAt,
		&user.UpdatedAt,
	)

	if errors.Is(err, sql.ErrNoRows) {
		return nil, service.ErrNotFound
	}

	if err != nil {
		r.logger.Error("failed to get user",
			slog.String("user_id", id),
			slog.Any("error", err))
		return nil, fmt.Errorf("get user: %w", err)
	}

	return &user, nil
}

func (r *UserRepository) Create(ctx context.Context, user *domain.User) error {
	_, err := r.db.ExecContext(ctx, createUser,
		user.ID,
		user.Email,
		user.Name,
		user.PasswordHash,
		user.Status,
		user.CreatedAt,
		user.UpdatedAt,
	)

	if err != nil {
		if isUniqueViolation(err, "users_email_key") {
			return service.ErrEmailTaken
		}
		return fmt.Errorf("create user: %w", err)
	}

	return nil
}

// Helper to check PostgreSQL errors
func isUniqueViolation(err error, constraint string) bool {
	var pgErr *pq.Error
	if errors.As(err, &pgErr) {
		return pgErr.Code == "23505" && pgErr.Constraint == constraint
	}
	return false
}
```
---
## Transaction Handling
### Transaction Patterns
#### ❌ Anti-Pattern: Using panic/recover (AVOID)
```go
// internal/database/transaction.go
package database

// Transaction executes fn within a database transaction
func Transaction(ctx context.Context, db *sql.DB, fn func(*sql.Tx) error) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return fmt.Errorf("begin transaction: %w", err)
	}

	// Note: this defer pattern is shown for completeness but has drawbacks:
	// - panic/recover can hide the original error location
	// - explicit error handling (see the recommended pattern below) is clearer
	defer func() {
		if p := recover(); p != nil {
			tx.Rollback()
			panic(p) // Re-panic
		}
	}()

	if err := fn(tx); err != nil {
		if rbErr := tx.Rollback(); rbErr != nil {
			return fmt.Errorf("rollback failed: %v (original: %w)", rbErr, err)
		}
		return err
	}

	if err := tx.Commit(); err != nil {
		return fmt.Errorf("commit transaction: %w", err)
	}

	return nil
}
```
### Complex Transaction Example
```go
// internal/service/transfer_service.go
package service

func (s *TransferService) Transfer(ctx context.Context, from, to string, amount decimal.Decimal) error {
	return database.Transaction(ctx, s.db, func(tx *sql.Tx) error {
		// Lock source account
		var sourceBalance decimal.Decimal
		err := tx.QueryRowContext(ctx,
			"SELECT balance FROM accounts WHERE id = $1 FOR UPDATE",
			from,
		).Scan(&sourceBalance)
		if err != nil {
			return fmt.Errorf("get source balance: %w", err)
		}

		// Check sufficient funds
		if sourceBalance.LessThan(amount) {
			return ErrInsufficientFunds
		}

		// Debit source
		_, err = tx.ExecContext(ctx,
			"UPDATE accounts SET balance = balance - $2 WHERE id = $1",
			from, amount,
		)
		if err != nil {
			return fmt.Errorf("debit source: %w", err)
		}

		// Credit destination
		_, err = tx.ExecContext(ctx,
			"UPDATE accounts SET balance = balance + $2 WHERE id = $1",
			to, amount,
		)
		if err != nil {
			return fmt.Errorf("credit destination: %w", err)
		}

		// Record transaction
		_, err = tx.ExecContext(ctx,
			"INSERT INTO transfers (from_account, to_account, amount) VALUES ($1, $2, $3)",
			from, to, amount,
		)
		if err != nil {
			return fmt.Errorf("record transfer: %w", err)
		}

		return nil
	})
}
```
### Savepoints
```go
func (r *Repository) ComplexOperation(ctx context.Context) error {
	return database.Transaction(ctx, r.db, func(tx *sql.Tx) error {
		// First operation
		if err := r.operation1(ctx, tx); err != nil {
			return err
		}

		// Create savepoint
		if _, err := tx.ExecContext(ctx, "SAVEPOINT operation2"); err != nil {
			return err
		}

		// Second operation (can fail independently)
		if err := r.operation2(ctx, tx); err != nil {
			// Rollback to savepoint
			tx.ExecContext(ctx, "ROLLBACK TO SAVEPOINT operation2")
			// Log but continue
			r.logger.Warn("operation2 failed, continuing", slog.Any("error", err))
		}

		// Third operation
		return r.operation3(ctx, tx)
	})
}
```
#### ✅ Recommended Pattern: Explicit Error Handling
```go
// RECOMMENDED: transaction with explicit error handling (no panic/recover)
func TransactionV2(ctx context.Context, db *sql.DB, fn func(*sql.Tx) error) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return fmt.Errorf("begin transaction: %w", err)
	}

	// Execute the transaction
	if err := fn(tx); err != nil {
		// Best-effort rollback
		if rbErr := tx.Rollback(); rbErr != nil {
			return fmt.Errorf("tx failed: %w, rollback failed: %v", err, rbErr)
		}
		return fmt.Errorf("transaction failed: %w", err)
	}

	// Commit
	if err := tx.Commit(); err != nil {
		return fmt.Errorf("commit failed: %w", err)
	}

	return nil
}
```
#### Pattern 2: Cleanup Helper
```go
// Helper that ensures cleanup
type TxRunner struct {
	db     *sql.DB
	logger Logger
}

func (r *TxRunner) RunInTransaction(ctx context.Context, fn func(*sql.Tx) error) error {
	tx, err := r.db.BeginTx(ctx, nil)
	if err != nil {
		return fmt.Errorf("begin tx: %w", err)
	}

	// Track whether we committed
	committed := false

	// Ensure cleanup
	defer func() {
		if !committed {
			if err := tx.Rollback(); err != nil && !errors.Is(err, sql.ErrTxDone) {
				r.logger.Error("rollback failed", slog.Any("error", err))
			}
		}
	}()

	// Run function
	if err := fn(tx); err != nil {
		return err // Rollback happens in defer
	}

	// Commit
	if err := tx.Commit(); err != nil {
		return fmt.Errorf("commit: %w", err)
	}

	committed = true
	return nil
}
```
#### Pattern 3: Context-Aware Transactions
```go
// Respects context cancellation
func ContextAwareTransaction(ctx context.Context, db *sql.DB, opts *sql.TxOptions, fn func(*sql.Tx) error) error {
	// Check context before starting
	if err := ctx.Err(); err != nil {
		return fmt.Errorf("context cancelled before tx: %w", err)
	}

	tx, err := db.BeginTx(ctx, opts)
	if err != nil {
		return fmt.Errorf("begin tx: %w", err)
	}

	done := false
	defer func() {
		if !done {
			_ = tx.Rollback() // Ignore error on cleanup
		}
	}()

	// Monitor context during transaction.
	// Caveat: if the context fires, fn may still be running in the goroutine;
	// because BeginTx ties the transaction to ctx, the driver rolls it back.
	errCh := make(chan error, 1)
	go func() {
		errCh <- fn(tx)
	}()

	select {
	case <-ctx.Done():
		return ctx.Err()
	case err := <-errCh:
		if err != nil {
			return err
		}
	}

	if err := tx.Commit(); err != nil {
		return fmt.Errorf("commit: %w", err)
	}

	done = true
	return nil
}
```
#### Pattern 4: Retryable Transactions
```go
// For handling deadlocks and transient errors
func RetryableTransaction(ctx context.Context, db *sql.DB, fn func(*sql.Tx) error) error {
	maxRetries := 3
	backoff := 100 * time.Millisecond

	for attempt := 0; attempt < maxRetries; attempt++ {
		err := TransactionV2(ctx, db, fn)
		if err == nil {
			return nil
		}

		// Check if retryable
		if !isRetryableError(err) {
			return err
		}

		// Last attempt?
		if attempt == maxRetries-1 {
			return fmt.Errorf("transaction failed after %d attempts: %w", maxRetries, err)
		}

		// Backoff
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			backoff *= 2
		}
	}

	return nil
}

func isRetryableError(err error) bool {
	// PostgreSQL serialization failure
	if strings.Contains(err.Error(), "40001") {
		return true
	}
	// MySQL deadlock
	if strings.Contains(err.Error(), "Deadlock found") {
		return true
	}
	return false
}
```
---
## Caching Strategies
### Understanding Cache Patterns
Caching is essential for database performance. The service architecture defines a `Cache` interface, but choosing the right caching strategy is critical for system behavior.
### Core Caching Patterns
#### 1. Cache-Aside (Lazy Loading)
**How it works**: Application manages cache explicitly - read from cache, on miss read from database and populate cache.
```go
// internal/service/user_service.go
type UserService struct {
	repo  UserRepository
	cache Cache
}

func (s *UserService) GetUser(ctx context.Context, id string) (*User, error) {
	// Try cache first
	key := fmt.Sprintf("user:%s", id)
	if cached, err := s.cache.Get(ctx, key); err == nil {
		var user User
		if err := json.Unmarshal(cached, &user); err == nil {
			return &user, nil
		}
	}

	// Cache miss - load from database
	user, err := s.repo.GetByID(ctx, id)
	if err != nil {
		return nil, err
	}

	// Update cache for next time
	if data, err := json.Marshal(user); err == nil {
		s.cache.Set(ctx, key, data, 5*time.Minute)
	}

	return user, nil
}

// Update operations must invalidate cache
func (s *UserService) UpdateUser(ctx context.Context, id string, updates UpdateRequest) error {
	if err := s.repo.Update(ctx, id, updates); err != nil {
		return err
	}

	// Invalidate cache entry
	key := fmt.Sprintf("user:%s", id)
	s.cache.Delete(ctx, key)

	return nil
}
```
**Pros**: Simple, application has full control, works with any cache
**Cons**: Cache logic scattered throughout code, risk of cache inconsistency
#### 2. Read-Through Cache
**How it works**: Cache sits between application and database, automatically loads missing data.
```go
// internal/cache/readthrough.go
type ReadThroughCache struct {
	cache  Cache
	loader DataLoader
}

type DataLoader interface {
	Load(ctx context.Context, key string) ([]byte, error)
}

func (c *ReadThroughCache) Get(ctx context.Context, key string) ([]byte, error) {
	// Try cache first
	if data, err := c.cache.Get(ctx, key); err == nil {
		return data, nil
	}

	// Load through cache
	data, err := c.loader.Load(ctx, key)
	if err != nil {
		return nil, err
	}

	// Cache for next time
	c.cache.Set(ctx, key, data, 5*time.Minute)

	return data, nil
}

// Usage with repository pattern
type UserLoader struct {
	repo UserRepository
}

func (l *UserLoader) Load(ctx context.Context, key string) ([]byte, error) {
	// Extract ID from key format "user:123"
	parts := strings.Split(key, ":")
	if len(parts) != 2 || parts[0] != "user" {
		return nil, fmt.Errorf("invalid key format: %s", key)
	}

	user, err := l.repo.GetByID(ctx, parts[1])
	if err != nil {
		return nil, err
	}

	return json.Marshal(user)
}
```
**Pros**: Centralized caching logic, consistent behavior
**Cons**: Less flexible, requires cache implementation support
#### 3. Write-Through Cache
**How it works**: Writes go through cache to database, cache always stays synchronized.
```go
// internal/cache/writethrough.go
type WriteThroughCache struct {
	cache  Cache
	writer DataWriter
}

type DataWriter interface {
	Write(ctx context.Context, key string, data []byte) error
}

func (c *WriteThroughCache) Set(ctx context.Context, key string, data []byte) error {
	// Write to database first
	if err := c.writer.Write(ctx, key, data); err != nil {
		return err
	}

	// Then update cache
	return c.cache.Set(ctx, key, data, 0) // No expiry for write-through
}

// Combining with transactions
func (s *Service) CreateOrder(ctx context.Context, order *Order) error {
	return database.Transaction(ctx, s.db, func(tx *sql.Tx) error {
		// Create order in database
		if err := s.repo.CreateOrderTx(ctx, tx, order); err != nil {
			return err
		}

		// Update user's cached order count
		userKey := fmt.Sprintf("user:%s:orders", order.UserID)
		s.cache.Increment(ctx, userKey, 1)

		// Update product inventory cache
		productKey := fmt.Sprintf("product:%s:inventory", order.ProductID)
		s.cache.Decrement(ctx, productKey, order.Quantity)

		return nil
	})
}
```
**Pros**: Cache always consistent with database, good for read-heavy workloads
**Cons**: Higher write latency, complexity with transactions
### In-Memory vs Distributed Caching
#### Local In-Memory Cache (Single Instance)
Best for:
- Single-instance applications
- Frequently accessed configuration
- Small datasets that fit in memory
- Low-latency requirements
```go
// internal/cache/memory/cache.go
package memory

import (
	"context"
	"sync"
	"time"
)

type Cache struct {
	mu    sync.RWMutex
	items map[string]*item
}

type item struct {
	value      []byte
	expiration time.Time
}

func New() *Cache {
	c := &Cache{
		items: make(map[string]*item),
	}

	// Cleanup goroutine
	go c.cleanup()

	return c
}

func (c *Cache) Get(ctx context.Context, key string) ([]byte, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()

	item, ok := c.items[key]
	if !ok {
		return nil, ErrNotFound
	}

	if !item.expiration.IsZero() && time.Now().After(item.expiration) {
		return nil, ErrNotFound
	}

	return item.value, nil
}

func (c *Cache) Set(ctx context.Context, key string, value []byte, ttl time.Duration) error {
	c.mu.Lock()
	defer c.mu.Unlock()

	exp := time.Time{}
	if ttl > 0 {
		exp = time.Now().Add(ttl)
	}

	c.items[key] = &item{
		value:      value,
		expiration: exp,
	}

	return nil
}
```
#### Distributed Cache (Redis)
Best for:
- Multi-instance applications
- Large datasets
- Shared state between services
- Cache persistence requirements
```go
// internal/cache/redis/cache.go
package redis

import (
	"context"
	"errors"
	"fmt"
	"time"

	goredis "github.com/redis/go-redis/v9"
)

type Cache struct {
	client *goredis.Client
	prefix string
}

func New(addr string, prefix string) (*Cache, error) {
	client := goredis.NewClient(&goredis.Options{
		Addr:         addr,
		DialTimeout:  5 * time.Second,
		ReadTimeout:  3 * time.Second,
		WriteTimeout: 3 * time.Second,
		PoolSize:     10,
		MinIdleConns: 5,
	})

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	if err := client.Ping(ctx).Err(); err != nil {
		return nil, fmt.Errorf("redis ping failed: %w", err)
	}

	return &Cache{
		client: client,
		prefix: prefix,
	}, nil
}

func (c *Cache) Get(ctx context.Context, key string) ([]byte, error) {
	val, err := c.client.Get(ctx, c.prefix+key).Bytes()
	if errors.Is(err, goredis.Nil) {
		return nil, ErrNotFound
	}
	return val, err
}

func (c *Cache) Set(ctx context.Context, key string, value []byte, ttl time.Duration) error {
	return c.client.Set(ctx, c.prefix+key, value, ttl).Err()
}

// Advanced Redis features
func (c *Cache) SetNX(ctx context.Context, key string, value []byte, ttl time.Duration) (bool, error) {
	return c.client.SetNX(ctx, c.prefix+key, value, ttl).Result()
}

func (c *Cache) Increment(ctx context.Context, key string, delta int64) (int64, error) {
	return c.client.IncrBy(ctx, c.prefix+key, delta).Result()
}
```
### Decision Matrix: When to Use Each Pattern
| Scenario | Recommended Pattern | Cache Type | TTL Strategy |
|----------|-------------------|------------|--------------|
| User profiles | Cache-Aside | Redis | 5-15 minutes |
| Session data | Write-Through | Redis | Session lifetime |
| Product catalog | Read-Through | Local + Redis | 1 hour |
| Inventory counts | Write-Through | Redis | No expiry |
| Configuration | Read-Through | Local | Application lifetime |
| API rate limits | Write-Through | Redis | Rolling window |
### Cache Implementation Best Practices
```go
// internal/cache/interface.go
package cache

import (
	"context"
	"time"
)

// Cache interface that both memory and Redis implement
type Cache interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
	Delete(ctx context.Context, key string) error

	// Optional advanced operations
	SetNX(ctx context.Context, key string, value []byte, ttl time.Duration) (bool, error)
	Increment(ctx context.Context, key string, delta int64) (int64, error)
	Expire(ctx context.Context, key string, ttl time.Duration) error
}

// Wrapper to add consistent key prefixing and metrics
type InstrumentedCache struct {
	underlying Cache
	prefix     string
	metrics    Metrics
}

func (c *InstrumentedCache) Get(ctx context.Context, key string) ([]byte, error) {
	start := time.Now()
	fullKey := c.prefix + ":" + key

	data, err := c.underlying.Get(ctx, fullKey)

	if err == nil {
		c.metrics.Counter("cache.hits", 1, "key_prefix", c.prefix)
	} else {
		c.metrics.Counter("cache.misses", 1, "key_prefix", c.prefix)
	}

	c.metrics.Histogram("cache.get.duration", time.Since(start).Seconds())

	return data, err
}
```
### Cache Stampede Prevention
```go
// Prevent multiple goroutines from regenerating the same cache entry
type StampedeProtector struct {
	cache    Cache
	loader   DataLoader
	inflight map[string]*sync.WaitGroup
	mu       sync.Mutex
}

func (p *StampedeProtector) Get(ctx context.Context, key string) ([]byte, error) {
	// Fast path - check cache first
	if data, err := p.cache.Get(ctx, key); err == nil {
		return data, nil
	}

	// Slow path - coordinate loading
	p.mu.Lock()
	wg, loading := p.inflight[key]
	if loading {
		// Another goroutine is loading
		p.mu.Unlock()
		wg.Wait()
		return p.cache.Get(ctx, key)
	}

	// We'll do the loading
	wg = &sync.WaitGroup{}
	wg.Add(1)
	p.inflight[key] = wg
	p.mu.Unlock()

	// Load data and cache it BEFORE signalling waiters,
	// so they find the fresh entry when they re-check the cache
	data, err := p.loader.Load(ctx, key)
	if err == nil {
		p.cache.Set(ctx, key, data, 5*time.Minute)
	}

	// Cleanup and signal others
	p.mu.Lock()
	delete(p.inflight, key)
	p.mu.Unlock()
	wg.Done()

	if err != nil {
		return nil, err
	}

	return data, nil
}
```
### Production Caching Checklist
- [ ] Choose appropriate caching pattern for each use case
- [ ] Implement cache stampede protection for expensive operations
- [ ] Monitor cache hit/miss ratios
- [ ] Set appropriate TTLs based on data volatility
- [ ] Handle cache failures gracefully (fallback to database)
- [ ] Use cache warming for critical data
- [ ] Implement cache invalidation strategy
- [ ] Monitor cache memory usage
- [ ] Test cache consistency under load
- [ ] Document cache key naming conventions
---
## Performance Optimization
### Query Performance Monitoring
```go
// internal/database/instrumentation.go
package database

type instrumentedDB struct {
	*sql.DB
	slowQueryThreshold time.Duration
	logger             Logger
	metrics            Metrics
}

func (db *instrumentedDB) QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error) {
	start := time.Now()

	rows, err := db.DB.QueryContext(ctx, query, args...)

	duration := time.Since(start)

	if duration > db.slowQueryThreshold {
		db.logger.Warn("slow query detected",
			slog.String("query", truncateQuery(query)),
			slog.Duration("duration", duration))

		db.metrics.Counter("db.slow_queries", 1)
	}

	db.metrics.Histogram("db.query.duration", duration.Seconds())

	return rows, err
}
```
### Connection Pool Monitoring
```go
func (m *Monitor) CollectStats() {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()

	for range ticker.C {
		stats := m.db.Stats()

		m.metrics.Gauge("db.connections.open", float64(stats.OpenConnections))
		m.metrics.Gauge("db.connections.idle", float64(stats.Idle))
		m.metrics.Gauge("db.connections.in_use", float64(stats.InUse))
		m.metrics.Gauge("db.connections.wait_count", float64(stats.WaitCount))
		m.metrics.Gauge("db.connections.wait_duration_ms", float64(stats.WaitDuration.Milliseconds()))

		// Alert if connection pool is exhausted
		if stats.OpenConnections == stats.MaxOpenConnections {
			m.logger.Warn("connection pool exhausted",
				slog.Int("max_connections", stats.MaxOpenConnections))
		}
	}
}
```
### Best Practices Summary
1. **Always use parameterized queries** - Never concatenate SQL
2. **Configure connection pools properly** based on load
3. **Use transactions for consistency**
4. **Monitor slow queries** and connection pool health
5. **Use prepared statements** for frequently executed queries
6. **Implement proper error handling** for constraint violations
7. **Use context for timeouts** and cancellation
8. **Test with real databases** using testcontainers
9. **Version your schema** with migrations
10. **Use appropriate indexes** based on query patterns
## Quick Reference Checklist
### Connection Management
- [ ] Use database-specific connection pool settings
- [ ] Configure SQLite with single connection (handled by `NewDB()`)
- [ ] Set appropriate timeouts for all database operations
- [ ] Use connection pool monitoring and health checks
- [ ] Test connection with `PingContext()` on startup
- [ ] Handle connection failures gracefully
### Query Construction & Safety
- [ ] Always use parameterized queries (never string concatenation)
- [ ] Use prepared statements for frequently executed queries
- [ ] Implement query builders for dynamic query construction
- [ ] Validate all SQL inputs and parameters
- [ ] Use proper placeholder formats (`$1` for Postgres, `?` for MySQL)
- [ ] Log slow queries and monitor performance
### Repository Pattern Implementation
- [ ] Define repository interfaces in service layer (not storage)
- [ ] Implement repositories in storage layer packages
- [ ] Use constructor injection for database dependencies
- [ ] Return domain objects, not database rows
- [ ] Map database errors to domain errors
- [ ] Implement proper constraint violation handling
### Transaction Management
- [ ] Use transaction wrappers for complex operations
- [ ] Handle rollback on errors properly
- [ ] Avoid panic/recover in transaction code
- [ ] Use savepoints for partial rollback scenarios
- [ ] Implement timeout handling for long transactions
- [ ] Test transaction rollback scenarios
### Database Abstraction Strategy
- [ ] Choose appropriate abstraction level (raw SQL vs ORM)
- [ ] Use raw SQL or sqlx for performance-critical paths
- [ ] **PRIORITY**: Use sqlc for type-safe SQL queries (strongly recommended for production)
- [ ] Generate queries with `sqlc generate` during build process
- [ ] Use query builders (Squirrel) for dynamic queries
- [ ] Reserve ORMs for CRUD-heavy admin interfaces
- [ ] Profile and benchmark database access patterns
### Migration & Schema Management
- [ ] Embed migrations in binary using `embed.FS`
- [ ] Create both up and down migration scripts
- [ ] Version migrations with sequential numbers
- [ ] Test migrations against production-like data
- [ ] Implement migration rollback capabilities
- [ ] Use atomic migrations with proper error handling
### Performance Optimization
- [ ] Monitor connection pool statistics
- [ ] Index queries based on actual usage patterns
- [ ] Use `EXPLAIN ANALYZE` to optimize slow queries
- [ ] Implement proper batch operations for bulk inserts
- [ ] Cache frequently accessed, rarely changed data
- [ ] Monitor query execution times and patterns
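The "batch operations for bulk inserts" item is easy to get wrong by concatenating values into the SQL string. A parameterized multi-row `INSERT` keeps the query safe while cutting N round-trips to one. A minimal sketch for PostgreSQL placeholders (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// buildBulkInsert produces a parameterized multi-row INSERT for PostgreSQL,
// e.g. "INSERT INTO users (id, email) VALUES ($1, $2), ($3, $4)".
// The caller then passes the flattened row values as query args.
func buildBulkInsert(table string, columns []string, rowCount int) string {
	rows := make([]string, rowCount)
	param := 1
	for i := 0; i < rowCount; i++ {
		ph := make([]string, len(columns))
		for j := range columns {
			ph[j] = fmt.Sprintf("$%d", param)
			param++
		}
		rows[i] = "(" + strings.Join(ph, ", ") + ")"
	}
	return fmt.Sprintf("INSERT INTO %s (%s) VALUES %s",
		table, strings.Join(columns, ", "), strings.Join(rows, ", "))
}

func main() {
	fmt.Println(buildBulkInsert("users", []string{"id", "email"}, 2))
}
```

Beyond a few thousand rows, prefer `pq.CopyIn` / `COPY` style bulk loading; very wide batches can also exceed the driver's parameter limit.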
### Testing Database Code
- [ ] Use testcontainers for integration tests
- [ ] Create realistic test data fixtures
- [ ] Test constraint violations and edge cases
- [ ] Mock repository interfaces for unit tests
- [ ] Test migration scripts thoroughly
- [ ] Validate database schema changes
### Error Handling & Observability
- [ ] Map database errors to meaningful domain errors
- [ ] Log database operations with structured logging
- [ ] Implement circuit breakers for external databases
- [ ] Monitor database connection health
- [ ] Track query performance metrics
- [ ] Handle connection timeouts gracefully
---
# 6. HTTP & API Patterns
## Why HTTP Patterns Matter for CLI Applications

Many CLI applications need to:

- **Consume REST APIs** - Calling external services (GitHub API, AWS API, etc.)
- **Serve management endpoints** - Health checks, metrics, admin interfaces
- **Act as API gateways** - `kubectl proxy`, CLI tools that expose local servers
- **Implement webhooks** - CLI tools that receive callbacks

This section focuses on HTTP client patterns and lightweight server patterns specifically relevant to CLI applications.

## Table of Contents
1. [Server Setup](#server-setup)
2. [Middleware Patterns](#middleware-patterns)
3. [Request/Response Handling](#requestresponse-handling)
4. [Error Handling](#error-handling)
5. [HTTP Client](#http-client)
6. [gRPC for High-Performance APIs](#grpc-for-high-performance-apis)
7. [Rate Limiting & Circuit Breakers](#rate-limiting--circuit-breakers)
---
## Server Setup
### Chi Router Setup
```go
// internal/transport/http/server.go
package http

import (
	"context"
	"net/http"
	"time"

	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"
)

// Server represents the HTTP server
type Server struct {
	router   chi.Router
	server   *http.Server
	services *service.Container
	logger   Logger
	config   Config
}

type Config struct {
	Addr            string
	ReadTimeout     time.Duration
	WriteTimeout    time.Duration
	IdleTimeout     time.Duration
	MaxHeaderBytes  int
	ShutdownTimeout time.Duration
	AllowedOrigins  []string
}

// NewServer creates a new HTTP server
func NewServer(cfg Config, services *service.Container, logger Logger) *Server {
	s := &Server{
		router:   chi.NewRouter(),
		services: services,
		logger:   logger,
		config:   cfg,
	}

	s.setupMiddleware()
	s.setupRoutes()

	s.server = &http.Server{
		Addr:           cfg.Addr,
		Handler:        s.router,
		ReadTimeout:    cfg.ReadTimeout,
		WriteTimeout:   cfg.WriteTimeout,
		IdleTimeout:    cfg.IdleTimeout,
		MaxHeaderBytes: cfg.MaxHeaderBytes,
	}

	return s
}

func (s *Server) setupMiddleware() {
	// Request ID for tracing
	s.router.Use(middleware.RequestID)

	// Real IP extraction
	s.router.Use(middleware.RealIP)

	// Logging
	s.router.Use(LoggingMiddleware(s.logger))

	// Panic recovery
	s.router.Use(RecoveryMiddleware(s.logger))

	// Timeout
	s.router.Use(middleware.Timeout(60 * time.Second))

	// Rate limiting
	s.router.Use(RateLimitMiddleware(100)) // 100 req/sec

	// CORS
	s.router.Use(CORSMiddleware(s.config.AllowedOrigins))

	// Security headers
	s.router.Use(SecurityHeadersMiddleware())

	// Compression
	s.router.Use(middleware.Compress(5))
}

func (s *Server) setupRoutes() {
	// Health checks
	s.router.Get("/health", s.handleHealth)
	s.router.Get("/ready", s.handleReady)

	// API routes
	s.router.Route("/api/v1", func(r chi.Router) {
		// API-specific middleware
		r.Use(ContentTypeJSON)
		r.Use(AuthMiddleware(s.services.Auth))

		// User routes
		r.Route("/users", func(r chi.Router) {
			r.Post("/", s.handleCreateUser)
			r.Get("/{userID}", s.handleGetUser)
			r.Put("/{userID}", s.handleUpdateUser)
			r.Delete("/{userID}", s.handleDeleteUser)
		})
	})
}

// Graceful shutdown
func (s *Server) Shutdown(ctx context.Context) error {
	shutdownCtx, cancel := context.WithTimeout(ctx, s.config.ShutdownTimeout)
	defer cancel()

	return s.server.Shutdown(shutdownCtx)
}
```
---
## Middleware Patterns
### Logging Middleware
```go
// internal/transport/http/middleware/logging.go
package middleware

import (
	"log/slog"
	"net/http"
	"time"

	"github.com/go-chi/chi/v5/middleware"
)

func LoggingMiddleware(logger Logger) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			start := time.Now()

			// Wrap response writer to capture status and bytes written
			ww := middleware.NewWrapResponseWriter(w, r.ProtoMajor)

			// Add request ID to logger
			requestID := middleware.GetReqID(r.Context())
			logger := logger.With(
				slog.String("request_id", requestID),
				slog.String("operation", r.Method),
				slog.String("request_path", r.URL.Path),
				slog.String("remote_addr", r.RemoteAddr),
			)

			// Add logger to context (see context propagation)
			ctx := ContextWithLogger(r.Context(), logger)

			// Log request start using structured logging
			logger.Info("request started",
				slog.String("user_agent", r.UserAgent()),
				slog.Int64("content_length", r.ContentLength))

			// Process request
			next.ServeHTTP(ww, r.WithContext(ctx))

			// Log completion
			duration := time.Since(start)
			logger.Info("request completed",
				slog.Int("status", ww.Status()),
				slog.Int("bytes_written", ww.BytesWritten()),
				slog.Duration("duration", duration))
		})
	}
}
```
### Authentication Middleware
```go
// internal/transport/http/middleware/auth.go
package middleware

func AuthMiddleware(authService AuthService) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Extract token
			authHeader := r.Header.Get("Authorization")
			if authHeader == "" {
				respondError(w, http.StatusUnauthorized, "missing authorization")
				return
			}

			// Validate Bearer token
			const bearerPrefix = "Bearer "
			if !strings.HasPrefix(authHeader, bearerPrefix) {
				respondError(w, http.StatusUnauthorized, "invalid authorization format")
				return
			}

			token := authHeader[len(bearerPrefix):]

			// Validate token
			claims, err := authService.ValidateToken(r.Context(), token)
			if err != nil {
				respondError(w, http.StatusUnauthorized, "invalid token")
				return
			}

			// Add claims to context (see context propagation)
			ctx := context.WithValue(r.Context(), claimsKey, claims)
			next.ServeHTTP(w, r.WithContext(ctx))
		})
	}
}
```
### Rate Limiting Middleware
go
// internal/transport/http/middleware/ratelimit.go
package middleware
import (
	"net/http"
	"sync"
	"time"

	"golang.org/x/time/rate"
)

func RateLimitMiddleware(rps int) func(http.Handler) http.Handler {
	// Create limiter per IP
	limiters := make(map[string]*rate.Limiter)
	mu := sync.Mutex{}

	// Clean up old limiters periodically
	go func() {
		ticker := time.NewTicker(5 * time.Minute)
		defer ticker.Stop()

		for range ticker.C {
			mu.Lock()
			// Remove idle limiters; an unused limiter refills to its full
			// burst (rps*2 tokens, matching the burst set below)
			for ip, limiter := range limiters {
				if limiter.Tokens() == float64(rps*2) {
					delete(limiters, ip)
				}
			}
			mu.Unlock()
		}
	}()

	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			ip := getClientIP(r)

			mu.Lock()
			limiter, exists := limiters[ip]
			if !exists {
				limiter = rate.NewLimiter(rate.Limit(rps), rps*2)
				limiters[ip] = limiter
			}
			mu.Unlock()

			if !limiter.Allow() {
				w.Header().Set("Retry-After", "1")
				http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
				return
			}

			next.ServeHTTP(w, r)
		})
	}
}
### Security Headers
go
func SecurityHeadersMiddleware() func(http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("X-Content-Type-Options", "nosniff")
w.Header().Set("X-Frame-Options", "DENY")
w.Header().Set("X-XSS-Protection", "1; mode=block")
w.Header().Set("Strict-Transport-Security", "max-age=31536000; includeSubDomains")
w.Header().Set("Content-Security-Policy", "default-src 'self'")
w.Header().Set("Referrer-Policy", "strict-origin-when-cross-origin")
w.Header().Set("Permissions-Policy", "geolocation=(), microphone=(), camera=()")
			next.ServeHTTP(w, r)
		})
	}
}
---
## Request/Response Handling
### Base Handler
go
// internal/transport/http/handlers/base.go
package handlers
import (
	"encoding/json"
	"errors"
	"log/slog"
	"net/http"
	"strings"

	"github.com/go-chi/chi/v5"
	"github.com/go-playground/validator/v10"
)

// BaseHandler provides common handler functionality
type BaseHandler struct {
	validator *validator.Validate
	logger    Logger
}

// decode reads and validates a JSON request body
func (h *BaseHandler) decode(w http.ResponseWriter, r *http.Request, v interface{}) error {
	// Check content type
	contentType := r.Header.Get("Content-Type")
	if !strings.HasPrefix(contentType, "application/json") {
		return NewHTTPError(http.StatusUnsupportedMediaType, "content type must be application/json")
	}

	// Limit body size
	r.Body = http.MaxBytesReader(w, r.Body, 1048576) // 1MB

	// Decode JSON
	decoder := json.NewDecoder(r.Body)
	decoder.DisallowUnknownFields()

	if err := decoder.Decode(v); err != nil {
		return h.parseJSONError(err)
	}

	// Ensure the body contained a single JSON object
	if decoder.More() {
		return NewHTTPError(http.StatusBadRequest, "request body contains multiple JSON objects")
	}

	// Validate struct
	if err := h.validator.Struct(v); err != nil {
		return h.formatValidationError(err)
	}

	return nil
}

// respond writes a JSON response
func (h *BaseHandler) respond(w http.ResponseWriter, status int, data interface{}) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)

	if data != nil {
		if err := json.NewEncoder(w).Encode(data); err != nil {
			h.logger.Error("failed to encode response",
				slog.Int("status", status),
				slog.Any("error", err))
		}
	}
}

// respondError writes an error response
func (h *BaseHandler) respondError(w http.ResponseWriter, err error) {
	var httpErr HTTPError
	if !errors.As(err, &httpErr) {
		httpErr = HTTPError{
			Code:    http.StatusInternalServerError,
			Message: "Internal Server Error",
		}
	}

	h.respond(w, httpErr.Code, ErrorResponse{
		Error: ErrorDetail{
			Code:    httpErr.ErrorCode,
			Message: httpErr.Message,
			Details: httpErr.Details,
		},
	})
}
### Request Validation
go
// Custom validation tags
func setupValidator() *validator.Validate {
v := validator.New()
	// Register custom validations
	v.RegisterValidation("password", validatePassword)
	v.RegisterValidation("phone", validatePhone)

	return v
}

func validatePassword(fl validator.FieldLevel) bool {
	password := fl.Field().String()

	if len(password) < 8 {
		return false
	}

	var hasUpper, hasLower, hasDigit, hasSpecial bool
	for _, r := range password {
		switch {
		case unicode.IsUpper(r):
			hasUpper = true
		case unicode.IsLower(r):
			hasLower = true
		case unicode.IsDigit(r):
			hasDigit = true
		case unicode.IsPunct(r) || unicode.IsSymbol(r):
			hasSpecial = true
		}
	}

	return hasUpper && hasLower && hasDigit && hasSpecial
}
// Request types with validation
type CreateUserRequest struct {
	Email    string `json:"email" validate:"required,email"`
	Name     string `json:"name" validate:"required,min=2,max=100"`
	Password string `json:"password" validate:"required,password"`
	Phone    string `json:"phone,omitempty" validate:"omitempty,phone"`
}
### Pagination
go
// internal/transport/http/pagination.go
package http
type PaginationParams struct {
	Page  int `json:"page"`
	Limit int `json:"limit"`
}

func (p *PaginationParams) Validate() error {
	if p.Page < 1 {
		p.Page = 1
	}

	if p.Limit < 1 || p.Limit > 100 {
		p.Limit = 20
	}

	return nil
}

func (p *PaginationParams) Offset() int {
	return (p.Page - 1) * p.Limit
}

type PaginatedResponse struct {
	Data       interface{} `json:"data"`
	Pagination Pagination  `json:"pagination"`
}

type Pagination struct {
	Page       int `json:"page"`
	Limit      int `json:"limit"`
	Total      int `json:"total"`
	TotalPages int `json:"total_pages"`
}

func NewPaginatedResponse(data interface{}, page, limit, total int) PaginatedResponse {
	totalPages := (total + limit - 1) / limit

	return PaginatedResponse{
		Data: data,
		Pagination: Pagination{
			Page:       page,
			Limit:      limit,
			Total:      total,
			TotalPages: totalPages,
		},
	}
}
---
## Error Handling
### HTTP Error Types
go
// internal/transport/http/errors.go
package http
import (
	"errors"
	"fmt"
	"log/slog"
	"net/http"
)

type HTTPError struct {
	Code      int
	ErrorCode string
	Message   string
	Details   interface{}
}

func (e HTTPError) Error() string {
	return fmt.Sprintf("HTTP %d: %s", e.Code, e.Message)
}

func NewHTTPError(code int, message string) HTTPError {
	return HTTPError{
		Code:    code,
		Message: message,
	}
}

// Error mapping from service to HTTP
func mapServiceError(err error) error {
	switch {
	case errors.Is(err, service.ErrNotFound):
		return NewHTTPError(http.StatusNotFound, "resource not found")

	case errors.Is(err, service.ErrUnauthorized):
		return NewHTTPError(http.StatusUnauthorized, "unauthorized")

	case errors.Is(err, service.ErrForbidden):
		return NewHTTPError(http.StatusForbidden, "forbidden")

	case errors.Is(err, service.ErrValidation):
		var validErr *service.ValidationError
		if errors.As(err, &validErr) {
			return HTTPError{
				Code:    http.StatusBadRequest,
				Message: "validation failed",
				Details: validErr.Fields,
			}
		}
		return NewHTTPError(http.StatusBadRequest, "validation failed")

	default:
		// Log unexpected errors
		logger.Error("unexpected service error",
			slog.Any("error", err),
			slog.String("type", fmt.Sprintf("%T", err)))

		return NewHTTPError(http.StatusInternalServerError, "internal server error")
	}
}
---
## HTTP Client
### Configurable Client
go
// internal/client/http_client.go
package client
import ( "bytes" "context" "encoding/json" "fmt" "io" "net/http" "time"
"golang.org/x/time/rate" )
// HTTPClient is a configured HTTP client
type HTTPClient struct {
	baseURL    string
	httpClient *http.Client
	limiter    *rate.Limiter
	logger     Logger
	auth       AuthProvider
}

type Config struct {
	BaseURL     string
	Timeout     time.Duration
	RateLimit   int
	MaxRetries  int
	BackoffBase time.Duration
	Logger      Logger
}

// NewHTTPClient creates a new HTTP client
func NewHTTPClient(cfg Config) *HTTPClient {
	transport := &http.Transport{
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 10,
		IdleConnTimeout:     90 * time.Second,
		DisableCompression:  false,
		DisableKeepAlives:   false,
	}

	return &HTTPClient{
		baseURL: cfg.BaseURL,
		httpClient: &http.Client{
			Timeout:   cfg.Timeout,
			Transport: transport,
		},
		limiter: rate.NewLimiter(rate.Limit(cfg.RateLimit), cfg.RateLimit),
		logger:  cfg.Logger,
	}
}

// Request performs an HTTP request with retries
func (c *HTTPClient) Request(ctx context.Context, method, path string, body, result interface{}) error {
	url := c.baseURL + path

	// Marshal the body once; a fresh reader is created per attempt below,
	// because a bytes.Reader is consumed by each request
	var bodyData []byte
	if body != nil {
		data, err := json.Marshal(body)
		if err != nil {
			return fmt.Errorf("marshal body: %w", err)
		}
		bodyData = data
	}

	// Retry logic
	backoff := 100 * time.Millisecond

	for attempt := 0; attempt < 3; attempt++ {
		if attempt > 0 {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(backoff):
			}
			backoff *= 2
		}

		// Rate limiting
		if err := c.limiter.Wait(ctx); err != nil {
			return fmt.Errorf("rate limit: %w", err)
		}

		// Create request with a fresh body reader
		var bodyReader io.Reader
		if bodyData != nil {
			bodyReader = bytes.NewReader(bodyData)
		}
		req, err := http.NewRequestWithContext(ctx, method, url, bodyReader)
		if err != nil {
			return fmt.Errorf("create request: %w", err)
		}

		// Set headers
		req.Header.Set("Content-Type", "application/json")
		req.Header.Set("Accept", "application/json")

		// Add auth
		if c.auth != nil {
			if err := c.auth.AddAuth(req); err != nil {
				return fmt.Errorf("add auth: %w", err)
			}
		}

		// Execute
		resp, err := c.httpClient.Do(req)
		if err != nil {
			continue // Retry on network errors
		}

		// Check status; close the body explicitly rather than with defer,
		// since a defer inside the loop would hold bodies open until return
		if resp.StatusCode >= 500 {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			continue // Retry on 5xx
		}

		if resp.StatusCode >= 400 {
			defer resp.Body.Close()
			return c.handleErrorResponse(resp)
		}

		// Success - decode response
		if result != nil {
			if err := json.NewDecoder(resp.Body).Decode(result); err != nil {
				resp.Body.Close()
				return fmt.Errorf("decode response: %w", err)
			}
		}

		resp.Body.Close()
		return nil
	}

	return fmt.Errorf("request failed after 3 attempts")
}
---
## gRPC for High-Performance APIs (Advanced/Optional)
> **Note**: This section covers gRPC integration for CLI applications that act as clients to gRPC services or need to expose gRPC endpoints. Most traditional CLI applications can skip this section and focus on HTTP clients and REST APIs.
### When to Use gRPC vs HTTP/JSON
**Use gRPC for:**
- Service-to-service communication
- High-performance, low-latency requirements
- Strong typing and contract enforcement
- Streaming data (bi-directional)
- Microservices with defined schemas
**Use HTTP/JSON for:**
- Public APIs and web frontends
- Third-party integrations
- Simple request/response patterns
- Browser compatibility requirements
### Protocol Buffer Definition
protobuf
// proto/user/v1/user.proto
syntax = "proto3";
package user.v1;
option go_package = "github.com/yourapp/proto/user/v1;userv1";
// User service for managing user accounts
service UserService {
  // Create a new user account
  rpc CreateUser(CreateUserRequest) returns (CreateUserResponse);

  // Get user by ID
  rpc GetUser(GetUserRequest) returns (GetUserResponse);

  // List users with pagination
  rpc ListUsers(ListUsersRequest) returns (ListUsersResponse);

  // Stream user events (server streaming)
  rpc StreamUserEvents(StreamUserEventsRequest) returns (stream UserEvent);
}

message User {
  string id = 1;
  string email = 2;
  string name = 3;
  int64 created_at = 4;
  UserStatus status = 5;
}

enum UserStatus {
  USER_STATUS_UNSPECIFIED = 0;
  USER_STATUS_ACTIVE = 1;
  USER_STATUS_INACTIVE = 2;
  USER_STATUS_SUSPENDED = 3;
}

message CreateUserRequest {
  string email = 1;
  string name = 2;
}

message CreateUserResponse {
  User user = 1;
}

message GetUserRequest {
  string id = 1;
}

message GetUserResponse {
  User user = 1;
}

message ListUsersRequest {
  int32 page_size = 1;
  string page_token = 2;
  UserStatus status = 3;
}

message ListUsersResponse {
  repeated User users = 1;
  string next_page_token = 2;
}

message StreamUserEventsRequest {
  string user_id = 1;
}

message UserEvent {
  string event_id = 1;
  string user_id = 2;
  string event_type = 3;
  int64 timestamp = 4;
  map<string, string> metadata = 5;
}
### gRPC Server Implementation
go
// internal/grpc/server.go
package grpc
import (
	"context"
	"errors"
	"fmt"
	"log/slog"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/reflection"
	"google.golang.org/grpc/status"

	userv1 "github.com/yourapp/proto/user/v1"
	"github.com/yourapp/internal/service"
)

type Server struct {
	server      *grpc.Server
	userService *service.UserService
	logger      *slog.Logger
}

func NewServer(userService *service.UserService, logger *slog.Logger) *Server {
	// Create gRPC server with middleware
	opts := []grpc.ServerOption{
		grpc.UnaryInterceptor(unaryLoggerInterceptor(logger)),
		grpc.StreamInterceptor(streamLoggerInterceptor(logger)),
	}

	server := grpc.NewServer(opts...)

	s := &Server{
		server:      server,
		userService: userService,
		logger:      logger,
	}

	// Register services
	userv1.RegisterUserServiceServer(server, s)

	// Enable reflection for development
	reflection.Register(server)

	return s
}

func (s *Server) Start(addr string) error {
	lis, err := net.Listen("tcp", addr)
	if err != nil {
		return fmt.Errorf("failed to listen: %w", err)
	}

	s.logger.Info("Starting gRPC server", "addr", addr)
	return s.server.Serve(lis)
}

func (s *Server) Stop() {
	s.logger.Info("Stopping gRPC server")
	s.server.GracefulStop()
}
// Implement UserService methods
func (s *Server) CreateUser(ctx context.Context, req *userv1.CreateUserRequest) (*userv1.CreateUserResponse, error) {
	// Validate request
	if req.Email == "" {
		return nil, status.Error(codes.InvalidArgument, "email is required")
	}

	// Call business logic
	user, err := s.userService.CreateUser(ctx, service.CreateUserParams{
		Email: req.Email,
		Name:  req.Name,
	})
	if err != nil {
		// Convert domain errors to gRPC status codes
		return nil, domainErrorToGRPCStatus(err)
	}

	// Convert domain model to protobuf
	return &userv1.CreateUserResponse{
		User: domainUserToProto(user),
	}, nil
}
func (s *Server) GetUser(ctx context.Context, req *userv1.GetUserRequest) (*userv1.GetUserResponse, error) {
	if req.Id == "" {
		return nil, status.Error(codes.InvalidArgument, "id is required")
	}

	user, err := s.userService.GetUser(ctx, req.Id)
	if err != nil {
		return nil, domainErrorToGRPCStatus(err)
	}

	return &userv1.GetUserResponse{
		User: domainUserToProto(user),
	}, nil
}
func (s *Server) ListUsers(ctx context.Context, req *userv1.ListUsersRequest) (*userv1.ListUsersResponse, error) {
	pageSize := req.PageSize
	if pageSize == 0 {
		pageSize = 50 // Default page size
	}
	if pageSize > 100 {
		pageSize = 100 // Max page size
	}

	users, nextToken, err := s.userService.ListUsers(ctx, service.ListUsersParams{
		PageSize:  int(pageSize),
		PageToken: req.PageToken,
		Status:    protoStatusToDomain(req.Status),
	})
	if err != nil {
		return nil, domainErrorToGRPCStatus(err)
	}

	protoUsers := make([]*userv1.User, len(users))
	for i, user := range users {
		protoUsers[i] = domainUserToProto(user)
	}

	return &userv1.ListUsersResponse{
		Users:         protoUsers,
		NextPageToken: nextToken,
	}, nil
}
func (s *Server) StreamUserEvents(req *userv1.StreamUserEventsRequest, stream userv1.UserService_StreamUserEventsServer) error {
	if req.UserId == "" {
		return status.Error(codes.InvalidArgument, "user_id is required")
	}

	// Create event stream from service layer
	eventChan, err := s.userService.StreamUserEvents(stream.Context(), req.UserId)
	if err != nil {
		return domainErrorToGRPCStatus(err)
	}

	// Stream events to client
	for {
		select {
		case event, ok := <-eventChan:
			if !ok {
				return nil // Stream ended
			}

			protoEvent := &userv1.UserEvent{
				EventId:   event.ID,
				UserId:    event.UserID,
				EventType: event.Type,
				Timestamp: event.Timestamp.Unix(),
				Metadata:  event.Metadata,
			}

			if err := stream.Send(protoEvent); err != nil {
				return err
			}

		case <-stream.Context().Done():
			return stream.Context().Err()
		}
	}
}
// Error conversion
func domainErrorToGRPCStatus(err error) error {
	// Convert domain errors to appropriate gRPC status codes
	switch {
	case errors.Is(err, service.ErrUserNotFound):
		return status.Error(codes.NotFound, "user not found")
	case errors.Is(err, service.ErrDuplicateEmail):
		return status.Error(codes.AlreadyExists, "email already exists")
	case errors.Is(err, service.ErrValidation):
		return status.Error(codes.InvalidArgument, err.Error())
	default:
		return status.Error(codes.Internal, "internal server error")
	}
}

// Model conversion helpers
func domainUserToProto(user *service.User) *userv1.User {
	return &userv1.User{
		Id:        user.ID,
		Email:     user.Email,
		Name:      user.Name,
		CreatedAt: user.CreatedAt.Unix(),
		Status:    domainStatusToProto(user.Status),
	}
}

func domainStatusToProto(status service.UserStatus) userv1.UserStatus {
	switch status {
	case service.UserStatusActive:
		return userv1.UserStatus_USER_STATUS_ACTIVE
	case service.UserStatusInactive:
		return userv1.UserStatus_USER_STATUS_INACTIVE
	case service.UserStatusSuspended:
		return userv1.UserStatus_USER_STATUS_SUSPENDED
	default:
		return userv1.UserStatus_USER_STATUS_UNSPECIFIED
	}
}

func protoStatusToDomain(status userv1.UserStatus) service.UserStatus {
	switch status {
	case userv1.UserStatus_USER_STATUS_ACTIVE:
		return service.UserStatusActive
	case userv1.UserStatus_USER_STATUS_INACTIVE:
		return service.UserStatusInactive
	case userv1.UserStatus_USER_STATUS_SUSPENDED:
		return service.UserStatusSuspended
	default:
		return service.UserStatusActive // Default
	}
}
// Middleware
func unaryLoggerInterceptor(logger *slog.Logger) grpc.UnaryServerInterceptor {
	return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
		start := time.Now()
		resp, err := handler(ctx, req)
		duration := time.Since(start)

		code := codes.OK
		if err != nil {
			if st, ok := status.FromError(err); ok {
				code = st.Code()
			}
		}

		logger.Info("gRPC request",
			"method", info.FullMethod,
			"duration", duration,
			"code", code.String(),
		)

		return resp, err
	}
}

func streamLoggerInterceptor(logger *slog.Logger) grpc.StreamServerInterceptor {
	return func(srv interface{}, stream grpc.ServerStream, info *grpc.StreamServerInfo, handler grpc.StreamHandler) error {
		start := time.Now()
		err := handler(srv, stream)
		duration := time.Since(start)

		code := codes.OK
		if err != nil {
			if st, ok := status.FromError(err); ok {
				code = st.Code()
			}
		}

		logger.Info("gRPC stream",
			"method", info.FullMethod,
			"duration", duration,
			"code", code.String(),
		)

		return err
	}
}
### gRPC Client Example
go
// internal/client/user_client.go
package client
import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	userv1 "github.com/yourapp/proto/user/v1"
)

type UserClient struct {
	conn   *grpc.ClientConn
	client userv1.UserServiceClient
}

func NewUserClient(addr string) (*UserClient, error) {
	// For production, use TLS credentials. Dialing is lazy; set timeouts
	// on per-RPC contexts rather than the deprecated grpc.WithTimeout option.
	conn, err := grpc.Dial(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		return nil, fmt.Errorf("failed to connect: %w", err)
	}

	return &UserClient{
		conn:   conn,
		client: userv1.NewUserServiceClient(conn),
	}, nil
}

func (c *UserClient) Close() error {
	return c.conn.Close()
}

func (c *UserClient) CreateUser(ctx context.Context, email, name string) (*userv1.User, error) {
	resp, err := c.client.CreateUser(ctx, &userv1.CreateUserRequest{
		Email: email,
		Name:  name,
	})
	if err != nil {
		return nil, fmt.Errorf("create user failed: %w", err)
	}

	return resp.User, nil
}

func (c *UserClient) GetUser(ctx context.Context, id string) (*userv1.User, error) {
	resp, err := c.client.GetUser(ctx, &userv1.GetUserRequest{
		Id: id,
	})
	if err != nil {
		return nil, fmt.Errorf("get user failed: %w", err)
	}

	return resp.User, nil
}

func (c *UserClient) StreamEvents(ctx context.Context, userID string) (<-chan *userv1.UserEvent, error) {
	stream, err := c.client.StreamUserEvents(ctx, &userv1.StreamUserEventsRequest{
		UserId: userID,
	})
	if err != nil {
		return nil, fmt.Errorf("stream events failed: %w", err)
	}

	eventChan := make(chan *userv1.UserEvent)

	go func() {
		defer close(eventChan)

		for {
			event, err := stream.Recv()
			if err != nil {
				return // Stream closed or error
			}

			select {
			case eventChan <- event:
			case <-ctx.Done():
				return
			}
		}
	}()

	return eventChan, nil
}
### Build Configuration
makefile
# Makefile
.PHONY: proto
proto:
	protoc --go_out=. --go_opt=paths=source_relative \
		--go-grpc_out=. --go-grpc_opt=paths=source_relative \
		proto/user/v1/*.proto

.PHONY: grpc-server
grpc-server:
	go run cmd/grpc-server/main.go

.PHONY: test-grpc
test-grpc:
	grpc_cli call localhost:8080 user.v1.UserService.ListUsers '{}'
### Production Considerations
1. **TLS Configuration**
go
// Use proper TLS in production
creds, err := credentials.NewServerTLSFromFile("cert.pem", "key.pem")
server := grpc.NewServer(grpc.Creds(creds))
2. **Connection Pooling**
go
// Client-side connection pooling
conn, err := grpc.Dial(addr,
grpc.WithKeepaliveParams(keepalive.ClientParameters{
Time: 10 * time.Second,
Timeout: time.Second,
PermitWithoutStream: true,
}),
)
3. **Health Checks**
go
import (
	"google.golang.org/grpc/health"
	"google.golang.org/grpc/health/grpc_health_v1"
)

// Register health service
grpc_health_v1.RegisterHealthServer(server, health.NewServer())
---
## Rate Limiting & Circuit Breakers
### Circuit Breaker
go
// internal/client/circuit_breaker.go
package client
import ( "sync" "time" )
type CircuitState int
const ( CircuitClosed CircuitState = iota CircuitOpen CircuitHalfOpen )
type CircuitBreaker struct {
	mu              sync.Mutex // Use a regular mutex to avoid race conditions
	failures        int
	successes       int
	lastFailureTime time.Time
	state           CircuitState

	maxFailures      int
	resetTimeout     time.Duration
	successThreshold int
}

func NewCircuitBreaker(maxFailures int, resetTimeout time.Duration) *CircuitBreaker {
	return &CircuitBreaker{
		maxFailures:      maxFailures,
		resetTimeout:     resetTimeout,
		successThreshold: 2,
		state:            CircuitClosed,
	}
}

func (cb *CircuitBreaker) Allow() bool {
	cb.mu.Lock()
	defer cb.mu.Unlock()

	switch cb.state {
	case CircuitClosed:
		return true

	case CircuitOpen:
		if time.Since(cb.lastFailureTime) > cb.resetTimeout {
			cb.state = CircuitHalfOpen
			return true
		}
		return false

	case CircuitHalfOpen:
		return true
	}

	return false
}

func (cb *CircuitBreaker) RecordSuccess() {
	cb.mu.Lock()
	defer cb.mu.Unlock()

	cb.failures = 0
	cb.successes++

	if cb.state == CircuitHalfOpen && cb.successes >= cb.successThreshold {
		cb.state = CircuitClosed
	}
}

func (cb *CircuitBreaker) RecordFailure() {
	cb.mu.Lock()
	defer cb.mu.Unlock()

	cb.failures++
	cb.lastFailureTime = time.Now()
	cb.successes = 0

	if cb.failures >= cb.maxFailures {
		cb.state = CircuitOpen
	}
}

// Usage with HTTP client
func (c *HTTPClient) RequestWithCircuitBreaker(ctx context.Context, ...) error {
	if !c.circuitBreaker.Allow() {
		return ErrCircuitOpen
	}

	err := c.Request(ctx, ...)

	if err != nil {
		c.circuitBreaker.RecordFailure()
		return err
	}

	c.circuitBreaker.RecordSuccess()
	return nil
}
### Adaptive Rate Limiting
go
// internal/client/adaptive_limiter.go
package client
import (
	"context"
	"sync"
	"time"

	"golang.org/x/time/rate"
)

type AdaptiveLimiter struct {
	mu          sync.RWMutex
	currentRate float64
	minRate     float64
	maxRate     float64
	limiter     *rate.Limiter

	successCount int64
	failureCount int64
	lastAdjusted time.Time
}

func NewAdaptiveLimiter(minRate, maxRate float64) *AdaptiveLimiter {
	initialRate := (minRate + maxRate) / 2

	return &AdaptiveLimiter{
		currentRate: initialRate,
		minRate:     minRate,
		maxRate:     maxRate,
		limiter:     rate.NewLimiter(rate.Limit(initialRate), int(initialRate)),
	}
}

func (al *AdaptiveLimiter) Wait(ctx context.Context) error {
	return al.limiter.Wait(ctx)
}

func (al *AdaptiveLimiter) RecordResult(success bool) {
	al.mu.Lock()
	defer al.mu.Unlock()

	if success {
		al.successCount++
	} else {
		al.failureCount++
	}

	// Adjust rate every minute
	if time.Since(al.lastAdjusted) > time.Minute {
		al.adjustRate()
		al.lastAdjusted = time.Now()
	}
}

func (al *AdaptiveLimiter) adjustRate() {
	total := al.successCount + al.failureCount
	if total == 0 {
		return
	}

	successRate := float64(al.successCount) / float64(total)

	if successRate > 0.95 && al.currentRate < al.maxRate {
		// Increase rate
		al.currentRate = min(al.currentRate*1.1, al.maxRate)
	} else if successRate < 0.90 && al.currentRate > al.minRate {
		// Decrease rate
		al.currentRate = max(al.currentRate*0.9, al.minRate)
	}

	al.limiter.SetLimit(rate.Limit(al.currentRate))
	al.limiter.SetBurst(int(al.currentRate))

	// Reset counters
	al.successCount = 0
	al.failureCount = 0
}
### Best Practices Summary
1. **Use middleware for cross-cutting concerns**
2. **Implement proper request validation**
3. **Set appropriate timeouts at all levels**
4. **Use structured error responses**
5. **Implement rate limiting** to prevent abuse
6. **Add circuit breakers** for external dependencies
7. **Use correlation IDs** for request tracing
8. **Monitor all metrics** - latency, errors, throughput
9. **Implement graceful shutdown**
10. **Use connection pooling** in HTTP clients
---
## Related Sections
- **[Error Handling](go-practices-error-logging.md#error-handling-decision-tree)** - Converting domain errors to HTTP responses
- **[Service Architecture](go-practices-service-architecture.md#service-layer-design)** - HTTP handlers calling service layer
- **[Testing](go-practices-testing.md#integration-testing)** - Testing HTTP handlers and integration tests
- **[Concurrency](go-practices-concurrency.md#worker-pools)** - Rate limiting and concurrent request handling
- **[CLI Design](go-practices-cli-config.md#context-guidelines)** - Context propagation in HTTP handlers
## Quick Reference Checklist
### Server Setup & Configuration
- [ ] Configure proper server timeouts (read, write, idle)
- [ ] Set appropriate MaxHeaderBytes for security
- [ ] Use Chi router for clean route organization
- [ ] Implement graceful shutdown with timeout
- [ ] Configure CORS headers for cross-origin requests
- [ ] Set up proper TLS configuration for production
### Middleware Implementation
- [ ] Add request ID middleware for tracing
- [ ] Implement structured logging middleware
- [ ] Add panic recovery middleware
- [ ] Configure rate limiting per IP/endpoint
- [ ] Set security headers (HSTS, CSP, etc.)
- [ ] Add compression middleware for responses
### Request/Response Handling
- [ ] Validate Content-Type headers
- [ ] Limit request body size (1MB default)
- [ ] Use proper JSON encoding/decoding with validation
- [ ] Implement pagination for list endpoints
- [ ] Return consistent error response format
- [ ] Handle empty request bodies gracefully
### Authentication & Authorization
- [ ] Implement Bearer token validation
- [ ] Extract and validate JWT claims properly
- [ ] Add authentication context to requests
- [ ] Implement role-based authorization checks
- [ ] Handle authentication failures consistently
- [ ] Log security-relevant events
### Error Handling & Responses
- [ ] Map service errors to appropriate HTTP status codes
- [ ] Use structured error responses with codes and details
- [ ] Log errors with appropriate detail level
- [ ] Don't expose internal error details to clients
- [ ] Implement consistent error response format
- [ ] Handle validation errors with field-specific details
### HTTP Client Implementation
- [ ] Configure connection pooling and timeouts
- [ ] Implement retry logic with exponential backoff
- [ ] Add rate limiting to prevent API abuse
- [ ] Use context for request cancellation
- [ ] Add proper authentication headers
- [ ] Handle different HTTP status codes appropriately
### gRPC Implementation
- [ ] Define service contracts using protobuf schemas
- [ ] Implement proper error mapping from domain to gRPC status codes
- [ ] Add logging middleware for gRPC requests and streams
- [ ] Use TLS credentials for production deployments
- [ ] Implement proper context handling for cancellation
- [ ] Add health check endpoints for service discovery
- [ ] Consider gRPC for service-to-service communication over HTTP/JSON
### Circuit Breaker & Resilience
- [ ] Implement circuit breaker for external dependencies
- [ ] Add timeout handling for slow services
- [ ] Use adaptive rate limiting based on success rates
- [ ] Handle network failures gracefully
- [ ] Implement proper backoff strategies
- [ ] Monitor circuit breaker state and metrics
### Security Best Practices
- [ ] Validate all user inputs and sanitize outputs
- [ ] Implement CSRF protection for state-changing operations
- [ ] Use HTTPS in production with proper certificate validation
- [ ] Set secure cookie flags (HttpOnly, Secure, SameSite)
- [ ] Implement proper session management
- [ ] Add request/response size limits
### Testing HTTP Components
#### HTTP Handler Testing with `net/http/httptest`
**The Standard Library Foundation**: `net/http/httptest` is the cornerstone of testing HTTP handlers in Go. It provides lightweight test servers and request/response recording.
go
// internal/transport/http/user_handler_test.go
package http
import (
	"context"
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
	"github.com/stretchr/testify/require"
)

func TestUserHandler_GetUser(t *testing.T) {
	tests := []struct {
		name           string
		userID         string
		mockSetup      func(*MockUserService)
		expectedStatus int
		expectedBody   string
		expectedError  string
	}{
		{
			name:   "successful user retrieval",
			userID: "123",
			mockSetup: func(m *MockUserService) {
				m.On("GetUser", mock.Anything, "123").Return(&User{
					ID:    "123",
					Name:  "John Doe",
					Email: "john@example.com",
				}, nil)
			},
			expectedStatus: http.StatusOK,
		},
		{
			name:   "user not found",
			userID: "999",
			mockSetup: func(m *MockUserService) {
				m.On("GetUser", mock.Anything, "999").Return(nil, ErrUserNotFound)
			},
			expectedStatus: http.StatusNotFound,
			expectedError:  "USER_NOT_FOUND",
		},
		{
			name:   "invalid user ID format",
			userID: "invalid",
			mockSetup: func(m *MockUserService) {
				// No mock needed - validation happens first
			},
			expectedStatus: http.StatusBadRequest,
			expectedError:  "VALIDATION_FAILED",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Setup
			mockService := NewMockUserService(t)
			if tt.mockSetup != nil {
				tt.mockSetup(mockService)
			}

			handler := NewUserHandler(mockService, logger)

			// Create request using httptest
			req := httptest.NewRequest(http.MethodGet, "/users/"+tt.userID, nil)
			req = req.WithContext(context.WithValue(req.Context(), "userID", tt.userID))

			// Record response using httptest
			rr := httptest.NewRecorder()

			// Execute
			handler.GetUser(rr, req)

			// Assert
			assert.Equal(t, tt.expectedStatus, rr.Code)

			if tt.expectedError != "" {
				var response ErrorResponse
				err := json.Unmarshal(rr.Body.Bytes(), &response)
				require.NoError(t, err)
				assert.Equal(t, tt.expectedError, response.Code)
			}

			mockService.AssertExpectations(t)
		})
	}
}

// Test with an actual HTTP server using httptest.Server
func TestUserHandler_Integration(t *testing.T) {
	// Setup real dependencies (or test doubles)
	userService := NewUserService(testDB, logger)
	handler := NewUserHandler(userService, logger)

	// Create test server
	server := httptest.NewServer(http.HandlerFunc(handler.GetUser))
	defer server.Close()

	// Make a real HTTP request
	resp, err := http.Get(server.URL + "/users/123")
	require.NoError(t, err)
	defer resp.Body.Close()

	assert.Equal(t, http.StatusOK, resp.StatusCode)

	var user User
	err = json.NewDecoder(resp.Body).Decode(&user)
	require.NoError(t, err)
	assert.Equal(t, "123", user.ID)
}
#### Middleware Testing Pattern
go
func TestAuthMiddleware(t *testing.T) {
tests := []struct {
name string
authHeader string
expectedStatus int
shouldCallNext bool
}{
{
name: "valid token",
authHeader: "Bearer valid-token",
expectedStatus: http.StatusOK,
shouldCallNext: true,
},
{
name: "missing auth header",
authHeader: "",
expectedStatus: http.StatusUnauthorized,
shouldCallNext: false,
},
{
name: "invalid token format",
authHeader: "InvalidFormat",
expectedStatus: http.StatusUnauthorized,
shouldCallNext: false,
},
}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Setup
			nextCalled := false
			nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				nextCalled = true
				w.WriteHeader(http.StatusOK)
			})

			middleware := AuthMiddleware(mockAuthService)
			handler := middleware(nextHandler)

			// Create request
			req := httptest.NewRequest(http.MethodGet, "/protected", nil)
			if tt.authHeader != "" {
				req.Header.Set("Authorization", tt.authHeader)
			}

			rr := httptest.NewRecorder()

			// Execute
			handler.ServeHTTP(rr, req)

			// Assert
			assert.Equal(t, tt.expectedStatus, rr.Code)
			assert.Equal(t, tt.shouldCallNext, nextCalled)
		})
	}
}
#### Testing External HTTP Clients
go
func TestExternalAPIClient(t *testing.T) {
// Create mock server using httptest.Server
mockServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Verify request
assert.Equal(t, "/api/users", r.URL.Path)
assert.Equal(t, "application/json", r.Header.Get("Content-Type"))
// Send mock response response := map[string]interface{}{ "id": "123", "name": "John Doe", } json.NewEncoder(w).Encode(response) })) defer mockServer.Close()
// Use mock server URL in client client := NewExternalAPIClient(mockServer.URL, http.DefaultClient)
// Test the client user, err := client.GetUser(context.Background(), "123") require.NoError(t, err) assert.Equal(t, "123", user.ID) assert.Equal(t, "John Doe", user.Name) }
#### Testing Checklist
- [ ] Test handlers with table-driven test cases using `httptest.NewRecorder()`
- [ ] Use `httptest.NewServer()` for integration tests with real HTTP calls
- [ ] Mock external HTTP dependencies with `httptest.Server`
- [ ] Test middleware behavior in isolation
- [ ] Verify proper error handling and status codes
- [ ] Test authentication and authorization flows
- [ ] Validate request parsing and response serialization
### Performance & Monitoring
- [ ] Monitor request latency and throughput
- [ ] Track error rates by endpoint and status code
- [ ] Implement health check endpoints (/health, /ready)
- [ ] Add metrics for business-critical operations
- [ ] Monitor connection pool utilization
- [ ] Set up alerting for error rate thresholds
---
# 7. Concurrency & Performance
## Table of Contents
1. [Worker Pool Pattern](#worker-pool-pattern)
2. [Errgroup Pattern](#errgroup-pattern)
3. [Pipeline Architecture](#pipeline-architecture)
4. [Synchronization Primitives](#synchronization-primitives)
5. [Resource Management](#resource-management)
6. [Performance Optimization](#performance-optimization)
7. [Goroutine Lifecycle](#goroutine-lifecycle)
---
## Worker Pool Pattern
### Basic Worker Pool
go
// internal/worker/pool.go
package worker
import ( "context" "fmt" "log/slog" "sync" "time" )
// Job represents a unit of work type Job interface { ID() string Process(ctx context.Context) error }
// Result wraps job execution outcome type Result struct { JobID string Error error Duration time.Duration }
// Pool manages a fixed number of workers type Pool struct { workers int jobQueue chan Job results chan Result wg sync.WaitGroup logger Logger metrics Metrics
// Graceful shutdown shutdown chan struct{} shutdownWG sync.WaitGroup }
// NewPool creates a worker pool func NewPool(workers int, queueSize int, logger Logger) *Pool { return &Pool{ workers: workers, jobQueue: make(chan Job, queueSize), results: make(chan Result, queueSize), shutdown: make(chan struct{}), logger: logger, } }
// Start initializes all workers func (p *Pool) Start(ctx context.Context) { p.logger.Info("starting worker pool", slog.Int("workers", p.workers))
for i := 0; i < p.workers; i++ { p.shutdownWG.Add(1) go p.worker(ctx, i) } }
// worker processes jobs from the queue func (p *Pool) worker(ctx context.Context, id int) { defer p.shutdownWG.Done()
logger := p.logger.With(slog.Int("worker_id", id)) logger.Info("worker started")
for { select { case <-ctx.Done(): logger.Info("worker stopped: context cancelled") return
case <-p.shutdown: logger.Info("worker stopped: shutdown signal") return
case job, ok := <-p.jobQueue: if !ok { logger.Info("worker stopped: queue closed") return }
p.processJob(ctx, job, logger) } } }
// processJob handles a single job with recovery func (p *Pool) processJob(ctx context.Context, job Job, logger Logger) { start := time.Now()
// Panic recovery defer func() { if r := recover(); r != nil { err := fmt.Errorf("panic in job %s: %v", job.ID(), r) logger.Error("job panic", slog.String("job_id", job.ID()), slog.Any("panic", r))
p.results <- Result{ JobID: job.ID(), Error: err, Duration: time.Since(start), } } }()
// Process with timeout jobCtx, cancel := context.WithTimeout(ctx, 5*time.Minute) defer cancel()
err := job.Process(jobCtx)
result := Result{ JobID: job.ID(), Error: err, Duration: time.Since(start), }
select { case p.results <- result: case <-ctx.Done(): return }
// Metrics (guard: metrics may be nil when not injected via NewPool) if p.metrics != nil { p.metrics.Histogram("worker.job.duration", result.Duration.Seconds()) if err != nil { p.metrics.Counter("worker.job.errors", 1) } else { p.metrics.Counter("worker.job.success", 1) } } }
// Submit adds a job to the queue func (p *Pool) Submit(ctx context.Context, job Job) error { select { case p.jobQueue <- job: return nil case <-ctx.Done(): return ctx.Err() default: return fmt.Errorf("job queue full") } }
// Results returns the results channel func (p *Pool) Results() <-chan Result { return p.results }
// Shutdown gracefully stops all workers func (p *Pool) Shutdown(ctx context.Context) error { p.logger.Info("shutting down worker pool")
// Signal shutdown close(p.shutdown)
// Wait for workers with timeout done := make(chan struct{}) go func() { p.shutdownWG.Wait() close(done) }()
select { case <-done: close(p.jobQueue) close(p.results) return nil case <-ctx.Done(): return fmt.Errorf("shutdown timeout") } }
### Advanced Worker Pool with Metrics
**NOTE**: Dynamic scaling of worker pools is complex and error-prone. For most use cases, prefer a fixed-size pool with proper monitoring.
go
// internal/worker/monitored_pool.go
package worker
import ( "context" "log/slog" "sync/atomic" "time" )
// MonitoredPool provides metrics and observability type MonitoredPool struct { *Pool
// Metrics jobsProcessed int64 jobsFailed int64 totalDuration int64 activeWorkers int64 queueSize int64
// Monitoring metricsInterval time.Duration metricsLogger Logger lastReport time.Time }
func NewMonitoredPool(workers int, queueSize int, logger Logger) *MonitoredPool { return &MonitoredPool{ Pool: NewPool(workers, queueSize, logger), metricsInterval: 30 * time.Second, metricsLogger: logger, lastReport: time.Now(), } }
func (p *MonitoredPool) Start(ctx context.Context) { p.Pool.Start(ctx) go p.metricsReporter(ctx) }
func (p *MonitoredPool) Submit(ctx context.Context, job Job) error { atomic.AddInt64(&p.queueSize, 1)
err := p.Pool.Submit(ctx, job) if err != nil { atomic.AddInt64(&p.queueSize, -1) }
return err }
func (p *MonitoredPool) processJobWithMetrics(ctx context.Context, job Job, logger Logger) { start := time.Now() atomic.AddInt64(&p.activeWorkers, 1) atomic.AddInt64(&p.queueSize, -1)
defer func() { duration := time.Since(start) atomic.AddInt64(&p.activeWorkers, -1) atomic.AddInt64(&p.totalDuration, int64(duration)) }()
// Process job (copy from original processJob) err := job.Process(ctx)
if err != nil { atomic.AddInt64(&p.jobsFailed, 1) } else { atomic.AddInt64(&p.jobsProcessed, 1) } }
func (p *MonitoredPool) metricsReporter(ctx context.Context) { ticker := time.NewTicker(p.metricsInterval) defer ticker.Stop()
for { select { case <-ctx.Done(): return case <-ticker.C: p.reportMetrics() } } }
func (p *MonitoredPool) reportMetrics() { processed := atomic.LoadInt64(&p.jobsProcessed) failed := atomic.LoadInt64(&p.jobsFailed) totalDuration := atomic.LoadInt64(&p.totalDuration) active := atomic.LoadInt64(&p.activeWorkers) queueSize := atomic.LoadInt64(&p.queueSize)
now := time.Now() interval := now.Sub(p.lastReport) p.lastReport = now
var avgDuration time.Duration if processed > 0 { avgDuration = time.Duration(totalDuration / processed) }
p.metricsLogger.Info("worker pool metrics", slog.Int64("jobs_processed", processed), slog.Int64("jobs_failed", failed), slog.Int64("active_workers", active), slog.Int64("queue_size", queueSize), slog.Duration("avg_job_duration", avgDuration), slog.Duration("report_interval", interval)) }
// GetMetrics returns current pool metrics func (p *MonitoredPool) GetMetrics() PoolMetrics { return PoolMetrics{ JobsProcessed: atomic.LoadInt64(&p.jobsProcessed), JobsFailed: atomic.LoadInt64(&p.jobsFailed), ActiveWorkers: atomic.LoadInt64(&p.activeWorkers), QueueSize: atomic.LoadInt64(&p.queueSize), AvgDuration: time.Duration(atomic.LoadInt64(&p.totalDuration) / max(atomic.LoadInt64(&p.jobsProcessed), 1)), } }
type PoolMetrics struct { JobsProcessed int64 JobsFailed int64 ActiveWorkers int64 QueueSize int64 AvgDuration time.Duration }
---
## Errgroup Pattern
### Managing Groups of Goroutines
The `errgroup` package (from `golang.org/x/sync/errgroup`) is the de facto standard for managing groups of goroutines that may return errors. It provides automatic cancellation, error propagation, and synchronization.
### Basic Errgroup Usage
go
// internal/fetcher/parallel.go
package fetcher
import ( "context" "fmt"
"golang.org/x/sync/errgroup" )
// FetchAll fetches multiple URLs in parallel func FetchAll(ctx context.Context, urls []string) ([]Response, error) { g, ctx := errgroup.WithContext(ctx)
// Results channel results := make([]Response, len(urls))
// Launch goroutine for each URL for i, url := range urls { i, url := i, url // Capture loop variables
g.Go(func() error { response, err := fetchURL(ctx, url) if err != nil { return fmt.Errorf("fetch %s: %w", url, err) }
results[i] = response return nil }) }
// Wait for all goroutines if err := g.Wait(); err != nil { // First error is returned, context was cancelled return nil, err }
return results, nil }
### Bounded Concurrency with Errgroup
go
// internal/processor/batch.go
package processor
import ( "context" "fmt"
"golang.org/x/sync/errgroup" "golang.org/x/sync/semaphore" )
// ProcessBatch processes items with limited concurrency func ProcessBatch(ctx context.Context, items []Item, maxConcurrency int) error { g, ctx := errgroup.WithContext(ctx)
// Semaphore limits concurrent operations sem := semaphore.NewWeighted(int64(maxConcurrency))
for _, item := range items { item := item // Capture loop variable
g.Go(func() error { // Acquire semaphore if err := sem.Acquire(ctx, 1); err != nil { return err } defer sem.Release(1)
// Process item return processItem(ctx, item) }) }
return g.Wait() }
### Pipeline with Errgroup
go
// internal/pipeline/errgroup_pipeline.go
package pipeline
import ( "context" "fmt" "sync"
"golang.org/x/sync/errgroup" )
// ProcessPipeline runs a multi-stage pipeline with error handling func ProcessPipeline(ctx context.Context, input []Data) ([]Result, error) { g, ctx := errgroup.WithContext(ctx)
// Stage 1: Validate validated := make(chan Data, len(input)) g.Go(func() error { defer close(validated) for _, data := range input { if err := validate(data); err != nil { return fmt.Errorf("validation failed: %w", err) }
select { case validated <- data: case <-ctx.Done(): return ctx.Err() } } return nil })
// Stage 2: Transform (multiple workers) transformed := make(chan Transformed, len(input)) var transformWG sync.WaitGroup for i := 0; i < 5; i++ { transformWG.Add(1) g.Go(func() error { defer transformWG.Done() for data := range validated { result, err := transform(ctx, data) if err != nil { return fmt.Errorf("transform failed: %w", err) }
select { case transformed <- result: case <-ctx.Done(): return ctx.Err() } } return nil }) }
// Close transformed only after every transform worker has finished go func() { transformWG.Wait() close(transformed) }()
// Stage 3: Collect results var results []Result g.Go(func() error { for t := range transformed { results = append(results, t.ToResult()) } return nil })
// Wait for all stages if err := g.Wait(); err != nil { return nil, err }
return results, nil }
### Errgroup with Result Collection
go
// internal/aggregator/parallel.go
package aggregator
import ( "context" "fmt" "sync"
"golang.org/x/sync/errgroup" )
// AggregateResults collects results from multiple sources type AggregateResults struct { Users []User Products []Product Orders []Order }
// FetchAllData fetches data from multiple services in parallel func FetchAllData(ctx context.Context, userID string) (*AggregateResults, error) { g, ctx := errgroup.WithContext(ctx)
var ( results AggregateResults mu sync.Mutex )
// Fetch users g.Go(func() error { users, err := fetchUsers(ctx, userID) if err != nil { return fmt.Errorf("fetch users: %w", err) }
mu.Lock() results.Users = users mu.Unlock() return nil })
// Fetch products g.Go(func() error { products, err := fetchProducts(ctx, userID) if err != nil { return fmt.Errorf("fetch products: %w", err) }
mu.Lock() results.Products = products mu.Unlock() return nil })
// Fetch orders g.Go(func() error { orders, err := fetchOrders(ctx, userID) if err != nil { return fmt.Errorf("fetch orders: %w", err) }
mu.Lock() results.Orders = orders mu.Unlock() return nil })
// Wait for all fetches to complete if err := g.Wait(); err != nil { return nil, err }
return &results, nil }
### Errgroup with Timeout
go
// internal/client/timeout.go
package client
import ( "context" "fmt" "log/slog" "time"
"golang.org/x/sync/errgroup" )
// CallServicesWithTimeout calls multiple services with an overall timeout func CallServicesWithTimeout(ctx context.Context, timeout time.Duration) error { // Create context with timeout ctx, cancel := context.WithTimeout(ctx, timeout) defer cancel()
g, ctx := errgroup.WithContext(ctx)
// Service calls services := []struct { name string call func(context.Context) error }{ {"auth", callAuthService}, {"user", callUserService}, {"billing", callBillingService}, }
for _, svc := range services { svc := svc // Capture loop variable
g.Go(func() error { start := time.Now() err := svc.call(ctx) duration := time.Since(start)
if err != nil { logger.Error("service call failed", slog.String("service", svc.name), slog.Duration("duration", duration), slog.Any("error", err)) return fmt.Errorf("%s: %w", svc.name, err) }
logger.Info("service call succeeded", slog.String("service", svc.name), slog.Duration("duration", duration)) return nil }) }
return g.Wait() }
### Advanced Pattern: Errgroup with Rate Limiting
go
// internal/crawler/rate_limited.go
package crawler
import ( "context" "fmt" "time"
"golang.org/x/sync/errgroup" "golang.org/x/time/rate" )
// CrawlSites crawls multiple sites with rate limiting func CrawlSites(ctx context.Context, sites []string, rps int) ([]SiteData, error) { g, ctx := errgroup.WithContext(ctx)
// Rate limiter: rps requests per second limiter := rate.NewLimiter(rate.Limit(rps), rps)
results := make([]SiteData, len(sites))
for i, site := range sites { i, site := i, site // Capture
g.Go(func() error { // Wait for rate limiter if err := limiter.Wait(ctx); err != nil { return err }
data, err := crawlSite(ctx, site) if err != nil { return fmt.Errorf("crawl %s: %w", site, err) }
results[i] = data return nil }) }
if err := g.Wait(); err != nil { return nil, err }
return results, nil }
### Testing with Errgroup
go
// internal/service/user_test.go
package service_test
import ( "context" "errors" "fmt" "sync/atomic" "testing" "time"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "golang.org/x/sync/errgroup" )
func TestConcurrentUserCreation(t *testing.T) { svc := NewUserService() ctx := context.Background()
// Create multiple users concurrently g, ctx := errgroup.WithContext(ctx)
userCount := 100 users := make([]*User, userCount)
for i := 0; i < userCount; i++ { i := i g.Go(func() error { user, err := svc.CreateUser(ctx, CreateUserInput{ Email: fmt.Sprintf("user%d@example.com", i), Name: fmt.Sprintf("User %d", i), }) if err != nil { return err } users[i] = user return nil }) }
err := g.Wait() require.NoError(t, err)
// Verify all users created require.Len(t, users, userCount)
// Verify no duplicates seen := make(map[string]bool) for _, user := range users { require.NotNil(t, user) require.False(t, seen[user.ID], "duplicate user ID") seen[user.ID] = true } }
func TestErrgroup_FirstErrorCancelsAll(t *testing.T) { g, ctx := errgroup.WithContext(context.Background())
// Track which goroutines ran var completed int32 expectedError := errors.New("expected error")
// Fast failing goroutine g.Go(func() error { return expectedError })
// Slow goroutines that should be cancelled for i := 0; i < 5; i++ { g.Go(func() error { select { case <-time.After(5 * time.Second): atomic.AddInt32(&completed, 1) return nil case <-ctx.Done(): // Cancelled as expected return nil } }) }
err := g.Wait() assert.ErrorIs(t, err, expectedError) assert.Equal(t, int32(0), atomic.LoadInt32(&completed)) }
### Best Practices for Errgroup
1. **Always Capture Loop Variables**
go
for i, item := range items {
i, item := i, item // Critical before Go 1.22 (loop variables were reused per iteration)
g.Go(func() error {
// Use i and item safely
})
}
2. **Use WithContext for Cancellation**
go
g, ctx := errgroup.WithContext(ctx)
// Now ctx is cancelled when any goroutine returns error
3. **Limit Concurrency When Needed**
go
sem := semaphore.NewWeighted(10)
g.Go(func() error {
if err := sem.Acquire(ctx, 1); err != nil {
return err
}
defer sem.Release(1)
// Do work
})
4. **Handle Partial Results Carefully**
go
var (
results []Result
mu sync.Mutex
)
g.Go(func() error { result, err := process() if err != nil { return err } mu.Lock() results = append(results, result) mu.Unlock() return nil })
5. **Don't Ignore Context Cancellation**
go
g.Go(func() error {
for item := range items {
select {
case <-ctx.Done():
return ctx.Err()
default:
// Process item
}
}
})
---
## Pipeline Architecture
### Stream Processing Pipeline
go
// internal/pipeline/pipeline.go
package pipeline
import ( "context" "sync" "time" )
// Stage represents a pipeline processing stage type Stage[In, Out any] func(ctx context.Context, in <-chan In) <-chan Out
// Pipeline chains multiple stages together type Pipeline[T any] struct { stages []Stage[any, any] logger Logger }
// NewPipeline creates a processing pipeline func NewPipeline[T any](logger Logger) *Pipeline[T] { return &Pipeline[T]{ logger: logger, } }
// AddStage appends a processing stage func (p *Pipeline[T]) AddStage(stage Stage[any, any]) *Pipeline[T] { p.stages = append(p.stages, stage) return p }
// Run executes the pipeline func (p *Pipeline[T]) Run(ctx context.Context, input <-chan T) <-chan any { if len(p.stages) == 0 { out := make(chan any) close(out) return out }
// Chain stages current := make(chan any, 100) go func() { defer close(current) for item := range input { select { case current <- item: case <-ctx.Done(): return } } }()
for _, stage := range p.stages { current = stage(ctx, current) }
return current }
// Common Stages
// MapStage transforms items func MapStage[In, Out any](fn func(In) (Out, error)) Stage[In, Out] { return func(ctx context.Context, in <-chan In) <-chan Out { out := make(chan Out, cap(in))
go func() { defer close(out)
for item := range in { result, err := fn(item) if err != nil { continue // Or handle error }
select { case out <- result: case <-ctx.Done(): return } } }()
return out } }
// FilterStage removes items based on predicate func FilterStage[T any](predicate func(T) bool) Stage[T, T] { return func(ctx context.Context, in <-chan T) <-chan T { out := make(chan T, cap(in))
go func() { defer close(out)
for item := range in { if !predicate(item) { continue }
select { case out <- item: case <-ctx.Done(): return } } }()
return out } }
// BatchStage groups items into batches func BatchStage[T any](size int, timeout time.Duration) Stage[T, []T] { return func(ctx context.Context, in <-chan T) <-chan []T { out := make(chan []T, cap(in)/size)
go func() { defer close(out)
batch := make([]T, 0, size) timer := time.NewTimer(timeout) defer timer.Stop()
flush := func() { if len(batch) > 0 { select { case out <- batch: batch = make([]T, 0, size) case <-ctx.Done(): return } } timer.Reset(timeout) }
for { select { case item, ok := <-in: if !ok { flush() return }
batch = append(batch, item) if len(batch) >= size { flush() }
case <-timer.C: flush()
case <-ctx.Done(): return } } }()
return out } }
// FanOutStage distributes to multiple workers func FanOutStage[T any](workers int, process func(context.Context, T) error) Stage[T, T] { return func(ctx context.Context, in <-chan T) <-chan T { out := make(chan T, cap(in))
var wg sync.WaitGroup wg.Add(workers)
// Start workers for i := 0; i < workers; i++ { go func() { defer wg.Done()
for item := range in { if err := process(ctx, item); err != nil { continue }
select { case out <- item: case <-ctx.Done(): return } } }() }
// Close output when all workers done go func() { wg.Wait() close(out) }()
return out } }
---
## Synchronization Primitives
### Advanced Mutex Patterns
go
// internal/sync/keyed_mutex.go
package sync
import ( "sync" )
// KeyedMutex provides per-key locking with memory management type KeyedMutex struct { mu sync.Mutex locks map[string]*mutexEntry }
type mutexEntry struct { mutex *sync.Mutex refCount int }
func NewKeyedMutex() *KeyedMutex { return &KeyedMutex{ locks: make(map[string]*mutexEntry), } }
func (km *KeyedMutex) Lock(key string) { km.mu.Lock() entry, exists := km.locks[key] if !exists { entry = &mutexEntry{ mutex: &sync.Mutex{}, refCount: 0, } km.locks[key] = entry } entry.refCount++ km.mu.Unlock()
entry.mutex.Lock() }
func (km *KeyedMutex) Unlock(key string) { km.mu.Lock() entry, exists := km.locks[key] if !exists { km.mu.Unlock() return }
entry.refCount-- // Clean up unused mutexes to prevent memory leak if entry.refCount == 0 { delete(km.locks, key) } km.mu.Unlock()
entry.mutex.Unlock() }
### Semaphore Pattern
go
// internal/sync/semaphore.go
package sync
import ( "context" "fmt" )
// Semaphore limits concurrent operations type Semaphore struct { permits chan struct{} }
func NewSemaphore(max int) *Semaphore { return &Semaphore{ permits: make(chan struct{}, max), } }
func (s *Semaphore) Acquire(ctx context.Context) error { select { case s.permits <- struct{}{}: return nil case <-ctx.Done(): return ctx.Err() } }
func (s *Semaphore) Release() { select { case <-s.permits: default: panic("semaphore: release without acquire") } }
// WithSemaphore runs fn with semaphore protection func WithSemaphore[T any](ctx context.Context, sem *Semaphore, fn func() (T, error)) (T, error) { var zero T
if err := sem.Acquire(ctx); err != nil { return zero, fmt.Errorf("acquire semaphore: %w", err) } defer sem.Release()
return fn() }
### Broadcast Pattern
go
// internal/sync/broadcast.go
package sync
import ( "sync" )
// Broadcaster sends values to multiple listeners type Broadcaster[T any] struct { mu sync.RWMutex listeners []chan T closed bool }
func NewBroadcaster[T any]() *Broadcaster[T] { return &Broadcaster[T]{} }
func (b *Broadcaster[T]) Subscribe(buffer int) <-chan T { b.mu.Lock() defer b.mu.Unlock()
if b.closed { ch := make(chan T) close(ch) return ch }
ch := make(chan T, buffer) b.listeners = append(b.listeners, ch) return ch }
func (b *Broadcaster[T]) Broadcast(value T) { b.mu.RLock() defer b.mu.RUnlock()
if b.closed { return }
for _, ch := range b.listeners { select { case ch <- value: default: // Listener is slow, skip } } }
func (b *Broadcaster[T]) Close() { b.mu.Lock() defer b.mu.Unlock()
if b.closed { return }
b.closed = true for _, ch := range b.listeners { close(ch) } b.listeners = nil }
---
## Resource Management
### Connection Pool
go
// internal/pool/resource_pool.go
package pool
import ( "context" "fmt" "sync" "time" )
// Resource represents a pooled resource type Resource interface { IsHealthy() bool Close() error }
// Factory creates new resources type Factory[T Resource] func(ctx context.Context) (T, error)
// Pool manages reusable resources type Pool[T Resource] struct { factory Factory[T] resources chan T maxSize int maxIdleTime time.Duration
mu sync.Mutex closed bool
metrics Metrics logger Logger }
func NewPool[T Resource](factory Factory[T], maxSize int, metrics Metrics, logger Logger) *Pool[T] { return &Pool[T]{ factory: factory, resources: make(chan T, maxSize), maxSize: maxSize, maxIdleTime: 10 * time.Minute, metrics: metrics, logger: logger, } }
// Get acquires a resource from the pool func (p *Pool[T]) Get(ctx context.Context) (T, error) { var zero T
select { case resource := <-p.resources: if resource.IsHealthy() { p.metrics.Counter("pool.hits", 1) return resource, nil } resource.Close() p.metrics.Counter("pool.evictions", 1)
case <-ctx.Done(): return zero, ctx.Err()
default: // Pool empty, create new resource }
// Create new resource p.metrics.Counter("pool.misses", 1) resource, err := p.factory(ctx) if err != nil { return zero, fmt.Errorf("create resource: %w", err) }
return resource, nil }
// Put returns a resource to the pool func (p *Pool[T]) Put(resource T) { p.mu.Lock() if p.closed { p.mu.Unlock() resource.Close() return } p.mu.Unlock()
if !resource.IsHealthy() { resource.Close() return }
select { case p.resources <- resource: // Resource returned to pool default: // Pool full, close resource resource.Close() } }
// Close drains and closes all resources func (p *Pool[T]) Close() error { p.mu.Lock() if p.closed { p.mu.Unlock() return nil } p.closed = true p.mu.Unlock()
close(p.resources)
for resource := range p.resources { resource.Close() }
return nil }
---
## Performance Optimization
### CPU Profiling Integration
go
// internal/debug/profiling.go
package debug
import ( "net/http" _ "net/http/pprof" "os" "runtime" "runtime/pprof" "time" )
// ProfilingServer runs pprof server type ProfilingServer struct { addr string }
func NewProfilingServer(addr string) *ProfilingServer { return &ProfilingServer{addr: addr} }
func (s *ProfilingServer) Start() error { // Set runtime parameters runtime.SetBlockProfileRate(1) runtime.SetMutexProfileFraction(1)
return http.ListenAndServe(s.addr, nil) }
// ProfileCPU runs CPU profiling for duration func ProfileCPU(duration time.Duration, filename string) error { f, err := os.Create(filename) if err != nil { return err } defer f.Close()
if err := pprof.StartCPUProfile(f); err != nil { return err }
time.Sleep(duration) pprof.StopCPUProfile()
return nil }
### Memory Optimization
go
// internal/memory/pool.go
package memory
import ( "sync" )
// BytePool manages byte slice reuse type BytePool struct { pools []*sync.Pool }
func NewBytePool() *BytePool { pools := make([]*sync.Pool, 20) // 20 size classes: 1KB, 2KB, ... up to 512MB
for i := range pools { size := 1 << (i + 10) // 1KB, 2KB, 4KB... pools[i] = &sync.Pool{ New: func() interface{} { return make([]byte, size) }, } }
return &BytePool{pools: pools} }
func (p *BytePool) Get(size int) []byte { // Find appropriate pool for i, pool := range p.pools { poolSize := 1 << (i + 10) if poolSize >= size { buf := pool.Get().([]byte) return buf[:size] } }
// Too large for pools return make([]byte, size) }
func (p *BytePool) Put(buf []byte) { size := cap(buf)
// Find matching pool for i, pool := range p.pools { poolSize := 1 << (i + 10) if poolSize == size { pool.Put(buf) return } } }
---
## Goroutine Lifecycle
### Goroutine Manager
go
// internal/runtime/goroutine.go
package runtime
import ( "fmt" "log/slog" "sync" "sync/atomic" )
// GoroutineManager tracks and manages goroutines type GoroutineManager struct { wg sync.WaitGroup active int64 maxGoroutines int64
errors chan error logger Logger }
func NewGoroutineManager(max int64) *GoroutineManager { return &GoroutineManager{ maxGoroutines: max, errors: make(chan error, 100), } }
// Go starts a managed goroutine func (m *GoroutineManager) Go(name string, fn func() error) error { current := atomic.LoadInt64(&m.active) if current >= m.maxGoroutines { return fmt.Errorf("goroutine limit reached: %d", current) }
atomic.AddInt64(&m.active, 1) m.wg.Add(1)
go func() { defer m.wg.Done() defer atomic.AddInt64(&m.active, -1) defer m.recover(name)
if err := fn(); err != nil { select { case m.errors <- fmt.Errorf("%s: %w", name, err): default: m.logger.Error("error channel full", slog.String("goroutine", name), slog.Any("error", err)) } } }()
return nil }
func (m *GoroutineManager) recover(name string) { if r := recover(); r != nil { err := fmt.Errorf("panic in %s: %v", name, r) select { case m.errors <- err: default: m.logger.Error("panic in goroutine", slog.String("goroutine", name), slog.Any("panic", r)) } } }
// Wait blocks until all goroutines complete func (m *GoroutineManager) Wait() { m.wg.Wait() close(m.errors) }
// Errors returns error channel func (m *GoroutineManager) Errors() <-chan error { return m.errors }
### Best Practices Summary
1. **Always pass context** for cancellation
2. **Limit concurrent goroutines** to prevent resource exhaustion
3. **Use sync.Pool** for frequently allocated objects
4. **Profile before optimizing** - measure, don't guess
5. **Batch operations** to reduce overhead
6. **Use channels for coordination**, mutexes for state
7. **Prefer pipelines** over shared memory
8. **Handle panics** in goroutines
9. **Monitor goroutine count** in production
10. **Clean up resources** with defer
---
## Related Sections
- **[Error Handling](go-practices-error-logging.md#structured-logging)** - Structured logging in concurrent code
- **[Testing](go-practices-testing.md#testing-patterns)** - Testing concurrent patterns safely
- **[CLI Design](go-practices-cli-config.md#graceful-shutdown)** - Graceful shutdown patterns
- **[HTTP Patterns](go-practices-http.md#middleware-patterns)** - Rate limiting middleware
- **[Service Architecture](go-practices-service-architecture.md#processing-patterns)** - Processing patterns and pipelines
## Quick Reference Checklist
### Worker Pool Implementation
- [ ] Use bounded channels for job queues to prevent memory issues
- [ ] Implement graceful shutdown with context cancellation
- [ ] Add panic recovery in worker goroutines
- [ ] Track worker pool metrics (active workers, queue depth)
- [ ] Use fixed-size pools - avoid dynamic scaling complexity
- [ ] Implement proper job result handling and error propagation
### Pipeline Architecture
- [ ] Chain processing stages with buffered channels
- [ ] Implement context cancellation at each stage
- [ ] Use appropriate buffer sizes for channel capacity
- [ ] Handle backpressure and slow consumers gracefully
- [ ] Implement fan-out/fan-in patterns where needed
- [ ] Add instrumentation for pipeline throughput monitoring
### Synchronization Primitives
- [ ] Use mutexes for protecting shared state, channels for communication
- [ ] Prefer RWMutex when reads vastly outnumber writes
- [ ] Implement custom synchronization (KeyedMutex, Semaphore) when needed
- [ ] Use sync.Once for one-time initialization
- [ ] Avoid complex locking hierarchies to prevent deadlocks
- [ ] Use atomic operations for simple counter/flag operations
### Resource Management
- [ ] Implement resource pools for expensive-to-create objects
- [ ] Add health checks for pooled resources
- [ ] Use proper cleanup and resource lifecycle management
- [ ] Implement connection timeouts and limits
- [ ] Monitor resource utilization and pool efficiency
- [ ] Handle resource exhaustion gracefully
### Goroutine Lifecycle Management
- [ ] Track goroutine count and prevent goroutine leaks
- [ ] Implement goroutine budgets to prevent resource exhaustion
- [ ] Use proper naming and identification for goroutines
- [ ] Handle panics in all goroutines with recovery
- [ ] Implement graceful shutdown coordination
- [ ] Monitor goroutine health and lifecycle
### Performance Optimization
- [ ] Profile concurrent code with CPU and memory profilers
- [ ] Use sync.Pool for frequently allocated objects
- [ ] Minimize mutex contention with fine-grained locking
- [ ] Batch operations to reduce coordination overhead
- [ ] Use appropriate buffer sizes for channels
- [ ] Avoid premature optimization - measure first
### Error Handling in Concurrent Code
- [ ] Propagate errors through channels or error groups
- [ ] Handle context cancellation consistently across goroutines
- [ ] Implement proper timeout handling for concurrent operations
- [ ] Use structured logging with goroutine identification
- [ ] Avoid swallowing errors in background goroutines
- [ ] Implement circuit breakers for external dependencies
### Testing Concurrent Code
- [ ] Use `go test -race` to detect race conditions
- [ ] Test goroutine cleanup and resource deallocation
- [ ] Use deterministic testing with controlled scheduling
- [ ] Test timeout and cancellation scenarios
- [ ] Verify proper error propagation in concurrent flows
- [ ] Load test concurrent components under realistic conditions
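A simple leak check for the cleanup item compares `runtime.NumGoroutine` before and after the code under test, polling briefly so goroutines that are still winding down are not misreported as leaks; a sketch (`assertNoLeak` is a hypothetical helper, not a standard API):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// assertNoLeak runs fn and verifies the goroutine count returns to its
// starting value, polling briefly to let finished goroutines exit.
func assertNoLeak(fn func()) error {
	before := runtime.NumGoroutine()
	fn()
	deadline := time.Now().Add(time.Second)
	for time.Now().Before(deadline) {
		if runtime.NumGoroutine() <= before {
			return nil
		}
		time.Sleep(10 * time.Millisecond)
	}
	return fmt.Errorf("goroutine leak: %d -> %d", before, runtime.NumGoroutine())
}

func main() {
	err := assertNoLeak(func() {
		done := make(chan struct{})
		go func() { <-done }()
		close(done) // goroutine unblocks and exits: no leak
	})
	fmt.Println("leak check:", err)
}
```

For anything beyond a sketch, a dedicated goroutine-leak detector library gives far better diagnostics (stack traces of the leaked goroutines).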
### Best Practices & Patterns
- [ ] Use errgroup for coordinated goroutine management
- [ ] Implement proper context propagation through concurrent operations
- [ ] Prefer composition over inheritance for concurrent types
- [ ] Use select statements for non-blocking channel operations
- [ ] Implement proper backoff strategies for retries
- [ ] Document concurrency requirements and guarantees
---
# 8. CLI Design & Configuration
## Table of Contents
1. [Cobra Command Structure](#cobra-command-structure)
2. [Configuration with Viper](#configuration-with-viper)
3. [Secrets Management](#secrets-management)
4. [Context Propagation](#context-propagation)
5. [Graceful Shutdown](#graceful-shutdown)
6. [CLI Testing](#cli-testing)
7. [Interactive Commands](#interactive-commands)
8. [Interactive CLI Libraries](#interactive-cli-libraries)
---
## Cobra Command Structure
### CRITICAL: Hierarchical Command Organization
**The Problem We're Solving**:
- Flat command structure becomes unmanageable
- Shared state via package globals
- No clear command boundaries
- Poor testability
**The Solution: Hierarchical Commands in Separate Packages**
### Root Command Setup
```go
// cmd/myapp/root.go
package main

import (
    "context"
    "fmt"
    "os"
    "os/signal"
    "syscall"

    "github.com/spf13/cobra"
    "github.com/spf13/viper"

    "myapp/cmd/myapp/migrate"
    "myapp/cmd/myapp/server"
    "myapp/cmd/myapp/user"
    "myapp/internal/app"
    "myapp/internal/config"
)

// Application container to hold shared state
var (
    appContainer *app.Container
    cfgFile      string
)

// rootCmd represents the base command
var rootCmd = &cobra.Command{
    Use:   "myapp",
    Short: "MyApp is a blazingly fast CLI tool",
    Long: `MyApp provides enterprise-grade functionality
with a delightful developer experience.`,

    // PersistentPreRunE runs before any subcommand
    PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
        // Load configuration once
        cfg, err := config.Load()
        if err != nil {
            return fmt.Errorf("load config: %w", err)
        }

        // Initialize logging
        if err := initLogging(cfg.Logging); err != nil {
            return fmt.Errorf("init logging: %w", err)
        }

        // Validate config
        if err := cfg.Validate(); err != nil {
            return fmt.Errorf("invalid config: %w", err)
        }

        // Initialize application container
        appContainer, err = app.New(cfg)
        if err != nil {
            return fmt.Errorf("init app: %w", err)
        }

        return nil
    },
}

func init() {
    cobra.OnInitialize(initConfig)

    // Global flags
    rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "", "config file (default is $HOME/.myapp.yaml)")
    rootCmd.PersistentFlags().String("log-level", "info", "log level (debug, info, warn, error)")

    // Bind flags to viper
    viper.BindPFlag("log.level", rootCmd.PersistentFlags().Lookup("log-level"))

    // Add subcommands - each in its own package.
    // Note: pass a container accessor function, not the container itself,
    // so subcommands see the container after PersistentPreRunE initializes it.
    rootCmd.AddCommand(
        server.NewCmd(func() *app.Container { return appContainer }),
        migrate.NewCmd(func() *app.Container { return appContainer }),
        user.NewCmd(func() *app.Container { return appContainer }),
    )
}

func main() {
    // Handle signals for cancellation
    ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer cancel()

    if err := rootCmd.ExecuteContext(ctx); err != nil {
        fmt.Fprintf(os.Stderr, "Error: %v\n", err)
        os.Exit(1)
    }
}
```
### Subcommand Package Structure
```go
// cmd/myapp/server/server.go
package server

import (
    "fmt"
    "time"

    "github.com/spf13/cobra"

    "myapp/internal/app"
)

// Package-scoped configuration
var (
    port         int
    readTimeout  time.Duration
    writeTimeout time.Duration
    getContainer func() *app.Container
)

// NewCmd creates the server command
func NewCmd(containerFunc func() *app.Container) *cobra.Command {
    // Store container accessor
    getContainer = containerFunc

    cmd := &cobra.Command{
        Use:   "server",
        Short: "Start the API server",
        Long:  `Starts the HTTP API server with the specified configuration.`,
        RunE:  run,
    }

    // Command-specific flags
    cmd.Flags().IntVar(&port, "port", 8080, "Server port")
    cmd.Flags().DurationVar(&readTimeout, "read-timeout", 30*time.Second, "Read timeout")
    cmd.Flags().DurationVar(&writeTimeout, "write-timeout", 30*time.Second, "Write timeout")

    // Mark required flags
    cmd.MarkFlagRequired("port")

    return cmd
}

func run(cmd *cobra.Command, args []string) error {
    ctx := cmd.Context()

    // Get the initialized container
    container := getContainer()
    if container == nil {
        return fmt.Errorf("application not initialized")
    }

    // Override configuration with command-specific flags
    container.Config.Server.Port = port
    container.Config.Server.ReadTimeout = readTimeout
    container.Config.Server.WriteTimeout = writeTimeout

    // Start server using the container's services
    return container.RunServer(ctx)
}
```
### Nested Command Structure
```go
// cmd/myapp/user/user.go
package user

import (
    "github.com/spf13/cobra"

    "myapp/cmd/myapp/user/create"
    "myapp/cmd/myapp/user/delete"
    "myapp/cmd/myapp/user/list"
    "myapp/internal/app"
)

// NewCmd creates the user management command group
func NewCmd(containerFunc func() *app.Container) *cobra.Command {
    cmd := &cobra.Command{
        Use:   "user",
        Short: "User management commands",
        Long:  `Commands for managing users in the system.`,
    }

    // Add subcommands, passing the container accessor
    cmd.AddCommand(
        create.NewCmd(containerFunc),
        list.NewCmd(containerFunc),
        delete.NewCmd(containerFunc),
    )

    return cmd
}

// cmd/myapp/user/create/create.go
package create

import (
    "fmt"

    "github.com/spf13/cobra"

    "myapp/internal/app"
    "myapp/internal/service"
)

var (
    email        string
    name         string
    role         string
    sendWelcome  bool
    getContainer func() *app.Container
)

func NewCmd(containerFunc func() *app.Container) *cobra.Command {
    // Store container accessor
    getContainer = containerFunc

    cmd := &cobra.Command{
        Use:   "create",
        Short: "Create a new user",
        Long:  `Creates a new user with the specified email and name.`,
        Example: `  myapp user create --email john@example.com --name "John Doe"
  myapp user create --email admin@example.com --name Admin --role admin`,
        RunE: run,
    }

    cmd.Flags().StringVar(&email, "email", "", "User email (required)")
    cmd.Flags().StringVar(&name, "name", "", "User name (required)")
    cmd.Flags().StringVar(&role, "role", "user", "User role")
    cmd.Flags().BoolVar(&sendWelcome, "send-welcome", true, "Send welcome email")

    cmd.MarkFlagRequired("email")
    cmd.MarkFlagRequired("name")

    return cmd
}

func run(cmd *cobra.Command, args []string) error {
    ctx := cmd.Context()

    // Get the initialized container
    container := getContainer()
    if container == nil {
        return fmt.Errorf("application not initialized")
    }

    // Use the user service from the container
    user, err := container.UserService.CreateUser(ctx, service.CreateUserInput{
        Email:       email,
        Name:        name,
        Role:        role,
        SendWelcome: sendWelcome,
    })
    if err != nil {
        return fmt.Errorf("failed to create user: %w", err)
    }

    fmt.Printf("User created successfully:\n")
    fmt.Printf("  ID:    %s\n", user.ID)
    fmt.Printf("  Email: %s\n", user.Email)

    return nil
}
```
---
## Configuration with Viper
### Configuration Architecture
```go
// internal/config/config.go
package config

import (
    "fmt"
    "strings"
    "time"

    "github.com/spf13/viper"
)

// Config represents the application configuration
type Config struct {
    App      AppConfig      `mapstructure:"app"`
    Server   ServerConfig   `mapstructure:"server"`
    Database DatabaseConfig `mapstructure:"database"`
    Redis    RedisConfig    `mapstructure:"redis"`
    Logging  LogConfig      `mapstructure:"logging"`
    Auth     AuthConfig     `mapstructure:"auth"`
}

type AppConfig struct {
    Name        string `mapstructure:"name"`
    Environment string `mapstructure:"environment"`
    Version     string `mapstructure:"version"`
}

type ServerConfig struct {
    Port         int           `mapstructure:"port"`
    Host         string        `mapstructure:"host"`
    ReadTimeout  time.Duration `mapstructure:"read_timeout"`
    WriteTimeout time.Duration `mapstructure:"write_timeout"`
    IdleTimeout  time.Duration `mapstructure:"idle_timeout"`
}

// Load reads configuration from all sources
func Load() (*Config, error) {
    v := viper.New()

    // Set defaults
    setDefaults(v)

    // Configure sources
    v.SetConfigName("config")
    v.SetConfigType("yaml")

    // Add config paths - priority order
    v.AddConfigPath(".")
    v.AddConfigPath("./config")
    v.AddConfigPath("$HOME/.myapp")
    v.AddConfigPath("/etc/myapp")

    // Environment variables
    v.SetEnvPrefix("MYAPP")
    v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
    v.AutomaticEnv()

    // Read config file
    if err := v.ReadInConfig(); err != nil {
        if _, ok := err.(viper.ConfigFileNotFoundError); !ok {
            return nil, fmt.Errorf("read config: %w", err)
        }
        // Config file not found; use defaults and env
    }

    // Unmarshal
    var cfg Config
    if err := v.Unmarshal(&cfg); err != nil {
        return nil, fmt.Errorf("unmarshal config: %w", err)
    }

    // Validate
    if err := cfg.Validate(); err != nil {
        return nil, fmt.Errorf("invalid config: %w", err)
    }

    return &cfg, nil
}

func setDefaults(v *viper.Viper) {
    // App defaults
    v.SetDefault("app.name", "myapp")
    v.SetDefault("app.environment", "development")

    // Server defaults
    v.SetDefault("server.port", 8080)
    v.SetDefault("server.host", "0.0.0.0")
    v.SetDefault("server.read_timeout", "30s")
    v.SetDefault("server.write_timeout", "30s")
    v.SetDefault("server.idle_timeout", "120s")

    // Database defaults
    v.SetDefault("database.max_open_conns", 25)
    v.SetDefault("database.max_idle_conns", 5)
    v.SetDefault("database.conn_max_lifetime", "1h")
}

// Validate checks configuration validity
func (c *Config) Validate() error {
    if c.Server.Port < 1 || c.Server.Port > 65535 {
        return fmt.Errorf("invalid server port: %d", c.Server.Port)
    }

    if c.Database.DSN == "" {
        return fmt.Errorf("database DSN required")
    }

    return nil
}
```
### Advanced Configuration Patterns
#### Dependency Injection of Configuration
Don't use global `viper.Get()` calls throughout your application. Instead, load a configuration struct at startup and pass it explicitly to services that need it.
```go
// ❌ BAD: Global viper usage creates hidden dependencies
func (s *UserService) CreateUser(ctx context.Context, user User) error {
    timeout := viper.GetDuration("database.timeout") // Hidden dependency!
    ctx, cancel := context.WithTimeout(ctx, timeout)
    defer cancel()

    return s.repo.Create(ctx, user)
}

// ✅ GOOD: Explicit configuration dependency
type UserService struct {
    repo   UserRepository
    config DatabaseConfig // Explicit dependency
    logger *slog.Logger
}

func NewUserService(repo UserRepository, dbConfig DatabaseConfig, logger *slog.Logger) *UserService {
    return &UserService{
        repo:   repo,
        config: dbConfig,
        logger: logger,
    }
}

func (s *UserService) CreateUser(ctx context.Context, user User) error {
    ctx, cancel := context.WithTimeout(ctx, s.config.Timeout) // Explicit!
    defer cancel()

    return s.repo.Create(ctx, user)
}
```
#### Struct Tags for Configuration Mapping
Use struct tags to map environment variables, flags, and config file keys consistently:
```go
type DatabaseConfig struct {
    Host           string        `mapstructure:"host" env:"DB_HOST" flag:"db-host" validate:"required"`
    Port           int           `mapstructure:"port" env:"DB_PORT" flag:"db-port" validate:"min=1,max=65535"`
    Name           string        `mapstructure:"name" env:"DB_NAME" flag:"db-name" validate:"required"`
    Username       string        `mapstructure:"username" env:"DB_USERNAME" flag:"db-username" validate:"required"`
    Password       string        `mapstructure:"password" env:"DB_PASSWORD" flag:"db-password" validate:"required"`
    SSLMode        string        `mapstructure:"ssl_mode" env:"DB_SSL_MODE" flag:"db-ssl-mode" validate:"oneof=disable require verify-ca verify-full"`
    ConnectTimeout time.Duration `mapstructure:"connect_timeout" env:"DB_CONNECT_TIMEOUT" flag:"db-connect-timeout" validate:"min=1s,max=30s"`
    MaxConnections int           `mapstructure:"max_connections" env:"DB_MAX_CONNECTIONS" flag:"db-max-connections" validate:"min=1,max=100"`
    IdleTimeout    time.Duration `mapstructure:"idle_timeout" env:"DB_IDLE_TIMEOUT" flag:"db-idle-timeout" validate:"min=1m,max=1h"`
}
```
#### Validation with go-playground/validator
```go
package config

import (
    "fmt"
    "strings"

    "github.com/go-playground/validator/v10"
)

// Global validator instance
var validate *validator.Validate

func init() {
    validate = validator.New()

    // Register custom validators
    validate.RegisterValidation("envname", validateEnvironmentName)
}

// validateEnvironmentName ensures environment names are valid
func validateEnvironmentName(fl validator.FieldLevel) bool {
    env := fl.Field().String()
    validEnvs := []string{"development", "staging", "production", "test"}

    for _, validEnv := range validEnvs {
        if env == validEnv {
            return true
        }
    }
    return false
}

// Enhanced config with validation tags
type AppConfig struct {
    Name        string `mapstructure:"name" env:"APP_NAME" validate:"required,min=3,max=50"`
    Environment string `mapstructure:"environment" env:"APP_ENV" validate:"required,envname"`
    Version     string `mapstructure:"version" env:"APP_VERSION" validate:"required"`
    Debug       bool   `mapstructure:"debug" env:"APP_DEBUG"`
}

// Validate performs comprehensive validation
func (c *Config) Validate() error {
    // Structural validation using tags
    if err := validate.Struct(c); err != nil {
        var validationErrors []string

        for _, err := range err.(validator.ValidationErrors) {
            switch err.Tag() {
            case "required":
                validationErrors = append(validationErrors, fmt.Sprintf("%s is required", err.Field()))
            case "min":
                validationErrors = append(validationErrors, fmt.Sprintf("%s must be at least %s", err.Field(), err.Param()))
            case "max":
                validationErrors = append(validationErrors, fmt.Sprintf("%s must be at most %s", err.Field(), err.Param()))
            case "oneof":
                validationErrors = append(validationErrors, fmt.Sprintf("%s must be one of: %s", err.Field(), err.Param()))
            case "envname":
                validationErrors = append(validationErrors, fmt.Sprintf("%s must be a valid environment", err.Field()))
            default:
                validationErrors = append(validationErrors, fmt.Sprintf("%s failed validation: %s", err.Field(), err.Tag()))
            }
        }

        return fmt.Errorf("configuration validation failed: %s", strings.Join(validationErrors, "; "))
    }

    // Business logic validation
    if err := c.validateBusinessRules(); err != nil {
        return fmt.Errorf("business rule validation failed: %w", err)
    }

    return nil
}

// validateBusinessRules checks complex validation rules
func (c *Config) validateBusinessRules() error {
    // TLS configuration must be complete or empty
    if (c.Server.TLSCert == "") != (c.Server.TLSKey == "") {
        return fmt.Errorf("both tls_cert and tls_key must be provided for TLS")
    }

    // Production environment must use TLS
    if c.App.Environment == "production" && c.Server.TLSCert == "" {
        return fmt.Errorf("production environment requires TLS configuration")
    }

    return nil
}
```
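Business-rule validation can also report every failure at once instead of stopping at the first, using `errors.Join` (Go 1.20+), which this guide recommends elsewhere. A self-contained sketch with illustrative field names (`ServerSettings` is a stand-in for the real config type):

```go
package main

import (
	"errors"
	"fmt"
)

// ServerSettings is a minimal stand-in for the guide's Config struct.
type ServerSettings struct {
	Port    int
	TLSCert string
	TLSKey  string
}

// ValidateAll collects every problem and joins them into one error,
// so operators can fix all misconfigurations in a single pass.
// errors.Join returns nil when the slice is empty.
func (s ServerSettings) ValidateAll() error {
	var errs []error
	if s.Port < 1 || s.Port > 65535 {
		errs = append(errs, fmt.Errorf("invalid server port: %d", s.Port))
	}
	if (s.TLSCert == "") != (s.TLSKey == "") {
		errs = append(errs, errors.New("both tls_cert and tls_key must be set for TLS"))
	}
	return errors.Join(errs...)
}
```

Callers can still test for individual failures with `errors.Is`/`errors.As`, since the joined error exposes its parts via `Unwrap() []error`.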
### Environment-Specific Config
```go
// internal/config/environments.go
package config

import (
    "fmt"

    "github.com/spf13/viper"
)

// LoadForEnvironment loads config based on environment
func LoadForEnvironment(env string) (*Config, error) {
    v := viper.New()

    // Base config
    v.SetConfigName("config")
    v.AddConfigPath("./config")

    if err := v.ReadInConfig(); err != nil {
        return nil, err
    }

    // Environment-specific overrides
    v.SetConfigName(fmt.Sprintf("config.%s", env))
    if err := v.MergeInConfig(); err != nil {
        if _, ok := err.(viper.ConfigFileNotFoundError); !ok {
            return nil, err
        }
    }

    // Environment variables override everything
    v.AutomaticEnv()

    var cfg Config
    return &cfg, v.Unmarshal(&cfg)
}
```
### Dynamic Configuration
```go
// internal/config/watcher.go
package config

import (
    "log/slog"
    "sync"

    "github.com/fsnotify/fsnotify"
    "github.com/spf13/viper"
)

// Watcher monitors config changes
type Watcher struct {
    mu       sync.RWMutex
    config   *Config
    onChange func(*Config)
    viper    *viper.Viper
}

func NewWatcher(onChange func(*Config)) (*Watcher, error) {
    w := &Watcher{
        onChange: onChange,
        viper:    viper.New(),
    }

    // Initial load
    cfg, err := Load()
    if err != nil {
        return nil, err
    }
    w.config = cfg

    // Watch for changes
    w.viper.WatchConfig()
    w.viper.OnConfigChange(func(e fsnotify.Event) {
        w.reload()
    })

    return w, nil
}

func (w *Watcher) reload() {
    cfg, err := Load()
    if err != nil {
        slog.Error("failed to reload config", "error", err)
        return
    }

    w.mu.Lock()
    w.config = cfg
    w.mu.Unlock()

    if w.onChange != nil {
        w.onChange(cfg)
    }
}

func (w *Watcher) Get() *Config {
    w.mu.RLock()
    defer w.mu.RUnlock()
    return w.config
}
```
---
## Secrets Management
### The Critical Gap in Configuration
**CRITICAL**: Never store secrets (API keys, database passwords, tokens) in configuration files, environment variables visible in process lists, or logs. This is a common security vulnerability in production Go applications.
### Secrets Architecture
```go
// internal/secrets/secrets.go
package secrets

import (
    "context"
    "fmt"
    "log/slog"
)

// SecretLoader defines how to retrieve secrets from various providers
type SecretLoader interface {
    LoadSecret(ctx context.Context, key string) (string, error)
    LoadSecrets(ctx context.Context, keys []string) (map[string]string, error)
}

// SecretConfig holds references to secret keys, not the secrets themselves
type SecretConfig struct {
    DatabasePasswordKey string `yaml:"database_password_key"`
    APIKeyKey           string `yaml:"api_key_key"`
    JWTSecretKey        string `yaml:"jwt_secret_key"`
}

// ResolvedSecrets contains the actual secret values
type ResolvedSecrets struct {
    DatabasePassword string
    APIKey           string
    JWTSecret        string
}

// String implements fmt.Stringer to prevent accidental logging of secrets
func (rs ResolvedSecrets) String() string {
    return "ResolvedSecrets{DatabasePassword:[REDACTED], APIKey:[REDACTED], JWTSecret:[REDACTED]}"
}

// SecretsResolver handles the loading and resolution of secrets
type SecretsResolver struct {
    loader SecretLoader
    logger *slog.Logger
}

func NewSecretsResolver(loader SecretLoader, logger *slog.Logger) *SecretsResolver {
    return &SecretsResolver{
        loader: loader,
        logger: logger,
    }
}

func (r *SecretsResolver) ResolveSecrets(ctx context.Context, config SecretConfig) (*ResolvedSecrets, error) {
    keys := []string{
        config.DatabasePasswordKey,
        config.APIKeyKey,
        config.JWTSecretKey,
    }

    secretValues, err := r.loader.LoadSecrets(ctx, keys)
    if err != nil {
        return nil, fmt.Errorf("failed to load secrets: %w", err)
    }

    r.logger.Info("secrets loaded successfully",
        "count", len(secretValues),
        // NEVER log the actual secret values
    )

    return &ResolvedSecrets{
        DatabasePassword: secretValues[config.DatabasePasswordKey],
        APIKey:           secretValues[config.APIKeyKey],
        JWTSecret:        secretValues[config.JWTSecretKey],
    }, nil
}
```
### HashiCorp Vault Implementation
```go
// internal/secrets/vault.go
package secrets

import (
    "context"
    "fmt"
    "log/slog"

    vault "github.com/hashicorp/vault/api"
)

type VaultLoader struct {
    client *vault.Client
    path   string
    logger *slog.Logger
}

func NewVaultLoader(address, token, path string, logger *slog.Logger) (*VaultLoader, error) {
    config := vault.DefaultConfig()
    config.Address = address

    client, err := vault.NewClient(config)
    if err != nil {
        return nil, fmt.Errorf("failed to create vault client: %w", err)
    }

    client.SetToken(token)

    return &VaultLoader{
        client: client,
        path:   path,
        logger: logger,
    }, nil
}

func (v *VaultLoader) LoadSecret(ctx context.Context, key string) (string, error) {
    secret, err := v.client.Logical().ReadWithContext(ctx, v.path)
    if err != nil {
        return "", fmt.Errorf("failed to read from vault: %w", err)
    }

    if secret == nil || secret.Data == nil {
        return "", fmt.Errorf("no data found at vault path: %s", v.path)
    }

    value, exists := secret.Data[key]
    if !exists {
        return "", fmt.Errorf("key %s not found in vault", key)
    }

    strValue, ok := value.(string)
    if !ok {
        return "", fmt.Errorf("key %s is not a string", key)
    }

    v.logger.Debug("loaded secret from vault", "key", key, "path", v.path)
    return strValue, nil
}

func (v *VaultLoader) LoadSecrets(ctx context.Context, keys []string) (map[string]string, error) {
    secret, err := v.client.Logical().ReadWithContext(ctx, v.path)
    if err != nil {
        return nil, fmt.Errorf("failed to read from vault: %w", err)
    }

    if secret == nil || secret.Data == nil {
        return nil, fmt.Errorf("no data found at vault path: %s", v.path)
    }

    result := make(map[string]string)
    for _, key := range keys {
        value, exists := secret.Data[key]
        if !exists {
            return nil, fmt.Errorf("key %s not found in vault", key)
        }

        strValue, ok := value.(string)
        if !ok {
            return nil, fmt.Errorf("key %s is not a string", key)
        }

        result[key] = strValue
    }

    v.logger.Debug("loaded secrets from vault",
        "count", len(result),
        "path", v.path,
    )

    return result, nil
}
```
### AWS Secrets Manager Implementation
```go
// internal/secrets/aws.go
package secrets

import (
    "context"
    "encoding/json"
    "fmt"
    "log/slog"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/secretsmanager"
)

type AWSSecretsLoader struct {
    client *secretsmanager.Client
    logger *slog.Logger
}

func NewAWSSecretsLoader(ctx context.Context, logger *slog.Logger) (*AWSSecretsLoader, error) {
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        return nil, fmt.Errorf("failed to load AWS config: %w", err)
    }

    return &AWSSecretsLoader{
        client: secretsmanager.NewFromConfig(cfg),
        logger: logger,
    }, nil
}

func (a *AWSSecretsLoader) LoadSecret(ctx context.Context, secretName string) (string, error) {
    input := &secretsmanager.GetSecretValueInput{
        SecretId: &secretName,
    }

    result, err := a.client.GetSecretValue(ctx, input)
    if err != nil {
        return "", fmt.Errorf("failed to get secret %s: %w", secretName, err)
    }

    a.logger.Debug("loaded secret from AWS", "secret_name", secretName)
    return *result.SecretString, nil
}

func (a *AWSSecretsLoader) LoadSecrets(ctx context.Context, secretNames []string) (map[string]string, error) {
    result := make(map[string]string)

    for _, secretName := range secretNames {
        value, err := a.LoadSecret(ctx, secretName)
        if err != nil {
            return nil, err
        }
        result[secretName] = value
    }

    return result, nil
}

// LoadSecretsAsJSON loads a single AWS secret that contains JSON with multiple keys
func (a *AWSSecretsLoader) LoadSecretsAsJSON(ctx context.Context, secretName string) (map[string]string, error) {
    secretValue, err := a.LoadSecret(ctx, secretName)
    if err != nil {
        return nil, err
    }

    var secrets map[string]string
    if err := json.Unmarshal([]byte(secretValue), &secrets); err != nil {
        return nil, fmt.Errorf("failed to parse secret JSON: %w", err)
    }

    a.logger.Debug("loaded JSON secrets from AWS",
        "secret_name", secretName,
        "keys_count", len(secrets),
    )

    return secrets, nil
}
```
### GCP Secret Manager Implementation
```go
// internal/secrets/gcp.go
package secrets

import (
    "context"
    "fmt"
    "log/slog"

    secretmanager "cloud.google.com/go/secretmanager/apiv1"
    "cloud.google.com/go/secretmanager/apiv1/secretmanagerpb"
)

type GCPSecretsLoader struct {
    client    *secretmanager.Client
    projectID string
    logger    *slog.Logger
}

func NewGCPSecretsLoader(ctx context.Context, projectID string, logger *slog.Logger) (*GCPSecretsLoader, error) {
    client, err := secretmanager.NewClient(ctx)
    if err != nil {
        return nil, fmt.Errorf("failed to create GCP secrets client: %w", err)
    }

    return &GCPSecretsLoader{
        client:    client,
        projectID: projectID,
        logger:    logger,
    }, nil
}

func (g *GCPSecretsLoader) LoadSecret(ctx context.Context, secretName string) (string, error) {
    req := &secretmanagerpb.AccessSecretVersionRequest{
        Name: fmt.Sprintf("projects/%s/secrets/%s/versions/latest", g.projectID, secretName),
    }

    result, err := g.client.AccessSecretVersion(ctx, req)
    if err != nil {
        return "", fmt.Errorf("failed to access secret %s: %w", secretName, err)
    }

    g.logger.Debug("loaded secret from GCP",
        "secret_name", secretName,
        "project_id", g.projectID,
    )

    return string(result.Payload.Data), nil
}

func (g *GCPSecretsLoader) LoadSecrets(ctx context.Context, secretNames []string) (map[string]string, error) {
    result := make(map[string]string)

    for _, secretName := range secretNames {
        value, err := g.LoadSecret(ctx, secretName)
        if err != nil {
            return nil, err
        }
        result[secretName] = value
    }

    return result, nil
}

func (g *GCPSecretsLoader) Close() error {
    return g.client.Close()
}
```
### Development/Testing Secret Loader
```go
// internal/secrets/env.go
package secrets

import (
    "context"
    "fmt"
    "log/slog"
    "os"
)

// EnvLoader loads secrets from environment variables (development only)
type EnvLoader struct {
    logger *slog.Logger
}

func NewEnvLoader(logger *slog.Logger) *EnvLoader {
    return &EnvLoader{logger: logger}
}

func (e *EnvLoader) LoadSecret(ctx context.Context, key string) (string, error) {
    value := os.Getenv(key)
    if value == "" {
        return "", fmt.Errorf("environment variable %s not set", key)
    }

    e.logger.Warn("loading secret from environment variable (development only)",
        "key", key,
    )

    return value, nil
}

func (e *EnvLoader) LoadSecrets(ctx context.Context, keys []string) (map[string]string, error) {
    result := make(map[string]string)

    for _, key := range keys {
        value, err := e.LoadSecret(ctx, key)
        if err != nil {
            return nil, err
        }
        result[key] = value
    }

    return result, nil
}
```
### Secure Logging with slog
```go
// internal/config/logging.go
package config

import (
    "fmt"
    "log/slog"
    "os"
    "strings"
)

// CreateSecureLogger creates a logger that redacts sensitive fields
func CreateSecureLogger() *slog.Logger {
    opts := &slog.HandlerOptions{
        Level: slog.LevelInfo,
        ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
            return redactSensitiveFields(a)
        },
    }

    handler := slog.NewJSONHandler(os.Stdout, opts)
    return slog.New(handler)
}

func redactSensitiveFields(attr slog.Attr) slog.Attr {
    // List of sensitive field names (case-insensitive)
    sensitiveFields := []string{
        "password", "secret", "token", "key", "auth",
        "credential", "private", "api_key", "jwt",
        "database_password", "db_password",
    }

    key := strings.ToLower(attr.Key)
    for _, sensitive := range sensitiveFields {
        if strings.Contains(key, sensitive) {
            return slog.String(attr.Key, "[REDACTED]")
        }
    }

    // CRITICAL: Detect and redact struct values that might contain secrets
    if attr.Value.Kind() == slog.KindAny {
        // Check whether the dynamic type name suggests secret-bearing data
        valueStr := fmt.Sprintf("%T", attr.Value.Any())
        if strings.Contains(strings.ToLower(valueStr), "secret") ||
            strings.Contains(strings.ToLower(valueStr), "config") {
            return slog.String(attr.Key, "[STRUCT_REDACTED]")
        }
    }

    // Also redact based on value patterns (for config structs)
    if str, ok := attr.Value.Any().(string); ok {
        // Redact anything that looks like a well-known API token
        if len(str) > 10 && (strings.HasPrefix(str, "sk-") ||
            strings.HasPrefix(str, "ghp_") ||
            strings.HasPrefix(str, "xoxb-")) {
            return slog.String(attr.Key, "[REDACTED]")
        }
    }

    return attr
}

// Example of safe vs unsafe logging practices
func ExampleSecureLogging(secrets *ResolvedSecrets, logger *slog.Logger) {
    // ❌ DANGEROUS: This could leak secrets to logs
    // logger.Info("loaded configuration", slog.Any("secrets", secrets))

    // ✅ SAFE: String() method prevents leakage
    logger.Info("loaded configuration", slog.String("secrets", secrets.String()))

    // ✅ SAFER: Log only non-sensitive metadata
    logger.Info("secrets loaded successfully",
        slog.Int("secret_count", 3),
        slog.String("status", "loaded"))

    // ✅ SAFEST: Individual logging with explicit redaction
    logger.Info("secret validation complete",
        slog.String("database_password_status", getSecretStatus(secrets.DatabasePassword)),
        slog.String("api_key_status", getSecretStatus(secrets.APIKey)),
        slog.String("jwt_secret_status", getSecretStatus(secrets.JWTSecret)))
}

func getSecretStatus(secret string) string {
    if secret == "" {
        return "missing"
    }
    return "present"
}
```
### Configuration Integration
```yaml
# config.yaml - No secrets here!
app:
  name: "myapp"
  port: 8080

database:
  host: "localhost"
  port: 5432
  database: "myapp"
  # Reference to secret, not the secret itself
  password_key: "database/myapp/password"

api:
  # Reference to secret location
  key_vault_path: "api/keys/third-party"

secrets:
  provider: "vault" # vault, aws, gcp, env
  vault:
    address: "https://vault.company.com"
    path: "secret/myapp"
  aws:
    region: "us-west-2"
  gcp:
    project_id: "myapp-prod-12345"
```
```go
// Wiring it all together
func setupApplication(ctx context.Context) (*App, error) {
    // Load non-secret configuration
    config, err := loadConfig()
    if err != nil {
        return nil, err
    }

    // Create secure logger
    logger := CreateSecureLogger()

    // Create secrets loader based on configuration
    var secretLoader secrets.SecretLoader
    switch config.Secrets.Provider {
    case "vault":
        secretLoader, err = secrets.NewVaultLoader(
            config.Secrets.Vault.Address,
            os.Getenv("VAULT_TOKEN"), // Only this env var is acceptable
            config.Secrets.Vault.Path,
            logger,
        )
    case "aws":
        secretLoader, err = secrets.NewAWSSecretsLoader(ctx, logger)
    case "gcp":
        secretLoader, err = secrets.NewGCPSecretsLoader(ctx, config.Secrets.GCP.ProjectID, logger)
    case "env":
        logger.Warn("using environment variables for secrets (development only)")
        secretLoader = secrets.NewEnvLoader(logger)
    default:
        return nil, fmt.Errorf("unknown secrets provider: %s", config.Secrets.Provider)
    }

    if err != nil {
        return nil, fmt.Errorf("failed to create secrets loader: %w", err)
    }

    // Load secrets
    resolver := secrets.NewSecretsResolver(secretLoader, logger)
    resolvedSecrets, err := resolver.ResolveSecrets(ctx, config.SecretConfig)
    if err != nil {
        return nil, fmt.Errorf("failed to resolve secrets: %w", err)
    }

    // Use secrets to configure dependencies
    db, err := setupDatabase(config.Database, resolvedSecrets.DatabasePassword)
    if err != nil {
        return nil, err
    }

    apiClient := setupAPIClient(config.API, resolvedSecrets.APIKey)

    return &App{
        DB:        db,
        APIClient: apiClient,
        Logger:    logger,
    }, nil
}
```
### Production Best Practices
1. **Secret Rotation**
```go
// Implement secret rotation for long-running applications
type RotatingSecretLoader struct {
    loader      SecretLoader
    cache       map[string]cachedSecret
    refreshRate time.Duration
    mu          sync.RWMutex
}

type cachedSecret struct {
    value     string
    expiresAt time.Time
}
```
2. **Secret Validation**
```go
func ValidateSecrets(secrets *ResolvedSecrets) error {
    if secrets.DatabasePassword == "" {
        return errors.New("database password is required")
    }
    if len(secrets.JWTSecret) < 32 {
        return errors.New("JWT secret must be at least 32 characters")
    }
    return nil
}
```
3. **Graceful Degradation**
```go
// Allow the application to start with some missing secrets
func (r *SecretsResolver) ResolveSecretsPartial(ctx context.Context, config SecretConfig) (*ResolvedSecrets, []error) {
    var errs []error
    secrets := &ResolvedSecrets{}

    if dbPass, err := r.loader.LoadSecret(ctx, config.DatabasePasswordKey); err != nil {
        errs = append(errs, fmt.Errorf("database password: %w", err))
    } else {
        secrets.DatabasePassword = dbPass
    }

    // Continue loading other secrets...

    return secrets, errs
}
```
### Security Checklist
- [ ] **NEVER** store secrets in configuration files or environment variables
- [ ] Use dedicated secret management systems (Vault, AWS Secrets Manager, GCP Secret Manager)
- [ ] Implement proper secret rotation for long-running applications
- [ ] Redact secrets from all log output using slog.ReplaceAttr
- [ ] Validate secret format and strength at startup
- [ ] Use encrypted connections to secret management systems
- [ ] Implement graceful degradation for non-critical secrets
- [ ] Audit secret access and implement proper RBAC
- [ ] Never commit secrets to version control
- [ ] Use different secrets per environment (dev/staging/prod)
---
## Context Propagation
### Context Usage Guidelines
**CRITICAL**: Context is for cancellation and request-scoped values that cross ALL layers, NOT for [dependency injection](go-practices-service-architecture.md#dependency-injection).
#### ✅ **Correct Context Usage**
```go
// internal/cli/context.go
package cli

import "context"

type contextKey string

const (
    requestIDKey contextKey = "request_id"
    traceIDKey   contextKey = "trace_id"
    deadlineKey  contextKey = "deadline"
)

// WithRequestID adds a request ID for tracing across all layers
func WithRequestID(ctx context.Context, requestID string) context.Context {
    return context.WithValue(ctx, requestIDKey, requestID)
}

// RequestIDFromContext retrieves the request ID
func RequestIDFromContext(ctx context.Context) string {
    if id, ok := ctx.Value(requestIDKey).(string); ok {
        return id
    }
    return ""
}

// WithTraceID adds a trace ID for distributed tracing
func WithTraceID(ctx context.Context, traceID string) context.Context {
    return context.WithValue(ctx, traceIDKey, traceID)
}

// TraceIDFromContext retrieves the trace ID
func TraceIDFromContext(ctx context.Context) string {
    if id, ok := ctx.Value(traceIDKey).(string); ok {
        return id
    }
    return ""
}
```
#### ❌ **Context Anti-Patterns to Avoid**
```go
// DON'T: Use context for dependency injection
func WithLogger(ctx context.Context, logger Logger) context.Context {
    return context.WithValue(ctx, "logger", logger) // WRONG!
}

// DON'T: Use context for configuration
func WithConfig(ctx context.Context, cfg *Config) context.Context {
    return context.WithValue(ctx, "config", cfg) // WRONG!
}

// DON'T: Use context for services
func WithService(ctx context.Context, svc Service) context.Context {
    return context.WithValue(ctx, "service", svc) // WRONG!
}
```
#### ✅ **Correct Dependency Injection**
```go
// Use explicit dependency injection instead
type CommandHandler struct {
    logger      logging.Logger
    config      *Config
    userService *UserService
}

func NewCommandHandler(logger logging.Logger, cfg *Config, userSvc *UserService) *CommandHandler {
    return &CommandHandler{
        logger:      logger,
        config:      cfg,
        userService: userSvc,
    }
}

func (h *CommandHandler) Execute(ctx context.Context, args []string) error {
    // Use injected dependencies; context only carries cancellation/request data
    requestID := RequestIDFromContext(ctx)
    h.logger.Info("executing command",
        slog.String("request_id", requestID),
        slog.String("operation", "command_execute"))

    return h.userService.ProcessCommand(ctx, args)
}
```
### Command Context Setup
```go
// cmd/myapp/main.go
func main() {
	ctx := context.Background()

	// Only add request-scoped values to context
	ctx = cli.WithRequestID(ctx, uuid.New().String())
	ctx = cli.WithTraceID(ctx, generateTraceID())

	// Setup signal handling for cancellation
	ctx, cancel := signal.NotifyContext(ctx, os.Interrupt, syscall.SIGTERM)
	defer cancel()

	// Initialize dependencies OUTSIDE context
	logger := logging.NewLogger(logging.Config{
		Level: slog.LevelInfo,
	})

	config, err := config.Load()
	if err != nil {
		logger.Error("failed to load config", slog.Any("error", err))
		os.Exit(1)
	}

	// Wire dependencies explicitly
	app, err := app.New(config, logger)
	if err != nil {
		logger.Error("failed to initialize app", slog.Any("error", err))
		os.Exit(1)
	}

	// Execute with context (for cancellation only)
	if err := app.ExecuteCommand(ctx, os.Args[1:]); err != nil {
		logger.Error("command failed",
			slog.Any("error", err),
			slog.String("operation", "cli_execute"),
			slog.String("request_id", cli.RequestIDFromContext(ctx)))
		os.Exit(1)
	}
}
```
---
## Graceful Shutdown
### Shutdown Coordination
```go
// internal/shutdown/shutdown.go
package shutdown

import (
	"context"
	"fmt"
	"sync"
)

// Hook is a shutdown function
type Hook func(context.Context) error

// Coordinator manages graceful shutdown
type Coordinator struct {
	mu    sync.Mutex
	hooks []Hook
}

func NewCoordinator() *Coordinator {
	return &Coordinator{}
}

// Register adds a shutdown hook
func (c *Coordinator) Register(hook Hook) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.hooks = append(c.hooks, hook)
}

// Shutdown executes all hooks in reverse order (LIFO)
func (c *Coordinator) Shutdown(ctx context.Context) error {
	c.mu.Lock()
	defer c.mu.Unlock()

	for i := len(c.hooks) - 1; i >= 0; i-- {
		if err := c.hooks[i](ctx); err != nil {
			return fmt.Errorf("shutdown hook %d failed: %w", i, err)
		}
	}

	return nil
}
```

```go
// Usage in application
func (app *App) Run(ctx context.Context) error {
	coordinator := shutdown.NewCoordinator()

	// Register HTTP server
	server := http.NewServer(app.config.Server)
	coordinator.Register(func(ctx context.Context) error {
		return server.Shutdown(ctx)
	})

	// Register database
	coordinator.Register(func(ctx context.Context) error {
		return app.db.Close()
	})

	// Register worker pool
	coordinator.Register(func(ctx context.Context) error {
		return app.workers.Shutdown(ctx)
	})

	// Start services
	errCh := make(chan error, 1)
	go func() { errCh <- server.ListenAndServe() }()

	// Wait for shutdown signal
	select {
	case err := <-errCh:
		return err
	case <-ctx.Done():
		shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		return coordinator.Shutdown(shutdownCtx)
	}
}
```
---
## CLI Testing
### Command Testing
```go
// cmd/myapp/user/create/create_test.go
package create_test

import (
	"context"
	"testing"

	"github.com/stretchr/testify/assert"

	"myapp/cmd/myapp/user/create"
	"myapp/internal/domain"
	"myapp/internal/service"
	"myapp/internal/test"
)

func TestCreateCommand(t *testing.T) {
	tests := []struct {
		name    string
		args    []string
		setup   func(*test.Harness)
		wantErr bool
		check   func(t *testing.T, h *test.Harness)
	}{
		{
			name: "create user successfully",
			args: []string{"--email", "test@example.com", "--name", "Test User"},
			setup: func(h *test.Harness) {
				h.UserService.CreateFunc = func(ctx context.Context, input service.CreateUserInput) (*domain.User, error) {
					return &domain.User{
						ID:    "user-123",
						Email: input.Email,
						Name:  input.Name,
					}, nil
				}
			},
			wantErr: false,
			check: func(t *testing.T, h *test.Harness) {
				assert.Contains(t, h.Stdout.String(), "User created successfully")
				assert.Contains(t, h.Stdout.String(), "user-123")
			},
		},
		{
			name:    "missing required flag",
			args:    []string{"--name", "Test User"},
			wantErr: true,
			check: func(t *testing.T, h *test.Harness) {
				assert.Contains(t, h.Stderr.String(), `required flag(s) "email" not set`)
			},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Create test harness
			h := test.NewHarness(t)
			defer h.Cleanup()

			if tt.setup != nil {
				tt.setup(h)
			}

			// Create command
			cmd := create.NewCmd()
			cmd.SetOut(h.Stdout)
			cmd.SetErr(h.Stderr)
			cmd.SetArgs(tt.args)

			// Execute
			err := cmd.ExecuteContext(h.Context())

			if tt.wantErr {
				assert.Error(t, err)
			} else {
				assert.NoError(t, err)
			}

			if tt.check != nil {
				tt.check(t, h)
			}
		})
	}
}
```
### Test Harness
```go
// internal/test/harness.go
package test

import (
	"bytes"
	"context"
	"testing"
)

// Harness provides test infrastructure
type Harness struct {
	t      *testing.T
	ctx    context.Context
	cancel context.CancelFunc

	Stdout *bytes.Buffer
	Stderr *bytes.Buffer

	// Mocked services
	UserService *MockUserService
	AuthService *MockAuthService
}

func NewHarness(t *testing.T) *Harness {
	ctx, cancel := context.WithCancel(context.Background())

	return &Harness{
		t:           t,
		ctx:         ctx,
		cancel:      cancel,
		Stdout:      new(bytes.Buffer),
		Stderr:      new(bytes.Buffer),
		UserService: &MockUserService{},
		AuthService: &MockAuthService{},
	}
}

func (h *Harness) Context() context.Context { return h.ctx }

func (h *Harness) Cleanup() { h.cancel() }
```
---
## Interactive Commands
### Prompts and Confirmation
```go
// internal/cli/prompt/prompt.go
package prompt

import (
	"bufio"
	"fmt"
	"os"
	"strings"

	"golang.org/x/term"
)

// Prompt asks for user input
func Prompt(question string, defaultValue string) (string, error) {
	if defaultValue != "" {
		fmt.Printf("%s [%s]: ", question, defaultValue)
	} else {
		fmt.Printf("%s: ", question)
	}

	reader := bufio.NewReader(os.Stdin)
	answer, err := reader.ReadString('\n')
	if err != nil {
		return "", err
	}

	answer = strings.TrimSpace(answer)
	if answer == "" && defaultValue != "" {
		return defaultValue, nil
	}

	return answer, nil
}

// Password prompts for a password without echo
func Password(prompt string) (string, error) {
	fmt.Print(prompt + ": ")

	password, err := term.ReadPassword(int(os.Stdin.Fd()))
	if err != nil {
		return "", err
	}

	fmt.Println() // New line after password
	return string(password), nil
}

// Confirm asks for yes/no confirmation
func Confirm(question string, defaultYes bool) (bool, error) {
	suffix := " [y/N]: "
	if defaultYes {
		suffix = " [Y/n]: "
	}

	fmt.Print(question + suffix)

	reader := bufio.NewReader(os.Stdin)
	answer, err := reader.ReadString('\n')
	if err != nil {
		return false, err
	}

	answer = strings.ToLower(strings.TrimSpace(answer))
	if answer == "" {
		return defaultYes, nil
	}

	return answer == "y" || answer == "yes", nil
}

// Select presents options to choose from
func Select(question string, options []string) (int, error) {
	fmt.Println(question)
	for i, option := range options {
		fmt.Printf("  %d) %s\n", i+1, option)
	}

	for {
		answer, err := Prompt("Enter choice", "")
		if err != nil {
			return -1, err
		}

		var choice int
		if _, err := fmt.Sscanf(answer, "%d", &choice); err != nil {
			fmt.Println("Please enter a valid number")
			continue
		}

		if choice < 1 || choice > len(options) {
			fmt.Printf("Please enter a number between 1 and %d\n", len(options))
			continue
		}

		return choice - 1, nil
	}
}
```
### Progress Indicators
```go
// internal/cli/progress/progress.go
package progress

import (
	"fmt"
	"io"
	"strings"
	"time"
)

// Spinner shows activity
type Spinner struct {
	frames []string
	delay  time.Duration
	writer io.Writer
	stop   chan struct{}
}

func NewSpinner(w io.Writer) *Spinner {
	return &Spinner{
		frames: []string{"⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"},
		delay:  100 * time.Millisecond,
		writer: w,
		stop:   make(chan struct{}),
	}
}

func (s *Spinner) Start(message string) {
	go func() {
		for i := 0; ; i++ {
			select {
			case <-s.stop:
				fmt.Fprintf(s.writer, "\r%s\n", strings.Repeat(" ", len(message)+2))
				return
			default:
				frame := s.frames[i%len(s.frames)]
				fmt.Fprintf(s.writer, "\r%s %s", frame, message)
				time.Sleep(s.delay)
			}
		}
	}()
}

func (s *Spinner) Stop() { close(s.stop) }

// ProgressBar shows completion
type ProgressBar struct {
	total   int
	current int
	width   int
	writer  io.Writer
}

func NewProgressBar(total int, w io.Writer) *ProgressBar {
	return &ProgressBar{
		total:  total,
		width:  40,
		writer: w,
	}
}

func (p *ProgressBar) Update(current int) {
	p.current = current
	percent := float64(p.current) / float64(p.total)
	filled := int(percent * float64(p.width))

	bar := strings.Repeat("█", filled) + strings.Repeat("░", p.width-filled)
	fmt.Fprintf(p.writer, "\r[%s] %3.0f%% (%d/%d)", bar, percent*100, p.current, p.total)

	if p.current >= p.total {
		fmt.Fprintln(p.writer)
	}
}
```
### Best Practices Summary
1. **Hierarchical commands** in separate packages
2. **No global state** except in main()
3. **Explicit configuration** precedence
4. **Context propagation** through all layers
5. **Graceful shutdown** with timeout
6. **[Table-driven tests](go-practices-testing.md#table-driven-tests)** for commands
7. **Interactive prompts** with defaults
8. **Progress feedback** for long operations
9. **Structured errors** with context
10. **Environment-aware** configuration
---
## Interactive CLI Libraries
### Modern CLI Library Comparison
Go's ecosystem offers several excellent libraries for building interactive command-line interfaces. Each has its strengths for different use cases.
| Library | Best For | Learning Curve | Features |
|---------|----------|----------------|----------|
| **Bubble Tea** | Full TUI apps | Steep | Complete framework, reactive |
| **Cobra + Promptui** | Traditional CLIs | Gentle | Prompts with existing CLIs |
| **Survey** | Forms & wizards | Easy | Rich prompts, validation |
| **Huh** | Modern forms | Easy | Bubble Tea-based, simpler API |
| **Gum** | Shell scripts | Minimal | Standalone binary |
### Bubble Tea - Full TUI Framework
**Best for**: Terminal user interfaces, dashboards, interactive tools
```go
// internal/tui/app.go
package tui

import (
	"fmt"

	"github.com/charmbracelet/bubbles/list"
	"github.com/charmbracelet/bubbles/textinput"
	tea "github.com/charmbracelet/bubbletea"
	"github.com/charmbracelet/lipgloss"
)

// Model represents application state
type Model struct {
	choices  []string
	cursor   int
	selected map[int]struct{}
	quitting bool

	// Components
	textInput textinput.Model
	list      list.Model

	// Styling
	styles Styles
}

type Styles struct {
	Title    lipgloss.Style
	Selected lipgloss.Style
	Normal   lipgloss.Style
	Help     lipgloss.Style
}

func NewModel() Model {
	// Initialize components
	ti := textinput.New()
	ti.Placeholder = "Search..."
	ti.Focus()

	items := []list.Item{
		item{title: "Create user", desc: "Add a new user to the system"},
		item{title: "List users", desc: "Show all users"},
		item{title: "Delete user", desc: "Remove a user"},
	}

	l := list.New(items, itemDelegate{}, 0, 0)
	l.Title = "What would you like to do?"

	return Model{
		textInput: ti,
		list:      l,
		selected:  make(map[int]struct{}),
		styles:    DefaultStyles(),
	}
}

// Init is called once when the program starts
func (m Model) Init() tea.Cmd {
	return textinput.Blink
}

// Update handles events and updates state
func (m Model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case tea.KeyMsg:
		switch msg.String() {
		case "ctrl+c", "q":
			m.quitting = true
			return m, tea.Quit

		case "enter":
			// Handle selection
			if i, ok := m.list.SelectedItem().(item); ok {
				return m, tea.Batch(executeCommand(i.title), tea.Quit)
			}
		}

	case tea.WindowSizeMsg:
		m.list.SetWidth(msg.Width)
		m.list.SetHeight(msg.Height - 4)
	}

	// Update components
	var cmd tea.Cmd
	m.list, cmd = m.list.Update(msg)
	return m, cmd
}

// View renders the UI
func (m Model) View() string {
	if m.quitting {
		return "Goodbye!\n"
	}

	return fmt.Sprintf(
		"%s\n\n%s\n\n%s",
		m.styles.Title.Render("User Management"),
		m.list.View(),
		m.styles.Help.Render("Press q to quit"),
	)
}

// Usage
func RunTUI() error {
	p := tea.NewProgram(NewModel(), tea.WithAltScreen())
	_, err := p.Run()
	return err
}
```
### Huh - Modern Form Library
**Best for**: Forms, configuration wizards, user input flows
```go
// internal/cli/forms/user.go
package forms

import "github.com/charmbracelet/huh"

// UserFormData holds form results
type UserFormData struct {
	Name        string
	Email       string
	Role        string
	Department  string
	SendWelcome bool
	Password    string
}

// NewUserForm creates an interactive user creation form
func NewUserForm() (*UserFormData, error) {
	var data UserFormData

	form := huh.NewForm(
		huh.NewGroup(
			huh.NewInput().
				Title("Full Name").
				Description("User's full name").
				Placeholder("John Doe").
				Value(&data.Name).
				Validate(validateName),

			huh.NewInput().
				Title("Email").
				Description("User's email address").
				Placeholder("john@example.com").
				Value(&data.Email).
				Validate(validateEmail),
		),

		huh.NewGroup(
			huh.NewSelect[string]().
				Title("Role").
				Options(
					huh.NewOption("Admin", "admin"),
					huh.NewOption("User", "user"),
					huh.NewOption("Viewer", "viewer"),
				).
				Value(&data.Role),

			huh.NewSelect[string]().
				Title("Department").
				Options(
					huh.NewOption("Engineering", "eng"),
					huh.NewOption("Sales", "sales"),
					huh.NewOption("Marketing", "marketing"),
					huh.NewOption("Support", "support"),
				).
				Value(&data.Department),
		),

		huh.NewGroup(
			huh.NewPassword().
				Title("Password").
				Description("Minimum 8 characters").
				Value(&data.Password).
				Validate(validatePassword),

			huh.NewConfirm().
				Title("Send welcome email?").
				Value(&data.SendWelcome).
				Affirmative("Yes").
				Negative("No"),
		),
	)

	if err := form.Run(); err != nil {
		return nil, err
	}

	return &data, nil
}

// Multi-step wizard
func NewProjectWizard() (*ProjectData, error) {
	var data ProjectData

	// Step 1: Basic Info
	basicForm := huh.NewForm(
		huh.NewGroup(
			huh.NewInput().
				Title("Project Name").
				Value(&data.Name),
			huh.NewText().
				Title("Description").
				Lines(3).
				Value(&data.Description),
		).Title("Basic Information"),
	)

	// Step 2: Configuration
	configForm := huh.NewForm(
		huh.NewGroup(
			huh.NewSelect[string]().
				Title("Database").
				Options(
					huh.NewOption("PostgreSQL", "postgres"),
					huh.NewOption("MySQL", "mysql"),
					huh.NewOption("SQLite", "sqlite"),
				).
				Value(&data.Database),
			huh.NewMultiSelect[string]().
				Title("Features").
				Options(
					huh.NewOption("Authentication", "auth"),
					huh.NewOption("API", "api"),
					huh.NewOption("Admin Panel", "admin"),
					huh.NewOption("Metrics", "metrics"),
				).
				Value(&data.Features),
		).Title("Configuration"),
	)

	// Run forms in sequence
	if err := basicForm.Run(); err != nil {
		return nil, err
	}
	if err := configForm.Run(); err != nil {
		return nil, err
	}

	return &data, nil
}
```
### Survey - Established Interactive Prompts
**Best for**: Quick prompts, existing CLI enhancement, simple interactions
```go
// internal/cli/prompts/survey.go
package prompts

import (
	"errors"
	"regexp"

	"github.com/AlecAivazis/survey/v2"
)

// CollectUserInfo gathers user information interactively
func CollectUserInfo() (*UserInfo, error) {
	var info UserInfo

	questions := []*survey.Question{
		{
			Name: "name",
			Prompt: &survey.Input{
				Message: "What is your name?",
				Help:    "Your full name",
			},
			Validate: survey.Required,
		},
		{
			Name: "email",
			Prompt: &survey.Input{
				Message: "What is your email?",
			},
			Validate: survey.ComposeValidators(
				survey.Required,
				validateEmailSurvey,
			),
		},
		{
			Name: "password",
			Prompt: &survey.Password{
				Message: "Choose a password:",
			},
			Validate: survey.MinLength(8),
		},
		{
			Name: "role",
			Prompt: &survey.Select{
				Message: "Choose your role:",
				Options: []string{"admin", "user", "viewer"},
				Default: "user",
			},
		},
		{
			Name: "departments",
			Prompt: &survey.MultiSelect{
				Message: "Select departments:",
				Options: []string{"Engineering", "Sales", "Marketing", "Support"},
			},
		},
		{
			Name: "subscribe",
			Prompt: &survey.Confirm{
				Message: "Subscribe to newsletter?",
				Default: true,
			},
		},
	}

	err := survey.Ask(questions, &info)
	return &info, err
}

// Custom validation
func validateEmailSurvey(val interface{}) error {
	str, ok := val.(string)
	if !ok {
		return errors.New("email must be a string")
	}

	emailRegex := regexp.MustCompile(`^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$`)
	if !emailRegex.MatchString(str) {
		return errors.New("invalid email format")
	}

	return nil
}

// Complex editor prompt
func EditConfiguration(current string) (string, error) {
	prompt := &survey.Editor{
		Message:       "Edit configuration",
		Default:       current,
		FileName:      "config.yaml",
		HideDefault:   true,
		AppendDefault: true,
	}

	var result string
	err := survey.AskOne(prompt, &result)
	return result, err
}
```
### Promptui - Lightweight Prompts
**Best for**: Simple prompts, minimal dependencies, Cobra integration
```go
// internal/cli/prompts/promptui.go
package prompts

import (
	"errors"
	"strings"

	"github.com/manifoldco/promptui"
)

// SelectAction presents an action menu
func SelectAction() (string, error) {
	prompt := promptui.Select{
		Label: "Select Action",
		Items: []string{
			"Create User",
			"List Users",
			"Update User",
			"Delete User",
			"Exit",
		},
		Templates: &promptui.SelectTemplates{
			Label:    "{{ . }}?",
			Active:   "→ {{ . | cyan }}",
			Inactive: "  {{ . | white }}",
			Selected: "✓ {{ . | green | bold }}",
		},
		Size: 5,
	}

	_, result, err := prompt.Run()
	return result, err
}

// GetUserInput collects user data with validation
func GetUserInput() (*User, error) {
	// Name prompt
	namePrompt := promptui.Prompt{
		Label: "Name",
		Validate: func(input string) error {
			if len(input) < 3 {
				return errors.New("name must be at least 3 characters")
			}
			return nil
		},
	}

	name, err := namePrompt.Run()
	if err != nil {
		return nil, err
	}

	// Email prompt with custom template
	emailPrompt := promptui.Prompt{
		Label: "Email",
		Templates: &promptui.PromptTemplates{
			Prompt:  "{{ . }} ",
			Valid:   "{{ . | green }} ",
			Invalid: "{{ . | red }} ",
			Success: "{{ . | bold }} ",
		},
		Validate: validateEmail,
	}

	email, err := emailPrompt.Run()
	if err != nil {
		return nil, err
	}

	// Password with mask
	passwordPrompt := promptui.Prompt{
		Label: "Password",
		Mask:  '*',
		Validate: func(input string) error {
			if len(input) < 8 {
				return errors.New("password must be at least 8 characters")
			}
			return nil
		},
	}

	password, err := passwordPrompt.Run()
	if err != nil {
		return nil, err
	}

	return &User{
		Name:     name,
		Email:    email,
		Password: password,
	}, nil
}

// SearchUsers selects a user with fuzzy filtering
func SearchUsers(users []User) (*User, error) {
	templates := &promptui.SelectTemplates{
		Label:    "{{ . }}?",
		Active:   "→ {{ .Name | cyan }} ({{ .Email | faint }})",
		Inactive: "  {{ .Name | white }} ({{ .Email | faint }})",
		Selected: "✓ {{ .Name | green | bold }}",
		Details: `
--------- User Details ----------
{{ "Name:" | faint }}	{{ .Name }}
{{ "Email:" | faint }}	{{ .Email }}
{{ "Role:" | faint }}	{{ .Role }}
{{ "Created:" | faint }}	{{ .CreatedAt.Format "2006-01-02" }}`,
	}

	searcher := func(input string, index int) bool {
		user := users[index]
		name := strings.ReplaceAll(strings.ToLower(user.Name), " ", "")
		email := strings.ReplaceAll(strings.ToLower(user.Email), " ", "")
		input = strings.ReplaceAll(strings.ToLower(input), " ", "")

		return strings.Contains(name, input) || strings.Contains(email, input)
	}

	prompt := promptui.Select{
		Label:     "Search Users",
		Items:     users,
		Templates: templates,
		Size:      10,
		Searcher:  searcher,
	}

	i, _, err := prompt.Run()
	if err != nil {
		return nil, err
	}

	return &users[i], nil
}
```
### Integrating with Cobra
```go
// cmd/myapp/user/create/create.go
package create

import (
	"fmt"

	"github.com/spf13/cobra"

	"myapp/internal/cli/forms"
	"myapp/internal/service"
)

func NewCmd() *cobra.Command {
	var interactive bool

	cmd := &cobra.Command{
		Use:   "create",
		Short: "Create a new user",
		RunE: func(cmd *cobra.Command, args []string) error {
			if interactive {
				return runInteractive(cmd)
			}
			return runWithFlags(cmd)
		},
	}

	// Flags for non-interactive mode
	cmd.Flags().StringP("name", "n", "", "User name")
	cmd.Flags().StringP("email", "e", "", "User email")
	cmd.Flags().StringP("role", "r", "user", "User role")
	cmd.Flags().BoolVarP(&interactive, "interactive", "i", false, "Interactive mode")

	return cmd
}

func runInteractive(cmd *cobra.Command) error {
	// Use Huh for a rich form experience
	data, err := forms.NewUserForm()
	if err != nil {
		return err
	}

	// Create user with the collected data.
	// NOTE: pulling the service from context is shown for brevity only;
	// prefer injecting it into NewCmd explicitly (see Context Usage above).
	svc := cmd.Context().Value("userService").(*service.UserService)
	user, err := svc.CreateUser(cmd.Context(), service.CreateUserInput{
		Name:     data.Name,
		Email:    data.Email,
		Password: data.Password,
		Role:     data.Role,
	})
	if err != nil {
		return fmt.Errorf("failed to create user: %w", err)
	}

	fmt.Printf("✓ User created successfully\n")
	fmt.Printf("  ID: %s\n", user.ID)
	fmt.Printf("  Email: %s\n", user.Email)

	return nil
}
```
### Choosing the Right Library
#### Decision Matrix
| Use Case | Recommended | Why |
|----------|-------------|-----|
| **Full TUI application** | Bubble Tea | Complete framework, reactive updates |
| **Configuration wizard** | Huh | Modern UX, form groups, validation |
| **Quick prompts** | Promptui | Lightweight, easy Cobra integration |
| **Complex forms** | Survey | Mature, many prompt types |
| **Shell scripting** | Gum | Standalone binary, no Go needed |
#### Integration Examples
1. **Bubble Tea for Git-like TUI**

```go
// Full terminal UI with panels, real-time updates
tea.NewProgram(git.NewModel())
```

2. **Huh for Initial Setup**

```go
// Multi-step configuration wizard
config, _ := setupWizard.Run()
```

3. **Survey for Missing Flags**

```go
// Prompt for missing required flags
if email == "" {
	survey.AskOne(&survey.Input{Message: "Email:"}, &email)
}
```

4. **Promptui for Confirmations**

```go
// Dangerous operation confirmations
prompt := promptui.Prompt{
	Label:     "Delete user? This cannot be undone",
	IsConfirm: true,
}
```
### Best Practices
1. **Provide Non-Interactive Mode**: Always offer flag-based alternatives
2. **Validate Early**: Validate input before expensive operations
3. **Show Progress**: Use spinners for long operations
4. **Handle Interrupts**: Gracefully handle Ctrl+C
5. **Remember Preferences**: Save common selections
6. **Test Interactive Flows**: Use expect-style testing
7. **Accessible Defaults**: Ensure keyboard-only navigation
8. **Clear Help Text**: Provide context for each prompt
9. **Batch Operations**: Group related prompts
10. **Error Recovery**: Allow retry/edit on validation failure
## Quick Reference Checklist
### Command Structure & Organization
- [ ] Organize commands hierarchically in separate packages
- [ ] Place each command in its own package under `cmd/myapp/`
- [ ] Keep minimal logic in `main.go` (just wiring and startup)
- [ ] Use `cobra.Command` constructor functions (`NewCmd()`)
- [ ] Avoid global command state - use package-scoped variables
- [ ] Group related commands under parent commands
### Configuration Management
- [ ] Use Viper for configuration loading with proper precedence
- [ ] Support config files, environment variables, and flags
- [ ] Set sensible defaults for all configuration options
- [ ] Validate configuration early in `PersistentPreRunE`
- [ ] Use structured config types with validation tags
- [ ] Support environment-specific config overrides
### Secrets Management (CRITICAL for Production)
- [ ] **NEVER** store secrets in config files or environment variables
- [ ] Use dedicated secret management systems (Vault, AWS Secrets Manager, GCP)
- [ ] Implement SecretLoader interface for different providers
- [ ] Store only secret references/keys in configuration files
- [ ] Redact secrets from all log output using slog.ReplaceAttr
- [ ] Validate secret format and strength at startup
- [ ] Implement secret rotation for long-running applications
- [ ] Use different secrets per environment (dev/staging/prod)
- [ ] Never commit secrets to version control
- [ ] Audit secret access and implement proper RBAC
### Context Usage (Critical Guidelines)
- [ ] Use context ONLY for cancellation and request-scoped tracing data
- [ ] Never use context for dependency injection (use explicit DI)
- [ ] Pass logger, config, and services via struct fields, not context
- [ ] Add request ID and trace ID to context for cross-layer tracking
- [ ] Propagate context through all service calls
- [ ] Handle context cancellation gracefully in long operations
### Graceful Shutdown
- [ ] Register shutdown hooks in LIFO order
- [ ] Use `signal.NotifyContext()` for signal handling
- [ ] Implement timeouts for shutdown operations (30s typical)
- [ ] Close resources in reverse dependency order
- [ ] Handle shutdown errors gracefully
- [ ] Test shutdown behavior under load
### Interactive CLI Implementation
- [ ] Choose appropriate library (Bubble Tea, Huh, Survey, Promptui)
- [ ] Provide non-interactive mode with flags as fallback
- [ ] Validate user input early and clearly
- [ ] Handle Ctrl+C interruption gracefully
- [ ] Show progress indicators for long operations
- [ ] Remember user preferences when possible
### CLI Testing
- [ ] Create test harness with mocked dependencies
- [ ] Test both success and error scenarios
- [ ] Capture stdout/stderr for output validation
- [ ] Test command flag parsing and validation
- [ ] Use table-driven tests for multiple scenarios
- [ ] Mock external dependencies (services, APIs)
### Command Implementation Best Practices
- [ ] Keep command handlers thin - delegate to services
- [ ] Use structured error handling with proper exit codes
- [ ] Provide helpful error messages with context
- [ ] Implement proper flag validation and parsing
- [ ] Support common patterns (--help, --version, --dry-run)
- [ ] Use consistent naming conventions across commands
### Advanced Features
- [ ] Implement shell completion for better UX
- [ ] Add progress bars and spinners for long operations
- [ ] Support configuration file generation/validation
- [ ] Implement proper logging configuration per command
- [ ] Add debugging flags (--verbose, --debug)
- [ ] Support output format options (JSON, YAML, table)
### Security & Reliability
- [ ] Validate all user inputs and file paths
- [ ] Handle sensitive data (passwords, tokens) securely
- [ ] Use secure defaults for all operations
- [ ] Implement proper permission checks
- [ ] Log security-relevant operations
- [ ] Handle network timeouts and retries appropriately
---
# 9. Common Patterns Reference
## Table of Contents
1. [Functional Options Pattern](#functional-options-pattern)
2. [Builder Pattern](#builder-pattern)
3. [Strategy Pattern](#strategy-pattern)
4. [Chain of Responsibility](#chain-of-responsibility)
5. [Observer Pattern](#observer-pattern)
6. [Factory Pattern](#factory-pattern)
7. [Quick Reference Card](#quick-reference-card)
---
## Functional Options Pattern
### Basic Implementation
```go
// internal/client/options.go
package client

import (
	"net/http"
	"time"
)

// Client represents a configurable HTTP client
type Client struct {
	baseURL    string
	timeout    time.Duration
	maxRetries int
	transport  http.RoundTripper
	headers    map[string]string
	logger     Logger
}

// Option configures a Client
type Option func(*Client)

// NewClient creates a client with options
func NewClient(baseURL string, opts ...Option) *Client {
	// Default configuration
	c := &Client{
		baseURL:    baseURL,
		timeout:    30 * time.Second,
		maxRetries: 3,
		headers:    make(map[string]string),
		transport:  http.DefaultTransport,
	}

	// Apply options
	for _, opt := range opts {
		opt(c)
	}

	return c
}

// Option constructors

// WithTimeout sets the request timeout
func WithTimeout(timeout time.Duration) Option {
	return func(c *Client) { c.timeout = timeout }
}

// WithRetries sets max retry attempts
func WithRetries(retries int) Option {
	return func(c *Client) { c.maxRetries = retries }
}

// WithTransport sets a custom transport
func WithTransport(transport http.RoundTripper) Option {
	return func(c *Client) { c.transport = transport }
}

// WithHeader adds a default header
func WithHeader(key, value string) Option {
	return func(c *Client) { c.headers[key] = value }
}

// WithLogger sets the logger
func WithLogger(logger Logger) Option {
	return func(c *Client) { c.logger = logger }
}

// Usage
client := NewClient("https://api.example.com",
	WithTimeout(60*time.Second),
	WithRetries(5),
	WithHeader("X-API-Key", "secret"),
	WithLogger(logger),
)
```
### Advanced Options with Validation
```go
// Option variant that can fail: redefine Option to return an error
type Option func(*Client) error

// Option with validation
func WithRateLimit(rps int) Option {
	return func(c *Client) error {
		if rps <= 0 {
			return fmt.Errorf("rate limit must be positive: %d", rps)
		}
		c.rateLimit = rate.NewLimiter(rate.Limit(rps), rps*2)
		return nil
	}
}

// Constructor with error handling
func NewClient(baseURL string, opts ...Option) (*Client, error) {
	c := &Client{
		baseURL: baseURL,
		// defaults...
	}

	for _, opt := range opts {
		if err := opt(c); err != nil {
			return nil, fmt.Errorf("apply option: %w", err)
		}
	}

	return c, nil
}
```
---
## Builder Pattern
### Fluent Builder
```go
// internal/query/builder.go
package query

import (
	"fmt"
	"strings"
)

// QueryBuilder builds SQL queries
type QueryBuilder struct {
	table      string
	columns    []string
	conditions []condition
	joins      []join
	orderBy    []order
	limit      *int
	offset     *int
	errors     []error
}

type condition struct {
	column   string
	operator string
	value    interface{}
}

type join struct {
	joinType string
	table    string
	on       string
}

type order struct {
	column string
	desc   bool
}

// NewQueryBuilder creates a new builder
func NewQueryBuilder(table string) *QueryBuilder {
	return &QueryBuilder{table: table}
}

// Select specifies columns
func (b *QueryBuilder) Select(columns ...string) *QueryBuilder {
	b.columns = append(b.columns, columns...)
	return b
}

// Where adds a condition
func (b *QueryBuilder) Where(column, operator string, value interface{}) *QueryBuilder {
	b.conditions = append(b.conditions, condition{
		column:   column,
		operator: operator,
		value:    value,
	})
	return b
}

// Join adds an inner join
func (b *QueryBuilder) Join(table, on string) *QueryBuilder {
	b.joins = append(b.joins, join{joinType: "JOIN", table: table, on: on})
	return b
}

// LeftJoin adds a left join
func (b *QueryBuilder) LeftJoin(table, on string) *QueryBuilder {
	b.joins = append(b.joins, join{joinType: "LEFT JOIN", table: table, on: on})
	return b
}

// OrderBy adds ordering
func (b *QueryBuilder) OrderBy(column string, desc ...bool) *QueryBuilder {
	isDesc := len(desc) > 0 && desc[0]
	b.orderBy = append(b.orderBy, order{column: column, desc: isDesc})
	return b
}

// Limit sets the result limit
func (b *QueryBuilder) Limit(limit int) *QueryBuilder {
	b.limit = &limit
	return b
}

// Offset sets the result offset
func (b *QueryBuilder) Offset(offset int) *QueryBuilder {
	b.offset = &offset
	return b
}

// Build generates the SQL query
func (b *QueryBuilder) Build() (string, []interface{}, error) {
	if len(b.errors) > 0 {
		return "", nil, b.errors[0]
	}

	var parts []string
	var args []interface{}
	argIndex := 1

	// SELECT clause
	selectClause := "*"
	if len(b.columns) > 0 {
		selectClause = strings.Join(b.columns, ", ")
	}
	parts = append(parts, fmt.Sprintf("SELECT %s FROM %s", selectClause, b.table))

	// JOIN clauses
	for _, j := range b.joins {
		parts = append(parts, fmt.Sprintf("%s %s ON %s", j.joinType, j.table, j.on))
	}

	// WHERE clause
	if len(b.conditions) > 0 {
		var where []string
		for _, c := range b.conditions {
			where = append(where, fmt.Sprintf("%s %s $%d", c.column, c.operator, argIndex))
			args = append(args, c.value)
			argIndex++
		}
		parts = append(parts, "WHERE "+strings.Join(where, " AND "))
	}

	// ORDER BY clause
	if len(b.orderBy) > 0 {
		var orderParts []string
		for _, o := range b.orderBy {
			dir := "ASC"
			if o.desc {
				dir = "DESC"
			}
			orderParts = append(orderParts, fmt.Sprintf("%s %s", o.column, dir))
		}
		parts = append(parts, "ORDER BY "+strings.Join(orderParts, ", "))
	}

	// LIMIT/OFFSET
	if b.limit != nil {
		parts = append(parts, fmt.Sprintf("LIMIT %d", *b.limit))
	}
	if b.offset != nil {
		parts = append(parts, fmt.Sprintf("OFFSET %d", *b.offset))
	}

	return strings.Join(parts, " "), args, nil
}

// Usage
query, args, err := NewQueryBuilder("users").
	Select("id", "name", "email").
	LeftJoin("profiles", "profiles.user_id = users.id").
	Where("status", "=", "active").
	Where("created_at", ">", time.Now().Add(-30*24*time.Hour)).
	OrderBy("created_at", true).
	Limit(10).
	Build()
```
---
## Strategy Pattern
### Payment Processing Example
go
// internal/payment/strategy.go
package payment
import ( "context" "fmt" )
// PaymentStrategy defines payment processing interface type PaymentStrategy interface { Name() string Validate(amount decimal.Decimal, details map[string]string) error Process(ctx context.Context, amount decimal.Decimal, details map[string]string) (*Transaction, error) Refund(ctx context.Context, transactionID string, amount decimal.Decimal) error }
// PaymentProcessor uses strategies type PaymentProcessor struct { strategies map[string]PaymentStrategy logger Logger }
func NewPaymentProcessor(logger Logger) *PaymentProcessor { return &PaymentProcessor{ strategies: make(map[string]PaymentStrategy), logger: logger, } }
func (p *PaymentProcessor) RegisterStrategy(strategy PaymentStrategy) { p.strategies[strategy.Name()] = strategy }
func (p *PaymentProcessor) Process(ctx context.Context, method string, amount decimal.Decimal, details map[string]string) (*Transaction, error) {
	strategy, exists := p.strategies[method]
	if !exists {
		return nil, fmt.Errorf("unsupported payment method: %s", method)
	}

	// Validate
	if err := strategy.Validate(amount, details); err != nil {
		return nil, fmt.Errorf("validation failed: %w", err)
	}

	// Process
	p.logger.Info("processing payment",
		slog.String("operation", method),
		slog.String("amount", amount.String()))

	tx, err := strategy.Process(ctx, amount, details)
	if err != nil {
		p.logger.Error("payment failed",
			slog.String("operation", method),
			slog.Any("error", err))
		return nil, err
	}

	p.logger.Info("payment successful", slog.String("transaction_id", tx.ID))

	return tx, nil
}
// Concrete strategies
// CreditCardStrategy processes credit cards type CreditCardStrategy struct { gateway Gateway }
func (s *CreditCardStrategy) Name() string { return "credit_card" }
func (s *CreditCardStrategy) Validate(amount decimal.Decimal, details map[string]string) error { // Validate card number cardNumber, ok := details["card_number"] if !ok || !isValidCardNumber(cardNumber) { return errors.New("invalid card number") }
// Validate expiry expiry, ok := details["expiry"] if !ok || !isValidExpiry(expiry) { return errors.New("invalid expiry date") }
// Validate CVV cvv, ok := details["cvv"] if !ok || len(cvv) < 3 { return errors.New("invalid CVV") }
return nil }
func (s *CreditCardStrategy) Process(ctx context.Context, amount decimal.Decimal, details map[string]string) (*Transaction, error) {
	return s.gateway.Charge(ctx, GatewayRequest{
		Amount:     amount,
		CardNumber: details["card_number"],
		Expiry:     details["expiry"],
		CVV:        details["cvv"],
	})
}
// PayPalStrategy processes PayPal payments type PayPalStrategy struct { client PayPalClient }
func (s *PayPalStrategy) Name() string { return "paypal" }
func (s *PayPalStrategy) Validate(amount decimal.Decimal, details map[string]string) error { email, ok := details["email"] if !ok || !isValidEmail(email) { return errors.New("invalid PayPal email") } return nil }
// Usage processor := NewPaymentProcessor(logger) processor.RegisterStrategy(&CreditCardStrategy{gateway: stripeGateway}) processor.RegisterStrategy(&PayPalStrategy{client: paypalClient}) processor.RegisterStrategy(&CryptoStrategy{wallet: cryptoWallet})
tx, err := processor.Process(ctx, "credit_card", amount, map[string]string{ "card_number": "4111111111111111", "expiry": "12/25", "cvv": "123", })
---
## Chain of Responsibility
### Request Validation Chain
go
// internal/validation/chain.go
package validation
import (
	"context"
	"fmt"
	"log/slog"
	"time"
)
// Handler processes or passes to next handler type Handler interface { Handle(ctx context.Context, request Request) error SetNext(Handler) Handler }
// BaseHandler provides common functionality type BaseHandler struct { next Handler }
func (h *BaseHandler) SetNext(next Handler) Handler { h.next = next return next }
func (h *BaseHandler) handleNext(ctx context.Context, request Request) error { if h.next != nil { return h.next.Handle(ctx, request) } return nil }
// Concrete handlers
// AuthenticationHandler checks auth type AuthenticationHandler struct { BaseHandler authService AuthService }
func (h *AuthenticationHandler) Handle(ctx context.Context, request Request) error { token := request.Header("Authorization") if token == "" { return ErrUnauthorized }
user, err := h.authService.ValidateToken(ctx, token) if err != nil { return fmt.Errorf("invalid token: %w", err) }
request.SetUser(user) return h.handleNext(ctx, request) }
// RateLimitHandler checks rate limits type RateLimitHandler struct { BaseHandler limiter RateLimiter }
func (h *RateLimitHandler) Handle(ctx context.Context, request Request) error { key := request.ClientIP()
if !h.limiter.Allow(key) { return ErrRateLimitExceeded }
return h.handleNext(ctx, request) }
// ValidationHandler validates request data type ValidationHandler struct { BaseHandler validator Validator }
func (h *ValidationHandler) Handle(ctx context.Context, request Request) error { if err := h.validator.Validate(request.Body()); err != nil { return fmt.Errorf("validation failed: %w", err) }
return h.handleNext(ctx, request) }
// LoggingHandler logs requests type LoggingHandler struct { BaseHandler logger Logger }
func (h *LoggingHandler) Handle(ctx context.Context, request Request) error { start := time.Now()
h.logger.Info("request started", slog.String("method", request.Method()), slog.String("request_path", request.Path()))
err := h.handleNext(ctx, request)
h.logger.Info("request completed", slog.Duration("duration", time.Since(start)), slog.Bool("error", err != nil))
return err }
// Building the chain func BuildValidationChain() Handler { // Create handlers logging := &LoggingHandler{logger: logger} rateLimit := &RateLimitHandler{limiter: limiter} auth := &AuthenticationHandler{authService: authService} validation := &ValidationHandler{validator: validator}
	// Build chain (SetNext returns the next handler, so calls chain)
	logging.
		SetNext(rateLimit).
		SetNext(auth).
		SetNext(validation)
return logging }
// Usage chain := BuildValidationChain() if err := chain.Handle(ctx, request); err != nil { return handleError(err) }
---
## Observer Pattern
### Event System
go
// internal/events/observer.go
package events
import (
	"context"
	"log/slog"
	"sync"
	"time"
)
// Event represents a domain event type Event interface { Type() string Timestamp() time.Time }
// Observer handles events type Observer interface { Handle(ctx context.Context, event Event) error }
// ObserverFunc allows functions as observers type ObserverFunc func(ctx context.Context, event Event) error
func (f ObserverFunc) Handle(ctx context.Context, event Event) error { return f(ctx, event) }
// EventBus manages observers type EventBus struct { mu sync.RWMutex observers map[string][]Observer logger Logger }
func NewEventBus(logger Logger) *EventBus { return &EventBus{ observers: make(map[string][]Observer), logger: logger, } }
// Subscribe adds an observer for event type func (e *EventBus) Subscribe(eventType string, observer Observer) { e.mu.Lock() defer e.mu.Unlock()
e.observers[eventType] = append(e.observers[eventType], observer) }
// SubscribeFunc subscribes a function func (e *EventBus) SubscribeFunc(eventType string, fn func(context.Context, Event) error) { e.Subscribe(eventType, ObserverFunc(fn)) }
// Publish sends event to observers func (e *EventBus) Publish(ctx context.Context, event Event) error { e.mu.RLock() observers := e.observers[event.Type()] e.mu.RUnlock()
if len(observers) == 0 { return nil }
// Process synchronously - async processing should use worker pools // to avoid goroutine leaks and provide proper error handling return e.notifyObservers(ctx, event, observers) }
func (e *EventBus) notifyObservers(ctx context.Context, event Event, observers []Observer) error {
	var wg sync.WaitGroup
	errCh := make(chan error, len(observers))

	for _, observer := range observers {
		wg.Add(1)
		go func(obs Observer) {
			defer wg.Done()

			if err := obs.Handle(ctx, event); err != nil {
				e.logger.Error("observer failed",
					slog.String("event_type", event.Type()),
					slog.Any("error", err))
				errCh <- err
			}
		}(observer)
	}

	wg.Wait()
	close(errCh)

	// Return the first error, if any
	for err := range errCh {
		return err
	}

	return nil
}
// Domain events type UserCreatedEvent struct { UserID string Email string CreatedAt time.Time }
func (e UserCreatedEvent) Type() string         { return "user.created" }
func (e UserCreatedEvent) Timestamp() time.Time { return e.CreatedAt }
// Usage eventBus := NewEventBus(logger)
// For async event processing, use a worker pool instead: // workerPool := worker.NewPool(5, 100, logger) // workerPool.Start(ctx) // // eventBus.SubscribeFunc("user.created", func(ctx context.Context, event Event) error { // job := NewEventProcessingJob(event) // return workerPool.Submit(job) // })
// Subscribe handlers eventBus.SubscribeFunc("user.created", func(ctx context.Context, event Event) error { e := event.(UserCreatedEvent) return sendWelcomeEmail(ctx, e.Email) })
eventBus.SubscribeFunc("user.created", func(ctx context.Context, event Event) error { e := event.(UserCreatedEvent) return createUserProfile(ctx, e.UserID) })
// Publish event eventBus.Publish(ctx, UserCreatedEvent{ UserID: user.ID, Email: user.Email, CreatedAt: time.Now(), })
---
## Factory Pattern
### Repository Factory
**CRITICAL**: Interfaces are defined by consumers ([service layer](go-practices-service-architecture.md#service-layer-design)), not by storage layer.
go
// internal/service/interfaces.go
package service
import ( "context" "myapp/internal/domain" )
// UserRepository interface defined by service layer (consumer) type UserRepository interface { Create(ctx context.Context, user *domain.User) error GetByID(ctx context.Context, id string) (*domain.User, error) GetByEmail(ctx context.Context, email string) (*domain.User, error) Update(ctx context.Context, user *domain.User) error Delete(ctx context.Context, id string) error }
// ProductRepository interface defined by service layer type ProductRepository interface { Create(ctx context.Context, product *domain.Product) error GetByID(ctx context.Context, id string) (*domain.Product, error) List(ctx context.Context, filter ProductFilter) ([]*domain.Product, error) }
// OrderRepository interface defined by service layer type OrderRepository interface { Create(ctx context.Context, order *domain.Order) error GetByID(ctx context.Context, id string) (*domain.Order, error) GetByUserID(ctx context.Context, userID string) ([]*domain.Order, error) }
go
// internal/service/factory.go
package service
import ( "database/sql" "fmt"
"myapp/internal/storage/postgres" "myapp/internal/storage/mysql" "myapp/internal/storage/sqlite" "myapp/internal/storage/memory" )
// RepositoryType defines storage backend type RepositoryType string
const ( PostgresRepository RepositoryType = "postgres" MySQLRepository RepositoryType = "mysql" SQLiteRepository RepositoryType = "sqlite" MemoryRepository RepositoryType = "memory" )
// RepositoryFactory creates repositories type RepositoryFactory struct { dbConnections map[RepositoryType]*sql.DB logger Logger }
func NewRepositoryFactory(logger Logger) *RepositoryFactory { return &RepositoryFactory{ dbConnections: make(map[RepositoryType]*sql.DB), logger: logger, } }
// RegisterConnection adds a database connection
func (f *RepositoryFactory) RegisterConnection(repoType RepositoryType, db *sql.DB) {
	f.dbConnections[repoType] = db
}
// CreateUserRepository creates appropriate user repository func (f *RepositoryFactory) CreateUserRepository(repoType RepositoryType) (UserRepository, error) { switch repoType { case PostgresRepository: db, ok := f.dbConnections[PostgresRepository] if !ok { return nil, fmt.Errorf("postgres connection not registered") } return postgres.NewUserRepository(db, f.logger), nil
case MySQLRepository: db, ok := f.dbConnections[MySQLRepository] if !ok { return nil, fmt.Errorf("mysql connection not registered") } return mysql.NewUserRepository(db, f.logger), nil
case SQLiteRepository: db, ok := f.dbConnections[SQLiteRepository] if !ok { return nil, fmt.Errorf("sqlite connection not registered") } return sqlite.NewUserRepository(db, f.logger), nil
case MemoryRepository: return memory.NewUserRepository(f.logger), nil
default: return nil, fmt.Errorf("unsupported repository type: %s", repoType) } }
// RepositorySet groups all repositories type RepositorySet struct { Users UserRepository Products ProductRepository Orders OrderRepository }
func (f *RepositoryFactory) CreateRepositorySet(repoType RepositoryType) (*RepositorySet, error) {
	users, err := f.CreateUserRepository(repoType)
	if err != nil {
		return nil, fmt.Errorf("create user repository: %w", err)
	}

	products, err := f.CreateProductRepository(repoType)
	if err != nil {
		return nil, fmt.Errorf("create product repository: %w", err)
	}

	orders, err := f.CreateOrderRepository(repoType)
	if err != nil {
		return nil, fmt.Errorf("create order repository: %w", err)
	}

	return &RepositorySet{
		Users:    users,
		Products: products,
		Orders:   orders,
	}, nil
}
go
// internal/storage/postgres/user_repo.go
package postgres
import ( "context" "database/sql"
"myapp/internal/domain" )
// UserRepository implements service.UserRepository interface type UserRepository struct { db *sql.DB logger Logger }
func NewUserRepository(db *sql.DB, logger Logger) *UserRepository {
	return &UserRepository{
		db:     db,
		logger: logger,
	}
}

func (r *UserRepository) Create(ctx context.Context, user *domain.User) error {
	query := `INSERT INTO users (id, email, name, created_at) VALUES ($1, $2, $3, $4)`

	_, err := r.db.ExecContext(ctx, query, user.ID, user.Email, user.Name, user.CreatedAt)
	return err
}

func (r *UserRepository) GetByID(ctx context.Context, id string) (*domain.User, error) {
	query := `SELECT id, email, name, created_at FROM users WHERE id = $1`

	row := r.db.QueryRowContext(ctx, query, id)

	var user domain.User
	if err := row.Scan(&user.ID, &user.Email, &user.Name, &user.CreatedAt); err != nil {
		return nil, err
	}

	return &user, nil
}
go
// Usage in main.go or app initialization
factory := service.NewRepositoryFactory(logger)
factory.RegisterConnection(service.PostgresRepository, pgDB)
factory.RegisterConnection(service.SQLiteRepository, sqliteDB)
// Create repositories based on config repoType := service.RepositoryType(config.Database.Type) repos, err := factory.CreateRepositorySet(repoType) if err != nil { return err }
// Initialize services with repositories userService := service.NewUserService(repos.Users, logger) productService := service.NewProductService(repos.Products, logger) orderService := service.NewOrderService(repos.Orders, repos.Users, logger)
---
## Quick Reference Card
### Pattern Selection Guide
| Pattern | When to Use | Example |
|---------|------------|---------|
| **Functional Options** | Configurable objects with defaults | HTTP clients, servers |
| **Builder** | Complex object construction | Query builders, configs |
| **Strategy** | Interchangeable algorithms | Payment processing, encoding |
| **Chain of Responsibility** | Sequential processing with early exit | Middleware, validation |
| **Observer** | Event-driven decoupling | Domain events, notifications |
| **Factory** | Abstract object creation | Multi-database support |
| **Errgroup** | Concurrent operations with error handling | Batch processing, fanout |
### Common Combinations
go
// Strategy + Factory
paymentFactory := NewPaymentFactory()
strategy := paymentFactory.CreateStrategy(paymentType)
processor := NewProcessor(strategy)
// Builder + Functional Options client := NewClientBuilder(). WithTimeout(30*time.Second). WithRetries(3). Build()
// Observer + Chain of Responsibility eventBus.Subscribe("order.created", NewHandlerChain( ValidateHandler(), EnrichHandler(), NotifyHandler(), ), )
// Factory + Repository Pattern repos := factory.CreateRepositories(dbType) service := NewService(repos)
### Anti-Patterns to Avoid
1. **God Object**: Don't put everything in one struct
2. **Anemic Domain**: Keep behavior with data
3. **Service Locator**: Use [dependency injection](go-practices-service-architecture.md#dependency-injection)
4. **Singleton**: Use [dependency injection](go-practices-service-architecture.md#dependency-injection) instead
5. **Active Record**: Separate domain from persistence
### Pattern Implementation Checklist
- [ ] Single responsibility per type
- [ ] Interface-based dependencies
- [ ] Constructor injection
- [ ] Immutable configuration
- [ ] Error handling at boundaries
- [ ] Concurrent safety when needed
- [ ] Clear ownership of resources
- [ ] Proper cleanup/shutdown
- [ ] Comprehensive tests
- [ ] Documentation with examples
## Quick Reference Checklist
### Functional Options Pattern Implementation
- [ ] Define option function type: `type Option func(*Type)`
- [ ] Create constructor that accepts variadic options: `New(required, ...Option)`
- [ ] Set sensible defaults before applying options
- [ ] Create option constructors for each configurable field
- [ ] Consider option validation and error handling for complex cases
- [ ] Use options for optional configuration, not required parameters
### Builder Pattern Best Practices
- [ ] Implement fluent interface with method chaining
- [ ] Validate required fields in the `Build()` method
- [ ] Return builder pointer for method chaining
- [ ] Handle errors gracefully (collect during building, return in Build)
- [ ] Make builder immutable or document mutation behavior
- [ ] Provide sensible defaults for optional fields
### Strategy Pattern Implementation
- [ ] Define strategy interface with clear, focused methods
- [ ] Create concrete strategies with single responsibility
- [ ] Use factory or registry pattern for strategy selection
- [ ] Implement strategy validation before execution
- [ ] Document strategy behavior and constraints
- [ ] Consider strategy composition for complex scenarios
### Chain of Responsibility Design
- [ ] Define handler interface with single Handle method
- [ ] Implement base handler for common functionality
- [ ] Use explicit next handler linking (not implicit)
- [ ] Handle errors at appropriate chain level
- [ ] Implement short-circuiting for early termination
- [ ] Document chain ordering requirements and dependencies
### Observer Pattern Implementation
- [ ] Define event and observer interfaces clearly
- [ ] Implement thread-safe observer registration/removal
- [ ] Consider async vs sync notification patterns
- [ ] Handle observer failures gracefully (don't break chain)
- [ ] Implement observer deregistration to prevent memory leaks
- [ ] Use typed events for better type safety
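The `EventBus` shown earlier has `Subscribe` but no way to deregister (the fifth item above). One way to add it is to return a cancelable subscription handle keyed by an internal ID; the trimmed `Bus` and `Subscription` types below are a sketch of that idea, not part of the earlier example:

```go
package main

import (
	"fmt"
	"sync"
)

// Subscription is a cancelable handle returned by Subscribe.
type Subscription struct {
	cancel func()
	once   sync.Once
}

// Cancel deregisters the observer; safe to call more than once.
func (s *Subscription) Cancel() { s.once.Do(s.cancel) }

// Bus is a trimmed event bus keyed by event type, with deregistration.
type Bus struct {
	mu        sync.Mutex
	nextID    int
	observers map[string]map[int]func(string)
}

func NewBus() *Bus {
	return &Bus{observers: make(map[string]map[int]func(string))}
}

// Subscribe registers fn and returns a handle that removes it again.
func (b *Bus) Subscribe(eventType string, fn func(string)) *Subscription {
	b.mu.Lock()
	defer b.mu.Unlock()
	if b.observers[eventType] == nil {
		b.observers[eventType] = make(map[int]func(string))
	}
	id := b.nextID
	b.nextID++
	b.observers[eventType][id] = fn
	return &Subscription{cancel: func() {
		b.mu.Lock()
		defer b.mu.Unlock()
		delete(b.observers[eventType], id)
	}}
}

// Publish snapshots observers under the lock, then notifies outside it,
// so an observer may subscribe or cancel without deadlocking.
func (b *Bus) Publish(eventType, payload string) {
	b.mu.Lock()
	fns := make([]func(string), 0, len(b.observers[eventType]))
	for _, fn := range b.observers[eventType] {
		fns = append(fns, fn)
	}
	b.mu.Unlock()
	for _, fn := range fns {
		fn(payload)
	}
}

func main() {
	bus := NewBus()
	var got []string
	sub := bus.Subscribe("user.created", func(p string) { got = append(got, p) })
	bus.Publish("user.created", "first")
	sub.Cancel()
	bus.Publish("user.created", "second") // no longer delivered
	fmt.Println(got)                      // → [first]
}
```

Handles avoid the problem that function values are not comparable in Go, which rules out "remove by value" deregistration for `ObserverFunc` subscribers.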
### Factory Pattern Guidelines
- [ ] Define interfaces in consumer packages (not factory)
- [ ] Use factory for complex object creation logic
- [ ] Support multiple implementations via configuration
- [ ] Handle factory errors with clear error messages
- [ ] Consider dependency injection over service locator
- [ ] Make factories testable with mock implementations
### Pattern Selection & Usage
- [ ] Use Functional Options for configurable constructors
- [ ] Apply Builder for complex, multi-step object creation
- [ ] Implement Strategy for interchangeable algorithms
- [ ] Use Chain of Responsibility for sequential processing with early exit
- [ ] Apply Observer for event-driven decoupling
- [ ] Use Factory for implementation abstraction
### Common Anti-Patterns to Avoid
- [ ] Avoid god objects (too many responsibilities)
- [ ] Don't use global state or singletons
- [ ] Avoid anemic domain models (data without behavior)
- [ ] Don't overuse service locator pattern
- [ ] Avoid active record (mixing persistence with domain logic)
- [ ] Don't create circular dependencies between packages
### Pattern Testing Strategies
- [ ] Test pattern behavior, not implementation details
- [ ] Mock dependencies at pattern boundaries
- [ ] Test error conditions and edge cases
- [ ] Verify pattern contracts and invariants
- [ ] Use table-driven tests for multiple scenarios
- [ ] Test pattern performance under realistic loads
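To illustrate the table-driven item above: a sketch that exercises a hypothetical `validateCVV` check by contract (accept/reject) rather than by implementation detail. In a real test file the loop body would run under `t.Run(tc.name, ...)`:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// validateCVV is a hypothetical stand-in for a strategy's Validate step.
func validateCVV(cvv string) error {
	if len(cvv) < 3 || len(cvv) > 4 || strings.Trim(cvv, "0123456789") != "" {
		return errors.New("invalid CVV")
	}
	return nil
}

func main() {
	// Each case states only the observable contract: input and whether it errors.
	cases := []struct {
		name    string
		cvv     string
		wantErr bool
	}{
		{"three digits", "123", false},
		{"four digits", "1234", false},
		{"too short", "12", true},
		{"non-numeric", "12a", true},
	}

	for _, tc := range cases {
		err := validateCVV(tc.cvv)
		if (err != nil) != tc.wantErr {
			fmt.Printf("FAIL %s: got %v, wantErr=%v\n", tc.name, err, tc.wantErr)
			return
		}
	}
	fmt.Println("all cases passed")
}
```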
### Code Quality & Maintainability
- [ ] Follow single responsibility principle for each pattern
- [ ] Use interface-based dependencies for testability
- [ ] Implement proper error handling and propagation
- [ ] Document pattern usage and constraints
- [ ] Ensure thread safety where required
- [ ] Provide comprehensive examples and usage documentation
---
# 10. Migration Guide & Code Smells
## Table of Contents
1. [Critical Code Smells](#critical-code-smells)
2. [Refactoring Strategies](#refactoring-strategies)
3. [Migration Patterns](#migration-patterns)
4. [Architecture Decision Records](#architecture-decision-records)
5. [Legacy Code Transformation](#legacy-code-transformation)
6. [Quality Metrics](#quality-metrics)
---
## Critical Code Smells
### 1. Global State Everywhere
**Smell:**
go
// ❌ BAD: Global variables scattered across packages
package config
var ( DB *sql.DB HTTPClient *http.Client Logger *log.Logger Config *Configuration )
package handlers
func GetUser(id string) (*User, error) { // Direct global access row := config.DB.QueryRow("SELECT * FROM users WHERE id = ?", id) config.Logger.Printf("Getting user %s", id) // ... }
**Fix:**
go
// ✅ GOOD: Dependency injection
package handlers
type UserHandler struct { db *sql.DB logger logging.Logger }
func NewUserHandler(db *sql.DB, logger logging.Logger) *UserHandler {
	return &UserHandler{
		db:     db,
		logger: logger,
	}
}

func (h *UserHandler) GetUser(ctx context.Context, id string) (*User, error) {
	h.logger.Info("getting user", slog.String("user_id", id))
	row := h.db.QueryRowContext(ctx, "SELECT * FROM users WHERE id = ?", id)
	// ...
}
### 2. init() Functions with Side Effects
**Smell:**
go
// ❌ BAD: Side effects in init
package database
func init() { var err error DB, err = sql.Open("postgres", os.Getenv("DATABASE_URL")) if err != nil { panic(err) }
if err = DB.Ping(); err != nil { panic(err) } }
**Fix:**
go
// ✅ GOOD: Explicit initialization
package database
func NewConnection(cfg Config) (*sql.DB, error) { db, err := sql.Open(cfg.Driver, cfg.DSN) if err != nil { return nil, fmt.Errorf("open database: %w", err) }
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel()
if err := db.PingContext(ctx); err != nil { db.Close() return nil, fmt.Errorf("ping database: %w", err) }
return db, nil }
### 3. Interface Pollution
**Smell:**
go
// ❌ BAD: Huge interfaces
type UserService interface {
CreateUser(user *User) error
GetUser(id string) (*User, error)
UpdateUser(user *User) error
DeleteUser(id string) error
ListUsers(filter Filter) ([]*User, error)
SearchUsers(query string) ([]*User, error)
GetUserByEmail(email string) (*User, error)
ValidatePassword(userID, password string) error
ChangePassword(userID, newPassword string) error
SendPasswordReset(email string) error
// ... 20 more methods
}
**Fix:**
go
// ✅ GOOD: Small, focused interfaces
type UserGetter interface {
GetUser(ctx context.Context, id string) (*User, error)
}
type UserCreator interface { CreateUser(ctx context.Context, user *User) error }
type UserAuthenticator interface { ValidatePassword(ctx context.Context, userID, password string) error }
// Compose as needed type UserService interface { UserGetter UserCreator }
### 4. Flat Command Structure
**Smell:**
go
// ❌ BAD: All commands in one package
cmd/
├── root.go
├── create.go
├── delete.go
├── list.go
├── update.go
├── export.go
├── import.go
└── sync.go // 20+ files in one directory
**Fix:**
go
// ✅ GOOD: Hierarchical structure
cmd/myapp/
├── root.go
├── user/
│ ├── user.go // Parent command
│ ├── create/
│ │ └── create.go
│ ├── list/
│ │ └── list.go
│ └── delete/
│ └── delete.go
└── data/
├── data.go // Parent command
├── export/
│ └── export.go
└── import/
└── import.go
### 5. Error String Formatting
**Smell:**
go
// ❌ BAD: fmt.Errorf everywhere
func ProcessUser(id string) error {
user, err := getUser(id)
if err != nil {
return fmt.Errorf("failed to get user %s: %v", id, err)
}
	if err := validate(user); err != nil {
		return fmt.Errorf("validation failed: %v", err)
	}

	return fmt.Errorf("not implemented")
}
**Fix:**
go
// ✅ GOOD: Typed errors
func ProcessUser(id string) error {
user, err := getUser(id)
if err != nil {
return &ServiceError{
Code:      "USER_NOT_FOUND",
Message: "user not found",
Operation: "process_user",
Context: map[string]interface{}{"user_id": id},
Cause: err,
}
}
	if err := validate(user); err != nil {
		return NewValidationError("user", err.Error())
	}

	return ErrNotImplemented
}
---
## Refactoring Strategies
### Extract Service Layer
**Before:**
go
// ❌ HTTP handler with business logic
func HandleCreateUser(w http.ResponseWriter, r *http.Request) {
var input CreateUserInput
json.NewDecoder(r.Body).Decode(&input)
// Validation mixed with HTTP if input.Email == "" { http.Error(w, "email required", 400) return }
// Business logic in handler hashedPassword, _ := bcrypt.GenerateFromPassword([]byte(input.Password), 10)
// Direct database access _, err := db.Exec("INSERT INTO users (email, password) VALUES (?, ?)", input.Email, hashedPassword)
if err != nil { http.Error(w, "database error", 500) return }
json.NewEncoder(w).Encode(map[string]string{"status": "created"}) }
**After:**
go
// ✅ Clean separation of concerns
// Transport layer (HTTP handler)
func (h *UserHandler) CreateUser(w http.ResponseWriter, r *http.Request) {
	var req CreateUserRequest
	if err := h.decode(r, &req); err != nil {
		h.respondError(w, err)
		return
	}
user, err := h.userService.CreateUser(r.Context(), service.CreateUserInput{ Email: req.Email, Password: req.Password, Name: req.Name, })
if err != nil { h.respondError(w, err) return }
h.respond(w, http.StatusCreated, UserResponse{ ID: user.ID, Email: user.Email, CreatedAt: user.CreatedAt, }) }
// Service layer (business logic)
func (s *UserService) CreateUser(ctx context.Context, input CreateUserInput) (*domain.User, error) {
	// Validation
	if err := s.validateCreateInput(input); err != nil {
		return nil, err
	}
// Check existence existing, _ := s.repo.GetByEmail(ctx, input.Email) if existing != nil { return nil, ErrEmailTaken }
// Create domain object user := &domain.User{ ID: GenerateID(), Email: input.Email, Name: input.Name, }
// Business rule if err := user.SetPassword(input.Password); err != nil { return nil, err }
// Persist if err := s.repo.Create(ctx, user); err != nil { return nil, fmt.Errorf("create user: %w", err) }
// Emit event s.events.Publish(ctx, UserCreatedEvent{UserID: user.ID})
return user, nil }
### Interface Extraction
**Migration Steps:**
1. **Identify Concrete Dependencies**
go
// Before: Concrete type
type UserService struct {
db *sql.DB
client *http.Client
}
2. **Define Interface at Usage Point**
go
// service/interfaces.go
type UserRepository interface {
Create(ctx context.Context, user *domain.User) error
GetByID(ctx context.Context, id string) (*domain.User, error)
}
3. **Update Service**
go
// After: Interface dependency
type UserService struct {
repo UserRepository
events EventPublisher
}
4. **Implement Interface**
go
// storage/postgres/user_repo.go
type UserRepository struct {
db *sql.DB
}
func (r *UserRepository) Create(ctx context.Context, user *domain.User) error {
	// Implementation
}
---
## Migration Patterns
### Gradual Service Extraction
go
// Phase 1: Create service alongside handler
type UserHandler struct {
db *sql.DB
// Add service
service *UserService
}
// Phase 2: Move logic to service, method by method
func (h *UserHandler) GetUser(w http.ResponseWriter, r *http.Request) {
	id := chi.URLParam(r, "id")
// Old way (comment out) // row := h.db.QueryRow("SELECT * FROM users WHERE id = ?", id)
// New way user, err := h.service.GetUser(r.Context(), id) if err != nil { h.respondError(w, err) return }
h.respond(w, http.StatusOK, user) }
// Phase 3: Remove direct DB access from handler type UserHandler struct { // db *sql.DB // Removed service *UserService }
### Parallel Implementation
go
// Keep old implementation while building new
package app
type Application struct { // Legacy OldUserHandler *handlers.UserHandler
// New architecture NewUserHandler *http.UserHandler }
// Route with feature flag func (a *Application) SetupRoutes() { r := chi.NewRouter()
if a.config.UseNewArchitecture { r.Mount("/api/users", a.NewUserHandler.Routes()) } else { r.Mount("/api/users", a.OldUserHandler.Routes()) } }
---
## Architecture Decision Records
### ADR Template
markdown
# ADR-001: Adopt Domain-Driven Design

## Status
Accepted

## Context
Our application has grown to 50K+ lines with business logic scattered across HTTP handlers, making testing difficult and changes risky.

## Decision
We will adopt a domain-driven design with clear boundaries:

## Consequences

### Positive

### Negative

### Risks

## Migration Plan
### Common Architecture Decisions
markdown
# ADR-002: Error Handling Strategy

## Decision
All errors must be typed from day one. No fmt.Errorf() allowed.

## Implementation

# ADR-003: No Global State

## Decision
No package-level variables except in main(). All dependencies injected.

## Implementation

# ADR-004: Hierarchical CLI Commands

## Decision
Commands organized hierarchically in separate packages.

## Implementation
---
## Legacy Code Transformation
### Strangler Fig Pattern (Gradual Migration)
The Strangler Fig Pattern allows you to gradually replace legacy systems by wrapping them with new implementations, similar to how a strangler fig vine gradually envelops a tree.
### Step-by-Step Migration
#### Step 1: Introduce Interfaces
go
// legacy/database.go
package legacy
var DB *sql.DB // Global database
// Start by wrapping in interface type Database interface { Query(query string, args ...interface{}) (*sql.Rows, error) Exec(query string, args ...interface{}) (sql.Result, error) }
// Adapter for gradual migration type DBAdapter struct { *sql.DB }
func GetDB() Database { return &DBAdapter{DB} }
#### Step 2: Extract Repository
go
// Add repository layer
package repository
type UserRepository struct { db legacy.Database }
func NewUserRepository() *UserRepository { return &UserRepository{ db: legacy.GetDB(), } }
func (r *UserRepository) GetByID(id string) (*User, error) {
	// Move SQL here from handlers
}
#### Step 3: Create Service Layer
go
package service
type UserService struct { repo *repository.UserRepository }
func NewUserService() *UserService { return &UserService{ repo: repository.NewUserRepository(), } }
#### Step 4: Update Handlers
go
// Before
func GetUser(w http.ResponseWriter, r *http.Request) {
id := r.URL.Query().Get("id")
row := legacy.DB.QueryRow("SELECT * FROM users WHERE id = ?", id)
// ...
}
// After
func (h *UserHandler) GetUser(w http.ResponseWriter, r *http.Request) {
	id := chi.URLParam(r, "id")
	user, err := h.service.GetUser(r.Context(), id)
	if err != nil {
		h.respondError(w, err)
		return
	}
	h.respond(w, http.StatusOK, user)
}
#### Step 5: Remove Global State
go
// main.go
func main() {
// Initialize everything explicitly
db, err := database.New(config.Database)
if err != nil {
log.Fatal(err)
}
defer db.Close()
// Wire dependencies userRepo := postgres.NewUserRepository(db) userService := service.NewUserService(userRepo) userHandler := http.NewUserHandler(userService)
// Start server server := http.NewServer(userHandler) server.Run() }
### Migration Checklist
- [ ] Map current architecture
- [ ] Identify bounded contexts
- [ ] Design target architecture
- [ ] Create migration plan
- [ ] Set up linters
- [ ] Migrate incrementally
- [ ] Add tests for migrated code
- [ ] Remove legacy code
- [ ] Document decisions
- [ ] Train team
---
## Quality Metrics
### Code Quality Metrics
go
// internal/metrics/quality.go
package metrics
type QualityReport struct { CyclomaticComplexity int CodeCoverage float64 TechnicalDebt time.Duration DependencyCount int InterfaceCount int GlobalVariables int InitFunctions int }
func Analyze(projectPath string) (*QualityReport, error) {
	// Use go/ast to analyze code:
	// count globals, init functions, etc.
}
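`Analyze` is left as a stub above. A hedged sketch of the go/ast approach, counting package-level variables and `init` functions in a single source string — the `countIssues` helper is hypothetical, but these are two of the fields `QualityReport` tracks:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// countIssues parses one source string and counts package-level vars
// and init functions.
func countIssues(src string) (globals, inits int, err error) {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "src.go", src, 0)
	if err != nil {
		return 0, 0, err
	}
	// file.Decls holds only top-level declarations, so no scope check is needed.
	for _, decl := range file.Decls {
		switch d := decl.(type) {
		case *ast.GenDecl:
			if d.Tok == token.VAR {
				for _, spec := range d.Specs {
					globals += len(spec.(*ast.ValueSpec).Names)
				}
			}
		case *ast.FuncDecl:
			if d.Name.Name == "init" && d.Recv == nil {
				inits++
			}
		}
	}
	return globals, inits, nil
}

func main() {
	src := `package demo
var db, logger int
func init() {}
`
	g, i, err := countIssues(src)
	fmt.Println(g, i, err) // → 2 1 <nil>
}
```

A full `Analyze` would walk `projectPath` with `parser.ParseDir` and aggregate these counts per package.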
### Architecture Fitness Functions
go
// test/architecture_test.go
package test
import (
	"go/ast"
	"go/parser"
	"go/token"
	"testing"

	"github.com/stretchr/testify/require"
)
func TestNoGlobalVariables(t *testing.T) { packages := []string{ "./internal/...", "./pkg/...", }
// Allowed package-level variables (constants are OK) allowlist := map[string]bool{ "_": true, // Blank identifier for imports "version": true, // Version strings are OK "buildTime": true, // Build metadata is OK }
for _, pkg := range packages { fset := token.NewFileSet() pkgs, err := parser.ParseDir(fset, pkg, nil, 0) require.NoError(t, err)
for _, pkg := range pkgs { for _, file := range pkg.Files { ast.Inspect(file, func(n ast.Node) bool { if decl, ok := n.(*ast.GenDecl); ok && decl.Tok == token.VAR { // Check if this is at file scope (package level) if isFileScope(decl, file) { for _, spec := range decl.Specs { if vspec, ok := spec.(*ast.ValueSpec); ok { for _, name := range vspec.Names { if !allowlist[name.Name] { t.Errorf("found package-level variable: %s in %s\n"+ "Package-level variables create global state.\n"+ "Use dependency injection instead.", name.Name, fset.Position(name.Pos())) } } } } } } return true }) } } } }
func isFileScope(decl ast.GenDecl, file ast.File) bool { // Check if the declaration is at the top level of the file for _, d := range file.Decls { if d == decl { return true } } return false }
func TestNoDomainImportsFromStorage(t *testing.T) { // Ensure domain package has no dependencies }
func TestInterfacesDefinedByConsumers(t *testing.T) { // Check interfaces are in service layer, not storage }
### Refactoring Metrics
Track progress:
- Lines migrated vs total
- Test coverage increase
- Reduction in cyclomatic complexity
- Decrease in global state
- Interface adoption rate
- Build time improvement
### Success Criteria
Migration complete when:
1. Zero global variables (except main)
2. No init() functions with side effects
3. 80%+ test coverage
4. All commands in separate packages
5. Clear architectural boundaries
6. No circular dependencies
7. All errors typed
8. Consistent patterns throughout
---
## Summary
Successful migration requires:
- **Incremental approach**: Don't rewrite everything at once
- **Clear boundaries**: Enforce with tooling
- **Team alignment**: Everyone understands the target
- **Continuous validation**: Architecture tests
- **Pragmatic decisions**: Perfect is the enemy of good
Remember: Architecture is a journey, not a destination. Start with the most critical issues and improve continuously.
## Quick Reference Checklist
### Code Smell Identification & Remediation
- [ ] Eliminate all global state and package-level variables
- [ ] Remove init() functions with side effects
- [ ] Break down god objects and interfaces (>10 methods)
- [ ] Replace fmt.Errorf with typed error handling
- [ ] Organize flat command structures hierarchically
- [ ] Refactor business logic out of HTTP handlers
### Service Layer Extraction
- [ ] Create service layer interfaces in service package
- [ ] Move business logic from handlers to service methods
- [ ] Implement repository pattern with interfaces
- [ ] Separate domain objects from transport DTOs
- [ ] Add proper validation at service boundaries
- [ ] Implement event publishing for domain events
### Interface Extraction & Design
- [ ] Define interfaces at point of use (consumer defines)
- [ ] Keep interfaces small and focused (1-3 methods)
- [ ] Replace concrete dependencies with interfaces
- [ ] Use composition for larger interface needs
- [ ] Implement interface segregation principle
- [ ] Avoid god interfaces with many methods
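A minimal sketch of consumer-defined interfaces, using hypothetical names: the consuming service declares only the one method it needs, rather than importing a storage package's full repository API, and any concrete store satisfies it implicitly.

```go
package main

import "fmt"

// The consumer (a report service) defines only what it needs:
// one method, not the storage layer's full CRUD surface.
type userNamer interface {
	UserName(id string) (string, error)
}

type reportService struct {
	users userNamer
}

func (r *reportService) Headline(id string) (string, error) {
	name, err := r.users.UserName(id)
	if err != nil {
		return "", err
	}
	return "Report for " + name, nil
}

// memStore is a stand-in concrete implementation; it satisfies
// userNamer implicitly, with no import of the service package.
type memStore map[string]string

func (m memStore) UserName(id string) (string, error) {
	return m[id], nil
}

func main() {
	svc := &reportService{users: memStore{"1": "Ada"}}
	h, _ := svc.Headline("1")
	fmt.Println(h) // Report for Ada
}
```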
### Migration Planning & Execution
- [ ] Map current architecture and identify pain points
- [ ] Create target architecture design and ADRs
- [ ] Plan incremental migration strategy
- [ ] Use Strangler Fig pattern for gradual replacement
- [ ] Implement parallel architecture during transition
- [ ] Create feature flags for new vs old implementations
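The feature-flag step above can be sketched as a small routing function, assuming both implementations satisfy one shared interface (all names here are hypothetical). The new path can ship dark and roll out gradually while the old one remains the default.

```go
package main

import "fmt"

// Greeter is the shared contract both implementations satisfy.
type Greeter interface {
	Greet(name string) string
}

type legacyGreeter struct{}

func (legacyGreeter) Greet(n string) string { return "hello " + n }

type newGreeter struct{}

func (newGreeter) Greet(n string) string { return "Hello, " + n + "!" }

// selectGreeter routes between old and new implementations based on a
// flag (in practice read from config or a feature-flag service).
func selectGreeter(useNew bool) Greeter {
	if useNew {
		return newGreeter{}
	}
	return legacyGreeter{}
}

func main() {
	fmt.Println(selectGreeter(false).Greet("ops")) // hello ops
	fmt.Println(selectGreeter(true).Greet("ops"))  // Hello, ops!
}
```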
### Legacy Code Transformation Strategy
- [ ] Start with critical paths and high-value areas
- [ ] Extract service layer while keeping existing handlers
- [ ] Introduce interfaces to wrap existing concrete types
- [ ] Move from global state to dependency injection
- [ ] Refactor on modification (boy scout rule)
- [ ] Maintain backward compatibility during transition
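Wrapping legacy globals behind an interface can be sketched like this (all names hypothetical): a thin adapter satisfies the interface the service layer wants by delegating to the old package-level state, so callers migrate to dependency injection now and the globals can be deleted later.

```go
package main

import "fmt"

// Legacy code exposed package-level state and free functions.
var legacyUsers = map[string]string{"1": "Ada"}

func legacyGetUser(id string) string { return legacyUsers[id] }

// The service layer defines the interface it wants...
type UserGetter interface {
	GetUser(id string) (string, error)
}

// ...and a thin adapter satisfies it by delegating to the legacy
// code. New callers depend only on UserGetter.
type legacyAdapter struct{}

func (legacyAdapter) GetUser(id string) (string, error) {
	return legacyGetUser(id), nil
}

func main() {
	var repo UserGetter = legacyAdapter{}
	name, _ := repo.GetUser("1")
	fmt.Println(name) // Ada
}
```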
### Architecture Quality Gates
- [ ] Set up linting rules to prevent regressions
- [ ] Implement architecture fitness functions as tests
- [ ] Check for package dependency violations
- [ ] Verify no new global variables are introduced
- [ ] Validate interface definitions are in correct packages
- [ ] Monitor technical debt metrics over time
### Testing During Migration
- [ ] Add tests for existing functionality before refactoring
- [ ] Use adapter pattern to test legacy components
- [ ] Create integration tests for migration boundaries
- [ ] Test both old and new implementations in parallel
- [ ] Verify behavior doesn't change during refactoring
- [ ] Add performance tests for critical paths
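Testing old and new implementations in parallel can be sketched as a differential check: run both paths on the same inputs and flag any divergence. The `oldSlug`/`newSlug` functions are hypothetical stand-ins; in a real migration this shape usually lives in a table-driven test.

```go
package main

import (
	"fmt"
	"strings"
)

// Legacy implementation: lowercase, spaces to dashes.
func oldSlug(s string) string {
	return strings.ReplaceAll(strings.ToLower(s), " ", "-")
}

// New implementation being migrated to; it must preserve behavior.
func newSlug(s string) string {
	var b strings.Builder
	for _, r := range strings.ToLower(s) {
		if r == ' ' {
			b.WriteRune('-')
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	// Run both implementations on the same inputs and report divergence.
	for _, in := range []string{"Hello World", "Go CLI Guide"} {
		if got, want := newSlug(in), oldSlug(in); got != want {
			fmt.Printf("divergence on %q: old=%q new=%q\n", in, want, got)
			return
		}
	}
	fmt.Println("behavior preserved")
}
```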
### Team & Process Management
- [ ] Document migration decisions in ADRs
- [ ] Train team on new patterns and practices
- [ ] Establish code review guidelines for migrations
- [ ] Set migration milestones and success criteria
- [ ] Create guidelines for when to refactor vs rewrite
- [ ] Monitor progress with measurable metrics
### Migration Success Metrics
- [ ] Zero global variables (except main package)
- [ ] All commands in separate packages with proper structure
- [ ] 80%+ test coverage for business logic
- [ ] No circular dependencies between packages
- [ ] All errors are typed (no fmt.Errorf)
- [ ] Clear architectural boundaries enforced by tooling
### Post-Migration Validation
- [ ] Run full test suite to verify functionality
- [ ] Performance test to ensure no regressions
- [ ] Update documentation to reflect new architecture
- [ ] Remove legacy code and cleanup old patterns
- [ ] Set up monitoring for architectural violations
- [ ] Plan regular architecture reviews and improvements
---
# A. Appendix A: CLIFoundation Starter
## Overview
**CLIFoundation** is an architectural skeleton for Go CLI applications that demonstrates how the patterns from this guide fit together. It provides a solid structural foundation to build upon, showing the proper organization and integration of components.
### What This Is vs What You Build
**This Skeleton Provides:**
- ✅ Proper project structure and organization
- ✅ Dependency injection setup without global state
- ✅ Interface definitions showing proper boundaries
- ✅ Basic implementations demonstrating patterns
- ✅ Linter configuration enforcing best practices
- ✅ Example of how components wire together
**You Need to Add:**
- ❌ Actual business logic for your domain
- ❌ Complete error handling for all edge cases
- ❌ Monitoring, metrics, and distributed tracing
- ❌ Security measures (authentication, rate limiting)
- ❌ Performance optimizations for your use case
- ❌ Comprehensive test coverage
### What Makes This Special
- **Zero Global State**: Everything is dependency injected
- **Testable by Design**: Every component can be mocked and tested
- **Patterns Demonstrated**: Shows worker pools, pipelines, and proper error handling
- **Real Structure**: Includes migrations, health checks, and configuration
- **Best Practices Enforced**: Linter rules prevent common mistakes
## Quick Start
```bash
# Clone and rename
git clone https://github.com/yourusername/clifoundation myapp
cd myapp

# Update module name
go mod edit -module github.com/yourusername/myapp
find . -type f -name '*.go' -exec sed -i '' 's|clifoundation|myapp|g' {} +

# Install dependencies
go mod tidy

# Run tests
make test

# Build and run
make build
./bin/myapp --help
```
## Project Structure
```
clifoundation/
├── cmd/clifoundation/       # Application entrypoint
│   ├── main.go              # Minimal main - just error handling
│   ├── root.go              # Root command with app initialization
│   └── commands/            # Subcommands
│       ├── run.go           # Main 'run' command
│       └── version.go       # Version information
│
├── internal/                # Private application code
│   ├── app/                 # Application orchestrator
│   │   └── app.go           # DI container and lifecycle
│   │
│   ├── config/              # Configuration management
│   │   ├── config.go        # Config structures
│   │   └── load.go          # Viper-based loading
│   │
│   ├── domain/              # Core business entities
│   │   ├── errors.go        # Domain-specific errors
│   │   ├── prompt.go        # Prompt entity
│   │   └── conversation.go  # Conversation entity
│   │
│   ├── service/             # Business logic layer
│   │   ├── interfaces.go    # Service interfaces (CRITICAL!)
│   │   ├── prompt_service.go # Prompt processing logic
│   │   └── health_service.go # Health check orchestration
│   │
│   ├── pipeline/            # Processing pipeline stages
│   │   ├── stage.go         # Stage interface
│   │   ├── validator.go     # Input validation stage
│   │   ├── processor.go     # Main processing stage
│   │   └── executor.go      # Execution stage
│   │
│   ├── storage/             # Data persistence
│   │   ├── sqlite/
│   │   │   └── repository.go # SQLite implementation
│   │   └── migrations/
│   │       └── 001_init.sql
│   │
│   ├── pkg/                 # Internal packages
│   │   ├── errors/          # Error handling utilities
│   │   ├── logging/         # Structured logging setup
│   │   └── shutdown/        # Graceful shutdown
│   │
│   └── ui/                  # User interface
│       ├── output.go        # Output formatting
│       └── progress.go      # Progress indicators
│
├── test/                    # Test files
│   ├── integration/         # Integration tests
│   └── testdata/            # Test fixtures
│
├── .golangci.yml            # Linter configuration
├── Makefile                 # Build automation
├── go.mod
└── README.md
```
## Core Implementation Files
### `cmd/clifoundation/main.go`
Minimal entrypoint following [CLI patterns](go-practices-cli-config.md):
```go
package main

import (
	"fmt"
	"os"
)

func main() {
	if err := Execute(); err != nil {
		fmt.Fprintf(os.Stderr, "Error: %v\n", err)
		os.Exit(1)
	}
}
```
### `cmd/clifoundation/root.go`
Root command with proper initialization:
```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"

	"github.com/yourusername/clifoundation/cmd/clifoundation/commands"
	"github.com/yourusername/clifoundation/internal/app"
	"github.com/yourusername/clifoundation/internal/config"
	"github.com/yourusername/clifoundation/internal/pkg/logging"
)

var (
	cfgFile      string
	appContainer *app.App
)

var rootCmd = &cobra.Command{
	Use:   "clifoundation",
	Short: "A solid foundation for CLI applications",
	Long: `CLIFoundation demonstrates production-ready patterns
for building maintainable Go CLI applications.`,
	PersistentPreRunE:  initializeApp,
	PersistentPostRunE: cleanupApp,
}

func Execute() error {
	return rootCmd.Execute()
}

func init() {
	rootCmd.PersistentFlags().StringVar(&cfgFile, "config", "",
		"config file (default: $HOME/.clifoundation/config.yaml)")

	// Add subcommands
	rootCmd.AddCommand(
		commands.NewRunCommand(&appContainer),
		commands.NewVersionCommand(),
	)
}

func initializeApp(cmd *cobra.Command, args []string) error {
	// Load configuration
	cfg, err := config.Load(cfgFile)
	if err != nil {
		return fmt.Errorf("load config: %w", err)
	}

	// Initialize structured logging
	logger := logging.New(cfg.Log)

	// Create application container with all dependencies
	appContainer, err = app.New(cfg, logger)
	if err != nil {
		return fmt.Errorf("initialize app: %w", err)
	}

	logger.Info("application initialized",
		"version", Version,
		"config", cfgFile,
	)

	return nil
}

func cleanupApp(cmd *cobra.Command, args []string) error {
	if appContainer != nil {
		return appContainer.Shutdown(cmd.Context())
	}
	return nil
}
```
### `internal/app/app.go`
The DI container following [service architecture patterns](go-practices-service-architecture.md):
```go
package app

import (
	"context"
	"fmt"
	"log/slog"

	"github.com/yourusername/clifoundation/internal/config"
	"github.com/yourusername/clifoundation/internal/pipeline"
	"github.com/yourusername/clifoundation/internal/service"
	"github.com/yourusername/clifoundation/internal/storage/sqlite"
)

// App is the main application container holding all services
type App struct {
	Config *config.Config
	Logger *slog.Logger

	// Services
	PromptService service.PromptService
	HealthService service.HealthService

	// Pipeline
	Pipeline *pipeline.Pipeline

	// Cleanup functions
	cleanups []func() error
}

// New creates a fully wired application
func New(cfg *config.Config, logger *slog.Logger) (*App, error) {
	app := &App{
		Config: cfg,
		Logger: logger,
	}

	// Initialize database
	db, err := sqlite.New(cfg.Database)
	if err != nil {
		return nil, fmt.Errorf("init database: %w", err)
	}
	app.cleanups = append(app.cleanups, db.Close)

	// Run migrations
	if err := sqlite.Migrate(db); err != nil {
		return nil, fmt.Errorf("run migrations: %w", err)
	}

	// Create repositories
	conversationRepo := sqlite.NewConversationRepository(db, logger)

	// Create pipeline stages
	stages := []pipeline.Stage{
		pipeline.NewValidator(logger),
		pipeline.NewProcessor(logger),
		pipeline.NewExecutor(logger),
	}

	// Create pipeline
	app.Pipeline = pipeline.New(stages, logger)

	// Create services
	app.PromptService = service.NewPromptService(
		conversationRepo,
		app.Pipeline,
		logger,
	)

	app.HealthService = service.NewHealthService(
		db,
		logger,
	)

	return app, nil
}

// Shutdown gracefully shuts down the application
func (a *App) Shutdown(ctx context.Context) error {
	a.Logger.Info("shutting down application")

	// Run cleanup functions in reverse order
	for i := len(a.cleanups) - 1; i >= 0; i-- {
		if err := a.cleanups[i](); err != nil {
			a.Logger.Error("cleanup failed",
				"error", err,
				"index", i,
			)
		}
	}

	return nil
}
```
### `internal/service/interfaces.go`
Critical interface definitions following [interface design principles](go-practices-service-architecture.md#interface-design-principles):
```go
package service

import (
	"context"

	"github.com/yourusername/clifoundation/internal/domain"
)

// ConversationRepository defines data access for conversations.
// This interface is defined by the service layer (the consumer).
type ConversationRepository interface {
	Create(ctx context.Context, conv *domain.Conversation) error
	GetByID(ctx context.Context, id string) (*domain.Conversation, error)
	List(ctx context.Context, limit, offset int) ([]*domain.Conversation, error)
	Update(ctx context.Context, conv *domain.Conversation) error
	Delete(ctx context.Context, id string) error
}

// Pipeline processes prompts through stages
type Pipeline interface {
	Execute(ctx context.Context, prompt *domain.Prompt) (*domain.Result, error)
}

// PromptService handles prompt processing business logic
type PromptService interface {
	Process(ctx context.Context, input string) (*domain.Result, error)
	GetHistory(ctx context.Context) ([]*domain.Conversation, error)
}

// HealthService checks system health
type HealthService interface {
	Check(ctx context.Context) (*domain.HealthStatus, error)
}

// Backend represents an AI provider
type Backend interface {
	Generate(ctx context.Context, prompt string) (string, error)
	StreamGenerate(ctx context.Context, prompt string) (<-chan string, <-chan error)
	HealthCheck(ctx context.Context) error
}
```
### `internal/config/config.go`
Configuration following [CLI config patterns](go-practices-cli-config.md):
```go
package config

import (
	"fmt"
	"time"
)

// Config holds all application configuration
type Config struct {
	// Application settings
	App AppConfig `mapstructure:"app"`

	// Database configuration
	Database DatabaseConfig `mapstructure:"database"`

	// Logging configuration
	Log LogConfig `mapstructure:"log"`

	// Pipeline settings
	Pipeline PipelineConfig `mapstructure:"pipeline"`
}

type AppConfig struct {
	Name        string        `mapstructure:"name"`
	Environment string        `mapstructure:"environment"`
	Timeout     time.Duration `mapstructure:"timeout"`
}

type DatabaseConfig struct {
	Path string `mapstructure:"path"`
}

type LogConfig struct {
	Level  string `mapstructure:"level"`
	Format string `mapstructure:"format"`
}

type PipelineConfig struct {
	MaxWorkers int           `mapstructure:"max_workers"`
	BufferSize int           `mapstructure:"buffer_size"`
	Timeout    time.Duration `mapstructure:"timeout"`
}

// Validate ensures configuration is valid and applies defaults
func (c *Config) Validate() error {
	if c.App.Name == "" {
		return fmt.Errorf("app.name is required")
	}

	if c.Pipeline.MaxWorkers < 1 {
		c.Pipeline.MaxWorkers = 10
	}

	if c.Pipeline.BufferSize < 1 {
		c.Pipeline.BufferSize = 100
	}

	return nil
}
```
### `internal/domain/errors.go`
Domain errors following [error handling patterns](go-practices-error-logging.md):
```go
package domain

import (
	"errors"
	"fmt"
)

// Sentinel errors for the domain
var (
	// ErrNotFound indicates a requested resource doesn't exist
	ErrNotFound = errors.New("not found")

	// ErrInvalidInput indicates invalid user input
	ErrInvalidInput = errors.New("invalid input")

	// ErrQuotaExceeded indicates usage limits exceeded
	ErrQuotaExceeded = errors.New("quota exceeded")
)

// ValidationError represents input validation failures
type ValidationError struct {
	Field   string
	Message string
}

func (e *ValidationError) Error() string {
	return fmt.Sprintf("validation failed for %s: %s", e.Field, e.Message)
}

// ProcessingError represents failures during processing
type ProcessingError struct {
	Stage   string
	Message string
	Cause   error
}

func (e *ProcessingError) Error() string {
	if e.Cause != nil {
		return fmt.Sprintf("processing failed at %s: %s: %v", e.Stage, e.Message, e.Cause)
	}
	return fmt.Sprintf("processing failed at %s: %s", e.Stage, e.Message)
}

func (e *ProcessingError) Unwrap() error {
	return e.Cause
}
```
### `internal/pipeline/stage.go`
Pipeline pattern implementation:
```go
package pipeline

import (
	"context"
	"log/slog"

	"github.com/yourusername/clifoundation/internal/domain"
)

// Stage represents a single processing stage
type Stage interface {
	Name() string
	Process(ctx context.Context, data *domain.Prompt) error
}

// Pipeline orchestrates multiple stages
type Pipeline struct {
	stages []Stage
	logger *slog.Logger
}

// New creates a new pipeline
func New(stages []Stage, logger *slog.Logger) *Pipeline {
	return &Pipeline{
		stages: stages,
		logger: logger,
	}
}

// Execute runs all stages in sequence
func (p *Pipeline) Execute(ctx context.Context, prompt *domain.Prompt) (*domain.Result, error) {
	p.logger.Debug("starting pipeline execution",
		"prompt_id", prompt.ID,
		"stages", len(p.stages),
	)

	for _, stage := range p.stages {
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		default:
		}

		p.logger.Debug("executing stage",
			"stage", stage.Name(),
			"prompt_id", prompt.ID,
		)

		if err := stage.Process(ctx, prompt); err != nil {
			return nil, &domain.ProcessingError{
				Stage:   stage.Name(),
				Message: "stage execution failed",
				Cause:   err,
			}
		}
	}

	return &domain.Result{
		PromptID: prompt.ID,
		Output:   prompt.ProcessedContent,
	}, nil
}
```
### `internal/pipeline/file_includer.go`
Worker pool pattern from [concurrency guide](go-practices-concurrency.md#errgroup-pattern):
```go
package pipeline

import (
	"context"
	"fmt"
	"log/slog"
	"os"
	"path/filepath"
	"strings"

	"golang.org/x/sync/errgroup"

	"github.com/yourusername/clifoundation/internal/domain"
)

// FileIncluder processes file inclusion directives in parallel
type FileIncluder struct {
	logger     *slog.Logger
	maxWorkers int
}

// NewFileIncluder creates a new file inclusion stage
func NewFileIncluder(logger *slog.Logger, maxWorkers int) *FileIncluder {
	if maxWorkers < 1 {
		maxWorkers = 5
	}
	return &FileIncluder{
		logger:     logger,
		maxWorkers: maxWorkers,
	}
}

func (f *FileIncluder) Name() string {
	return "file_includer"
}

// Process finds and includes files referenced in the prompt
func (f *FileIncluder) Process(ctx context.Context, prompt *domain.Prompt) error {
	// Find all file references (e.g., @include:filepath)
	includes := f.findIncludes(prompt.Content)
	if len(includes) == 0 {
		return nil
	}

	f.logger.Debug("processing file includes",
		"count", len(includes),
		"max_workers", f.maxWorkers,
	)

	// Process files in parallel using errgroup
	g, ctx := errgroup.WithContext(ctx)

	// Limit concurrency
	sem := make(chan struct{}, f.maxWorkers)

	// Results channel
	type result struct {
		placeholder string
		content     string
	}
	results := make(chan result, len(includes))

	// Process each file
	for _, include := range includes {
		include := include // Capture loop variable

		g.Go(func() error {
			// Acquire semaphore
			select {
			case sem <- struct{}{}:
				defer func() { <-sem }()
			case <-ctx.Done():
				return ctx.Err()
			}

			content, err := f.readFile(ctx, include.Path)
			if err != nil {
				return fmt.Errorf("read %s: %w", include.Path, err)
			}

			results <- result{
				placeholder: include.Placeholder,
				content:     content,
			}
			return nil
		})
	}

	// Wait for all reads to complete
	if err := g.Wait(); err != nil {
		return err
	}
	close(results)

	// Apply all replacements
	content := prompt.Content
	for res := range results {
		content = strings.ReplaceAll(content, res.placeholder, res.content)
	}

	prompt.ProcessedContent = content
	return nil
}

type includeRef struct {
	Placeholder string
	Path        string
}

func (f *FileIncluder) findIncludes(content string) []includeRef {
	// Simple pattern matching - in production, use a regex
	var includes []includeRef

	lines := strings.Split(content, "\n")
	for _, line := range lines {
		if strings.HasPrefix(line, "@include:") {
			path := strings.TrimPrefix(line, "@include:")
			includes = append(includes, includeRef{
				Placeholder: line,
				Path:        strings.TrimSpace(path),
			})
		}
	}

	return includes
}

func (f *FileIncluder) readFile(ctx context.Context, path string) (string, error) {
	// Validate path
	path = filepath.Clean(path)
	if filepath.IsAbs(path) {
		return "", fmt.Errorf("absolute paths not allowed")
	}

	// Read with size limit
	const maxSize = 10 * 1024 * 1024 // 10MB

	info, err := os.Stat(path)
	if err != nil {
		return "", err
	}

	if info.Size() > maxSize {
		return "", fmt.Errorf("file too large: %d bytes", info.Size())
	}

	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}

	return string(data), nil
}
```
### `internal/backends/factory.go`
Factory pattern with singleflight from [patterns guide](go-practices-patterns.md#factory-pattern):
```go
package backends

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"

	"github.com/yourusername/clifoundation/internal/domain"
	"github.com/yourusername/clifoundation/internal/service"
)

// Factory creates backend instances
type Factory struct {
	mu       sync.RWMutex
	backends map[string]service.Backend
	group    singleflight.Group
	creators map[string]Creator
}

// Creator is a constructor function for a specific backend type
type Creator func(config domain.BackendConfig) (service.Backend, error)

// NewFactory creates a backend factory
func NewFactory() *Factory {
	f := &Factory{
		backends: make(map[string]service.Backend),
		creators: make(map[string]Creator),
	}

	// Register backend creators
	f.Register("openai", NewOpenAIBackend)
	f.Register("anthropic", NewAnthropicBackend)
	f.Register("mock", NewMockBackend)

	return f
}

// Register adds a new backend creator
func (f *Factory) Register(name string, creator Creator) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.creators[name] = creator
}

// GetBackend returns a backend instance, creating one if necessary
func (f *Factory) GetBackend(ctx context.Context, config domain.BackendConfig) (service.Backend, error) {
	key := f.backendKey(config)

	// Check cache first
	f.mu.RLock()
	if backend, exists := f.backends[key]; exists {
		f.mu.RUnlock()

		// Verify it's still healthy
		if err := backend.HealthCheck(ctx); err == nil {
			return backend, nil
		}
		// Unhealthy - will recreate below
	} else {
		f.mu.RUnlock()
	}

	// Use singleflight to prevent multiple concurrent creates
	v, err, _ := f.group.Do(key, func() (interface{}, error) {
		return f.createBackend(ctx, config)
	})

	if err != nil {
		return nil, err
	}

	return v.(service.Backend), nil
}

func (f *Factory) createBackend(ctx context.Context, config domain.BackendConfig) (service.Backend, error) {
	creator, exists := f.creators[config.Type]
	if !exists {
		return nil, fmt.Errorf("unknown backend type: %s", config.Type)
	}

	backend, err := creator(config)
	if err != nil {
		return nil, fmt.Errorf("create %s backend: %w", config.Type, err)
	}

	// Verify it works
	if err := backend.HealthCheck(ctx); err != nil {
		return nil, fmt.Errorf("backend health check failed: %w", err)
	}

	// Cache for reuse
	f.mu.Lock()
	f.backends[f.backendKey(config)] = backend
	f.mu.Unlock()

	return backend, nil
}

func (f *Factory) backendKey(config domain.BackendConfig) string {
	return fmt.Sprintf("%s:%s", config.Type, config.Name)
}

// Shutdown closes all backends
func (f *Factory) Shutdown() error {
	f.mu.Lock()
	defer f.mu.Unlock()

	var errs []error
	for key, backend := range f.backends {
		if closer, ok := backend.(interface{ Close() error }); ok {
			if err := closer.Close(); err != nil {
				errs = append(errs, fmt.Errorf("close %s: %w", key, err))
			}
		}
	}

	if len(errs) > 0 {
		return fmt.Errorf("shutdown errors: %v", errs)
	}

	return nil
}
```
### `internal/storage/sqlite/repository.go`
Repository pattern following [database patterns](go-practices-database.md#repository-pattern):
```go
package sqlite

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"log/slog"

	"github.com/yourusername/clifoundation/internal/domain"
	"github.com/yourusername/clifoundation/internal/service"
)

// Ensure we implement the interface
var _ service.ConversationRepository = (*ConversationRepository)(nil)

// SQL queries as constants
const (
	createConversation = `
		INSERT INTO conversations (id, prompt, response, created_at)
		VALUES (?, ?, ?, ?)`

	getConversationByID = `
		SELECT id, prompt, response, created_at, updated_at
		FROM conversations
		WHERE id = ?`

	listConversations = `
		SELECT id, prompt, response, created_at, updated_at
		FROM conversations
		ORDER BY created_at DESC
		LIMIT ? OFFSET ?`
)

// ConversationRepository implements conversation storage
type ConversationRepository struct {
	db     *sql.DB
	logger *slog.Logger
}

// NewConversationRepository creates a new repository
func NewConversationRepository(db *sql.DB, logger *slog.Logger) *ConversationRepository {
	return &ConversationRepository{
		db:     db,
		logger: logger,
	}
}

func (r *ConversationRepository) Create(ctx context.Context, conv *domain.Conversation) error {
	_, err := r.db.ExecContext(ctx, createConversation,
		conv.ID, conv.Prompt, conv.Response, conv.CreatedAt,
	)

	if err != nil {
		r.logger.Error("failed to create conversation",
			"error", err,
			"id", conv.ID,
		)
		return fmt.Errorf("create conversation: %w", err)
	}

	return nil
}

func (r *ConversationRepository) GetByID(ctx context.Context, id string) (*domain.Conversation, error) {
	var conv domain.Conversation

	err := r.db.QueryRowContext(ctx, getConversationByID, id).Scan(
		&conv.ID, &conv.Prompt, &conv.Response,
		&conv.CreatedAt, &conv.UpdatedAt,
	)

	if errors.Is(err, sql.ErrNoRows) {
		return nil, domain.ErrNotFound
	}

	if err != nil {
		r.logger.Error("failed to get conversation",
			"error", err,
			"id", id,
		)
		return nil, fmt.Errorf("get conversation: %w", err)
	}

	return &conv, nil
}

func (r *ConversationRepository) List(ctx context.Context, limit, offset int) ([]*domain.Conversation, error) {
	rows, err := r.db.QueryContext(ctx, listConversations, limit, offset)
	if err != nil {
		return nil, fmt.Errorf("query conversations: %w", err)
	}
	defer rows.Close()

	var conversations []*domain.Conversation
	for rows.Next() {
		var conv domain.Conversation
		err := rows.Scan(
			&conv.ID, &conv.Prompt, &conv.Response,
			&conv.CreatedAt, &conv.UpdatedAt,
		)
		if err != nil {
			return nil, fmt.Errorf("scan conversation: %w", err)
		}
		conversations = append(conversations, &conv)
	}

	return conversations, rows.Err()
}
```
### `Makefile`
Build automation following [testing patterns](go-practices-testing.md):
```makefile
.PHONY: all build test lint clean

# Variables
BINARY_NAME=clifoundation
BINARY_PATH=bin/$(BINARY_NAME)
GO_FILES=$(shell find . -name '*.go' -type f)

all: lint test build

build:
	go build -o $(BINARY_PATH) ./cmd/$(BINARY_NAME)

test:
	go test -race -v ./...

test-integration:
	go test -race -v -tags=integration ./test/integration/...

lint:
	golangci-lint run

clean:
	rm -rf bin/
	go clean -testcache

# Development helpers
dev-setup:
	go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
	go mod download

run: build
	./$(BINARY_PATH)

# Database migrations
migrate-up:
	migrate -path internal/storage/migrations -database sqlite3://clifoundation.db up

migrate-down:
	migrate -path internal/storage/migrations -database sqlite3://clifoundation.db down
```
### `.golangci.yml`
Linter configuration enforcing best practices:
```yaml
run:
  timeout: 5m

linters:
  enable:
    - gofmt
    - goimports
    - govet
    - errcheck
    - staticcheck
    - unused
    - gosimple
    - ineffassign
    - typecheck
    - gosec
    - asciicheck
    - bodyclose
    - durationcheck
    - errorlint
    - exhaustive
    - exportloopref
    - nilerr
    - rowserrcheck
    - sqlclosecheck
    - tparallel
    - unconvert
    - unparam
    - wastedassign

linters-settings:
  errorlint:
    # Enforce errors.Is/As and no fmt.Errorf for wrapping
    errorf: true

issues:
  exclude-rules:
    - path: _test\.go
      linters:
        - errcheck
        - gosec

# Custom rules to enforce patterns from this guide
custom:
  rules:
    - name: no-fmt-errorf
      pattern: 'fmt\.Errorf'
      message: "Use errors.New or errors.Wrap from pkg/errors"
    - name: no-global-vars
      pattern: '^var\s+\w+\s+'
      message: "Avoid global variables, use dependency injection"
      exclude:
        - _test.go
        - cmd/
```
### `test/integration/pipeline_test.go`
Integration test example with table-driven tests:
```go
//go:build integration

package integration

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/yourusername/clifoundation/internal/app"
	"github.com/yourusername/clifoundation/internal/config"
	"github.com/yourusername/clifoundation/internal/domain"
	"github.com/yourusername/clifoundation/internal/pkg/logging"
)

func TestPipelineIntegration(t *testing.T) {
	// Setup test app
	cfg := &config.Config{
		Database: config.DatabaseConfig{
			Path: filepath.Join(t.TempDir(), "test.db"),
		},
		Pipeline: config.PipelineConfig{
			MaxWorkers: 3,
			Timeout:    5 * time.Second,
		},
	}

	logger := logging.New(config.LogConfig{Level: "debug"})

	testApp, err := app.New(cfg, logger)
	require.NoError(t, err)
	defer testApp.Shutdown(context.Background())

	tests := []struct {
		name      string
		input     string
		setup     func(t *testing.T)
		wantErr   bool
		checkFunc func(t *testing.T, result *domain.Result)
	}{
		{
			name:  "simple prompt without includes",
			input: "Hello, world!",
			setup: func(t *testing.T) {},
			checkFunc: func(t *testing.T, result *domain.Result) {
				assert.Equal(t, "Hello, world!", result.Output)
			},
		},
		{
			name:  "prompt with file include",
			input: "Start\n@include:testdata/sample.txt\nEnd",
			setup: func(t *testing.T) {
				// Create test file
				err := os.WriteFile("testdata/sample.txt", []byte("included content"), 0644)
				require.NoError(t, err)
			},
			checkFunc: func(t *testing.T, result *domain.Result) {
				assert.Contains(t, result.Output, "included content")
				assert.NotContains(t, result.Output, "@include:")
			},
		},
		{
			name:    "invalid file include",
			input:   "@include:/etc/passwd",
			wantErr: true,
		},
		{
			name:  "multiple file includes",
			input: "@include:file1.txt\n@include:file2.txt",
			setup: func(t *testing.T) {
				os.WriteFile("file1.txt", []byte("content1"), 0644)
				os.WriteFile("file2.txt", []byte("content2"), 0644)
			},
			checkFunc: func(t *testing.T, result *domain.Result) {
				assert.Contains(t, result.Output, "content1")
				assert.Contains(t, result.Output, "content2")
			},
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Setup
			if tt.setup != nil {
				tt.setup(t)
			}

			// Execute
			ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
			defer cancel()

			result, err := testApp.PromptService.Process(ctx, tt.input)

			// Check error
			if tt.wantErr {
				assert.Error(t, err)
				return
			}
			require.NoError(t, err)

			// Check result
			if tt.checkFunc != nil {
				tt.checkFunc(t, result)
			}

			// Verify it was saved
			history, err := testApp.PromptService.GetHistory(ctx)
			require.NoError(t, err)
			assert.NotEmpty(t, history)
		})
	}
}

// TestConcurrentPipeline verifies thread safety
func TestConcurrentPipeline(t *testing.T) {
	cfg := &config.Config{
		Database: config.DatabaseConfig{
			Path: ":memory:",
		},
		Pipeline: config.PipelineConfig{
			MaxWorkers: 10,
		},
	}

	logger := logging.New(config.LogConfig{Level: "error"})
	testApp, err := app.New(cfg, logger)
	require.NoError(t, err)
	defer testApp.Shutdown(context.Background())

	// Run many concurrent requests
	const numRequests = 100
	ctx := context.Background()

	errCh := make(chan error, numRequests)
	for i := 0; i < numRequests; i++ {
		go func(n int) {
			_, err := testApp.PromptService.Process(ctx, fmt.Sprintf("Request %d", n))
			errCh <- err
		}(i)
	}

	// Collect results
	for i := 0; i < numRequests; i++ {
		err := <-errCh
		assert.NoError(t, err)
	}

	// Verify all were processed
	history, err := testApp.PromptService.GetHistory(ctx)
	require.NoError(t, err)
	assert.Len(t, history, numRequests)
}
```
## Pattern Application Checklist
This skeleton applies every major pattern from the guide:
### Architecture Patterns
- [x] **Service Layer Architecture** - Clean separation between handlers, services, and repositories
- [x] **Dependency Injection** - Container pattern in `app.go`, no global state
- [x] **Interface Segregation** - Small, focused interfaces defined by consumers
- [x] **Repository Pattern** - Database access abstracted behind interfaces
### Error Handling
- [x] **Domain Errors** - Typed errors in `domain/errors.go`
- [x] **Error Wrapping** - Using `pkg/errors` throughout
- [x] **Sentinel Errors** - For common cases like `ErrNotFound`
### Concurrency Patterns
- [x] **Context Propagation** - All operations accept context
- [x] **Graceful Shutdown** - Cleanup functions in app container
- [x] **Worker Pools** - Ready to implement in pipeline stages
### Database Patterns
- [x] **Migration Management** - Using golang-migrate
- [x] **Prepared Statements** - SQL queries as constants
- [x] **Connection Management** - Proper connection lifecycle
### CLI Patterns
- [x] **Cobra Command Structure** - Hierarchical commands
- [x] **Configuration Loading** - Viper with validation
- [x] **Structured Logging** - slog with proper levels
### Testing Patterns
- [x] **Testable Design** - Everything mockable via interfaces
- [x] **Integration Tests** - Separate test directory
- [x] **Linting** - Enforces patterns automatically
## Using as a Starter
1. **Clone and Rename**
```bash
git clone <repo> myproject
cd myproject
go mod edit -module github.com/myorg/myproject
```
2. **Update Imports**
```bash
find . -name "*.go" -exec sed -i '' 's|clifoundation|myproject|g' {} +
```
3. **Customize Domain**
- Edit `internal/domain/` for your entities
- Update `internal/service/interfaces.go` for your needs
4. **Add Your Logic**
- Implement services in `internal/service/`
- Add pipeline stages in `internal/pipeline/`
- Create commands in `cmd/clifoundation/commands/`
5. **Configure Linting**
- Adjust `.golangci.yml` for your standards
- Add custom rules as needed
## Complete Vertical Slice Example
Here's how a complete feature flows through all layers:
### Command Implementation
```go
// cmd/clifoundation/commands/process.go
package commands

import (
	"fmt"

	"github.com/spf13/cobra"

	"github.com/yourusername/clifoundation/internal/app"
)

func NewProcessCommand(appPtr *app.App) *cobra.Command {
	return &cobra.Command{
		Use:   "process [prompt]",
		Short: "Process a prompt through the pipeline",
		Args:  cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			ctx := cmd.Context()
			prompt := args[0]

			// Guard against an uninitialized application
			if appPtr == nil {
				return fmt.Errorf("application not initialized")
			}

			// Process through service
			result, err := appPtr.PromptService.Process(ctx, prompt)
			if err != nil {
				// Error is already logged by the service
				return err
			}

			// Display result
			fmt.Printf("Result: %s\n", result.Output)
			return nil
		},
	}
}
```
### Service Implementation
```go
// internal/service/prompt_service.go
package service

import (
	"context"
	"log/slog"

	"github.com/google/uuid"

	"github.com/yourusername/clifoundation/internal/domain"
)

type promptService struct {
	repo     ConversationRepository
	pipeline Pipeline
	logger   *slog.Logger
}

func NewPromptService(repo ConversationRepository, pipeline Pipeline, logger *slog.Logger) PromptService {
	return &promptService{
		repo:     repo,
		pipeline: pipeline,
		logger:   logger,
	}
}

func (s *promptService) Process(ctx context.Context, input string) (*domain.Result, error) {
	// Create prompt
	prompt := &domain.Prompt{
		ID:      uuid.New().String(),
		Content: input,
	}

	// Process through pipeline
	result, err := s.pipeline.Execute(ctx, prompt)
	if err != nil {
		s.logger.Error("pipeline execution failed",
			"error", err,
			"prompt_id", prompt.ID,
		)
		return nil, err
	}

	// Save to history
	conv := &domain.Conversation{
		ID:       prompt.ID,
		Prompt:   input,
		Response: result.Output,
	}

	if err := s.repo.Create(ctx, conv); err != nil {
		// Log but don't fail - history is non-critical
		s.logger.Warn("failed to save conversation",
			"error", err,
			"id", conv.ID,
		)
	}

	return result, nil
}
```
## Production Readiness Checklist
### What This Skeleton Provides ✅
- [x] Clean architecture with proper boundaries
- [x] Dependency injection without globals
- [x] Basic error handling patterns
- [x] Structured logging setup
- [x] Database repository pattern
- [x] Pipeline processing structure
- [x] Worker pool example (file includer)
- [x] Factory pattern with caching
- [x] Integration test examples
- [x] Linter configuration
### What You Must Add for Production ❌
- [ ] **Comprehensive Error Handling**
- Retry logic with backoff
- Circuit breakers for external services
- Graceful degradation strategies
- [ ] **Observability**
- Prometheus metrics
- OpenTelemetry tracing
- Health check endpoints
- Request ID propagation
- [ ] **Security**
- Input validation and sanitization
- Rate limiting
- Authentication/authorization
- Secrets management (Vault, KMS)
- [ ] **Performance**
- Connection pooling configuration
- Caching strategies (Redis)
- Request timeouts
- Resource limits
- [ ] **Operations**
- Dockerfile and container setup
- Kubernetes manifests
- CI/CD pipeline
- Documentation
## Summary
CLIFoundation provides a **solid architectural skeleton** that demonstrates how the patterns from this guide work together. It's not a complete application, but rather a foundation that:
1. **Shows proper structure** - How to organize code following Go best practices
2. **Demonstrates patterns** - Working examples of DI, pipelines, workers, and factories
3. **Enforces standards** - Linter rules that prevent common mistakes
4. **Provides a starting point** - Clone, rename, and build your business logic
Think of it as the **frame of a house** - structurally sound and built to code, but you still need to add the walls, plumbing, and electrical to make it livable.
By starting with this foundation, you avoid architectural mistakes and can focus on implementing your specific business requirements while maintaining clean, testable code.
---
# B. Appendix B: Production Patterns
## Overview
While [CLIFoundation](appendix-clifoundation.md) shows how Go code *should* be structured, this guide captures the messy realities of production systems. These patterns come from building and maintaining real applications like the Role CLI, where perfect architecture meets real-world constraints.
**Key Principle**: Perfect is the enemy of good. Ship working code, then iterate.
## Table of Contents
1. [Context Management in Practice](#context-management-in-practice)
2. [Error Aggregation Patterns](#error-aggregation-patterns)
3. [Multi-tier Caching Strategies](#multi-tier-caching-strategies)
4. [File Processing at Scale](#file-processing-at-scale)
5. [CLI UX Patterns](#cli-ux-patterns)
6. [Performance Optimization Reality](#performance-optimization-reality)
7. [State Management for LLM Apps](#state-management-for-llm-apps)
8. [Safety Patterns from Disasters](#safety-patterns-from-disasters)
9. [Pragmatic Refactoring](#pragmatic-refactoring)
10. [Monitoring What Matters](#monitoring-what-matters)
---
## Context Management in Practice
### The Reality
In production, context management is about more than timeouts. It's about graceful degradation, user experience, and system resilience.
### Pattern: Hierarchical Timeouts with Fallbacks
```go
// From Role CLI: Configurable timeouts with defaults
type Config struct {
	Timeout        time.Duration `yaml:"timeout" default:"30s"`
	BackendTimeout time.Duration `yaml:"backend_timeout" default:"25s"`
	ChunkTimeout   time.Duration `yaml:"chunk_timeout" default:"5s"`
}

func (s *Service) Process(ctx context.Context, input string) (Result, error) {
	// User-configurable timeout
	ctx, cancel := context.WithTimeout(ctx, s.config.Timeout)
	defer cancel()

	// Try primary backend
	result, err := s.tryPrimary(ctx)
	if err == nil {
		return result, nil
	}

	// Fallback to cache on timeout
	if errors.Is(err, context.DeadlineExceeded) {
		s.logger.Warn("primary timeout, trying cache", "timeout", s.config.Timeout)
		if cached, err := s.cache.Get(ctx, input); err == nil {
			return cached, nil
		}
	}

	// Final fallback: degraded mode
	return s.degradedMode(input), nil
}
```
### Lesson: User Control
Users know their use cases better than you. Make timeouts configurable:
```yaml
# .role/config.yaml
timeout: 60s          # User has slow internet
backend_timeout: 55s
chunk_size: 32KB      # Smaller chunks for reliability
```
---
## Error Aggregation Patterns
### The Reality
Real applications process batches. Some items fail, some succeed. Users need both results and clear error reporting.
### Pattern: ErrorAggregator from Role CLI
```go
type ErrorAggregator struct {
	errors []FileError
	mu     sync.Mutex
}

type FileError struct {
	Path    string
	Line    int
	Message string
	Err     error
}

func (agg *ErrorAggregator) Add(path string, line int, err error) {
	agg.mu.Lock()
	defer agg.mu.Unlock()

	agg.errors = append(agg.errors, FileError{
		Path:    path,
		Line:    line,
		Message: err.Error(),
		Err:     err,
	})
}

func (agg *ErrorAggregator) Summary() string {
	if len(agg.errors) == 0 {
		return ""
	}

	// Group by file
	byFile := make(map[string][]FileError)
	for _, e := range agg.errors {
		byFile[e.Path] = append(byFile[e.Path], e)
	}

	var buf strings.Builder
	buf.WriteString("Errors encountered:\n")

	for file, errs := range byFile {
		buf.WriteString(fmt.Sprintf("\n%s:\n", file))
		for _, e := range errs {
			if e.Line > 0 {
				buf.WriteString(fmt.Sprintf("  Line %d: %s\n", e.Line, e.Message))
			} else {
				buf.WriteString(fmt.Sprintf("  %s\n", e.Message))
			}
		}
	}

	return buf.String()
}
```
### Pattern: Partial Success Handling
```go
type BatchResult struct {
	Successful int
	Failed     int
	Errors     *ErrorAggregator
	Results    []*ProcessedItem
}

func (s *Service) ProcessBatch(items []Item) *BatchResult {
	result := &BatchResult{
		Errors:  NewErrorAggregator(),
		Results: make([]*ProcessedItem, 0, len(items)),
	}

	for i, item := range items {
		processed, err := s.processItem(item)
		if err != nil {
			result.Failed++
			result.Errors.Add(item.Path, i+1, err)
			continue
		}

		result.Successful++
		result.Results = append(result.Results, processed)
	}

	return result
}

// Usage provides clear feedback
result := service.ProcessBatch(items)
fmt.Printf("Processed %d/%d successfully\n", result.Successful, len(items))
if result.Failed > 0 {
	fmt.Println(result.Errors.Summary())
}
```
---
## Multi-tier Caching Strategies
### The Reality
Production caching isn't just key-value storage. It's about cache layers, invalidation, and debugging.
### Pattern: Role CLI's Two-tier Cache
```go
type Cache struct {
	permanent *lru.Cache // Long-lived, user-visible
	debug     *lru.Cache // Short-lived, for debugging
	ttl       time.Duration
}

func NewCache(size int) *Cache {
	return &Cache{
		permanent: lru.New(size),
		debug:     lru.New(size * 2), // Larger for debugging
		ttl:       5 * time.Minute,
	}
}

func (c *Cache) Get(key string) (interface{}, bool) {
	// Check permanent cache first
	if val, ok := c.permanent.Get(key); ok {
		if !c.isExpired(val) {
			return val, true
		}
		c.permanent.Remove(key)
	}

	// Check debug cache (longer retention)
	if val, ok := c.debug.Get(key); ok {
		// Re-promote to permanent if still valid
		if !c.isExpired(val) {
			c.permanent.Add(key, val)
			return val, true
		}
	}

	return nil, false
}

func (c *Cache) Set(key string, value interface{}) {
	wrapped := &cacheEntry{
		value:     value,
		timestamp: time.Now(),
	}

	c.permanent.Add(key, wrapped)
	c.debug.Add(key, wrapped) // Also keep in debug cache
}
```
### Pattern: Cache Key Design
```go
// Poor: Collision-prone keys
key := fmt.Sprintf("user_%d", userID)

// Better: Namespaced, versioned keys
key := fmt.Sprintf("v2:user:%d:profile:%s", userID, hash(profileData))

// Best: Structured cache keys
type CacheKey struct {
	Type    string
	ID      string
	Version int
	Hash    string
}

func (k CacheKey) String() string {
	return fmt.Sprintf("%s:v%d:%s:%s", k.Type, k.Version, k.ID, k.Hash)
}
```
---
## File Processing at Scale
### The Reality
Large file processing requires chunking, progress reporting, and memory management. The naive approach runs out of memory or provides poor UX.
### Pattern: Smart Chunking from Role CLI
```go
// Package chunking handles UTF-8 aware file chunking
package chunking

type Chunker struct {
	ChunkSize     int
	OverlapSize   int
	PreserveWords bool
}

func (c *Chunker) ChunkFile(path string) (<-chan Chunk, error) {
	file, err := os.Open(path)
	if err != nil {
		return nil, err
	}

	chunks := make(chan Chunk)

	go func() {
		defer close(chunks)
		defer file.Close()

		reader := bufio.NewReaderSize(file, c.ChunkSize*2)
		position := 0
		chunkNum := 0

		for {
			chunk, err := c.readChunk(reader, position)
			if err == io.EOF {
				break
			}
			if err != nil {
				chunks <- Chunk{Error: err}
				return
			}

			chunk.Number = chunkNum
			chunks <- chunk

			// Overlap for context
			position += len(chunk.Content) - c.OverlapSize
			chunkNum++
		}
	}()

	return chunks, nil
}

func (c *Chunker) readChunk(reader *bufio.Reader, start int) (Chunk, error) {
	buffer := make([]byte, c.ChunkSize)
	n, err := reader.Read(buffer)
	if err != nil {
		return Chunk{}, err
	}

	content := buffer[:n]

	// Ensure we don't break UTF-8 sequences
	if c.PreserveWords {
		content = c.adjustBoundaries(content)
	}

	return Chunk{
		Start:   start,
		Content: string(content),
	}, nil
}

// Adjust chunk boundaries to not break words/UTF-8
func (c *Chunker) adjustBoundaries(data []byte) []byte {
	// Find last complete UTF-8 character
	end := len(data)
	for i := end - 1; i >= end-4 && i >= 0; i-- {
		if utf8.RuneStart(data[i]) {
			r, size := utf8.DecodeRune(data[i:])
			if r != utf8.RuneError {
				end = i + size
				break
			}
		}
	}

	// Find last word boundary
	if c.PreserveWords {
		for i := end - 1; i >= 0 && end-i < 100; i-- {
			if unicode.IsSpace(rune(data[i])) {
				end = i + 1
				break
			}
		}
	}

	return data[:end]
}
```
### Pattern: Progress Reporting
```go
type ProgressReporter struct {
	total     int64
	processed int64
	mu        sync.Mutex
	ticker    *time.Ticker
}

func (p *ProgressReporter) Start() {
	p.ticker = time.NewTicker(500 * time.Millisecond)
	go func() {
		for range p.ticker.C {
			p.report()
		}
	}()
}

func (p *ProgressReporter) report() {
	p.mu.Lock()
	percent := float64(p.processed) / float64(p.total) * 100
	p.mu.Unlock()

	// Clear line and update
	fmt.Printf("\r[%-50s] %.1f%%",
		strings.Repeat("=", int(percent/2)),
		percent)
}
```
---
## CLI UX Patterns
### The Reality
Good CLIs guide users, provide helpful errors, and respect their time.
### Pattern: Helpful Error Messages
```go
// Bad: Cryptic error
if input == "" {
	return errors.New("invalid input")
}

// Good: Actionable error
if input == "" {
	return fmt.Errorf(`no input provided

Please provide input via:
  - Pipe:     echo "text" | %s
  - File:     %s -f input.txt
  - Argument: %s "your text"

For more help: %s --help`,
		os.Args[0], os.Args[0], os.Args[0], os.Args[0])
}
```
### Pattern: Smart Defaults
```go
// Detect terminal for output formatting
if isatty.IsTerminal(os.Stdout.Fd()) {
	// Human-readable output
	output = formatTable(results)
} else {
	// Machine-readable for pipes
	output = formatJSON(results)
}

// Smart concurrency defaults
if workers == 0 {
	workers = runtime.NumCPU()
	if workers > 4 {
		workers = 4 // Reasonable default
	}
}
```
### Pattern: Quiet and Verbose Modes
```go
type Logger struct {
	quiet   bool
	verbose bool
}

func (l *Logger) Info(msg string, args ...interface{}) {
	if !l.quiet {
		fmt.Printf(msg+"\n", args...)
	}
}

func (l *Logger) Debug(msg string, args ...interface{}) {
	if l.verbose {
		fmt.Printf("[DEBUG] "+msg+"\n", args...)
	}
}

// Usage respects user preference
logger.Info("Processing %d files", len(files)) // Normal mode
logger.Debug("Chunk size: %d", chunkSize)      // Only in verbose
```
---
## Performance Optimization Reality
### The Reality
Measure first, optimize what matters. Real bottlenecks are rarely where you think.
### Case Study: Role CLI Worker Pool
```go
// Before: Sequential processing - 3.5s for 10 files
for _, file := range files {
	result, err := processFile(file)
	// ...
}

// After: Worker pool - 0.7s for 10 files (5x speedup)
type WorkerPool struct {
	workers   int
	taskQueue chan Task
	results   chan Result
	wg        sync.WaitGroup
}

func (wp *WorkerPool) Start() {
	for i := 0; i < wp.workers; i++ {
		wp.wg.Add(1)
		go wp.worker()
	}
}

func (wp *WorkerPool) worker() {
	defer wp.wg.Done()

	for task := range wp.taskQueue {
		result := task.Process()
		wp.results <- result
	}
}

// Benchmarking made the difference clear
func BenchmarkProcessing(b *testing.B) {
	files := generateTestFiles(10)

	b.Run("sequential", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			processSequential(files)
		}
	})

	b.Run("parallel", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			processParallel(files, 4)
		}
	})
}

// Results:
// BenchmarkProcessing/sequential-8    1    3542ms/op
// BenchmarkProcessing/parallel-8      5     743ms/op
```
### Lesson: Profile Before Optimizing
```go
// Add profiling support
import _ "net/http/pprof"

func main() {
	if *profile {
		go func() {
			log.Println(http.ListenAndServe("localhost:6060", nil))
		}()
	}
	// ...
}

// Profile CPU usage:
//   go tool pprof http://localhost:6060/debug/pprof/profile
//
// Profile memory:
//   go tool pprof http://localhost:6060/debug/pprof/heap
```
---
## State Management for LLM Apps
### The Reality
LLM applications have unique state management needs: context windows, conversation history, and memory optimization.
### Pattern: memory.md Files
```go
type MemoryManager struct {
	file    string
	mu      sync.RWMutex
	maxSize int
}

func (m *MemoryManager) Update(section, content string) error {
	m.mu.Lock()
	defer m.mu.Unlock()

	memory, err := m.load()
	if err != nil {
		memory = &Memory{
			Tasks:     []Task{},
			Reference: make(map[string]string),
		}
	}

	// Update section
	memory.Reference[section] = content
	memory.UpdatedAt = time.Now()

	// Prune if too large
	if m.size(memory) > m.maxSize {
		m.prune(memory)
	}

	return m.save(memory)
}

// Auto-summarize when context gets large
func (m *MemoryManager) Summarize(llm LLMClient) error {
	memory, _ := m.load()

	if m.size(memory) < m.maxSize/2 {
		return nil // No need to summarize yet
	}

	summary, err := llm.Summarize(memory.String())
	if err != nil {
		return err
	}

	// Archive old content
	archive := fmt.Sprintf("memory-archive-%s.md", time.Now().Format("20060102"))
	os.WriteFile(archive, []byte(memory.String()), 0644)

	// Replace with summary
	memory.Reference = map[string]string{
		"summary": summary,
	}

	return m.save(memory)
}
```
---
## Safety Patterns from Disasters
### The Reality
One wrong command can destroy hours of work. These patterns come from painful experience.
### The $10K Lesson: rm Wildcards
```go
// NEVER: Use wildcards with destructive operations
func cleanupFiles(pattern string) error {
	// This destroyed production data:
	// cmd := exec.Command("rm", "-f", pattern+"*")

	// ALWAYS: List first, confirm, then delete specific files
	files, err := filepath.Glob(pattern + "*")
	if err != nil {
		return err
	}

	fmt.Printf("Will delete %d files:\n", len(files))
	for _, f := range files {
		fmt.Printf("  %s\n", f)
	}

	fmt.Print("Continue? [y/N]: ")
	var response string
	fmt.Scanln(&response)

	if response != "y" {
		return errors.New("cancelled by user")
	}

	// Delete specific files, not patterns
	for _, f := range files {
		if err := os.Remove(f); err != nil {
			log.Printf("Failed to remove %s: %v", f, err)
		}
	}

	return nil
}
```
### Pattern: Backup Before Modify
```go
func modifyFile(path string, modifier func([]byte) ([]byte, error)) error {
	// Create timestamped backup
	backup := fmt.Sprintf("%s.backup.%s", path, time.Now().Format("20060102_150405"))

	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}

	// Save backup
	if err := os.WriteFile(backup, data, 0644); err != nil {
		return fmt.Errorf("backup failed: %w", err)
	}

	// Modify
	modified, err := modifier(data)
	if err != nil {
		return fmt.Errorf("modification failed: %w", err)
	}

	// Write atomically
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, modified, 0644); err != nil {
		return err
	}

	return os.Rename(tmp, path)
}
```
---
## Pragmatic Refactoring
### The Reality
Perfect rewrites rarely succeed. Targeted improvements deliver value.
### The Three-Option Approach
When facing technical debt, always consider three options:
#### Option 1: Complete Rewrite (Rarely Best)
- Months of work
- High risk
- Often abandoned
#### Option 2: Minimal Changes (Often Insufficient)
- Quick fixes
- Debt remains
- Problems resurface
#### Option 3: Targeted Refactoring (Usually Optimal)
- 2-3 day effort
- High-impact improvements
- Maintains momentum
### Case Study: Role CLI Refactoring
```go
// Original: Global state everywhere
var (
	cache  *Cache
	client *Client
	config *Config
)

// Option 3 approach: Add DI where it matters most
type App struct {
	cache  *Cache
	client *Client
	config *Config
}

// Gradual migration
func main() {
	// Phase 1: Create app container
	app := &App{
		cache:  cache, // Reuse existing globals
		client: client,
		config: config,
	}

	// Phase 2: Update high-value paths
	runCommand.app = app

	// Phase 3: Deprecate globals over time
	// (But if they work, maybe never)
}
```
### Pattern: Refactor on the Way
```go
// When adding a feature, improve what you touch
func (s *Service) AddNewFeature() error {
	// Noticed this while implementing the feature
	if s.db == nil {
		// Old: panic("db not initialized")
		// New: Proper error
		return errors.New("service not initialized: missing database")
	}

	// Small improvement while here
	defer s.logMetrics() // Added observability

	// Actual feature implementation
	// ...

	return nil
}
```
---
## Monitoring What Matters
### The Reality
You can't improve what you don't measure, but measuring everything creates noise.
### Pattern: Business Metrics Over Technical Metrics
```go
type Metrics struct {
	// What matters to users
	RequestsProcessed counter
	ProcessingTime    histogram
	CacheHitRate      gauge
	ErrorsByType      map[string]counter

	// Not just technical stats
	UserWaitTime       histogram
	PartialSuccessRate gauge
}

func (m *Metrics) RecordRequest(start time.Time, err error) {
	duration := time.Since(start)

	m.RequestsProcessed.Inc()
	m.ProcessingTime.Observe(duration.Seconds())

	// Business logic for error categorization
	if err != nil {
		switch {
		case errors.Is(err, context.DeadlineExceeded):
			m.ErrorsByType["timeout"].Inc()
		case errors.Is(err, ErrRateLimit):
			m.ErrorsByType["rate_limit"].Inc()
		default:
			m.ErrorsByType["other"].Inc()
		}
	}

	// What users experience
	if duration > 5*time.Second {
		m.UserWaitTime.Observe(duration.Seconds())
	}
}
```
### Pattern: Debug Mode for Development
```go
type DebugMode struct {
	enabled bool
	mu      sync.Mutex
	events  []DebugEvent
}

func (d *DebugMode) Log(event string, data map[string]interface{}) {
	if !d.enabled {
		return
	}

	d.mu.Lock()
	defer d.mu.Unlock()

	d.events = append(d.events, DebugEvent{
		Time:  time.Now(),
		Event: event,
		Data:  data,
	})

	// Also log immediately in debug mode
	log.Printf("[DEBUG] %s: %+v", event, data)
}

// Dump debug info on error
func (d *DebugMode) DumpOnError() {
	if !d.enabled {
		return
	}

	fmt.Println("\n=== Debug Trace ===")
	for _, e := range d.events {
		fmt.Printf("%s: %s %+v\n",
			e.Time.Format("15:04:05.000"),
			e.Event,
			e.Data)
	}
}
```
---
## Summary: Production Wisdom
### The 10 Commandments of Production Go
1. **Measure First** - Profile before optimizing
2. **Fail Gracefully** - Degraded mode beats no mode
3. **Respect User Time** - Progress feedback matters
4. **Cache Wisely** - Invalidation is the hard part
5. **Chunk Large Operations** - Memory is finite
6. **Make Timeouts Configurable** - Users know their networks
7. **Log Actionably** - Errors should guide fixes
8. **Backup Before Modifying** - Ctrl+Z doesn't work in production
9. **Refactor Gradually** - Perfect is the enemy of good
10. **Monitor What Matters** - Business metrics over technical stats
### Final Thought
The gap between clean architecture and production code isn't a failing - it's reality. Good code ships, then improves. Start with patterns from CLIFoundation, then add these production patterns as you need them.
Remember: Every "ugly" workaround in production code has a story. Sometimes that story is "it works, users are happy, and we have bigger problems to solve."
---
## 📊 Guide Statistics
- **Total Lines**: 18951
- **Total Words**: 55637
- **Main Sections**: 18
- **Build Date**: Thu Jul 31 16:05:35 EDT 2025
- **Go Version**: 1.21+ (with 1.18+ generics)
---
## 🎯 Success Metrics
A project following this guide should achieve:
- ✅ **Zero `fmt.Errorf` usage** - All typed errors
- ✅ **Zero `printf`/`println`** - All structured logging
- ✅ **80%+ test coverage** - With table-driven tests
- ✅ **Sub-second startup** - With proper DI and minimal globals
- ✅ **Clean architecture** - No circular dependencies
- ✅ **Type safety** - Leveraging Go 1.18+ generics
- ✅ **Production readiness** - Logging, metrics, graceful shutdown
---
## 🔗 Original Source Files
This complete guide was generated from the following source files:
1. `go-practices-error-logging.md` - Error handling and core principles
2. `go-practices-service-architecture.md` - Service design with generics
3. `go-practices-code-organization.md` - Project structure and organization
4. `go-practices-testing.md` - Testing strategies and quality assurance
5. `go-practices-database.md` - Database patterns and repository design
6. `go-practices-http.md` - HTTP server and client patterns
7. `go-practices-concurrency.md` - Concurrency and performance patterns
8. `go-practices-cli-config.md` - CLI design and configuration
9. `go-practices-patterns.md` - Common design patterns
10. `go-practices-migration.md` - Migration and refactoring guide
11. `appendix-clifoundation.md` - CLIFoundation starter template
12. `appendix-production-patterns.md` - Real-world production patterns

**Generated**: Thu Jul 31 16:05:35 EDT 2025
**Build Script**: `build-complete-guide.sh`
This complete guide represents the culmination of battle-tested patterns from production Go systems, enhanced with modern Go features and optimized for both human developers and AI assistants.
🚀 Ready to build production-ready Go applications!