Documentation
Overview ¶
Package datadogwriter provides a Datadog Logs API writer for the Iris logging library.
This package implements the iris.SyncWriter interface to enable high-performance log shipping to Datadog. It supports batching, retry logic, and comprehensive configuration options for production use.
Basic Usage ¶
    config := datadogwriter.Config{
        APIKey:      "your-datadog-api-key",
        Site:        "datadoghq.com", // or "datadoghq.eu"
        Service:     "my-service",
        Environment: "production",
        Version:     "1.0.0",
    }
    writer, err := datadogwriter.New(config)
    if err != nil {
        log.Fatal(err)
    }
    defer writer.Close()

    logger := iris.New(iris.WithSyncWriter(writer))
    logger.Info("Hello from Iris to Datadog!")
Configuration ¶
The Config struct provides extensive customization options:
- APIKey: Required Datadog API key for authentication
- Site: Datadog site (datadoghq.com, datadoghq.eu, etc.)
- Service, Environment, Version: Standard Datadog tags
- BatchSize: Number of logs to batch before sending (default: 1000)
- FlushInterval: Maximum time before flushing incomplete batches (default: 1s)
- Timeout: HTTP request timeout (default: 10s)
- OnError: Optional error callback function
- MaxRetries: Number of retry attempts (default: 3)
- RetryDelay: Delay between retries (default: 100ms)
Performance ¶
This writer is optimized for high-throughput logging:
- Batches multiple log entries in single HTTP requests
- Uses time-based flushing to ensure timely delivery
- Employs efficient JSON marshaling for Datadog's format
- Implements retry logic with exponential backoff
- Thread-safe for concurrent logging operations
Error Handling ¶
The writer includes comprehensive error handling:
- Configurable retry logic for transient failures
- Optional error callback for monitoring integration
- Graceful degradation on persistent failures
- Proper resource cleanup on shutdown
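The OnError contract can be sketched as follows. The failingWriter type and its send method are illustrative assumptions about the pattern: when a send ultimately fails, the entry is dropped rather than blocking the application (graceful degradation), and the optional callback surfaces the error to monitoring:

```go
package main

import (
	"errors"
	"fmt"
)

// failingWriter is a hypothetical stand-in that always fails to send,
// demonstrating the error-callback and graceful-degradation pattern.
type failingWriter struct {
	onError func(error)
}

func (w *failingWriter) send(entry string) {
	err := errors.New("datadog intake unreachable")
	if w.onError != nil {
		w.onError(err) // notify monitoring instead of returning an error
	}
	// The entry is dropped; logging must not block the application.
}

func main() {
	var seen []error
	w := &failingWriter{onError: func(err error) { seen = append(seen, err) }}
	w.send("hello")
	fmt.Println(len(seen), seen[0])
}
```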
Integration ¶
This package integrates seamlessly with the Iris ecosystem:
iris (core) → SyncWriter interface → iris-writer-datadog → Datadog Logs API
Keeping the writer in an external module means core Iris carries no Datadog dependencies, while Datadog users still get full log aggregation support.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Config ¶
type Config struct {
	// APIKey is the Datadog API key for authentication
	APIKey string
	// Site is the Datadog site (e.g., "datadoghq.com", "datadoghq.eu")
	Site string
	// Service name to tag logs with
	Service string
	// Environment to tag logs with
	Environment string
	// Version to tag logs with
	Version string
	// Source to tag logs with (e.g., "go", "application")
	Source string
	// Hostname to tag logs with
	Hostname string
	// Additional tags to attach to all logs
	Tags map[string]string
	// BatchSize is the maximum number of log entries to batch before sending
	BatchSize int
	// FlushInterval is the maximum time to wait before flushing incomplete batches
	FlushInterval time.Duration
	// Timeout for HTTP requests to Datadog
	Timeout time.Duration
	// OnError is an optional callback for handling errors
	OnError func(error)
	// MaxRetries is the number of retry attempts for failed requests
	MaxRetries int
	// RetryDelay is the delay between retry attempts
	RetryDelay time.Duration
	// EnableCompression enables gzip compression for HTTP requests to reduce bandwidth
	EnableCompression bool
}
Config holds the configuration for the Datadog writer
type LogEntry ¶
type LogEntry struct {
	Timestamp int64          `json:"timestamp"`
	Level     string         `json:"status"`
	Message   string         `json:"message"`
	Service   string         `json:"service,omitempty"`
	Source    string         `json:"ddsource,omitempty"`
	Tags      string         `json:"ddtags,omitempty"`
	Hostname  string         `json:"hostname,omitempty"`
	Env       string         `json:"env,omitempty"`
	Version   string         `json:"version,omitempty"`
	Fields    map[string]any `json:",inline"`
}
LogEntry represents a single log entry for Datadog