Documentation ¶
Overview ¶
Package iris provides a high-performance, structured logging library for Go applications.
Iris is designed for production environments where performance, security, and reliability are critical. It offers zero-allocation logging paths, automatic memory management, and comprehensive security features including secure field handling and log injection prevention.
Key Features ¶
- Smart API with zero-configuration setup and automatic optimization
- High-performance structured logging with zero-allocation fast paths
- Automatic memory management with buffer pooling and ring buffer architecture
- Comprehensive security features including field sanitization and injection prevention
- Multiple output formats: JSON, text, and console with smart formatting
- Dynamic configuration with hot-reload capabilities
- Built-in caller information and stack trace support
- Backpressure handling and automatic scaling
- OpenTelemetry integration support
- Extensive field types with type-safe APIs
Smart API - Zero Configuration ¶
The Smart API automatically detects optimal settings for your environment:
// Smart API: Everything auto-configured
logger, err := iris.New(iris.Config{})
if err != nil {
	panic(err)
}
logger.Start()
logger.Info("Hello world", iris.String("user", "alice"))
Smart features include:
- Architecture detection (SingleRing vs ThreadedRings based on CPU count)
- Capacity optimization (8KB per CPU core, bounded 8KB-64KB)
- Encoder selection (Text for development, JSON for production)
- Level detection (from environment or development mode)
- Time optimization (cached timestamps, ~121x faster than calling time.Now())
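The architecture-detection step can be sketched as a simple CPU-count heuristic. The 4-core threshold below is an assumption for illustration only, not the library's actual cutoff:

```go
package main

import (
	"fmt"
	"runtime"
)

// Architecture mirrors the two ring-buffer modes described above.
type Architecture int

const (
	SingleRing Architecture = iota
	ThreadedRings
)

// detectArchitecture picks a ring architecture from the CPU count.
// The threshold of 4 cores is an assumed value for this sketch.
func detectArchitecture(numCPU int) Architecture {
	if numCPU >= 4 {
		return ThreadedRings
	}
	return SingleRing
}

func main() {
	fmt.Println(detectArchitecture(2) == SingleRing)    // true
	fmt.Println(detectArchitecture(8) == ThreadedRings) // true
	fmt.Println("this machine has", runtime.NumCPU(), "cores")
}
```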
Quick Start ¶
Basic usage with Smart API (recommended):
logger, err := iris.New(iris.Config{})
if err != nil {
	panic(err)
}
logger.Start()
defer logger.Sync()
logger.Info("Application started", iris.String("version", "1.0.0"))
Development mode with debug logging:
logger, err := iris.New(iris.Config{}, iris.Development())
if err != nil {
	panic(err)
}
logger.Start()
logger.Debug("Debug information visible")
Configuration ¶
While Smart API handles most scenarios, you can override specific settings:
// Override only what you need; the rest is auto-detected
config := iris.Config{
	Output: myCustomWriter,  // Custom output
	Level:  iris.ErrorLevel, // Error level only
	// Everything else: auto-optimized
}
logger, err := iris.New(config)
if err != nil {
	panic(err)
}
Environment variable support:
export IRIS_LEVEL=debug # Automatically detected by Smart API
Performance Optimizations ¶
Iris includes several performance optimizations automatically enabled by Smart API:
- Time caching for high-frequency logging scenarios (121x faster than time.Now())
- Buffer pooling to minimize garbage collection
- Ring buffer architecture for lock-free writes
- Smart idle strategies for CPU optimization
- Zero-allocation fast paths for common operations
- Architecture auto-detection based on system resources
Security Features ¶
Security is built into every aspect of Iris:
- Field sanitization prevents log injection attacks
- Secret field redaction protects sensitive data
- Caller verification prevents stack manipulation
- Safe string handling prevents buffer overflows
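Log injection prevention generally means neutralizing control characters before a user-supplied value reaches the output stream, so an attacker cannot forge extra log lines. A minimal sanitizer illustrating the idea (not iris's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizer escapes control characters that would let untrusted input
// forge additional log lines or corrupt structured output.
var sanitizer = strings.NewReplacer(
	"\n", `\n`,
	"\r", `\r`,
	"\t", `\t`,
)

func sanitize(v string) string { return sanitizer.Replace(v) }

func main() {
	malicious := "alice\nFAKE-ENTRY level=ERROR msg=\"admin login\""
	fmt.Println(sanitize(malicious)) // stays on one log line
}
```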
Field Types ¶
Iris supports a comprehensive set of field types with type-safe constructors:
logger.Info("User operation",
	iris.String("user_id", "12345"),
	iris.Int64("timestamp", time.Now().Unix()),
	iris.Dur("elapsed", time.Since(start)),
	iris.NamedError("error", err),
	iris.Secret("password", password), // value is redacted in output
)
Advanced Usage ¶
For advanced scenarios, Iris provides:
- Custom encoders for specialized output formats
- Hierarchical loggers with inherited fields
- Sampling for high-volume scenarios
- Integration with monitoring systems
- Custom sink implementations
- Manual configuration overrides when needed
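Sampling for high-volume scenarios is commonly implemented as a token bucket (the package exposes a TokenBucketSampler type). A stripped-down, single-goroutine version of the idea; a real sampler would use atomics or locks:

```go
package main

import "fmt"

// tokenBucket admits at most capacity records per refill window: Allow
// consumes a token, and Refill (called periodically) tops the bucket up.
type tokenBucket struct {
	tokens   int
	capacity int
}

func (b *tokenBucket) Allow() bool {
	if b.tokens == 0 {
		return false
	}
	b.tokens--
	return true
}

func (b *tokenBucket) Refill() { b.tokens = b.capacity }

func main() {
	b := &tokenBucket{tokens: 2, capacity: 2}
	fmt.Println(b.Allow(), b.Allow(), b.Allow()) // true true false
	b.Refill()
	fmt.Println(b.Allow()) // true
}
```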
Error Handling ¶
Iris uses non-blocking error handling to maintain performance:
logger, err := iris.New(iris.Config{})
if err != nil {
	// Handle configuration errors
}
logger.Start()
if dropped := logger.Dropped(); dropped > 0 {
	// Handle dropped log entries
}
Performance Comparison ¶
Smart API delivers significant performance improvements:
- Hot Path Allocations: 1-3 allocs/op (67% reduction)
- Encoding Performance: 324-537 ns/op (40-60% improvement)
- Memory per Record: 2.5KB (75% reduction)
- Configuration: zero lines of setup versus 15-20 lines of manual configuration
Best Practices ¶
- Use Smart API for all new projects (iris.New(iris.Config{}))
- Prefer structured fields instead of formatted messages
- Use typed field constructors (String, Int64, etc.)
- Leverage environment variables for deployment configuration
- Monitor dropped log entries in high-load scenarios
- Use iris.Development() for local development
- Use iris.Secret() for sensitive data fields
For comprehensive documentation and examples, see: https://github.com/agilira/iris
Index ¶
- Constants
- Variables
- func AllLevelNames() []string
- func CIFriendlyRetryCount(normalRetries int) int
- func CIFriendlySleep(normalDuration time.Duration)
- func CIFriendlyTimeout(normalTimeout time.Duration) time.Duration
- func EstimateBinarySize(rec *Record) int
- func FreeStack(stack *Stack)
- func GetErrorCode(err error) errors.ErrorCode
- func GetUserMessage(err error) string
- func IsCIEnvironment() bool
- func IsFileSyncer(ws WriteSyncer) bool
- func IsLoggerError(err error, code errors.ErrorCode) bool
- func IsNopSyncer(ws WriteSyncer) bool
- func IsRetryableError(err error) bool
- func IsValidLevel(level Level) bool
- func NewAtomicLevelFromConfig(config *Config) *atomicLevel
- func NewLoggerError(code errors.ErrorCode, message string) *errors.Error
- func NewLoggerErrorWithField(code errors.ErrorCode, message, field, value string) *errors.Error
- func NewReaderLogger(config Config, readers []SyncReader, opts ...Option) (*readerLogger, error)
- func RecoverWithError(code errors.ErrorCode) *errors.Error
- func SafeExecute(fn func() error, operation string) error
- func SetErrorHandler(handler ErrorHandler)
- func WrapLoggerError(originalErr error, code errors.ErrorCode, message string) *errors.Error
- type Architecture
- type AtomicLevel
- type AutoScalingConfig
- type AutoScalingLogger
- func (asl *AutoScalingLogger) Close() error
- func (asl *AutoScalingLogger) Debug(msg string, fields ...Field)
- func (asl *AutoScalingLogger) Error(msg string, fields ...Field)
- func (asl *AutoScalingLogger) GetCurrentMode() AutoScalingMode
- func (asl *AutoScalingLogger) GetScalingStats() AutoScalingStats
- func (asl *AutoScalingLogger) Info(msg string, fields ...Field)
- func (asl *AutoScalingLogger) Start() error
- func (asl *AutoScalingLogger) Warn(msg string, fields ...Field)
- type AutoScalingMetrics
- type AutoScalingMode
- type AutoScalingStats
- type BinaryDecoder
- type BinaryEncoder
- type Config
- type ConsoleEncoder
- type ContextExtractor
- type ContextKey
- type ContextLogger
- func (cl *ContextLogger) Debug(msg string, fields ...Field)
- func (cl *ContextLogger) Error(msg string, fields ...Field)
- func (cl *ContextLogger) Fatal(msg string, fields ...Field)
- func (cl *ContextLogger) Info(msg string, fields ...Field)
- func (cl *ContextLogger) Warn(msg string, fields ...Field)
- func (cl *ContextLogger) With(fields ...Field) *ContextLogger
- func (cl *ContextLogger) WithAdditionalContext(ctx context.Context, extractor *ContextExtractor) *ContextLogger
- type Depth
- type DynamicConfigWatcher
- type Encoder
- type ErrorHandler
- type Field
- func Binary(k string, v []byte) Field
- func Bool(k string, v bool) Field
- func Bytes(k string, v []byte) Field
- func Dur(k string, v time.Duration) Field
- func Err(err error) Field
- func ErrorField(err error) Field
- func Errors(k string, errs []error) Field
- func Float32(k string, v float32) Field
- func Float64(k string, v float64) Field
- func Int(k string, v int) Field
- func Int16(k string, v int16) Field
- func Int32(k string, v int32) Field
- func Int64(k string, v int64) Field
- func Int8(k string, v int8) Field
- func NamedErr(k string, err error) Field
- func NamedError(k string, err error) Field
- func Object(k string, val interface{}) Field
- func Secret(k, v string) Field
- func Str(k, v string) Field
- func String(k, v string) Field
- func Stringer(k string, val interface{ ... }) Field
- func Time(k string, v time.Time) Field
- func TimeField(k string, v time.Time) Field
- func Uint(k string, v uint) Field
- func Uint16(k string, v uint16) Field
- func Uint32(k string, v uint32) Field
- func Uint64(k string, v uint64) Field
- func Uint8(k string, v uint8) Field
- func (f Field) BoolValue() bool
- func (f Field) BytesValue() []byte
- func (f Field) DurationValue() time.Duration
- func (f Field) FloatValue() float64
- func (f Field) IntValue() int64
- func (f Field) IsBool() bool
- func (f Field) IsBytes() bool
- func (f Field) IsDuration() bool
- func (f Field) IsFloat() bool
- func (f Field) IsInt() bool
- func (f Field) IsString() bool
- func (f Field) IsTime() bool
- func (f Field) IsUint() bool
- func (f Field) Key() string
- func (f Field) StringValue() string
- func (f Field) TimeValue() time.Time
- func (f Field) Type() kind
- func (f Field) UintValue() uint64
- type Hook
- type IdleStrategy
- type JSONEncoder
- type Level
- func (l Level) Enabled(min Level) bool
- func (l Level) IsDPanic() bool
- func (l Level) IsDebug() bool
- func (l Level) IsError() bool
- func (l Level) IsFatal() bool
- func (l Level) IsInfo() bool
- func (l Level) IsPanic() bool
- func (l Level) IsWarn() bool
- func (l Level) MarshalText() ([]byte, error)
- func (l Level) String() string
- func (l *Level) UnmarshalText(b []byte) error
- type LevelFlag
- type Logger
- func (l *Logger) AtomicLevel() *AtomicLevel
- func (l *Logger) Close() error
- func (l *Logger) DPanic(msg string, fields ...Field) bool
- func (l *Logger) Debug(msg string, fields ...Field) bool
- func (l *Logger) Debugf(format string, args ...any) bool
- func (l *Logger) Error(msg string, fields ...Field) bool
- func (l *Logger) Errorf(format string, args ...any) bool
- func (l *Logger) Fatal(msg string, fields ...Field)
- func (l *Logger) Info(msg string, fields ...Field) bool
- func (l *Logger) InfoFields(msg string, fields ...Field) bool
- func (l *Logger) Infof(format string, args ...any) bool
- func (l *Logger) Level() Level
- func (l *Logger) Named(name string) *Logger
- func (l *Logger) Panic(msg string, fields ...Field) bool
- func (l *Logger) SetLevel(min Level)
- func (l *Logger) Start()
- func (l *Logger) Stats() map[string]int64
- func (l *Logger) Sync() error
- func (l *Logger) Warn(msg string, fields ...Field) bool
- func (l *Logger) Warnf(format string, args ...any) bool
- func (l *Logger) With(fields ...Field) *Logger
- func (l *Logger) WithContext(ctx context.Context) *ContextLogger
- func (l *Logger) WithContextExtractor(ctx context.Context, extractor *ContextExtractor) *ContextLogger
- func (l *Logger) WithContextValue(ctx context.Context, key ContextKey, fieldName string) *ContextLogger
- func (l *Logger) WithOptions(opts ...Option) *Logger
- func (l *Logger) WithRequestID(ctx context.Context) *ContextLogger
- func (l *Logger) WithTraceID(ctx context.Context) *ContextLogger
- func (l *Logger) WithUserID(ctx context.Context) *ContextLogger
- func (l *Logger) Write(fill func(*Record)) bool
- type Option
- type ProcessorFunc
- type Record
- type Ring
- type Sampler
- type Stack
- type SyncReader
- type SyncWriter
- type TextEncoder
- type TokenBucketSampler
- type WriteSyncer
Constants ¶
const (
	// Core logging errors
	ErrCodeLoggerCreation errors.ErrorCode = "IRIS_LOGGER_CREATION"
	ErrCodeLoggerNotFound errors.ErrorCode = "IRIS_LOGGER_NOT_FOUND"
	ErrCodeLoggerDisabled errors.ErrorCode = "IRIS_LOGGER_DISABLED"
	ErrCodeLoggerClosed   errors.ErrorCode = "IRIS_LOGGER_CLOSED"

	// Configuration errors
	ErrCodeInvalidConfig errors.ErrorCode = "IRIS_INVALID_CONFIG"
	ErrCodeInvalidLevel  errors.ErrorCode = "IRIS_INVALID_LEVEL"
	ErrCodeInvalidFormat errors.ErrorCode = "IRIS_INVALID_FORMAT"
	ErrCodeInvalidOutput errors.ErrorCode = "IRIS_INVALID_OUTPUT"

	// Field and encoding errors
	ErrCodeInvalidField      errors.ErrorCode = "IRIS_INVALID_FIELD"
	ErrCodeEncodingFailed    errors.ErrorCode = "IRIS_ENCODING_FAILED"
	ErrCodeFieldTypeMismatch errors.ErrorCode = "IRIS_FIELD_TYPE_MISMATCH"
	ErrCodeBufferOverflow    errors.ErrorCode = "IRIS_BUFFER_OVERFLOW"

	// Writer and output errors
	ErrCodeWriterNotAvailable errors.ErrorCode = "IRIS_WRITER_NOT_AVAILABLE"
	ErrCodeWriteFailed        errors.ErrorCode = "IRIS_WRITE_FAILED"
	ErrCodeFlushFailed        errors.ErrorCode = "IRIS_FLUSH_FAILED"
	ErrCodeSyncFailed         errors.ErrorCode = "IRIS_SYNC_FAILED"

	// Performance and resource errors
	ErrCodeMemoryAllocation errors.ErrorCode = "IRIS_MEMORY_ALLOCATION"
	ErrCodePoolExhausted    errors.ErrorCode = "IRIS_POOL_EXHAUSTED"
	ErrCodeTimeout          errors.ErrorCode = "IRIS_TIMEOUT"
	ErrCodeResourceLimit    errors.ErrorCode = "IRIS_RESOURCE_LIMIT"

	// Ring buffer errors
	ErrCodeRingInvalidCapacity  errors.ErrorCode = "IRIS_RING_INVALID_CAPACITY"
	ErrCodeRingInvalidBatchSize errors.ErrorCode = "IRIS_RING_INVALID_BATCH_SIZE"
	ErrCodeRingMissingProcessor errors.ErrorCode = "IRIS_RING_MISSING_PROCESSOR"
	ErrCodeRingClosed           errors.ErrorCode = "IRIS_RING_CLOSED"
	ErrCodeRingBuildFailed      errors.ErrorCode = "IRIS_RING_BUILD_FAILED"

	// Hook and middleware errors
	ErrCodeHookExecution  errors.ErrorCode = "IRIS_HOOK_EXECUTION"
	ErrCodeMiddlewareChain errors.ErrorCode = "IRIS_MIDDLEWARE_CHAIN"
	ErrCodeFilterFailed   errors.ErrorCode = "IRIS_FILTER_FAILED"

	// File and rotation errors
	ErrCodeFileOpen         errors.ErrorCode = "IRIS_FILE_OPEN"
	ErrCodeFileWrite        errors.ErrorCode = "IRIS_FILE_WRITE"
	ErrCodeFileRotation     errors.ErrorCode = "IRIS_FILE_ROTATION"
	ErrCodePermissionDenied errors.ErrorCode = "IRIS_PERMISSION_DENIED"
)
LoggerError codes - specific error codes for the iris logging library
const ErrCodeLoggerExecution errors.ErrorCode = "IRIS_LOGGER_EXECUTION"
ErrCodeLoggerExecution represents the error code for logger execution failures
Variables ¶
var (
	// ErrLoggerNotStarted is returned when logging operations are attempted on a non-started logger
	ErrLoggerNotStarted = errors.New(ErrCodeLoggerNotFound, "logger not started - call Start() first")
	// ErrLoggerClosed is returned when logging operations are attempted on a closed logger
	ErrLoggerClosed = errors.New(ErrCodeLoggerClosed, "logger is closed")
	// ErrLoggerCreationFailed is returned when logger creation fails
	ErrLoggerCreationFailed = errors.New(ErrCodeLoggerCreation, "failed to create logger")
)
Logger errors
var BalancedStrategy = NewProgressiveIdleStrategy()
BalancedStrategy provides good performance for most production workloads. Uses progressive strategy that adapts to workload patterns. Equivalent to NewProgressiveIdleStrategy().
var DefaultContextExtractor = &ContextExtractor{
	Keys: map[ContextKey]string{
		RequestIDKey: "request_id",
		TraceIDKey:   "trace_id",
		SpanIDKey:    "span_id",
		UserIDKey:    "user_id",
		SessionIDKey: "session_id",
	},
	MaxDepth: 10,
}
DefaultContextExtractor provides sensible defaults for common use cases.
var EfficientStrategy = NewSleepingIdleStrategy(time.Millisecond, 0)
EfficientStrategy minimizes CPU usage for low-throughput scenarios. Uses 1ms sleep with no initial spinning. Equivalent to NewSleepingIdleStrategy(time.Millisecond, 0).
var HybridStrategy = NewSleepingIdleStrategy(time.Millisecond, 1000)
HybridStrategy provides a good compromise between latency and CPU usage. Spins briefly then sleeps for 1ms. Equivalent to NewSleepingIdleStrategy(time.Millisecond, 1000).
var SpinningStrategy = NewSpinningIdleStrategy()
SpinningStrategy provides ultra-low latency with maximum CPU usage. Equivalent to NewSpinningIdleStrategy().
Functions ¶
func AllLevelNames ¶
func AllLevelNames() []string
AllLevelNames returns a slice of all valid level names. This is useful for generating help text and validation messages.
func CIFriendlyRetryCount ¶
func CIFriendlyRetryCount(normalRetries int) int
CIFriendlyRetryCount returns an appropriate retry count for the given operation. In CI environments, retry counts are increased to account for scheduler variability.
func CIFriendlySleep ¶
func CIFriendlySleep(normalDuration time.Duration)
CIFriendlySleep sleeps for an appropriate duration. In CI environments, sleep durations are increased to allow for slower scheduling.
func CIFriendlyTimeout ¶
func CIFriendlyTimeout(normalTimeout time.Duration) time.Duration
CIFriendlyTimeout returns an appropriate timeout for the given operation. In CI environments, timeouts are increased to account for resource constraints.
func EstimateBinarySize ¶ added in v1.1.0
func EstimateBinarySize(rec *Record) int
EstimateBinarySize estimates the binary encoding size for a record.
This is useful for buffer pre-allocation and capacity planning.
Parameters:
- rec: Record to estimate size for
Returns:
- int: Estimated byte size of binary encoding
func GetErrorCode ¶
func GetErrorCode(err error) errors.ErrorCode
GetErrorCode extracts the error code from an error.
func GetUserMessage ¶
func GetUserMessage(err error) string
GetUserMessage extracts a user-friendly message from an error.
func IsCIEnvironment ¶
func IsCIEnvironment() bool
IsCIEnvironment returns true if running in a CI environment
func IsFileSyncer ¶
func IsFileSyncer(ws WriteSyncer) bool
IsFileSyncer checks if a WriteSyncer is backed by a file. This can be useful for conditional logic based on the underlying writer type, such as applying different buffering strategies.
func IsLoggerError ¶
func IsLoggerError(err error, code errors.ErrorCode) bool
IsLoggerError checks if an error is an iris logger error.
func IsNopSyncer ¶
func IsNopSyncer(ws WriteSyncer) bool
IsNopSyncer checks if a WriteSyncer uses no-op synchronization. This can help optimize write patterns when sync operations are known to be no-ops.
func IsRetryableError ¶
func IsRetryableError(err error) bool
IsRetryableError checks if an error is retryable.
func IsValidLevel ¶
func IsValidLevel(level Level) bool
IsValidLevel checks if the given level is a valid predefined level.
func NewAtomicLevelFromConfig ¶
func NewAtomicLevelFromConfig(config *Config) *atomicLevel
NewAtomicLevelFromConfig creates a new atomicLevel initialized with the config's level. This function bridges the gap between static configuration and dynamic level management.
func NewLoggerError ¶
func NewLoggerError(code errors.ErrorCode, message string) *errors.Error
NewLoggerError creates a new logger-specific error with standard context.
func NewLoggerErrorWithField ¶
func NewLoggerErrorWithField(code errors.ErrorCode, message, field, value string) *errors.Error
NewLoggerErrorWithField creates a logger error with field and value information.
func NewReaderLogger ¶ added in v1.1.0
func NewReaderLogger(config Config, readers []SyncReader, opts ...Option) (*readerLogger, error)
NewReaderLogger creates a logger that processes both direct logging calls and background readers. The underlying Logger performance is preserved while external log sources are processed asynchronously.
Parameters:
- config: Standard Iris logger configuration
- readers: External log sources to process in background
- opts: Standard Iris logger options
Returns:
- *readerLogger: Extended logger with reader support
- error: Configuration or setup error
Performance: Zero impact on direct logging, background readers operate in separate goroutines feeding into the same high-performance ring buffer.
func RecoverWithError ¶
func RecoverWithError(code errors.ErrorCode) *errors.Error
RecoverWithError recovers from a panic and converts it to a logger error.
func SafeExecute ¶
func SafeExecute(fn func() error, operation string) error
SafeExecute executes a function safely, handling any panics.
func SetErrorHandler ¶
func SetErrorHandler(handler ErrorHandler)
SetErrorHandler sets a custom error handler for the iris logging system. This allows applications to customize how logging errors are handled.
Types ¶
type Architecture ¶
type Architecture int
Architecture represents the ring buffer architecture type
const (
	// SingleRing uses a single Zephyros ring for maximum single-thread performance
	// Best for: benchmarks, single-producer scenarios, maximum single-thread throughput
	// Performance: ~25ns/op single-thread, limited concurrency scaling
	SingleRing Architecture = iota
	// ThreadedRings uses ThreadedZephyros with multiple rings for multi-producer scaling
	// Best for: production, multi-producer scenarios, high concurrency
	// Performance: ~35ns/op per thread, excellent scaling (4x+ improvement with multiple producers)
	ThreadedRings
)
func ParseArchitecture ¶
func ParseArchitecture(s string) (Architecture, error)
ParseArchitecture parses a string into an Architecture
func (Architecture) String ¶
func (a Architecture) String() string
String returns the string representation of the architecture
type AtomicLevel ¶
type AtomicLevel struct {
// contains filtered or unexported fields
}
AtomicLevel provides atomic operations on Level values. This is useful for dynamically changing log levels in concurrent environments.
func NewAtomicLevel ¶
func NewAtomicLevel(level Level) *AtomicLevel
NewAtomicLevel creates a new AtomicLevel with the given initial level.
func (*AtomicLevel) Enabled ¶
func (al *AtomicLevel) Enabled(level Level) bool
Enabled checks if the given level is enabled atomically. This is a high-performance method for checking levels in hot paths.
func (*AtomicLevel) Level ¶
func (al *AtomicLevel) Level() Level
Level returns the current level atomically.
func (*AtomicLevel) MarshalText ¶
func (al *AtomicLevel) MarshalText() ([]byte, error)
MarshalText implements encoding.TextMarshaler for AtomicLevel.
func (*AtomicLevel) SetLevel ¶
func (al *AtomicLevel) SetLevel(level Level)
SetLevel sets the level atomically.
func (*AtomicLevel) String ¶
func (al *AtomicLevel) String() string
String returns the string representation of the current level.
func (*AtomicLevel) UnmarshalText ¶
func (al *AtomicLevel) UnmarshalText(b []byte) error
UnmarshalText implements encoding.TextUnmarshaler for AtomicLevel.
type AutoScalingConfig ¶
type AutoScalingConfig struct {
	// Scaling thresholds (inspired by Lethe's shouldScaleToMPSC)
	ScaleToMPSCWriteThreshold   uint64        // Min writes/sec to consider MPSC (e.g., 1000)
	ScaleToMPSCContentionRatio  uint32        // Min contention % to scale to MPSC (e.g., 10 = 10%)
	ScaleToMPSCLatencyThreshold time.Duration // Max latency before scaling to MPSC (e.g., 1ms)
	ScaleToMPSCGoroutineCount   uint32        // Min active goroutines for MPSC (e.g., 3)

	// Scale down thresholds
	ScaleToSingleWriteThreshold  uint64        // Max writes/sec to scale back to Single (e.g., 100)
	ScaleToSingleContentionRatio uint32        // Max contention % for Single mode (e.g., 1%)
	ScaleToSingleLatencyMax      time.Duration // Max latency for Single mode (e.g., 100µs)

	// Measurement and stability
	MeasurementWindow    time.Duration // How often to check metrics (e.g., 100ms)
	ScalingCooldown      time.Duration // Min time between scale operations (e.g., 1s)
	StabilityRequirement int           // Consecutive measurements before scaling (e.g., 3)
}
AutoScalingConfig defines auto-scaling behavior
func DefaultAutoScalingConfig ¶
func DefaultAutoScalingConfig() AutoScalingConfig
DefaultAutoScalingConfig returns production-ready auto-scaling configuration
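For a custom configuration, the defaults can be overridden field by field. The values below simply echo the example values from the field comments above and are illustrative, not recommendations:

```go
cfg := iris.AutoScalingConfig{
	// Scale up to MPSC under sustained multi-producer pressure
	ScaleToMPSCWriteThreshold:   1000, // writes/sec
	ScaleToMPSCContentionRatio:  10,   // 10% contention
	ScaleToMPSCLatencyThreshold: time.Millisecond,
	ScaleToMPSCGoroutineCount:   3,

	// Scale back down when load subsides
	ScaleToSingleWriteThreshold:  100,
	ScaleToSingleContentionRatio: 1,
	ScaleToSingleLatencyMax:      100 * time.Microsecond,

	// Measure every 100ms; require 3 stable readings and a 1s cooldown
	MeasurementWindow:    100 * time.Millisecond,
	ScalingCooldown:      time.Second,
	StabilityRequirement: 3,
}
```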
type AutoScalingLogger ¶
type AutoScalingLogger struct {
// contains filtered or unexported fields
}
AutoScalingLogger implements an auto-scaling logging architecture
func NewAutoScalingLogger ¶
func NewAutoScalingLogger(cfg Config, scalingConfig AutoScalingConfig, opts ...Option) (*AutoScalingLogger, error)
NewAutoScalingLogger creates an auto-scaling logger
func (*AutoScalingLogger) Close ¶
func (asl *AutoScalingLogger) Close() error
Close gracefully shuts down auto-scaling logger
func (*AutoScalingLogger) Debug ¶
func (asl *AutoScalingLogger) Debug(msg string, fields ...Field)
Debug logs at Debug level with automatic scaling
func (*AutoScalingLogger) Error ¶
func (asl *AutoScalingLogger) Error(msg string, fields ...Field)
Error logs at Error level with automatic scaling
func (*AutoScalingLogger) GetCurrentMode ¶
func (asl *AutoScalingLogger) GetCurrentMode() AutoScalingMode
GetCurrentMode returns the current scaling mode
func (*AutoScalingLogger) GetScalingStats ¶
func (asl *AutoScalingLogger) GetScalingStats() AutoScalingStats
GetScalingStats returns auto-scaling performance statistics
func (*AutoScalingLogger) Info ¶
func (asl *AutoScalingLogger) Info(msg string, fields ...Field)
Info logs at Info level with automatic scaling
func (*AutoScalingLogger) Start ¶
func (asl *AutoScalingLogger) Start() error
Start begins auto-scaling operations
func (*AutoScalingLogger) Warn ¶
func (asl *AutoScalingLogger) Warn(msg string, fields ...Field)
Warn logs at Warn level with automatic scaling
type AutoScalingMetrics ¶
type AutoScalingMetrics struct {
// contains filtered or unexported fields
}
AutoScalingMetrics tracks performance metrics for scaling decisions
type AutoScalingMode ¶
type AutoScalingMode uint32
AutoScalingMode represents the current scaling mode
const (
	// SingleRingMode represents ultra-fast single-threaded logging (~25ns/op)
	// Best for: Low contention, single producers, benchmarks
	SingleRingMode AutoScalingMode = iota
	// MPSCMode represents multi-producer high-contention mode (~35ns/op per thread)
	// Best for: High contention, multiple goroutines, high throughput
	MPSCMode
)
func (AutoScalingMode) String ¶
func (m AutoScalingMode) String() string
type AutoScalingStats ¶
type AutoScalingStats struct {
	CurrentMode          AutoScalingMode
	TotalScaleOperations uint64
	ScaleToMPSCCount     uint64
	ScaleToSingleCount   uint64
	TotalWrites          uint64
	ContentionCount      uint64
	ActiveGoroutines     uint32
}
AutoScalingStats provides auto-scaling performance insights
type BinaryDecoder ¶ added in v1.1.0
type BinaryDecoder struct{}
BinaryDecoder provides utilities for reading binary-encoded log data.
Note: A full decoder implementation would live in a separate package or external tool for log analysis. This is a minimal helper for testing.
func (*BinaryDecoder) DecodeMagic ¶ added in v1.1.0
func (d *BinaryDecoder) DecodeMagic(data []byte) (bool, int)
DecodeMagic validates the magic header of a binary log record.
Parameters:
- data: Binary data to validate
Returns:
- bool: true if magic header is valid
- int: number of bytes consumed
func (*BinaryDecoder) ReadVarint ¶ added in v1.1.0
ReadVarint reads a variable-length unsigned integer from data.
Parameters:
- data: Binary data to read from
- offset: Starting position in data
Returns:
- uint64: Decoded integer value
- int: New offset after reading
- error: Decoding error if any
type BinaryEncoder ¶ added in v1.1.0
type BinaryEncoder struct {
	// IncludeLoggerName controls whether to include logger name in output
	IncludeLoggerName bool
	// IncludeCaller controls whether to include caller information
	IncludeCaller bool
	// IncludeStack controls whether to include stack traces
	IncludeStack bool
	// UseUnixNano uses Unix nanoseconds instead of RFC3339 for timestamps
	UseUnixNano bool
}
BinaryEncoder implements ultra-fast binary encoding for log records.
The binary format is designed for maximum performance and minimal size:
- Varint encoding for variable-length integers
- Type-prefixed fields for self-describing format
- Little-endian byte order for modern CPU efficiency
- Magic header for format validation
Performance Characteristics:
- ~20ns/op encoding time (faster than JSON)
- 30-50% smaller output than JSON
- Zero reflection overhead
- Minimal memory allocations
Use Cases:
- High-frequency trading systems
- Real-time analytics pipelines
- Log aggregation over network
- Storage-constrained environments
func NewBinaryEncoder ¶ added in v1.1.0
func NewBinaryEncoder() *BinaryEncoder
NewBinaryEncoder creates a new binary encoder with optimal defaults.
Default configuration:
- Logger name: included
- Caller info: excluded (for performance)
- Stack traces: excluded (for performance)
- Timestamps: Unix nanoseconds (for performance)
Returns:
- *BinaryEncoder: Configured binary encoder instance
func NewCompactBinaryEncoder ¶ added in v1.1.0
func NewCompactBinaryEncoder() *BinaryEncoder
NewCompactBinaryEncoder creates a binary encoder optimized for minimal size.
Compact configuration excludes all optional fields:
- Logger name: excluded
- Caller info: excluded
- Stack traces: excluded
- Timestamps: Unix nanoseconds
Use for bandwidth-constrained or storage-limited environments.
Returns:
- *BinaryEncoder: Minimal binary encoder instance
func (*BinaryEncoder) Encode ¶ added in v1.1.0
Encode writes a log record to the buffer in binary format.
The encoding process:
1. Write magic header and version
2. Encode timestamp (Unix nano or RFC3339)
3. Encode log level as single byte
4. Conditionally encode logger name, caller, stack
5. Encode message if present
6. Encode all structured fields
Parameters:
- rec: Log record to encode
- now: Timestamp for this log entry
- buf: Buffer to write encoded data to
Performance: ~20ns/op with zero allocations
type Config ¶
type Config struct {
	// Ring buffer configuration (power-of-two recommended for Capacity)
	// Capacity determines the maximum number of log entries that can be buffered
	// before blocking or dropping occurs. Larger values improve throughput but
	// increase memory usage.
	Capacity int64

	// BatchSize controls how many log entries are processed together.
	// Higher values improve throughput but may increase latency.
	// Optimal values are typically 8-64 depending on workload.
	BatchSize int64

	// Architecture determines the ring buffer architecture type
	// SingleRing: Maximum single-thread performance (~25ns/op) - best for benchmarks
	// ThreadedRings: Multi-producer scaling (~35ns/op per thread) - best for production
	// Default: SingleRing for benchmark compatibility
	Architecture Architecture

	// NumRings specifies the number of rings for ThreadedRings architecture
	// Only used when Architecture = ThreadedRings
	// Higher values provide better parallelism but use more memory
	// Default: 4 (optimal for most multi-core systems)
	NumRings int

	// BackpressurePolicy determines the behavior when the ring buffer is full
	// DropOnFull: Drops new messages for maximum performance (default)
	// BlockOnFull: Blocks caller until space is available (guaranteed delivery)
	BackpressurePolicy zephyroslite.BackpressurePolicy

	// IdleStrategy controls CPU usage when no log records are being processed
	// Different strategies provide various trade-offs between latency and CPU usage:
	//   - SpinningIdleStrategy: Ultra-low latency, ~100% CPU usage
	//   - SleepingIdleStrategy: Balanced CPU/latency, ~1-10% CPU usage
	//   - YieldingIdleStrategy: Moderate reduction, ~10-50% CPU usage
	//   - ChannelIdleStrategy: Minimal CPU usage, ~microsecond latency
	//   - ProgressiveIdleStrategy: Adaptive strategy for variable workloads (default)
	IdleStrategy zephyroslite.IdleStrategy

	// Output and formatting configuration
	// Output specifies where log entries are written. Must implement WriteSyncer
	// for proper synchronization guarantees.
	Output WriteSyncer

	// Encoder determines the output format (JSON, Console, etc.)
	// The encoder converts log records to their final byte representation
	Encoder Encoder

	// Level sets the minimum logging level. Messages below this level
	// are filtered out early for maximum performance.
	Level Level // default: Info

	// TimeFn allows custom time source for timestamps.
	// Default: time.Now for real-time logging
	// Can be overridden for testing or performance optimization
	TimeFn func() time.Time

	// Optional performance tuning
	// Sampler controls log sampling for high-volume scenarios
	// Can be nil to disable sampling
	Sampler Sampler

	// Name provides a human-readable identifier for this logger instance
	// Useful for debugging and metrics collection
	Name string
}
Config represents the core configuration for an iris logger instance. This structure centralizes all logging parameters with intelligent defaults and performance optimizations. All fields are designed for zero-copy operations and minimal memory allocation.
Performance considerations:
- Capacity should be a power-of-two for optimal ring buffer performance
- BatchSize affects throughput vs latency trade-offs
- TimeFn allows for custom time sources (useful for testing and optimization)
Thread-safety: Config structs are immutable after logger creation
func LoadConfigFromEnv ¶
LoadConfigFromEnv loads logger configuration from environment variables
func LoadConfigFromJSON ¶
LoadConfigFromJSON loads logger configuration from a JSON file
func LoadConfigMultiSource ¶
LoadConfigMultiSource loads configuration from multiple sources with precedence:
1. Environment variables (highest priority)
2. JSON file
3. Default values (lowest priority)
func TestConfig ¶ added in v1.1.0
func TestConfig() Config
TestConfig returns a basic configuration optimized for testing across platforms
This function provides a consistent base configuration that works reliably on all platforms including macOS, which has different memory characteristics.
Returns:
- Config: Platform-optimized configuration for testing
func TestConfigSmall ¶ added in v1.1.0
func TestConfigSmall() Config
TestConfigSmall returns a minimal configuration for unit tests
This configuration uses the smallest viable ring buffer size across all platforms for tests that need minimal resource usage.
Returns:
- Config: Minimal configuration for resource-constrained tests
func TestConfigWithOutput ¶ added in v1.1.0
func TestConfigWithOutput(output WriteSyncer) Config
TestConfigWithOutput returns a test configuration with specified output
Parameters:
- output: Output destination for log messages
Returns:
- Config: Platform-optimized configuration with custom output
func (*Config) Clone ¶
Clone creates a deep copy of the configuration. This is useful for creating derived configurations without affecting the original.
type ConsoleEncoder ¶
type ConsoleEncoder struct {
// TimeFormat specifies the Go time layout for timestamps.
// Default: time.RFC3339Nano for precise development timing.
// Popular alternatives: time.Kitchen, time.Stamp, custom layouts.
TimeFormat string
// LevelCasing controls the case of level text in output.
// Values: "upper" (default: INFO, ERROR) or "lower" (info, error).
// Affects readability and consistency with your preferred style.
LevelCasing string
// EnableColor enables ANSI color codes for different log levels.
// Default: false (safe for all terminals and log files).
// Enable only in interactive terminals that support colors.
EnableColor bool
}
ConsoleEncoder implements human-readable console output for development and debugging.
This encoder is optimized for interactive terminals and development workflows. It provides clean, readable output with optional color support to enhance the debugging experience.
Features:
- Configurable timestamp formatting (supports any Go time layout)
- Level text casing control (uppercase/lowercase)
- Optional ANSI color codes for different log levels
- Clean field formatting for easy visual scanning
- Terminal-friendly output without excessive escaping
Output Format:
2025-09-06T14:30:45.123456789Z INFO User action field=value
Use Cases:

- Development and debugging environments
- CLI applications requiring human-readable logs
- Interactive terminals and development tools
- Local testing and troubleshooting
func NewColorConsoleEncoder ¶
func NewColorConsoleEncoder() *ConsoleEncoder
NewColorConsoleEncoder creates a console encoder with ANSI colors enabled.
This variant is specifically designed for interactive terminals that support ANSI color codes. Colors help differentiate log levels at a glance during development and debugging.
Color scheme:

- ERROR: Red (high visibility for critical issues)
- WARN: Yellow (attention-grabbing for warnings)
- INFO: Default (normal text for regular information)
- DEBUG: Cyan (distinct but subtle for debug info)
Use only in:

- Interactive development terminals
- IDEs with color support
- Terminal applications for developers

Avoid in:

- Log files (colors become escape sequences)
- Non-interactive environments
- Systems without ANSI support
Returns:
- *ConsoleEncoder: Console encoder with colors enabled
func NewConsoleEncoder ¶
func NewConsoleEncoder() *ConsoleEncoder
NewConsoleEncoder creates a new console encoder with development-friendly defaults.
Default configuration:

- TimeFormat: time.RFC3339Nano (precise for development)
- LevelCasing: "upper" (traditional log format)
- EnableColor: false (safe for all environments)
These defaults work well in most development environments and can be safely used in both terminals and log files.
Returns:
- *ConsoleEncoder: Configured console encoder instance
type ContextExtractor ¶
type ContextExtractor struct {
// Keys maps context keys to field names in log output
Keys map[ContextKey]string
// MaxDepth limits how deep to search in context chain (default: 10)
MaxDepth int
}
ContextExtractor defines which context keys should be extracted and logged. This prevents the performance overhead of scanning all context values.
type ContextKey ¶
type ContextKey string
ContextKey represents a key type for context values that should be logged.
const (
	RequestIDKey ContextKey = "request_id"
	TraceIDKey   ContextKey = "trace_id"
	SpanIDKey    ContextKey = "span_id"
	UserIDKey    ContextKey = "user_id"
	SessionIDKey ContextKey = "session_id"
)
Common context keys for standardized logging
type ContextLogger ¶
type ContextLogger struct {
// contains filtered or unexported fields
}
ContextLogger wraps a Logger with pre-extracted context fields. This avoids context.Value() calls in the hot logging path.
func (*ContextLogger) Debug ¶
func (cl *ContextLogger) Debug(msg string, fields ...Field)
Debug logs a message at debug level with context fields
func (*ContextLogger) Error ¶
func (cl *ContextLogger) Error(msg string, fields ...Field)
Error logs a message at error level with context fields
func (*ContextLogger) Fatal ¶
func (cl *ContextLogger) Fatal(msg string, fields ...Field)
Fatal logs a message at fatal level with context fields and exits
func (*ContextLogger) Info ¶
func (cl *ContextLogger) Info(msg string, fields ...Field)
Info logs a message at info level with context fields
func (*ContextLogger) Warn ¶
func (cl *ContextLogger) Warn(msg string, fields ...Field)
Warn logs a message at warn level with context fields
func (*ContextLogger) With ¶
func (cl *ContextLogger) With(fields ...Field) *ContextLogger
With creates a new ContextLogger with additional fields. This preserves both context fields and manually added fields.
func (*ContextLogger) WithAdditionalContext ¶
func (cl *ContextLogger) WithAdditionalContext(ctx context.Context, extractor *ContextExtractor) *ContextLogger
WithAdditionalContext extracts additional context values without losing existing ones.
type DynamicConfigWatcher ¶
type DynamicConfigWatcher struct {
// contains filtered or unexported fields
}
DynamicConfigWatcher manages dynamic configuration changes using Argus. It provides real-time hot reload of Iris logger configuration with an audit trail.
func EnableDynamicLevel ¶
func EnableDynamicLevel(logger *Logger, configPath string) (*DynamicConfigWatcher, error)
EnableDynamicLevel creates and starts a config watcher for the given logger and config file. This is a convenience function that combines NewDynamicConfigWatcher and Start.
Example:
logger, err := iris.New(config)
if err != nil {
return err
}
watcher, err := iris.EnableDynamicLevel(logger, "config.json")
if err != nil {
log.Printf("Dynamic level disabled: %v", err)
} else {
defer watcher.Stop()
log.Println("✅ Dynamic level changes enabled!")
}
func NewDynamicConfigWatcher ¶
func NewDynamicConfigWatcher(configPath string, atomicLevel *AtomicLevel) (*DynamicConfigWatcher, error)
NewDynamicConfigWatcher creates a new dynamic config watcher for an iris logger. It enables runtime log level changes by watching the configuration file.
Parameters:
- configPath: Path to the JSON configuration file to watch
- atomicLevel: The atomic level instance from iris logger
Example usage:
logger, err := iris.New(config)
if err != nil {
return err
}
watcher, err := iris.NewDynamicConfigWatcher("config.json", logger.Level())
if err != nil {
return err
}
defer watcher.Stop()
if err := watcher.Start(); err != nil {
return err
}
Now when you modify config.json and change the "level" field, the logger will automatically update its level without restart!
func (*DynamicConfigWatcher) IsRunning ¶
func (w *DynamicConfigWatcher) IsRunning() bool
IsRunning returns true if the watcher is currently active
func (*DynamicConfigWatcher) Start ¶
func (w *DynamicConfigWatcher) Start() error
Start begins watching the configuration file for changes
func (*DynamicConfigWatcher) Stop ¶
func (w *DynamicConfigWatcher) Stop() error
Stop stops watching the configuration file
type ErrorHandler ¶
ErrorHandler represents a function that handles errors within the logging system
func GetErrorHandler ¶
func GetErrorHandler() ErrorHandler
GetErrorHandler returns the current error handler
type Field ¶
type Field struct {
// K is the field key/name
K string
// T indicates the type of data stored in this field
T kind
// I64 stores signed integers, bools (as 0/1), durations, and timestamps
I64 int64
// U64 stores unsigned integers
U64 uint64
// F64 stores floating-point numbers
F64 float64
// Str stores string values
Str string
// B stores byte slices
B []byte
// Obj stores arbitrary objects (errors, stringers, etc.)
Obj interface{}
}
Field represents a key-value pair with type information for structured logging. It uses a union-like approach to minimize memory allocation and maximize performance. The T field indicates which of the value fields (I64, U64, F64, Str, B, Obj) contains the actual data.
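The union layout can be illustrated with a simplified standalone sketch. The `miniField` type and its constructors are illustrative stand-ins, not the real iris.Field internals:

```go
package main

import "fmt"

type kind uint8

const (
	kindBool kind = iota + 1
	kindString
)

// miniField mirrors the union approach described above: one key, a type tag,
// and typed value slots, so common types avoid interface{} allocation.
type miniField struct {
	K   string
	T   kind
	I64 int64
	Str string
}

func miniBool(key string, v bool) miniField {
	f := miniField{K: key, T: kindBool}
	if v {
		f.I64 = 1 // bools are stored as 0/1 in the int64 slot
	}
	return f
}

func miniStr(key, v string) miniField {
	return miniField{K: key, T: kindString, Str: v}
}

func main() {
	f := miniBool("active", true)
	fmt.Println(f.K, f.T == kindBool, f.I64) // active true 1
	fmt.Println(miniStr("user", "alice").Str) // alice
}
```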
func Bool ¶
Bool creates a boolean field. Internally stored as int64 (1 for true, 0 for false) for efficiency.
func Bytes ¶
Bytes creates a byte slice field. Useful for binary data, encoded strings, or raw bytes.
func Dur ¶
Dur creates a duration field from time.Duration. Stored as int64 nanoseconds for precision and efficiency.
func Err ¶
Err creates an error field with key "error". If err is nil, returns a field with empty string (compatible but not elided).
func ErrorField ¶
ErrorField creates an error field for logging errors. Equivalent to NamedErr("error", err) but uses the proper error type for potential optimization.
func Float64 ¶
Float64 creates a 64-bit floating-point field. Suitable for decimal numbers and scientific notation.
func Int ¶
Int creates a signed integer field from an int value. The int is converted to int64 for consistent storage.
func Int64 ¶
Int64 creates a signed 64-bit integer field. Use this for large integers or when you specifically need int64.
func NamedErr ¶
NamedErr creates an error field with a custom key. If err is nil, returns a field with empty string (compatible but not elided).
func NamedError ¶
NamedError creates an error field with a custom key using proper error type.
func Secret ¶
Secret creates a field for sensitive data that will be automatically redacted. The actual value is stored but will appear as "[REDACTED]" in log output. Use this for passwords, API keys, tokens, personal data, or any sensitive information.
Example:
logger.Info("User login", iris.Secret("password", userPassword))
// Output: {"level":"info","msg":"User login","password":"[REDACTED]"}
Security: This prevents accidental exposure of sensitive data in logs while maintaining the field structure for debugging purposes.
func Str ¶
Str creates a string field for logging. This is one of the most commonly used field types.
func TimeField ¶
TimeField creates a timestamp field from time.Time. Stored as Unix nanoseconds for high precision and compact representation.
func Uint64 ¶
Uint64 creates an unsigned 64-bit integer field. Use this for non-negative values that may exceed int64 range.
func (Field) BoolValue ¶
BoolValue returns the boolean value if the field is a bool, false otherwise.
func (Field) BytesValue ¶
BytesValue returns the byte slice value if the field is bytes, nil otherwise.
func (Field) DurationValue ¶
DurationValue returns the time.Duration value if the field is a duration, 0 otherwise.
func (Field) FloatValue ¶
FloatValue returns the float64 value if the field is a float, 0.0 otherwise.
func (Field) IsDuration ¶
IsDuration returns true if the field contains duration data.
func (Field) StringValue ¶
StringValue returns the string value if the field is a string, empty string otherwise.
type Hook ¶
type Hook func(rec *Record)
Hook represents a function executed in the consumer thread after log record processing.
Hooks are executed in the consumer thread to avoid contention with producer threads. This design ensures maximum performance for logging operations while still allowing powerful post-processing capabilities.
Hook functions receive the fully populated Record after encoding but before the buffer is returned to the pool. This allows for:
- Metrics collection
- Log forwarding to external systems
- Custom processing based on log content
- Development-time debugging
Performance Notes:
- Executed in single consumer thread (no locks needed)
- Called after encoding is complete
- Should avoid blocking operations to maintain throughput
Thread Safety: Hooks are called from single consumer thread only
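The single-consumer dispatch model can be sketched standalone. Here `record`, `hook`, and `consume` are illustrative stand-ins for the iris types, showing why hooks need no locks:

```go
package main

import "fmt"

// record stands in for iris.Record; hook mirrors the Hook signature.
type record struct {
	Level string
	Msg   string
}

type hook func(rec *record)

// consume drains the queue in a single goroutine and invokes every hook
// sequentially after the record would have been encoded, so hook state
// needs no synchronization.
func consume(queue []record, hooks []hook) {
	for i := range queue {
		// ...encoding and output would happen here...
		for _, h := range hooks {
			h(&queue[i]) // consumer thread only, one record at a time
		}
	}
}

func main() {
	errorCount := 0
	metrics := func(rec *record) { // a metrics-collection hook
		if rec.Level == "error" {
			errorCount++
		}
	}
	consume([]record{{"info", "ok"}, {"error", "boom"}, {"error", "bang"}}, []hook{metrics})
	fmt.Println("errors seen:", errorCount) // errors seen: 2
}
```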
type IdleStrategy ¶
type IdleStrategy = zephyroslite.IdleStrategy
IdleStrategy defines the interface for consumer idle behavior. This type alias exposes the internal interface for configuration purposes.
func NewChannelIdleStrategy ¶
func NewChannelIdleStrategy(timeout time.Duration) IdleStrategy
NewChannelIdleStrategy creates an efficient blocking wait strategy. This strategy puts the consumer goroutine into an efficient wait state using Go channels, providing near-zero CPU usage when idle.
Parameters:
- timeout: Maximum time to wait before checking for shutdown (0 = no timeout)
Best for: Minimum CPU usage with acceptable latency in low-throughput scenarios
CPU Usage: Near 0% when idle
Latency: ~microseconds (channel wake-up time)
Note: This strategy works best with lower throughput workloads where the overhead of channel operations is acceptable.
Examples:
// No timeout - maximum efficiency
NewChannelIdleStrategy(0)

// With timeout for responsive shutdown
NewChannelIdleStrategy(100*time.Millisecond)
func NewProgressiveIdleStrategy ¶
func NewProgressiveIdleStrategy() IdleStrategy
NewProgressiveIdleStrategy creates an adaptive idle strategy. This strategy automatically adjusts its behavior based on work patterns, starting with spinning for ultra-low latency and progressively reducing CPU usage as idle time increases.
This is the default strategy, providing good performance for most workloads without requiring manual tuning.
Best for: Variable workload patterns where both low latency and low CPU usage are important
CPU Usage: Adaptive - starts high, reduces over time when idle
Latency: Starts at minimum, increases gradually when idle
Behavior:
- Hot spin for first 1000 iterations (minimum latency)
- Occasional yielding up to 10000 iterations
- Progressive sleep with exponential backoff
- Resets to hot spin when work is found
Example:
config := &Config{
IdleStrategy: NewProgressiveIdleStrategy(),
// ... other config
}
func NewSleepingIdleStrategy ¶
func NewSleepingIdleStrategy(sleepDuration time.Duration, maxSpins int) IdleStrategy
NewSleepingIdleStrategy creates a CPU-efficient idle strategy with controlled latency. This strategy reduces CPU usage by sleeping when no work is available, with optional initial spinning for hybrid behavior.
Parameters:
- sleepDuration: How long to sleep when no work is found (e.g., time.Millisecond)
- maxSpins: Number of spin iterations before sleeping (0 = sleep immediately)
Best for: Balanced CPU usage and latency in production environments
CPU Usage: ~1-10% depending on sleep duration and spin count
Latency: ~1-10ms depending on sleep duration
Examples:
// Low CPU usage, higher latency
NewSleepingIdleStrategy(5*time.Millisecond, 0)

// Hybrid: spin briefly then sleep
NewSleepingIdleStrategy(time.Millisecond, 1000)
func NewSpinningIdleStrategy ¶
func NewSpinningIdleStrategy() IdleStrategy
NewSpinningIdleStrategy creates an ultra-low latency idle strategy. This strategy provides the minimum possible latency by continuously checking for work without ever yielding the CPU.
Best for: Ultra-low latency requirements where CPU consumption is not a concern
CPU Usage: ~100% of one core when idle
Latency: Minimum possible (~nanoseconds)
Example:
config := &Config{
IdleStrategy: NewSpinningIdleStrategy(),
// ... other config
}
func NewYieldingIdleStrategy ¶
func NewYieldingIdleStrategy(maxSpins int) IdleStrategy
NewYieldingIdleStrategy creates a moderate CPU reduction strategy. This strategy spins for a configurable number of iterations before yielding to the Go scheduler, providing a middle ground between spinning and sleeping approaches.
Parameters:
- maxSpins: Number of spins before yielding to scheduler
Best for: Moderate CPU reduction while maintaining reasonable latency
CPU Usage: ~10-50% depending on max spins configuration
Latency: ~microseconds to low milliseconds
Examples:
// More aggressive yielding (lower CPU, higher latency)
NewYieldingIdleStrategy(100)

// Conservative yielding (higher CPU, lower latency)
NewYieldingIdleStrategy(10000)
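The trade-offs above can be illustrated with a standalone sketch of a progressive backoff loop. The thresholds follow the behavior documented for the progressive strategy (hot spin to 1000, yield to 10000, then sleep), but the helper itself is illustrative and not part of iris:

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func minInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

// progressiveIdle advances the idle counter by one step: hot-spin first for
// minimum latency, then yield to the scheduler, then sleep with a capped
// exponential backoff. A consumer resets the counter whenever work is found.
func progressiveIdle(idleCount int) int {
	switch {
	case idleCount < 1000:
		// hot spin: do nothing, minimum latency
	case idleCount < 10000:
		runtime.Gosched() // occasional yielding
	default:
		// progressive sleep with capped exponential backoff
		backoff := time.Microsecond << uint(minInt(idleCount/10000, 10))
		time.Sleep(backoff)
	}
	return idleCount + 1
}

func main() {
	idle := 0
	for i := 0; i < 3; i++ {
		idle = progressiveIdle(idle) // no work found: keep backing off
	}
	fmt.Println("idle iterations:", idle)
	idle = 0 // work found: reset to hot spin
	fmt.Println("after reset:", idle)
}
```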
type JSONEncoder ¶
type JSONEncoder struct {
// TimeKey specifies the JSON key for timestamps (default: "ts")
TimeKey string
// LevelKey specifies the JSON key for log levels (default: "level")
LevelKey string
// MsgKey specifies the JSON key for log messages (default: "msg")
MsgKey string
// RFC3339 controls timestamp format:
// true: RFC3339 string format (default, human-readable)
// false: Unix nanoseconds integer (compact, faster)
RFC3339 bool
}
JSONEncoder implements NDJSON (newline-delimited JSON) encoding with zero-reflection.
The encoder produces one JSON object per log record, separated by newlines. This format is ideal for log processing systems and streaming applications.
Performance Features:

- Zero reflection overhead using pre-compiled encoding paths
- Reusable buffer allocation for minimal GC pressure
- Optimized time formatting with caching
- Direct byte buffer writing without intermediate strings
Output Format:
{"ts":"2025-09-06T14:30:45.123Z","level":"info","msg":"User action","field":"value"}
Use Cases:

- Log aggregation systems (ELK stack, Splunk)
- Structured logging for APIs and microservices
- Machine-readable logs for automated processing
- Integration with JSON-based monitoring tools
func NewJSONEncoder ¶
func NewJSONEncoder() *JSONEncoder
NewJSONEncoder creates a new JSON encoder with standard defaults.
Default configuration:

- TimeKey: "ts"
- LevelKey: "level"
- MsgKey: "msg"
- RFC3339: true (human-readable timestamps)
The defaults follow common logging conventions and work well with most log processing systems.
Returns:
- *JSONEncoder: Configured JSON encoder instance
type Level ¶
type Level int32
Level represents the severity level of a log message. Levels are ordered from least to most severe: Debug < Info < Warn < Error < DPanic < Panic < Fatal
Performance Notes:

- Level is implemented as int32 for fast comparisons
- Atomic operations are used for thread-safe level changes
- Zero allocation for level checks via inlined comparisons
const (
	Debug Level = iota - 1 // Debug information, typically disabled in production
	Info                   // General information messages
	Warn                   // Warning messages for potentially harmful situations
	Error                  // Error messages for failure conditions
	DPanic                 // Development panic - panics in development, errors in production
	Panic                  // Panic level - logs message then panics
	Fatal                  // Fatal level - logs message then calls os.Exit(1)

	// StacktraceDisabled is a sentinel value used to disable stack trace collection
	StacktraceDisabled Level = -999
)
Log levels in order of increasing severity
func AllLevels ¶
func AllLevels() []Level
AllLevels returns a slice of all valid levels in ascending order. This is useful for documentation, validation, and testing.
func ParseLevel ¶
ParseLevel parses a string representation of a level and returns the corresponding Level. It handles common aliases and is case-insensitive. Returns Info level for empty strings as a sensible default.
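A standalone sketch of the parsing behavior described above (the alias set here is an assumption for illustration, not iris's exact list, and the local `level` type stands in for iris.Level):

```go
package main

import (
	"fmt"
	"strings"
)

type level int32

const (
	debugLvl level = iota - 1
	infoLvl
	warnLvl
	errorLvl
)

// parseLevel is case-insensitive, accepts common aliases, and returns the
// Info level for empty or unknown input as a sensible default.
func parseLevel(s string) level {
	switch strings.ToLower(strings.TrimSpace(s)) {
	case "debug", "dbg":
		return debugLvl
	case "", "info":
		return infoLvl
	case "warn", "warning":
		return warnLvl
	case "error", "err":
		return errorLvl
	default:
		return infoLvl // sensible default for unrecognized input
	}
}

func main() {
	fmt.Println(parseLevel("WARN") == warnLvl) // true: case-insensitive
	fmt.Println(parseLevel("") == infoLvl)     // true: empty string defaults to Info
}
```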
func (Level) Enabled ¶
Enabled determines if this level is enabled given a minimum level. This is a critical hot path function optimized for maximum performance.
func (Level) IsDPanic ¶
IsDPanic returns true if the level is DPanic. Convenience method for checking development panic level.
func (Level) IsDebug ¶
IsDebug returns true if the level is Debug. Convenience method for frequently checked debug level.
func (Level) IsError ¶
IsError returns true if the level is Error. Convenience method for frequently checked error level.
func (Level) IsFatal ¶
IsFatal returns true if the level is Fatal. Convenience method for checking fatal level.
func (Level) IsInfo ¶
IsInfo returns true if the level is Info. Convenience method for frequently checked info level.
func (Level) IsPanic ¶
IsPanic returns true if the level is Panic. Convenience method for checking panic level.
func (Level) IsWarn ¶
IsWarn returns true if the level is Warn. Convenience method for frequently checked warn level.
func (Level) MarshalText ¶
MarshalText implements encoding.TextMarshaler for JSON/XML serialization. This method is optimized to avoid allocations in the common case.
func (Level) String ¶
String returns the string representation of the level. This is used for human-readable output and serialization.
func (*Level) UnmarshalText ¶
UnmarshalText implements encoding.TextUnmarshaler for JSON/XML deserialization. This method provides detailed error information for debugging.
type LevelFlag ¶
type LevelFlag struct {
// contains filtered or unexported fields
}
LevelFlag is a command-line flag implementation for Level. It implements the flag.Value interface for easy CLI integration.
func NewLevelFlag ¶
NewLevelFlag creates a new LevelFlag pointing to the given Level.
func (*LevelFlag) Set ¶
Set parses and sets the level from a string. This method is called by the flag package when parsing command-line arguments.
type Logger ¶
type Logger struct {
// contains filtered or unexported fields
}
Logger provides ultra-high performance logging with zero-allocation structured fields.
The Logger uses a lock-free MPSC (Multiple Producer, Single Consumer) ring buffer for maximum throughput. Multiple goroutines can log concurrently while a single background goroutine processes and outputs the log records.
Thread Safety:
- All logging methods (Debug, Info, Warn, Error) are thread-safe
- Multiple goroutines can log concurrently without locks
- Configuration changes (SetLevel) are atomic and thread-safe
Performance Features:
- Zero allocations for structured logging with pre-allocated fields
- Lock-free atomic operations for level checking
- Intelligent sampling to reduce log volume
- Efficient buffer pooling to minimize GC pressure
- Adaptive batching based on log volume
- Context inheritance with With() for repeated fields
Lifecycle:
- Create with New() - configures but doesn't start processing
- Call Start() to begin background processing
- Use logging methods (Debug, Info, etc.) for actual logging
- Call Close() for graceful shutdown with guaranteed log processing
func New ¶
New creates a new high-performance logger with the specified configuration and options.
The logger is created but not started - call Start() to begin processing. This separation allows for configuration verification and testing setup before actual log processing begins.
Parameters:
- cfg: Logger configuration with output, encoding, and performance settings
- opts: Optional configuration functions for advanced features
The configuration is validated and enhanced with intelligent defaults:
- Missing TimeFn defaults to time.Now
- Zero BatchSize gets auto-sized based on Capacity
- Nil Output or Encoder will cause an error
Returns:
- *Logger: Configured logger ready for Start()
- error: Configuration validation error
Example:
logger, err := iris.New(iris.Config{
Level: iris.Info,
Output: os.Stdout,
Encoder: iris.NewJSONEncoder(),
Capacity: 8192,
}, iris.WithCaller(), iris.Development())
if err != nil {
return err
}
logger.Start()
func NewMagicLogger ¶ added in v1.1.1
NewMagicLogger creates a logger with automatic Lethe optimization when available. This is the Magic API that provides seamless integration between Iris and Lethe.
When Lethe is imported:
- Automatic zero-copy optimization via WriteOwned()
- Intelligent buffer sizing based on Lethe's recommendations
- Hot-reload configuration support
- Advanced rotation with compression
When Lethe is not available:
- Graceful fallback to standard file logging
- Same API, no configuration changes needed
Parameters:
- filename: Path to log file (will be created if needed)
- level: Minimum log level
- opts: Optional Iris configuration overrides
Returns a fully configured Logger ready for high-performance logging.
func (*Logger) AtomicLevel ¶
func (l *Logger) AtomicLevel() *AtomicLevel
AtomicLevel returns a pointer to the logger's atomic level.
This method provides access to the underlying atomic level structure, which can be used with dynamic configuration watchers like Argus to enable runtime level changes without logger restarts.
Returns:
- *AtomicLevel: Pointer to the atomic level instance
Example usage with dynamic config watching:
watcher, err := iris.EnableDynamicLevel(logger, "config.json")
if err != nil {
log.Printf("Dynamic level disabled: %v", err)
} else {
defer watcher.Stop()
log.Println("✅ Dynamic level changes enabled!")
}
Thread Safety: The returned AtomicLevel is thread-safe
func (*Logger) Close ¶
Close gracefully shuts down the logger.
This method stops the background processing goroutine and ensures all buffered log records are processed before shutdown. The shutdown is deterministic - Close() will not return until all pending logs have been written to the output.
After Close() is called:
- All subsequent logging operations will fail silently
- The ring buffer becomes unusable
- All buffered records are guaranteed to be processed
The method is idempotent - calling Close() multiple times is safe.
Close flushes any pending log data and should be called when the logger is no longer needed.
Performance Characteristics:
- Blocks until all pending records are processed
- Automatically syncs output before closing
- Cannot be used after Close() is called
Thread Safety: Safe to call from multiple goroutines
func (*Logger) DPanic ¶
DPanic logs a message at a special development panic level.
DPanic (Development Panic) logs at Error level but panics if the logger is in development mode. This allows for aggressive error detection during development while maintaining stability in production.
Behavior:
- Development mode: Logs and then panics
- Production mode: Logs only (no panic)
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Performance: Same as Error level logging with a conditional panic. Zap compatibility: DPanic/Panic/Fatal use dedicated levels.
func (*Logger) Debug ¶
Debug logs a message at Debug level with structured fields.
Debug level is intended for detailed diagnostic information useful during development and troubleshooting. These messages are typically disabled in production environments.
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Returns:
- bool: true if successfully logged, false if dropped or filtered
Performance: Optimized for zero allocations with pre-allocated field storage
func (*Logger) Error ¶
Error logs a message at Error level with structured fields.
Error level is intended for error events that allow the application to continue running. These messages indicate failures that need immediate attention but don't crash the application.
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Returns:
- bool: true if successfully logged, false if dropped or filtered
Performance: Optimized for zero allocations with pre-allocated field storage
func (*Logger) Info ¶
Info logs a message at Info level with structured fields.
Info level is intended for general information about program execution. These messages provide insight into application flow and important events.
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Returns:
- bool: true if successfully logged, false if dropped or filtered
Performance: Zero allocations for simple messages, optimized fast path for messages with fields
func (*Logger) InfoFields ¶
InfoFields logs a message at Info level with structured fields.
This method supports structured logging with key-value pairs for detailed context. Use the simpler Info() method for messages without fields to achieve zero allocations.
Performance: Optimized for zero allocations with pre-allocated field storage
func (*Logger) Level ¶
Level atomically reads the current minimum logging level.
Returns the current minimum level threshold used for filtering log messages. Messages below this level are discarded early for maximum performance.
Returns:
- Level: Current minimum logging level
Performance Notes:
- Atomic load operation
- Zero allocations
- Sub-nanosecond read performance
Thread Safety: Safe to call from multiple goroutines
func (*Logger) Named ¶
Named creates a new logger with the specified name.
Named loggers are useful for organizing logs by component, module, or functionality. The name typically appears in log output to help with filtering and analysis.
Parameters:
- name: Name to assign to the new logger instance
Returns:
- *Logger: New logger instance with the specified name
Example:
dbLogger := logger.Named("database")
apiLogger := logger.Named("api")
dbLogger.Info("Connection established") // Includes "database" context
Performance Notes:
- String assignment only (minimal overhead)
- Name is included in log output by encoder
- Zero allocations during normal operation
Thread Safety: Safe to call from multiple goroutines
func (*Logger) SetLevel ¶
SetLevel atomically changes the minimum logging level.
This method allows dynamic level adjustment during runtime without restarting the logger. Level changes take effect immediately for subsequent log operations.
Parameters:
- min: New minimum level (Debug, Info, Warn, Error)
Performance Notes:
- Atomic operation with no locks or allocations
- Sub-nanosecond level changes
- Thread-safe concurrent access
Thread Safety: Safe to call from multiple goroutines
func (*Logger) Start ¶
func (l *Logger) Start()
Start begins background processing of log records.
This method starts the consumer goroutine that processes log records from the ring buffer and writes them to the configured output. The method is idempotent - calling Start() multiple times is safe and has no effect after the first call.
The consumer goroutine will continue processing until Close() is called. All logging operations require Start() to be called first, otherwise log records will accumulate in the ring buffer without being processed.
Performance Notes:
- Uses lock-free atomic operations for state management
- Single consumer goroutine eliminates lock contention
- Processing begins immediately after Start() returns
Thread Safety: Safe to call from multiple goroutines
func (*Logger) Stats ¶
Stats returns comprehensive performance statistics for monitoring.
This method provides real-time metrics about logger performance, buffer utilization, and operational health. The statistics are collected atomically and can be safely called from multiple goroutines.
Returns:
- map[string]int64: Performance metrics including:
- Ring buffer statistics (capacity, utilization, etc.)
- Dropped message count
- Processing throughput metrics
- Memory usage indicators
The returned map contains:
- "dropped": Number of messages dropped due to ring buffer full
- "writer_position": Current writer position in ring buffer
- "reader_position": Current reader position in ring buffer
- "buffer_size": Ring buffer capacity
- "items_buffered": Number of items waiting to be processed
- "utilization_percent": Buffer utilization percentage
- Additional ring buffer specific statistics
Performance: Atomic reads with zero allocations for metric collection
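A small sketch of how the documented keys relate to each other: the number of buffered items is the gap between writer and reader positions, and utilization is that gap as a percentage of capacity. The `utilization` helper and sample values are illustrative, not part of the iris API:

```go
package main

import "fmt"

// utilization derives buffered-item count and utilization percentage from a
// stats map shaped like the one returned by (*Logger).Stats.
func utilization(stats map[string]int64) (buffered, percent int64) {
	buffered = stats["writer_position"] - stats["reader_position"]
	if size := stats["buffer_size"]; size > 0 {
		percent = buffered * 100 / size
	}
	return buffered, percent
}

func main() {
	// Illustrative values shaped like logger.Stats() output.
	stats := map[string]int64{
		"writer_position": 1200,
		"reader_position": 1100,
		"buffer_size":     8192,
		"dropped":         0,
	}
	buffered, pct := utilization(stats)
	fmt.Println(buffered, pct) // 100 1
}
```

In practice one would alert on a rising "dropped" counter or a sustained high "utilization_percent" rather than reading positions directly.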
func (*Logger) Sync ¶
Sync flushes any buffered log entries.
This method ensures that all buffered log entries are written to their destination. It's useful before program termination or when immediate log delivery is required.
Returns:
- error: Any error encountered during synchronization
Performance Notes:
- May block until all buffers are flushed
- Should be called sparingly in hot paths
- Automatically called during Close()
Thread Safety: Safe to call from multiple goroutines
func (*Logger) Warn ¶
Warn logs a message at Warn level with structured fields.
Warn level is intended for potentially harmful situations that don't prevent the application from continuing. These messages indicate conditions that should be investigated.
Parameters:
- msg: Primary log message
- fields: Structured key-value pairs (zero-allocation)
Returns:
- bool: true if successfully logged, false if dropped or filtered
Performance: Optimized for zero allocations with pre-allocated field storage
func (*Logger) With ¶
With creates a new logger with additional structured fields.
This method creates a new logger instance that automatically includes the specified fields in every log message. This is useful for adding context that applies to multiple log statements, such as request IDs, user IDs, or component names.
Parameters:
- fields: Structured fields to include in all log messages
Returns:
- *Logger: New logger instance with pre-populated fields
Implementation Note: The fields are stored in the logger and applied to each log record during the logging operation.
Example:
requestLogger := logger.With(
iris.String("request_id", reqID),
iris.String("user_id", userID),
)
requestLogger.Info("Processing request") // Includes request_id and user_id
Performance Notes:
- Fields are stored once in logger instance
- Applied during each log operation (small overhead)
- Zero allocations for field storage in logger
Thread Safety: Safe to call from multiple goroutines
func (*Logger) WithContext ¶
func (l *Logger) WithContext(ctx context.Context) *ContextLogger
WithContext creates a new ContextLogger with fields extracted from context. This is the recommended way to use context integration: extract the fields once, then log many times with the same context.
Performance: O(k) where k is number of configured keys, not context depth.
func (*Logger) WithContextExtractor ¶
func (l *Logger) WithContextExtractor(ctx context.Context, extractor *ContextExtractor) *ContextLogger
WithContextExtractor creates a ContextLogger with custom extraction rules.
func (*Logger) WithContextValue ¶
func (l *Logger) WithContextValue(ctx context.Context, key ContextKey, fieldName string) *ContextLogger
WithContextValue creates a ContextLogger with a single context value. Optimized for cases where you only need one context field.
func (*Logger) WithOptions ¶
WithOptions creates a new logger with the specified options applied.
This method clones the current logger and applies additional configuration options. The original logger is unchanged, ensuring immutable configuration and thread safety. The new logger shares the same ring buffer and output configuration but can have different caller, hook, and development settings.
Parameters:
- opts: Option functions to apply to the new logger instance
Returns:
- *Logger: New logger instance with applied options
Example:
devLogger := logger.WithOptions(
iris.WithCaller(),
iris.AddStacktrace(iris.Error),
iris.Development(),
)
Performance Notes:
- Clones logger configuration (minimal allocation)
- Shares ring buffer and output resources
- Options are applied once during creation
Thread Safety: Safe to call from multiple goroutines
func (*Logger) WithRequestID ¶
func (l *Logger) WithRequestID(ctx context.Context) *ContextLogger
WithRequestID extracts request ID with minimal allocations. Optimized for the most common use case.
func (*Logger) WithTraceID ¶
func (l *Logger) WithTraceID(ctx context.Context) *ContextLogger
WithTraceID extracts trace ID for distributed tracing.
func (*Logger) WithUserID ¶
func (l *Logger) WithUserID(ctx context.Context) *ContextLogger
WithUserID extracts user ID from context for user-specific logging.
func (*Logger) Write ¶
Write provides zero-allocation logging with a fill function.
This is the fastest logging method, allowing direct manipulation of a pre-allocated Record in the ring buffer. The fill function is called with a pointer to a Record that should be populated with log data.
Parameters:
- fill: Function to populate the log record (zero allocations)
Returns:
- bool: true if record was successfully queued, false if ring buffer full
Performance Features:
- Zero heap allocations during normal operation
- Direct record manipulation in ring buffer
- Lock-free atomic operations
- Fastest possible logging path
Example:
success := logger.Write(func(r *Record) {
r.Level = iris.Error
r.Msg = "Critical system error"
r.AddField(iris.String("component", "database"))
})
Thread Safety: Safe to call from multiple goroutines
type Option ¶
type Option func(*loggerOptions)
Option represents a function that modifies logger options during construction.
Options use the functional options pattern to provide a clean, extensible API for logger configuration. Each Option function modifies the options structure in place during logger creation or cloning.
Pattern Benefits:
- Backward compatible API evolution
- Clear, self-documenting configuration
- Composable option sets
- Type-safe configuration
Usage:
logger := logger.WithOptions(
iris.WithCaller(),
iris.AddStacktrace(iris.Error),
iris.Development(),
)
func AddStacktrace ¶
AddStacktrace enables stack trace capture for log levels at or above the specified minimum.
Stack traces provide detailed call stack information for debugging complex issues. They are automatically captured for severe log levels (typically Error and above) to aid in troubleshooting.
Parameters:
- min: Minimum log level for stack trace capture (Debug, Info, Warn, Error)
Performance Impact:
- Stack trace capture is expensive (runtime.Stack() call)
- Only enabled for specified log levels to minimize overhead
- Stack traces are captured in producer thread but processed in consumer
Returns:
- Option: Configuration function to enable stack trace capture
Example:
// Capture stack traces for Error level and above
logger := logger.WithOptions(iris.AddStacktrace(iris.Error))
logger.Error("critical error") // Will include stack trace
logger.Warn("warning") // No stack trace
func Development ¶
func Development() Option
Development enables development-specific behaviors for enhanced debugging.
Development mode changes logger behavior to be more suitable for development and testing environments:
- DPanic level causes panic() in addition to logging
- Enhanced error reporting and validation
- More verbose debugging information
This option should typically be disabled in production environments for optimal performance and stability.
Returns:
- Option: Configuration function to enable development mode
Example:
logger := logger.WithOptions(iris.Development())
logger.DPanic("development panic") // Will panic in dev mode, log in production
func WithCaller ¶
func WithCaller() Option
WithCaller enables caller information capture for log records.
When enabled, the logger will capture the file name, line number, and function name of the calling code for each log record. This information is added to the log output for debugging and troubleshooting.
Performance Impact:
- Adds runtime.Caller() call per log operation
- Minimal allocation for caller information
- Skip level optimization reduces overhead
Returns:
- Option: Configuration function to enable caller capture
Example:
logger := logger.WithOptions(iris.WithCaller())
logger.Info("message") // Will include caller info
func WithCallerSkip ¶
WithCallerSkip sets the number of stack frames to skip for caller detection.
This option is useful when the logger is wrapped by helper functions and you want the caller information to point to the actual calling code rather than the wrapper function.
Parameters:
- n: Number of stack frames to skip (negative values are treated as 0)
Common Skip Values:
- 0: Direct caller of log method
- 1: Skip one wrapper function
- 2+: Skip multiple wrapper layers
Returns:
- Option: Configuration function to set caller skip level
Example:
// Skip helper function to show actual caller
logger := logger.WithOptions(
iris.WithCaller(),
iris.WithCallerSkip(1),
)
func WithHook ¶
WithHook adds a post-processing hook to the logger.
Hooks are functions executed in the consumer thread after log records are processed but before buffers are returned to the pool. This design ensures zero contention with producer threads while enabling powerful post-processing.
Hook Use Cases:
- Metrics collection based on log content
- Log forwarding to external systems
- Custom alerting on specific log patterns
- Development-time debugging and validation
Parameters:
- h: Hook function to execute (nil hooks are ignored)
Performance Notes:
- Hooks are executed sequentially in consumer thread
- Should avoid blocking operations to maintain throughput
- No allocation overhead in producer threads
Returns:
- Option: Configuration function to add the hook
Example:
metricHook := func(rec *Record) {
if rec.Level >= iris.Error {
errorCounter.Inc()
}
}
logger := logger.WithOptions(iris.WithHook(metricHook))
func WithSampler ¶ added in v1.1.0
WithSampler enables log sampling with the specified sampler.
Sampling is used to reduce log volume in high-throughput scenarios by selectively allowing only a subset of log messages to be processed. This is particularly useful for preventing log storms and managing system resources while maintaining visibility into application behavior.
Parameters:
- s: Sampler implementation (nil disables sampling)
Common Use Cases:
- Rate limiting in high-volume production systems
- Preventing log storms during error conditions
- Managing log storage costs
- Maintaining system performance under load
Returns:
- Option: Configuration function to set the sampler
Example:
// Create a token bucket sampler: 100 burst, 10/sec sustained rate
sampler := iris.NewTokenBucketSampler(100, 10, time.Second)
logger := logger.WithOptions(iris.WithSampler(sampler))
// High-volume logging will be automatically rate-limited
for i := 0; i < 1000; i++ {
logger.Info("high volume message", iris.Int("id", i))
}
type ProcessorFunc ¶
type ProcessorFunc func(record *Record)
ProcessorFunc defines the signature for record processing functions.
This function is called for each log record that flows through the ring buffer. It should be efficient and avoid blocking operations to maintain high throughput.
Parameters:
- record: The log record to process (guaranteed non-nil)
Performance Notes:
- Called from the consumer thread only (single-threaded)
- Should avoid allocations and blocking operations
- Can safely access shared state (no concurrent access)
type Record ¶
type Record struct {
Level Level // Log level
Msg string // Log message
Logger string // Logger name
Caller string // Caller information (file:line)
Stack string // Stack trace
// contains filtered or unexported fields
}
Record represents a log entry with optimized field storage.
func NewRecord ¶
NewRecord creates a new Record with the specified level and message. Uses pre-allocated field storage to avoid heap allocations during logging.
func (*Record) AddField ¶
AddField adds a structured field to this record. It returns false if the field array is full (maximum of 32 fields, a capacity chosen for performance).
func (*Record) FieldCount ¶
FieldCount returns the number of fields in this record.
type Ring ¶
type Ring struct {
// contains filtered or unexported fields
}
Ring provides ultra-high performance logging with embedded Zephyros Light.
The Ring uses the embedded ZephyrosLight engine to provide optimal performance for logging operations while eliminating external dependencies and maintaining the core features needed for high-performance logging.
Embedded Zephyros Light Features:
- Single ring architecture optimized for logging
- ~15-20ns/op performance (vs 9ns commercial, 25ns previous)
- Zero heap allocations during normal operation
- Lock-free atomic operations for maximum throughput
- Fixed batch processing (simplified vs adaptive)
Architecture Simplification:
- SingleRing only (ThreadedRings removed; it is a commercial feature)
- Simplified configuration (fewer options, better defaults)
- Embedded implementation (no external dependencies)
Performance Characteristics:
- Zero heap allocations during normal operation
- Lock-free atomic operations for maximum throughput
- Fixed batching optimized for logging workloads
- Simplified spinning strategy for low latency
func (*Ring) Close ¶
func (r *Ring) Close()
Close gracefully shuts down the ring buffer.
This method signals the consumer to stop processing and ensures all buffered records are processed before shutdown. It is safe to call multiple times and from multiple goroutines.
After Close() is called:
- Write() will return false for all subsequent calls
- Loop() will process all remaining records and then exit
- The ring buffer becomes unusable
Shutdown Guarantees:
- All buffered records are processed before shutdown
- Multiple Close() calls are safe (idempotent)
- Deterministic shutdown behavior for testing
func (*Ring) Flush ¶
Flush ensures all pending writes are visible to the consumer.
In the embedded ZephyrosLight architecture, this method ensures that all writes from producer threads are visible to the consumer thread. This is primarily useful for testing and ensuring deterministic behavior.
Note: In normal operation, flushing is automatic and this method exists primarily for API compatibility and testing scenarios.
func (*Ring) Loop ¶
func (r *Ring) Loop()
Loop starts the record processing loop (CONSUMER THREAD ONLY).
This method should be called from exactly one goroutine to consume and process log records. The embedded ZephyrosLight implements an optimized spinning strategy for balanced performance and CPU usage.
The loop continues until Close() is called, after which it processes all remaining records before exiting.
Performance Features:
- Fixed batching optimized for logging workloads
- Simplified idle strategy to minimize CPU usage
- Guaranteed processing of all records during shutdown
Warning: Only call this method from one goroutine per ring buffer. Multiple consumers will cause race conditions and data loss.
func (*Ring) ProcessBatch ¶
ProcessBatch processes a single batch of records and returns the count.
This method is useful for custom consumer implementations that need fine-grained control over processing timing. It processes up to batchSize records in a single call using the embedded ZephyrosLight engine.
Returns:
- int: Number of records processed in this batch (0 if no records available)
Note: This is a lower-level method. Most applications should use Loop() which handles the complete consumer lifecycle automatically.
func (*Ring) Stats ¶
Stats returns detailed performance statistics for monitoring and debugging.
The returned map contains real-time metrics about the embedded ZephyrosLight ring buffer's performance and current state. This is useful for monitoring, alerting, and performance optimization.
Returned Statistics:
- "writer_position": Last claimed sequence number
- "reader_position": Current reader position
- "buffer_size": Total ring buffer capacity
- "items_buffered": Number of records waiting to be processed
- "items_processed": Total records processed
- "items_dropped": Total records dropped due to full buffer
- "closed": Ring buffer closed state (0=open, 1=closed)
- "capacity": Configured ring capacity
- "batch_size": Configured batch size
- "utilization_percent": Buffer utilization percentage
- "engine": "zephyros_light" (embedded engine identifier)
Returns:
- map[string]int64: Real-time performance statistics
Example:
stats := ring.Stats()
fmt.Printf("Buffer utilization: %d%%\n", stats["utilization_percent"])
fmt.Printf("Items buffered: %d\n", stats["items_buffered"])
func (*Ring) Write ¶
Write adds a log record to the ring buffer using a zero-allocation pattern.
The fill function is called with a pointer to a pre-allocated Record in the embedded Zephyros Light ring buffer. This avoids any heap allocations during logging operations while providing excellent performance.
The function is thread-safe and can be called concurrently from multiple goroutines. The embedded ZephyrosLight uses atomic operations for lock-free performance.
Performance: Target ~15-20ns/op with embedded Zephyros Light engine
Parameters:
- fill: Function to populate the log record (called with pre-allocated Record)
Returns:
- bool: true if record was successfully written, false if ring is full or closed
Performance Notes:
- Zero heap allocations during normal operation
- Lock-free atomic operations for maximum throughput
- Returns false instead of blocking when ring is full
- Optimized for high-frequency logging scenarios
Example:
success := ring.Write(func(r *Record) {
r.Level = iris.Error
r.Msg = "Critical error occurred"
r.AddField(iris.String("component", "database"))
})
type Sampler ¶
type Sampler interface {
// Allow determines if a log entry at the given level should be processed.
// Returns true if the entry should be logged, false if it should be dropped.
Allow(level Level) bool
}
Sampler defines the interface for log sampling strategies. Implementations control which log entries are allowed through to prevent overwhelming downstream systems.
type Stack ¶
type Stack struct {
// contains filtered or unexported fields
}
Stack represents a captured stack trace with program counters.
func CaptureStack ¶
CaptureStack captures a stack trace of the specified depth, skipping frames. skip=0 identifies the caller of CaptureStack. The caller must call FreeStack on the returned stack after using it.
func (*Stack) FormatStack ¶
FormatStack formats the entire stack trace into a string using buffer pooling.
type SyncReader ¶ added in v1.1.0
type SyncReader interface {
// Read retrieves the next log record from the external logging system.
// Returns nil when no more records are available or context is cancelled.
// Implementations should block until a record is available or context expires.
Read(ctx context.Context) (*Record, error)
// Close releases any resources associated with the reader.
// Should be called when the reader is no longer needed.
io.Closer
}
SyncReader provides the ability to read log records from external logging systems and integrate them into Iris's high-performance processing pipeline. This interface enables Iris to act as a universal logging accelerator for existing logger implementations.
The SyncReader operates in background goroutines and feeds records into Iris's lock-free ring buffer, allowing existing loggers (slog, logrus, zap) to benefit from Iris's performance and advanced features without code changes.
Performance considerations:
- Read() operates in separate goroutines to avoid blocking Iris's hot path
- Implementations should handle backpressure gracefully
- Context cancellation should be respected for clean shutdowns
type SyncWriter ¶ added in v1.1.0
type SyncWriter interface {
// WriteRecord writes a structured log record to the destination.
// Should handle the record asynchronously to avoid blocking Iris's hot path.
WriteRecord(record *Record) error
// Close releases any resources and flushes pending data.
// Should ensure all data is safely written before returning.
io.Closer
}
SyncWriter provides enhanced writer capabilities for external output destinations such as Loki, Kafka, Prometheus, etc. This interface enables modular output architecture where specialized writers are maintained as separate modules.
SyncWriter extends basic io.Writer with structured record processing, allowing external writer modules to access Iris's rich Record format with fields, levels, and metadata while maintaining zero dependencies in the core library.
Performance considerations:
- WriteRecord() should be non-blocking or implement internal buffering
- Implementations should handle backpressure gracefully
- Background processing is recommended for network/disk operations
type TextEncoder ¶
type TextEncoder struct {
// TimeFormat specifies the Go time layout for timestamps.
// Default: time.RFC3339 for standard compliance.
// Common alternatives: time.RFC3339Nano, time.Kitchen, custom layouts.
TimeFormat string
// QuoteValues determines whether string values are quoted.
// Default: true for security (prevents value parsing ambiguity).
// Set to false only in trusted environments for cleaner output.
QuoteValues bool
// SanitizeKeys determines whether field keys are sanitized.
// Default: true for security (prevents key-based injection).
// Set to false only when keys are guaranteed to be safe.
SanitizeKeys bool
}
TextEncoder provides secure human-readable text encoding for log records.
This encoder implements comprehensive security measures to prevent log injection attacks and ensure safe output in production environments. All field keys and values are sanitized to prevent malicious manipulation of log data.
Security Features:
- Field key sanitization prevents injection via malformed keys
- Value sanitization with proper quoting and escaping
- Control character neutralization (prevents terminal manipulation)
- Newline injection protection (prevents log splitting)
- Unicode direction override protection (prevents text reversal attacks)
Output Format:
time=2025-09-06T14:30:45Z level=info msg="User action" field=value
Use Cases:
- Production logging in security-sensitive environments
- System logs that may contain untrusted input
- Compliance and audit logging requiring tamper resistance
- Human-readable logs that still need machine parsing
func NewTextEncoder ¶
func NewTextEncoder() *TextEncoder
NewTextEncoder creates a new secure text encoder with production-safe defaults.
Default configuration prioritizes security:
- TimeFormat: time.RFC3339 (standard, sortable)
- QuoteValues: true (prevents parsing ambiguity)
- SanitizeKeys: true (prevents key injection)
These defaults are suitable for production environments where log data may contain untrusted input or require security compliance.
Returns:
- *TextEncoder: Configured secure text encoder instance
type TokenBucketSampler ¶
type TokenBucketSampler struct {
// contains filtered or unexported fields
}
TokenBucketSampler implements rate limiting using a token bucket algorithm. Provides burst capacity with sustained rate limiting for high-volume logging.
func NewTokenBucketSampler ¶
func NewTokenBucketSampler(capacity, refill int64, every time.Duration) *TokenBucketSampler
NewTokenBucketSampler creates a new token bucket sampler with the specified parameters. Validates inputs and sets reasonable defaults for invalid values.
Parameters:
- capacity: Maximum number of tokens (burst capacity)
- refill: Number of tokens added per refill period
- every: Time duration between refills
Returns a configured sampler ready for concurrent use.
func (*TokenBucketSampler) Allow ¶
func (s *TokenBucketSampler) Allow(_ Level) bool
Allow implements the Sampler interface using token bucket rate limiting. Thread-safe implementation that refills tokens based on elapsed time and consumes tokens for allowed log entries.
Parameters:
- level: Log level (unused in this implementation, all levels treated equally)
Returns true if logging should proceed, false if rate limited.
type WriteSyncer ¶
WriteSyncer combines io.Writer with the ability to synchronize written data to persistent storage. This interface is essential for ensuring data durability in logging scenarios where data loss is unacceptable.
Performance considerations:
- Sync() should be called judiciously as it may involve expensive syscalls
- Implementations should be thread-safe for concurrent logging scenarios
- Zero allocations in hot paths for maximum throughput
func AddSync ¶
func AddSync(w io.Writer) WriteSyncer
AddSync is an alias for WrapWriter, provided for familiarity with zap's API.
func MultiWriteSyncer ¶
func MultiWriteSyncer(writers ...WriteSyncer) WriteSyncer
MultiWriteSyncer creates a WriteSyncer that duplicates writes to multiple writers.
func MultiWriter ¶
func MultiWriter(writers ...io.Writer) WriteSyncer
MultiWriter wraps plain io.Writer values and combines them into a single MultiWriteSyncer.
func NewFileSyncer ¶
func NewFileSyncer(file *os.File) WriteSyncer
NewFileSyncer creates a WriteSyncer specifically for file operations. This function provides explicit file syncing capabilities and should be used when you need guaranteed durability for file-based logging.
Performance: Direct file operations with explicit sync control
func NewNopSyncer ¶
func NewNopSyncer(w io.Writer) WriteSyncer
NewNopSyncer creates a WriteSyncer that performs no synchronization. This is useful for scenarios where sync is handled externally or where the underlying writer doesn't support/need synchronization.
Performance: Zero-cost wrapper with inline no-op sync
func WrapWriter ¶
func WrapWriter(w io.Writer) WriteSyncer
WrapWriter intelligently converts any io.Writer into a WriteSyncer. This function provides automatic detection and wrapping of different writer types to ensure optimal performance and correct synchronization behavior.
Type-specific optimizations:
- *os.File: Uses fileSyncer for explicit sync() syscalls
- WriteSyncer: Returns as-is (already implements interface)
- Other writers: Uses nopSyncer (no-op sync for non-file writers)
Performance: Zero allocations for WriteSyncer inputs, minimal overhead for type switching in other cases.
Usage patterns:
- File logging: WrapWriter(file) -> fileSyncer (with sync)
- Buffer logging: WrapWriter(buffer) -> nopSyncer (no sync needed)
- Network logging: WrapWriter(conn) -> nopSyncer (sync at protocol level)