Documentation ¶
Index ¶
- Variables
- func GetGORMTx(ctx context.Context) *gorm.DB
- func IsErrKeyNotFound(err error) bool
- func WithGORMTx(ctx context.Context, tx *gorm.DB) context.Context
- type BigCache
- type BigCacheConfig
- type Cache
- type Client
- type ClientOption
- func EntryWithTTL[T any](freshTTL, staleTTL time.Duration) ClientOption[*Entry[T]]
- func NotFoundWithTTL[T any](cache Cache[time.Time], freshTTL time.Duration, staleTTL time.Duration) ClientOption[T]
- func WithDoubleCheck[T any](mode DoubleCheckMode) ClientOption[T]
- func WithFetchConcurrency[T any](concurrency int) ClientOption[T]
- func WithFetchTimeout[T any](timeout time.Duration) ClientOption[T]
- func WithLogger[T any](logger *slog.Logger) ClientOption[T]
- func WithNotFound[T any](cache Cache[time.Time], checkStale func(time.Time) State) ClientOption[T]
- func WithServeStale[T any](serveStale bool) ClientOption[T]
- func WithStale[T any](fn func(T) State) ClientOption[T]
- type DoubleCheckMode
- type Entry
- type ErrKeyNotFound
- type GORMCache
- type GORMCacheConfig
- type MockClock
- type RedisCache
- type RedisCacheConfig
- type RistrettoCache
- type RistrettoCacheConfig
- type State
- type SyncMap
- type Upstream
- type UpstreamFunc
Constants ¶
This section is empty.
Variables ¶
var (
	DefaultFetchTimeout     = 60 * time.Second
	DefaultFetchConcurrency = 1
	NowFunc                 = time.Now
)
Functions ¶
func GetGORMTx ¶
GetGORMTx retrieves the GORM transaction from the context. Returns nil if no transaction is attached to the context.
func IsErrKeyNotFound ¶
IsErrKeyNotFound reports whether the error is an ErrKeyNotFound.
Types ¶
type BigCache ¶
type BigCache struct {
// contains filtered or unexported fields
}
BigCache is a cache implementation using BigCache. It only supports []byte values, as BigCache is designed for raw byte storage.
func NewBigCache ¶
func NewBigCache(ctx context.Context, config BigCacheConfig) (*BigCache, error)
NewBigCache creates a new BigCache-based cache
type BigCacheConfig ¶
BigCacheConfig holds configuration for BigCache
type Cache ¶
type Cache[T any] interface {
	Upstream[T]
	Set(ctx context.Context, key string, value T) error
	Del(ctx context.Context, key string) error
}
Cache defines the interface for a generic key-value cache with read and write capabilities
func JSONTransform ¶
JSONTransform creates a TransformCache that uses JSON encoding/decoding to convert between Cache[[]byte] and Cache[T]
func StringJSONTransform ¶
StringJSONTransform creates a TransformCache that uses JSON encoding/decoding to convert between Cache[string] and Cache[T]
type Client ¶
type Client[T any] struct {
	// contains filtered or unexported fields
}
Client manages cache operations with automatic upstream fetching
func NewClient ¶
func NewClient[T any](backend Cache[T], upstream Upstream[T], opts ...ClientOption[T]) *Client[T]
NewClient creates a new client that manages the backend cache and fetches from upstream.
func (*Client[T]) Del ¶
Del removes a value from the cache and propagates deletion through cache layers.
Cache Layer Propagation: Del will propagate through all cache layers where upstream implements Cache[T], automatically stopping when upstream doesn't implement Cache[T] (e.g. UpstreamFunc for databases). This ensures consistency across multi-level cache architectures.
Examples:
Single-level (L1 -> Database):

	client.Del(ctx, key) // Deletes from L1 only

Multi-level (L1 -> L2 -> Database):

	l1Client.Del(ctx, key) // Deletes from L1 and L2, stops at Database
This supports both write-through and cache-aside patterns, as the chain naturally terminates when upstream is not a Cache[T] implementation.
func (*Client[T]) Set ¶
Set stores a value in the cache and propagates through cache layers.
Cache Layer Propagation: Set will propagate through all cache layers where upstream implements Cache[T], automatically stopping when upstream doesn't implement Cache[T] (e.g. UpstreamFunc for databases). This ensures consistency across multi-level cache architectures.
Examples:
Single-level cache-aside pattern (L1 -> Database):

	db.Update(user)            // Update database first
	client.Set(ctx, key, user) // Then update L1 cache only

Multi-level cache-aside pattern (L1 -> L2 -> Database):

	db.Update(user)              // Update database first
	l1Client.Set(ctx, key, user) // Then update L1 and L2, stops at Database
The type-based propagation automatically handles both write-through (multi-level caches) and cache-aside (with data source) patterns correctly.
type ClientOption ¶
ClientOption is a functional option for configuring a Client
func EntryWithTTL ¶
func EntryWithTTL[T any](freshTTL, staleTTL time.Duration) ClientOption[*Entry[T]]
EntryWithTTL is a convenience function to configure entry caching with TTL:

- freshTTL: how long data stays fresh
- staleTTL: how long data stays stale (additional time after freshTTL)

Entries in [0, freshTTL) are fresh; entries in [freshTTL, freshTTL+staleTTL) are stale.
func NotFoundWithTTL ¶
func NotFoundWithTTL[T any](cache Cache[time.Time], freshTTL time.Duration, staleTTL time.Duration) ClientOption[T]
NotFoundWithTTL is a convenience function to configure not-found caching with TTL:

- freshTTL: how long the not-found result stays fresh
- staleTTL: how long the not-found result stays stale (additional time after freshTTL)

Entries in [0, freshTTL) are fresh; entries in [freshTTL, freshTTL+staleTTL) are stale.
func WithDoubleCheck ¶
func WithDoubleCheck[T any](mode DoubleCheckMode) ClientOption[T]
WithDoubleCheck configures the double-check optimization mode.
Default: DoubleCheckAuto (smart detection based on notFoundCache configuration)
Background: Double-check works together with singleflight to reduce redundant upstream calls:
- Singleflight: Deduplicates concurrent requests for the same key (same moment)
- Double-check: Handles slightly staggered requests in the race window (near-miss timing)
Double-check queries backend (and notFoundCache if configured) one more time before going to upstream. This addresses the race window where:
- Request A writes to cache
- Request B misses cache (A's write not yet visible or in-flight)
- Request B enters fetch path and would normally query upstream
- Double-check catches A's write, avoiding redundant upstream query
Effectiveness (see TestDoubleCheckRaceWindowProbability for controlled test):
- Test simulates worst-case scenario: two-wave concurrent pattern with precise timing
- Test results: ~40% redundant fetches without double-check, 0% with double-check
- Real-world impact: typically much lower race window probability, actual benefit varies
Effectiveness depends on:
- Concurrent access patterns (higher concurrency = more benefit)
- Race window duration (network latency, cache propagation delay)
- Cost ratio between double-check and upstream query
Modes:
- DoubleCheckDisabled: Skip double-check. Use when backend query cost >= upstream cost, when the backend is unreliable or slow, or when there is no notFoundCache and upstream frequently returns not-found (without a notFoundCache, double-check cannot catch not-found results, reducing its effectiveness).
- DoubleCheckEnabled: Always double-check (adds query cost, reduces upstream calls). Use when upstream is significantly more expensive than backend queries.
- DoubleCheckAuto: Smart detection based on notFoundCache (default). Enabled when a notFoundCache exists (double-check covers both found and not-found scenarios); disabled otherwise (double-check only covers the found scenario, so its effectiveness is limited).
Cost-benefit analysis:
	Cost    = backend_query [+ notFoundCache_query if configured]
	Benefit = avoided upstream_query when hitting the race window

Worth enabling when: upstream_cost >> (backend_cost + notFoundCache_cost)
Recommendations by scenario:
- Memory cache -> DB: DoubleCheckEnabled (DB ≫ memory, ~10000x difference)
- Redis -> DB: DoubleCheckEnabled (DB ≫ Redis, ~10-50x difference)
- Redis (+ notFoundCache) -> Redis: DoubleCheckDisabled (cost ≈ benefit)
- Default/Uncertain: DoubleCheckAuto (smart heuristic)
func WithFetchConcurrency ¶
func WithFetchConcurrency[T any](concurrency int) ClientOption[T]
WithFetchConcurrency sets the maximum number of concurrent fetch operations per key.
Philosophy: Concurrent exploration + Result convergence
- Exploration phase: When cache misses, allow N concurrent fetches to maximize throughput
- Convergence phase: Once any fetch completes, all subsequent requests reuse that result
Behavior:
- concurrency = 1 (default): Full singleflight, all requests wait for single fetch
- concurrency > 1: Requests distributed across N slots, allowing moderate redundancy
Example: WithFetchConcurrency(5) allows up to 5 concurrent upstream fetches for the same key.
func WithFetchTimeout ¶
func WithFetchTimeout[T any](timeout time.Duration) ClientOption[T]
WithFetchTimeout sets the timeout for upstream fetch operations
func WithLogger ¶
func WithLogger[T any](logger *slog.Logger) ClientOption[T]
WithLogger sets the logger for the client. If not set, slog.Default() is used.
func WithNotFound ¶
WithNotFound configures not-found caching with a custom staleness check
func WithServeStale ¶
func WithServeStale[T any](serveStale bool) ClientOption[T]
WithServeStale configures whether to serve stale data while refreshing asynchronously
func WithStale ¶
func WithStale[T any](fn func(T) State) ClientOption[T]
WithStale sets the function to check if cached data is stale
type DoubleCheckMode ¶
type DoubleCheckMode int
DoubleCheckMode defines the double-check optimization strategy
const (
	// DoubleCheckDisabled turns off double-check optimization
	DoubleCheckDisabled DoubleCheckMode = iota
	// DoubleCheckEnabled always performs double-check before upstream fetch
	DoubleCheckEnabled
	// DoubleCheckAuto enables double-check based on configuration (default):
	//   - Enabled when notFoundCache exists (can leverage it to catch not-found in race window)
	//   - Disabled when no notFoundCache (cost limited to backend only)
	DoubleCheckAuto
)
type ErrKeyNotFound ¶
type ErrKeyNotFound struct {
Cached bool // whether this NotFound result was cached before
CacheState State // the state of the cached NotFound entry (only meaningful when Cached=true)
}
ErrKeyNotFound indicates that the requested key was not found in the cache
func (*ErrKeyNotFound) Error ¶
func (e *ErrKeyNotFound) Error() string
Error returns a string representation of the error
type GORMCache ¶
type GORMCache[T any] struct {
	// contains filtered or unexported fields
}
GORMCache is a cache implementation using GORM
func NewGORMCache ¶
func NewGORMCache[T any](config *GORMCacheConfig) *GORMCache[T]
NewGORMCache creates a new GORM-based cache with configuration
type GORMCacheConfig ¶
type GORMCacheConfig struct {
// DB is the GORM database connection
DB *gorm.DB
// TableName is the name of the cache table
TableName string
// KeyPrefix is the prefix for all keys (optional)
KeyPrefix string
}
GORMCacheConfig holds configuration for GORMCache
type MockClock ¶
type MockClock struct {
// contains filtered or unexported fields
}
MockClock provides a controllable time source for testing
func NewMockClock ¶
NewMockClock creates a new mock clock starting at the given time
type RedisCache ¶
type RedisCache[T any] struct {
	// contains filtered or unexported fields
}
RedisCache is a cache implementation using Redis
func NewRedisCache ¶
func NewRedisCache[T any](config *RedisCacheConfig) *RedisCache[T]
NewRedisCache creates a new Redis-based cache with configuration
func (*RedisCache[T]) Del ¶
func (r *RedisCache[T]) Del(ctx context.Context, key string) error
Del removes a value from the cache
type RedisCacheConfig ¶
type RedisCacheConfig struct {
// Client is the Redis client (supports both single and cluster)
Client redis.UniversalClient
// KeyPrefix is the prefix for all keys (optional)
KeyPrefix string
// TTL is the time-to-live for cache entries
// Zero means no expiration
TTL time.Duration
}
RedisCacheConfig holds configuration for RedisCache
type RistrettoCache ¶
type RistrettoCache[T any] struct {
	// contains filtered or unexported fields
}
RistrettoCache is a cache implementation using ristretto
func NewRistrettoCache ¶
func NewRistrettoCache[T any](config *RistrettoCacheConfig[T]) (*RistrettoCache[T], error)
NewRistrettoCache creates a new ristretto-based cache
func (*RistrettoCache[T]) Close ¶
func (r *RistrettoCache[T]) Close() error
Close closes the cache and stops all background goroutines
func (*RistrettoCache[T]) Del ¶
func (r *RistrettoCache[T]) Del(_ context.Context, key string) error
Del removes a value from the cache.

Note: Del immediately removes the item from storage (see cache.go:376-377). However, it also sends a deletion flag to setBuf to handle operation ordering. If a concurrent Set(new key) is in progress, the order is:
- Set sends itemNew to setBuf (but doesn't update storedItems yet)
- Del immediately removes from storedItems (removes nothing if key is new)
- Del sends itemDelete to setBuf
- setBuf processes: itemNew (adds key) → itemDelete (removes key)
Between step 4's itemNew and itemDelete processing, there is a race window where Get might find the key. Calling Wait() ensures itemDelete is processed.
type RistrettoCacheConfig ¶
type RistrettoCacheConfig[T any] struct {
	// Config is the ristretto configuration
	*ristretto.Config[string, T]
	// TTL is the time-to-live for cache entries
	// Zero means no expiration
	TTL time.Duration
}
RistrettoCacheConfig holds configuration for RistrettoCache
func DefaultRistrettoCacheConfig ¶
func DefaultRistrettoCacheConfig[T any]() *RistrettoCacheConfig[T]
DefaultRistrettoCacheConfig returns a default configuration