cachex

README

Cachex

A high-performance, feature-rich Go caching library with generics, layered caching, and serve-stale mechanism.


English | 中文文档 (Chinese documentation)

Features

  • 🛡️ Cache Stampede Protection - Singleflight + DoubleCheck mechanisms eliminate redundant fetches, preventing traffic surge when hot keys expire
  • 🚫 Cache Penetration Defense - Not-Found caching mechanism prevents malicious queries from overwhelming the database
  • 🔄 Serve-Stale - Serves stale data while asynchronously refreshing, ensuring high availability and low latency
  • 🎪 Layered Caching - Flexible multi-level caching (L1 Memory + L2 Redis), Client can also be used as upstream
  • 🚀 High Performance - Sub-microsecond latency, 79x–1729x throughput amplification in benchmarks (see BENCHMARK.md), zero error rate
  • 🎯 Type-Safe - Go generics provide compile-time type safety, avoiding runtime type errors
  • ⏱️ Flexible TTL - Independent fresh and stale TTL configuration for precise data lifecycle control
  • 🔧 Extensible - Clean interface design makes it easy to implement custom cache backends

Quick Start

Installation
go get github.com/theplant/cachex
Basic Example
package main

import (
    "context"
    "fmt"
    "time"

    "github.com/theplant/cachex"
)

type Product struct {
    ID    string
    Name  string
    Price int64
}

func main() {
    // Create data cache
    cacheConfig := cachex.DefaultRistrettoCacheConfig[*cachex.Entry[*Product]]()
    cacheConfig.TTL = 30 * time.Second // 5s fresh + 25s stale
    cache, _ := cachex.NewRistrettoCache(cacheConfig)
    defer cache.Close()

    // Create not-found cache
    notFoundConfig := cachex.DefaultRistrettoCacheConfig[time.Time]()
    notFoundConfig.TTL = 6 * time.Second // 1s fresh + 5s stale
    notFoundCache, _ := cachex.NewRistrettoCache(notFoundConfig)
    defer notFoundCache.Close()

    // Define upstream data source
    upstream := cachex.UpstreamFunc[*cachex.Entry[*Product]](
        func(ctx context.Context, key string) (*cachex.Entry[*Product], error) {
            // Fetch from database or API
            // Return cachex.ErrKeyNotFound for non-existent keys
            product := &Product{ID: key, Name: "Product " + key, Price: 9900}
            return &cachex.Entry[*Product]{
                Data:     product,
                CachedAt: time.Now(),
            }, nil
        },
    )

    // Create client with all features enabled
    client := cachex.NewClient(
        cache,
        upstream,
        cachex.EntryWithTTL[*Product](5*time.Second, 25*time.Second), // 5s fresh, 25s stale
        cachex.NotFoundWithTTL[*cachex.Entry[*Product]](notFoundCache, 1*time.Second, 5*time.Second),
        cachex.WithServeStale[*cachex.Entry[*Product]](true),
    )

    // Use the cache
    ctx := context.Background()
    entry, _ := client.Get(ctx, "product-123")
    fmt.Printf("Product: %+v\n", entry.Data)
}

Architecture

sequenceDiagram
    participant App as Application
    participant Client as cachex.Client
    participant Cache as BackendCache
    participant NFCache as NotFoundCache
    participant SF as Singleflight
    participant Upstream

    App->>Client: Get(key)
    Client->>Cache: Get(key)

    alt Cache Hit + Fresh
        Cache-->>Client: value (fresh)
        Client-->>App: Return value
    else Cache Hit + Stale (serveStale=true)
        Cache-->>Client: value (stale)
        Client-->>App: Return stale value
        Client->>SF: Async refresh
        SF->>Upstream: Fetch(key)
        Upstream-->>SF: new value
        SF->>NFCache: Del(key)
        SF->>Cache: Set(key, value)
    else Cache Hit + Stale (serveStale=false) or Rotten
        Cache-->>Client: value (stale/rotten)
        Note over Client: Skip NotFoundCache, fetch directly<br/>(backend has data)
        Client->>SF: Fetch(key)
        SF->>Upstream: Fetch(key)
        Upstream-->>SF: value
        SF->>NFCache: Del(key)
        SF->>Cache: Set(key, value)
        SF-->>Client: value
        Client-->>App: Return value
    else Cache Miss
        Cache-->>Client: miss
        Client->>NFCache: Check NotFoundCache (if configured)
        alt NotFound Hit + Fresh
            NFCache-->>Client: not found (fresh)
            Client-->>App: Return ErrKeyNotFound
        else NotFound Hit + Stale (serveStale=true)
            NFCache-->>Client: not found (stale)
            Client-->>App: Return ErrKeyNotFound (stale)
            Client->>SF: Async recheck
            SF->>Upstream: Fetch(key)
            alt Key Still Not Found
                Upstream-->>SF: ErrKeyNotFound
                SF->>Cache: Del(key)
                SF->>NFCache: Set(key, timestamp)
            else Key Now Exists
                Upstream-->>SF: value
                SF->>NFCache: Del(key)
                SF->>Cache: Set(key, value)
            end
        else NotFound Hit + Stale (serveStale=false) or Rotten or Miss
            NFCache-->>Client: stale/rotten/miss
            Client->>SF: Fetch(key)
            SF->>Upstream: Fetch(key)
            alt Key Exists
                Upstream-->>SF: value
                SF->>NFCache: Del(key)
                SF->>Cache: Set(key, value)
                SF-->>Client: value
                Client-->>App: Return value
            else Key Not Found
                Upstream-->>SF: ErrKeyNotFound
                SF->>Cache: Del(key)
                SF->>NFCache: Set(key, timestamp)
                SF-->>Client: ErrKeyNotFound
                Client-->>App: Return ErrKeyNotFound
            end
        end
    end
Core Components
  • Client - Orchestrates caching logic, TTL, and refresh strategies (Client itself implements Cache interface and can also be used as upstream)
  • BackendCache - Storage layer (Ristretto, Redis, GORM, or custom), also serves as Upstream interface
  • NotFoundCache - Dedicated cache for non-existent keys to prevent cache penetration
  • Upstream - Data source (database, API, another Client, or custom)
  • Singleflight - Deduplicates concurrent requests for the same key (primary defense against cache stampede)
  • DoubleCheck - Re-checks backend and notFoundCache before upstream fetch to catch concurrent writes (eliminates race window)
  • Entry - Wrapper with timestamp for time-based staleness checks

Cache Backends

Ristretto (In-Memory)

High-performance, TinyLFU-based in-memory cache.

config := cachex.DefaultRistrettoCacheConfig[*Product]()
config.TTL = 30 * time.Second
cache, err := cachex.NewRistrettoCache(config)
defer cache.Close()
Redis

Distributed cache with customizable serialization.

cache := cachex.NewRedisCache[*Product](&cachex.RedisCacheConfig{
    Client:    redisClient,
    KeyPrefix: "product:",
    TTL:       30 * time.Second,
})
GORM (Database)

Use your database as a cache layer (useful for persistence).

cache := cachex.NewGORMCache[*Product](&cachex.GORMCacheConfig{
    DB:        db,
    TableName: "cache_products",
})
Custom Cache

Implement the Cache[T] interface:

type Cache[T any] interface {
    Upstream[T] // Get(ctx context.Context, key string) (T, error)
    Set(ctx context.Context, key string, value T) error
    Del(ctx context.Context, key string) error
}

Important: When a key does not exist, the Get method must return a cachex.ErrKeyNotFound error (check with cachex.IsErrKeyNotFound), so the Client can correctly distinguish cache misses from other error conditions.
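
A minimal sketch of a custom in-memory backend satisfying this interface (MapCache and its fields are illustrative names, not part of cachex; requires the context and sync imports):

type MapCache[T any] struct {
    mu    sync.RWMutex
    items map[string]T
}

func NewMapCache[T any]() *MapCache[T] {
    return &MapCache[T]{items: make(map[string]T)}
}

// Get returns the stored value, or ErrKeyNotFound so the Client
// can treat the absence as a cache miss rather than a failure.
func (c *MapCache[T]) Get(ctx context.Context, key string) (T, error) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    if v, ok := c.items[key]; ok {
        return v, nil
    }
    var zero T
    return zero, &cachex.ErrKeyNotFound{}
}

func (c *MapCache[T]) Set(ctx context.Context, key string, value T) error {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.items[key] = value
    return nil
}

func (c *MapCache[T]) Del(ctx context.Context, key string) error {
    c.mu.Lock()
    defer c.mu.Unlock()
    delete(c.items, key)
    return nil
}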

Advanced Features

Layered Caching

Combine multiple cache layers for optimal performance. Client implements both Cache[T] and Upstream[T] interfaces, allowing it to be used directly as upstream for the next layer:

// L2: Redis cache with database upstream
l2Cache := cachex.NewRedisCache[*cachex.Entry[*Product]](&cachex.RedisCacheConfig{
    Client:    redisClient,
    KeyPrefix: "product:",
    TTL:       10 * time.Minute,
})

dbUpstream := cachex.UpstreamFunc[*cachex.Entry[*Product]](
    func(ctx context.Context, key string) (*cachex.Entry[*Product], error) {
        product, err := fetchFromDB(ctx, key)
        if err != nil {
            return nil, err
        }
        return &cachex.Entry[*Product]{
            Data:     product,
            CachedAt: time.Now(),
        }, nil
    },
)

l2Client := cachex.NewClient(
    l2Cache,
    dbUpstream,
    cachex.EntryWithTTL[*Product](1*time.Minute, 9*time.Minute),
)

// L1: In-memory cache with L2 client as upstream
// Client can be used directly as upstream for the next layer
l1Cache, _ := cachex.NewRistrettoCache(
    cachex.DefaultRistrettoCacheConfig[*cachex.Entry[*Product]](),
)
defer l1Cache.Close()

l1Client := cachex.NewClient(
    l1Cache,
    l2Client, // Client implements Upstream[T], use directly
    cachex.EntryWithTTL[*Product](5*time.Second, 25*time.Second),
    cachex.WithServeStale[*cachex.Entry[*Product]](true),
)

// Read: L1 miss → L2 → Database (if L2 also misses)
entry, _ := l1Client.Get(ctx, "product-123")
Write Propagation

When you use a Client as the upstream for another Client, write operations (Set/Del) automatically propagate through all cache layers, stopping naturally when upstream doesn't implement Cache[T]:

L1 Cache → L2 Cache → L3 Cache → Database
   ✅        ✅         ✅          ❌ (auto-stop)

The propagation works through type-based detection: if upstream implements Cache[T] interface, writes propagate; if upstream doesn't implement Cache[T] (e.g. UpstreamFunc for data sources), propagation stops.
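
The mechanism can be pictured as a type assertion inside Set/Del. This is an illustrative sketch, not necessarily cachex's actual implementation (the backend and upstream field names are assumed):

func (c *Client[T]) Set(ctx context.Context, key string, value T) error {
    if err := c.backend.Set(ctx, key, value); err != nil {
        return err
    }
    // If the upstream is itself a cache layer, propagate the write.
    // Plain data sources (e.g. UpstreamFunc) fail this assertion,
    // so propagation stops there.
    if next, ok := c.upstream.(Cache[T]); ok {
        return next.Set(ctx, key, value)
    }
    return nil
}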

Pattern Support:

This design naturally supports both caching patterns:

  • Write-Through Pattern (Multi-Level Caches):

    // All cache layers stay in sync
    l1Client.Set(ctx, key, value)  // → L1 → L2 → ... → (stops at data source)
    
  • Cache-Aside Pattern (Cache + Database):

    // Update database first, then cache
    db.Update(user)
    l1Client.Set(ctx, userID, user)  // Only updates cache layers, not DB
    

The key insight: cache writes propagate through Cache[T] chains but stop when upstream doesn't implement Cache[T], making it safe and correct for both patterns.

Not-Found Caching

Prevent repeated lookups for non-existent keys:

notFoundCache, _ := cachex.NewRistrettoCache(
    cachex.DefaultRistrettoCacheConfig[time.Time](),
)
defer notFoundCache.Close()

client := cachex.NewClient(
    dataCache,
    upstream,
    cachex.EntryWithTTL[*Product](5*time.Second, 25*time.Second),
    cachex.NotFoundWithTTL[*cachex.Entry[*Product]](
        notFoundCache,
        1*time.Second,  // fresh TTL
        5*time.Second,  // stale TTL
    ),
)
Custom Staleness Logic

Define custom staleness checks:

client := cachex.NewClient(
    cache,
    upstream,
    cachex.WithStale[*Product](func(p *Product) cachex.State {
        age := time.Since(p.UpdatedAt)
        if age < 5*time.Second {
            return cachex.StateFresh
        }
        if age < 5*time.Second + 25*time.Second {
            return cachex.StateStale
        }
        return cachex.StateRotten
    }),
    cachex.WithServeStale[*Product](true),
)
Type Transformation

Transform between different cache types:

// Cache stores JSON strings
stringCache := cachex.NewRedisCache[string](&cachex.RedisCacheConfig{
    Client:    redisClient,
    KeyPrefix: "user:",
    TTL:       time.Hour,
})

// Transform to User objects
userCache := cachex.StringJSONTransform[*User](stringCache)

// Use as Cache[*User]
user, err := userCache.Get(ctx, "123") // stored in Redis as "user:123" via the prefix

Performance

See BENCHMARK.md for detailed results.

Key Metrics (10K products, Pareto traffic distribution, cold start)
| Scenario       | Concurrency | Application QPS | Cache Hit Rate | P50   | P99   | DB Conn Pool | DB QPS | DB Utilization | Amplification | Errors |
|----------------|-------------|-----------------|----------------|-------|-------|--------------|--------|----------------|---------------|--------|
| High Perf DB   | 600         | 504,989         | 99.81%         | 291ns | 3.3µs | 100          | 982.5  | 88.4%          | 514.0x        | 0%     |
| Cloud DB       | 100         | 55,222          | 99.61%         | 833ns | 12µs  | 20           | 213.8  | 90.9%          | 235.0x        | 0%     |
| Shared DB      | 100         | 7,306           | 98.59%         | 791ns | 831ms | 13           | 103.0  | 99.0%          | 70.2x         | 0%     |
| Constrained DB | 100         | 695             | 94.01%         | 1.3µs | 2.04s | 8            | 41.6   | 98.8%          | 16.7x         | 0%     |

💡 Cold Start Performance: Cachex achieves 94%+ cache hit rate even during cold start without pre-warming. With cache pre-warming, throughput can increase dramatically (99%+ hit rate → minimal DB load).

🔥 Test Environment Simulation: All benchmark scenarios use realistic database connection pool simulation (semaphore-based), accurately simulating real-world database behavior.

📊 Throughput Amplification = Application QPS / Theoretical DB Capacity, where Theoretical DB Capacity = Conn Pool / (Latency / 1000ms).
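
For illustration with assumed round numbers (not taken from the table above): a pool of 100 connections at ~100ms average query latency gives Theoretical DB Capacity = 100 / (100ms / 1000ms) = 1,000 QPS; an application serving 500,000 QPS on top of that database therefore has an amplification of 500x.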

FAQ

Q: When should I use Entry[T] vs custom staleness?

A: Use Entry[T] with EntryWithTTL for simple time-based expiration. Use custom staleness checkers when you need domain-specific logic (e.g., checking a version field).
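
For example, a version-based staleness checker might look like this (SchemaVersion and currentSchemaVersion are illustrative names, not part of cachex):

cachex.WithStale[*Product](func(p *Product) cachex.State {
    if p.SchemaVersion == currentSchemaVersion {
        return cachex.StateFresh
    }
    return cachex.StateRotten // force a refetch for outdated schemas
})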

Q: How does cache stampede protection work?

A: Cachex uses a two-layer defense based on the philosophy of concurrent exploration + result convergence:

  1. Singleflight with Concurrency Control (Primary):

    • Exploration phase: When cache misses, WithFetchConcurrency allows N concurrent fetches to maximize throughput
    • Default (N=1): Full deduplication - only one fetch, others wait (99%+ redundancy elimination)
    • N > 1: Moderate redundancy - requests distributed across N slots for higher throughput
  2. DoubleCheck (Supplementary):

    • Handles the narrow race window where Request B checks the cache (miss) before Request A completes its write
    • Works across all singleflight slots, enabling fast convergence after first successful fetch
    • Auto-enabled by default when notFoundCache is configured (smart detection)
    • Configure with WithDoubleCheck(DoubleCheckEnabled/Disabled/Auto) based on your scenario, as sketched below
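
A minimal configuration sketch combining both defenses (cache and upstream as in the Quick Start; the concurrency value is illustrative):

client := cachex.NewClient(
    cache,
    upstream,
    cachex.EntryWithTTL[*Product](5*time.Second, 25*time.Second),
    cachex.WithFetchConcurrency[*cachex.Entry[*Product]](3),                    // up to 3 concurrent fetches per key
    cachex.WithDoubleCheck[*cachex.Entry[*Product]](cachex.DoubleCheckEnabled), // re-check caches before each upstream fetch
)
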
Q: What's the difference between fresh and stale TTL?

A: Fresh TTL defines how long data is considered fresh. Stale TTL defines an additional period during which data can be served as stale (with async refresh). Total lifetime = freshTTL + staleTTL.
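
As a concrete timeline, assuming EntryWithTTL(5*time.Second, 25*time.Second) as in the Quick Start:

// An entry cached at t=0 moves through the states:
//   t ∈ [0s, 5s)  → StateFresh  (served directly)
//   t ∈ [5s, 30s) → StateStale  (served immediately; refreshed asynchronously if serveStale=true)
//   t ≥ 30s       → StateRotten (refetched synchronously from upstream)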

Q: Should I cache all database queries?

A: No. Cache frequently accessed, relatively static data. Avoid caching:

  • Data that changes frequently (< 1s freshness requirement)
  • User-specific data with high cardinality
  • Large objects that don't fit in memory efficiently

License

This project is licensed under the MIT License - see the LICENSE file for details.

Documentation

Index

Constants

This section is empty.

Variables

var (
	DefaultFetchTimeout     = 60 * time.Second
	DefaultFetchConcurrency = 1
	NowFunc                 = time.Now
)

Functions

func GetGORMTx

func GetGORMTx(ctx context.Context) *gorm.DB

GetGORMTx retrieves the GORM transaction from the context. Returns nil if no transaction is attached to the context.

func IsErrKeyNotFound

func IsErrKeyNotFound(err error) bool

IsErrKeyNotFound checks if the error is an ErrKeyNotFound
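
A minimal usage sketch, assuming a client as in the Quick Start example:

entry, err := client.Get(ctx, "maybe-missing")
if cachex.IsErrKeyNotFound(err) {
    // key does not exist upstream (possibly answered by the not-found cache)
} else if err != nil {
    // infrastructure error: backend/upstream failure, fetch timeout, etc.
} else {
    fmt.Println(entry.Data)
}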

func WithGORMTx

func WithGORMTx(ctx context.Context, tx *gorm.DB) context.Context

WithGORMTx attaches a GORM transaction to the context. All GORMCache operations using this context will execute within the transaction. The transaction must be committed or rolled back by the caller.
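
A minimal sketch, assuming gormCache was created with NewGORMCache and product is a value of its element type; GORM's db.Transaction commits or rolls back on the caller's behalf:

err := db.Transaction(func(tx *gorm.DB) error {
    ctx := cachex.WithGORMTx(context.Background(), tx)
    // Both cache writes commit or roll back together with the transaction.
    if err := gormCache.Set(ctx, "product-123", product); err != nil {
        return err
    }
    return gormCache.Del(ctx, "product-456")
})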

Types

type BigCache

type BigCache struct {
	// contains filtered or unexported fields
}

BigCache is a cache implementation using BigCache. It only supports []byte values, as BigCache is designed for raw byte storage.

func NewBigCache

func NewBigCache(ctx context.Context, config BigCacheConfig) (*BigCache, error)

NewBigCache creates a new BigCache-based cache

func (*BigCache) Close

func (b *BigCache) Close() error

Close closes the cache and releases resources

func (*BigCache) Del

func (b *BigCache) Del(_ context.Context, key string) error

Del removes a value from the cache

func (*BigCache) Get

func (b *BigCache) Get(_ context.Context, key string) ([]byte, error)

Get retrieves a value from the cache

func (*BigCache) Set

func (b *BigCache) Set(_ context.Context, key string, value []byte) error

Set stores a value in the cache

type BigCacheConfig

type BigCacheConfig struct {
	bigcache.Config
}

BigCacheConfig holds configuration for BigCache

type Cache

type Cache[T any] interface {
	Upstream[T]
	Set(ctx context.Context, key string, value T) error
	Del(ctx context.Context, key string) error
}

Cache defines the interface for a generic key-value cache with read and write capabilities

func JSONTransform

func JSONTransform[T any](cache Cache[[]byte]) Cache[T]

JSONTransform creates a TransformCache that uses JSON encoding/decoding to convert between Cache[[]byte] and Cache[T]
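
A minimal sketch pairing it with the []byte-based BigCache backend documented above (ctx is assumed; the bigcache import path/version may differ in your module):

import "github.com/allegro/bigcache/v3"

byteCache, err := cachex.NewBigCache(ctx, cachex.BigCacheConfig{
    Config: bigcache.DefaultConfig(time.Hour),
})
if err != nil {
    panic(err)
}
productCache := cachex.JSONTransform[*Product](byteCache) // Cache[*Product] over JSON bytes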

func StringJSONTransform

func StringJSONTransform[T any](cache Cache[string]) Cache[T]

StringJSONTransform creates a TransformCache that uses JSON encoding/decoding to convert between Cache[string] and Cache[T]

func Transform

func Transform[A, B any](
	cache Cache[A],
	encode func(B) (A, error),
	decode func(A) (B, error),
) Cache[B]

Transform creates a new TransformCache with custom encode/decode functions
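
A minimal sketch adapting any Cache[string] (stringCache is assumed) into a Cache[int] with strconv-based codecs:

intCache := cachex.Transform[string, int](
    stringCache,
    func(n int) (string, error) { return strconv.Itoa(n), nil }, // encode B → A
    func(s string) (int, error) { return strconv.Atoi(s) },      // decode A → B
)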

type Client

type Client[T any] struct {
	// contains filtered or unexported fields
}

Client manages cache operations with automatic upstream fetching

func NewClient

func NewClient[T any](backend Cache[T], upstream Upstream[T], opts ...ClientOption[T]) *Client[T]

NewClient creates a new client that manages the backend cache and fetches from upstream.

func (*Client[T]) Del

func (c *Client[T]) Del(ctx context.Context, key string) error

Del removes a value from the cache and propagates deletion through cache layers.

Cache Layer Propagation: Del will propagate through all cache layers where upstream implements Cache[T], automatically stopping when upstream doesn't implement Cache[T] (e.g. UpstreamFunc for databases). This ensures consistency across multi-level cache architectures.

Examples:

Single-level (L1 -> Database):
  client.Del(ctx, key)  // Deletes from L1 only

Multi-level (L1 -> L2 -> Database):
  l1Client.Del(ctx, key)  // Deletes from L1 and L2, stops at Database

This supports both write-through and cache-aside patterns, as the chain naturally terminates when upstream is not a Cache[T] implementation.

func (*Client[T]) Get

func (c *Client[T]) Get(ctx context.Context, key string) (T, error)

Get retrieves a value from the cache or upstream

func (*Client[T]) Set

func (c *Client[T]) Set(ctx context.Context, key string, value T) error

Set stores a value in the cache and propagates through cache layers.

Cache Layer Propagation: Set will propagate through all cache layers where upstream implements Cache[T], automatically stopping when upstream doesn't implement Cache[T] (e.g. UpstreamFunc for databases). This ensures consistency across multi-level cache architectures.

Examples:

Single-level cache-aside pattern (L1 -> Database):
  db.Update(user)           // Update database first
  client.Set(ctx, key, user) // Then update L1 cache only

Multi-level cache-aside pattern (L1 -> L2 -> Database):
  db.Update(user)             // Update database first
  l1Client.Set(ctx, key, user) // Then update L1 and L2, stops at Database

The type-based propagation automatically handles both write-through (multi-level caches) and cache-aside (with data source) patterns correctly.

type ClientOption

type ClientOption[T any] func(*Client[T])

ClientOption is a functional option for configuring a Client

func EntryWithTTL

func EntryWithTTL[T any](freshTTL, staleTTL time.Duration) ClientOption[*Entry[T]]

EntryWithTTL is a convenience function to configure entry caching with TTL. freshTTL: how long data stays fresh. staleTTL: how long data stays stale (additional time after freshTTL). Entries in [0, freshTTL) are fresh; entries in [freshTTL, freshTTL+staleTTL) are stale.

func NotFoundWithTTL

func NotFoundWithTTL[T any](cache Cache[time.Time], freshTTL time.Duration, staleTTL time.Duration) ClientOption[T]

NotFoundWithTTL is a convenience function to configure not-found caching with TTL. freshTTL: how long the not-found result stays fresh. staleTTL: how long the not-found result stays stale (additional time after freshTTL). Entries in [0, freshTTL) are fresh; entries in [freshTTL, freshTTL+staleTTL) are stale.

func WithDoubleCheck

func WithDoubleCheck[T any](mode DoubleCheckMode) ClientOption[T]

WithDoubleCheck configures the double-check optimization mode.

Default: DoubleCheckAuto (smart detection based on notFoundCache configuration)

Background: Double-check works together with singleflight to reduce redundant upstream calls:

  • Singleflight: Deduplicates concurrent requests for the same key (same moment)
  • Double-check: Handles slightly staggered requests in race window (near-miss timing)

Double-check queries backend (and notFoundCache if configured) one more time before going to upstream. This addresses the race window where:

  1. Request A writes to cache
  2. Request B misses cache (A's write not yet visible or in-flight)
  3. Request B enters fetch path and would normally query upstream
  4. Double-check catches A's write, avoiding redundant upstream query

Effectiveness (see TestDoubleCheckRaceWindowProbability for controlled test):

  • Test simulates worst-case scenario: two-wave concurrent pattern with precise timing
  • Test results: ~40% redundant fetches without double-check, 0% with double-check
  • Real-world impact: typically much lower race window probability, actual benefit varies

Effectiveness depends on:

  • Concurrent access patterns (higher concurrency = more benefit)
  • Race window duration (network latency, cache propagation delay)
  • Cost ratio between double-check and upstream query

Modes:

  • DoubleCheckDisabled: Skip double-check. Use when: backend query cost >= upstream cost, the backend is unreliable/slow, or there is no notFoundCache and upstream frequently returns not-found (without notFoundCache, double-check cannot catch not-found results, reducing its effectiveness)
  • DoubleCheckEnabled: Always double-check (adds query cost, reduces upstream calls). Use when: upstream is significantly more expensive than backend queries
  • DoubleCheckAuto: Smart detection based on notFoundCache (default). Enabled when notFoundCache exists (double-check covers both found and not-found scenarios); disabled otherwise (double-check would only cover the found scenario, with limited effectiveness)

Cost-benefit analysis:

Cost = backend_query [+ notFoundCache_query if configured]
Benefit = Avoid upstream_query when hitting race window

Worth enabling when: upstream_cost >> (backend_cost + notFoundCache_cost)

Recommendations by scenario:

  • Memory cache -> DB: DoubleCheckEnabled (DB ≫ memory, ~10000x difference)
  • Redis -> DB: DoubleCheckEnabled (DB ≫ Redis, ~10-50x difference)
  • Redis (+ notFoundCache) -> Redis: DoubleCheckDisabled (cost ≈ benefit)
  • Default/Uncertain: DoubleCheckAuto (smart heuristic)

func WithFetchConcurrency

func WithFetchConcurrency[T any](concurrency int) ClientOption[T]

WithFetchConcurrency sets the maximum number of concurrent fetch operations per key.

Philosophy: Concurrent exploration + Result convergence

  • Exploration phase: When cache misses, allow N concurrent fetches to maximize throughput
  • Convergence phase: Once any fetch completes, all subsequent requests reuse that result

Behavior:

  • concurrency = 1 (default): Full singleflight, all requests wait for single fetch
  • concurrency > 1: Requests distributed across N slots, allowing moderate redundancy

Example: WithFetchConcurrency(5) allows up to 5 concurrent upstream fetches for the same key.

func WithFetchTimeout

func WithFetchTimeout[T any](timeout time.Duration) ClientOption[T]

WithFetchTimeout sets the timeout for upstream fetch operations

func WithLogger

func WithLogger[T any](logger *slog.Logger) ClientOption[T]

WithLogger sets the logger for the client. If not set, slog.Default() is used.

func WithNotFound

func WithNotFound[T any](cache Cache[time.Time], checkStale func(time.Time) State) ClientOption[T]

WithNotFound configures not-found caching with a custom staleness check

func WithServeStale

func WithServeStale[T any](serveStale bool) ClientOption[T]

WithServeStale configures whether to serve stale data while refreshing asynchronously

func WithStale

func WithStale[T any](fn func(T) State) ClientOption[T]

WithStale sets the function to check if cached data is stale

type DoubleCheckMode

type DoubleCheckMode int

DoubleCheckMode defines the double-check optimization strategy

const (
	// DoubleCheckDisabled turns off double-check optimization
	DoubleCheckDisabled DoubleCheckMode = iota

	// DoubleCheckEnabled always performs double-check before upstream fetch
	DoubleCheckEnabled

	// DoubleCheckAuto enables double-check based on configuration (default):
	// - Enabled when notFoundCache exists (can leverage it to catch not-found in race window)
	// - Disabled when no notFoundCache (cost limited to backend only)
	DoubleCheckAuto
)

type Entry

type Entry[T any] struct {
	Data     T         `json:"data"`
	CachedAt time.Time `json:"cachedAt"`
}

Entry is a wrapper for a value with a cache timestamp

type ErrKeyNotFound

type ErrKeyNotFound struct {
	Cached     bool  // whether this NotFound result was cached before
	CacheState State // the state of the cached NotFound entry (only meaningful when Cached=true)
}

ErrKeyNotFound indicates that the requested key was not found in the cache

func (*ErrKeyNotFound) Error

func (e *ErrKeyNotFound) Error() string

Error returns a string representation of the error

type GORMCache

type GORMCache[T any] struct {
	// contains filtered or unexported fields
}

GORMCache is a cache implementation using GORM

func NewGORMCache

func NewGORMCache[T any](config *GORMCacheConfig) *GORMCache[T]

NewGORMCache creates a new GORM-based cache with configuration

func (*GORMCache[T]) Del

func (g *GORMCache[T]) Del(ctx context.Context, key string) error

Del removes a value from the cache

func (*GORMCache[T]) Get

func (g *GORMCache[T]) Get(ctx context.Context, key string) (T, error)

Get retrieves a value from the cache

func (*GORMCache[T]) Migrate

func (g *GORMCache[T]) Migrate(ctx context.Context) error

Migrate creates or updates the cache table schema

func (*GORMCache[T]) Set

func (g *GORMCache[T]) Set(ctx context.Context, key string, value T) error

Set stores a value in the cache

type GORMCacheConfig

type GORMCacheConfig struct {
	// DB is the GORM database connection
	DB *gorm.DB

	// TableName is the name of the cache table
	TableName string

	// KeyPrefix is the prefix for all keys (optional)
	KeyPrefix string
}

GORMCacheConfig holds configuration for GORMCache

type MockClock

type MockClock struct {
	// contains filtered or unexported fields
}

MockClock provides a controllable time source for testing

func NewMockClock

func NewMockClock(start time.Time) *MockClock

NewMockClock creates a new mock clock starting at the given time

func (*MockClock) Advance

func (m *MockClock) Advance(d time.Duration)

Advance moves the clock forward by the given duration

func (*MockClock) Install

func (m *MockClock) Install() func()

Install replaces the global NowFunc with this mock clock

func (*MockClock) Now

func (m *MockClock) Now() time.Time

Now returns the current mocked time

func (*MockClock) Set

func (m *MockClock) Set(t time.Time)

Set sets the clock to a specific time
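
A minimal test sketch, assuming entries become stale after a 5s freshTTL:

clock := cachex.NewMockClock(time.Now())
restore := clock.Install() // cachex now reads time via the mock clock
defer restore()

// ... seed the cache, then fast-forward past freshTTL:
clock.Advance(10 * time.Second) // previously fresh entries now evaluate as stale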

type RedisCache

type RedisCache[T any] struct {
	// contains filtered or unexported fields
}

RedisCache is a cache implementation using Redis

func NewRedisCache

func NewRedisCache[T any](config *RedisCacheConfig) *RedisCache[T]

NewRedisCache creates a new Redis-based cache with configuration

func (*RedisCache[T]) Del

func (r *RedisCache[T]) Del(ctx context.Context, key string) error

Del removes a value from the cache

func (*RedisCache[T]) Get

func (r *RedisCache[T]) Get(ctx context.Context, key string) (T, error)

Get retrieves a value from the cache

func (*RedisCache[T]) Set

func (r *RedisCache[T]) Set(ctx context.Context, key string, value T) error

Set stores a value in the cache

type RedisCacheConfig

type RedisCacheConfig struct {
	// Client is the Redis client (supports both single and cluster)
	Client redis.UniversalClient

	// KeyPrefix is the prefix for all keys (optional)
	KeyPrefix string

	// TTL is the time-to-live for cache entries
	// Zero means no expiration
	TTL time.Duration
}

RedisCacheConfig holds configuration for RedisCache

type RistrettoCache

type RistrettoCache[T any] struct {
	// contains filtered or unexported fields
}

RistrettoCache is a cache implementation using ristretto

func NewRistrettoCache

func NewRistrettoCache[T any](config *RistrettoCacheConfig[T]) (*RistrettoCache[T], error)

NewRistrettoCache creates a new ristretto-based cache

func (*RistrettoCache[T]) Close

func (r *RistrettoCache[T]) Close() error

Close closes the cache and stops all background goroutines

func (*RistrettoCache[T]) Del

func (r *RistrettoCache[T]) Del(_ context.Context, key string) error

Del removes a value from the cache. Note: Del immediately removes the item from storage (see cache.go:376-377). However, it also sends a deletion flag to setBuf to handle operation ordering. If a concurrent Set(new key) is in progress, the order is:

  1. Set sends itemNew to setBuf (but doesn't update storedItems yet)
  2. Del immediately removes from storedItems (removes nothing if key is new)
  3. Del sends itemDelete to setBuf
  4. setBuf processes: itemNew (adds key) → itemDelete (removes key)

Between the processing of itemNew and itemDelete in step 4, there's a race window where Get might find the key. Calling Wait() ensures itemDelete is processed.

func (*RistrettoCache[T]) Get

func (r *RistrettoCache[T]) Get(_ context.Context, key string) (T, error)

Get retrieves a value from the cache

func (*RistrettoCache[T]) Set

func (r *RistrettoCache[T]) Set(_ context.Context, key string, value T) error

Set stores a value in the cache with a cost of 1

type RistrettoCacheConfig

type RistrettoCacheConfig[T any] struct {
	// Config is the ristretto configuration
	*ristretto.Config[string, T]

	// TTL is the time-to-live for cache entries
	// Zero means no expiration
	TTL time.Duration
}

RistrettoCacheConfig holds configuration for RistrettoCache

func DefaultRistrettoCacheConfig

func DefaultRistrettoCacheConfig[T any]() *RistrettoCacheConfig[T]

DefaultRistrettoCacheConfig returns a default configuration

type State

type State int8

State represents the staleness state of cached data

const (
	StateFresh  State = iota // Data is fresh and valid
	StateStale               // Data is stale but usable
	StateRotten              // Data is rotten and must be refreshed
)

type SyncMap

type SyncMap[T any] struct {
	sync.Map
}

SyncMap is a cache implementation using sync.Map

func NewSyncMap

func NewSyncMap[T any]() *SyncMap[T]

func (*SyncMap[T]) Del

func (s *SyncMap[T]) Del(_ context.Context, key string) error

func (*SyncMap[T]) Get

func (s *SyncMap[T]) Get(_ context.Context, key string) (T, error)

func (*SyncMap[T]) Set

func (s *SyncMap[T]) Set(_ context.Context, key string, value T) error

type Upstream

type Upstream[T any] interface {
	Get(ctx context.Context, key string) (T, error)
}

Upstream defines the interface for a data source that can retrieve values

type UpstreamFunc

type UpstreamFunc[T any] func(ctx context.Context, key string) (T, error)

UpstreamFunc is a function adapter that implements the Upstream interface

func (UpstreamFunc[T]) Get

func (f UpstreamFunc[T]) Get(ctx context.Context, key string) (T, error)
