Documentation ¶
Overview ¶
Package compressfs provides a transparent compression/decompression wrapper for any absfs.FileSystem implementation.
It automatically compresses data when writing files and decompresses when reading, supporting multiple compression algorithms with configurable levels and smart content detection.
Features ¶
- Transparent compression/decompression
- 5 compression algorithms: gzip, zstd, lz4, brotli, snappy
- Configurable compression levels
- Skip patterns for selective compression
- Automatic format detection
- Statistics tracking
- Empty file handling
- Large file support
Quick Start ¶
import (
    "io"

    "github.com/absfs/compressfs"
    "github.com/absfs/osfs"
)

// Create base filesystem
base := osfs.New("/data")

// Wrap with zstd compression (recommended)
fs, _ := compressfs.New(base, &compressfs.Config{
    Algorithm: compressfs.AlgorithmZstd,
    Level:     3,
})

// Write file - automatically compressed as data.txt.zst
f, _ := fs.Create("data.txt")
f.Write([]byte("Hello, compressed world!"))
f.Close()

// Read file - automatically decompressed
f, _ = fs.Open("data.txt")
data, _ := io.ReadAll(f)
f.Close()
Algorithm Selection Guide ¶
Choose based on your requirements (a configuration sketch follows this list):
- General Purpose: Zstd (level 3) - Best balance of speed and compression
- Maximum Speed: LZ4 or Snappy - Ultra-fast, moderate compression
- Maximum Compression: Brotli (level 9-11) - Best for static content
- Maximum Compatibility: Gzip - Universally supported
- CPU-Constrained: Snappy - Lowest CPU usage
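A minimal sketch of these choices, using only constructors documented later on this page (NewWithRecommendedConfig, NewWithFastestConfig, NewWithBestCompression, New) with an in-memory base from NewMemFS:

package main

import (
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    // General purpose: the recommended preset (Zstd level 3).
    general, err := compressfs.NewWithRecommendedConfig(compressfs.NewMemFS())
    if err != nil {
        log.Fatal(err)
    }

    // Maximum speed: the fastest preset.
    fast, err := compressfs.NewWithFastestConfig(compressfs.NewMemFS())
    if err != nil {
        log.Fatal(err)
    }

    // Maximum compression: the best-compression preset.
    small, err := compressfs.NewWithBestCompression(compressfs.NewMemFS())
    if err != nil {
        log.Fatal(err)
    }

    // Maximum compatibility: explicit gzip.
    compat, err := compressfs.New(compressfs.NewMemFS(), &compressfs.Config{
        Algorithm: compressfs.AlgorithmGzip,
        Level:     6,
    })
    if err != nil {
        log.Fatal(err)
    }

    _, _, _, _ = general, fast, small, compat
}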
Performance Characteristics ¶
Compression speeds (4KB files; a measurement sketch follows this list):
- LZ4: 642 MB/s (fastest)
- Snappy: 77 MB/s (very fast, low CPU)
- Gzip: 12 MB/s (compatible)
- Brotli: 6 MB/s (best compression)
- Zstd: 4 MB/s (recommended - best ratio/speed balance)
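These figures depend heavily on hardware and data. A rough way to measure on your own machine, using the CompressBytes helper documented below, with levels taken from the per-algorithm defaults listed in Config:

package main

import (
    "fmt"
    "time"

    "github.com/absfs/compressfs"
)

func main() {
    // 4KB of mildly repetitive data, matching the file size above.
    data := make([]byte, 4*1024)
    for i := range data {
        data[i] = byte(i % 64)
    }

    cases := []struct {
        algo  compressfs.Algorithm
        level int
    }{
        {compressfs.AlgorithmLZ4, 1},    // lz4 default level
        {compressfs.AlgorithmSnappy, 0}, // snappy ignores levels
        {compressfs.AlgorithmGzip, 6},   // gzip default level
        {compressfs.AlgorithmBrotli, 6}, // brotli default level
        {compressfs.AlgorithmZstd, 3},   // zstd default level
    }

    for _, c := range cases {
        start := time.Now()
        out, err := compressfs.CompressBytes(data, c.algo, c.level)
        if err != nil {
            fmt.Printf("%v: %v\n", c.algo, err)
            continue
        }
        fmt.Printf("%v: %d -> %d bytes in %v\n", c.algo, len(data), len(out), time.Since(start))
    }
}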
Configuration Options ¶
Extension Handling:
- PreserveExtension: true → file.txt becomes file.txt.zst
- StripExtension: true → access via "file.txt" (transparent)
Selective Compression:
- SkipPatterns: Skip files matching regex patterns
- MinSize: Only compress files above threshold
- AutoDetect: Detect and handle pre-compressed files (these options are combined in the sketch below)
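A sketch combining the options above in a single Config, using only fields documented in the Config type below:

package main

import (
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    base := compressfs.NewMemFS()

    fs, err := compressfs.New(base, &compressfs.Config{
        Algorithm:         compressfs.AlgorithmZstd,
        Level:             3,
        PreserveExtension: true, // report.txt is stored as report.txt.zst
        StripExtension:    true, // ...but is still opened as "report.txt"
        SkipPatterns:      []string{`\.(jpg|png|mp4|zip)$`},
        MinSize:           512,  // leave files under 512 bytes uncompressed
        AutoDetect:        true, // skip content that is already compressed
    })
    if err != nil {
        log.Fatal(err)
    }
    _ = fs
}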
See examples in the examples directory for more usage patterns.
Example (AlgorithmRules) ¶
ExampleAlgorithmRules demonstrates custom algorithm rules
package main

import (
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    memfs := compressfs.NewMemFS()

    // Define custom rules for different file types
    config := &compressfs.Config{
        Algorithm: compressfs.AlgorithmZstd,
        Level:     3,
        AlgorithmRules: []compressfs.AlgorithmRule{
            // Critical data: maximum compression
            {
                Pattern:   `^/important/`,
                Algorithm: compressfs.AlgorithmBrotli,
                Level:     11,
            },
            // Logs: fast compression
            {
                Pattern:   `\.log$`,
                Algorithm: compressfs.AlgorithmLZ4,
                Level:     0,
            },
            // Cache files: very fast
            {
                Pattern:   `^/cache/`,
                Algorithm: compressfs.AlgorithmSnappy,
                Level:     0,
            },
        },
        PreserveExtension: true,
        StripExtension:    true,
    }

    fs, _ := compressfs.New(memfs, config)

    // Each file uses the algorithm matching its pattern
    fs.Create("/important/data.txt") // Uses Brotli level 11
    fs.Create("app.log")             // Uses LZ4
    fs.Create("/cache/temp.dat")     // Uses Snappy
    fs.Create("regular.txt")         // Uses default Zstd level 3

    fmt.Println("Files compressed with custom rules")
}
Output: Files compressed with custom rules
Example (ArchivalConfig) ¶
ExampleArchivalConfig demonstrates maximum compression for archival
package main

import (
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    memfs := compressfs.NewMemFS()

    // Optimized for maximum compression
    // - Brotli level 11 (best compression)
    // - Custom rules for different file types
    fs, _ := compressfs.NewWithArchival(memfs)

    // Compress for long-term storage
    file, _ := fs.Create("archive.txt")
    file.Write([]byte("Important data to archive with maximum compression"))
    file.Close()

    fmt.Println("Data compressed for archival storage")
}
Output: Data compressed for archival storage
Example (AutoTuning) ¶
ExampleAutoTuning demonstrates automatic compression level adjustment
package main

import (
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    memfs := compressfs.NewMemFS()

    config := &compressfs.Config{
        Algorithm:             compressfs.AlgorithmZstd,
        Level:                 6, // High compression by default
        EnableAutoTuning:      true,
        AutoTuneSizeThreshold: 1024 * 1024, // 1MB
        PreserveExtension:     true,
        StripExtension:        true,
    }

    fs, _ := compressfs.New(memfs, config)

    // Small files (< 1MB) use level 6 (high compression)
    smallFile, _ := fs.Create("small.txt")
    smallFile.Write(make([]byte, 100*1024)) // 100KB
    smallFile.Close()

    // Large files (> 1MB) automatically use lower level for speed
    // Level is reduced to 1-2 for faster compression
    largeFile, _ := fs.Create("large.dat")
    largeFile.Write(make([]byte, 10*1024*1024)) // 10MB
    largeFile.Close()

    fmt.Println("Compression levels auto-tuned based on file size")
}
Output: Compression levels auto-tuned based on file size
Example (Basic) ¶
package main

import (
    "fmt"
    "io"
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    // Create an in-memory filesystem for demonstration
    base := compressfs.NewMemFS()

    // Wrap with compression using gzip
    cfs, err := compressfs.New(base, &compressfs.Config{
        Algorithm:         compressfs.AlgorithmGzip,
        Level:             6,
        PreserveExtension: true,
        StripExtension:    true,
    })
    if err != nil {
        log.Fatal(err)
    }

    // Write a file - it will be automatically compressed
    f, err := cfs.Create("data.txt")
    if err != nil {
        log.Fatal(err)
    }
    data := []byte("Hello, compressed world! This data will be automatically compressed.")
    _, err = f.Write(data)
    if err != nil {
        log.Fatal(err)
    }
    err = f.Close()
    if err != nil {
        log.Fatal(err)
    }

    // Read the file back - it will be automatically decompressed
    f, err = cfs.Open("data.txt")
    if err != nil {
        log.Fatal(err)
    }
    readData, err := io.ReadAll(f)
    if err != nil {
        log.Fatal(err)
    }
    err = f.Close()
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(string(readData))
}
Output: Hello, compressed world! This data will be automatically compressed.
Example (CombinedFeatures) ¶
ExampleCombinedFeatures demonstrates using multiple advanced features together
package main

import (
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    memfs := compressfs.NewMemFS()

    // Combine algorithm rules, auto-tuning, and dictionaries
    config := &compressfs.Config{
        Algorithm: compressfs.AlgorithmZstd,
        Level:     3,
        AlgorithmRules: []compressfs.AlgorithmRule{
            {Pattern: `\.log$`, Algorithm: compressfs.AlgorithmLZ4},
            {Pattern: `\.json$`, Algorithm: compressfs.AlgorithmZstd, Level: 6},
        },
        EnableAutoTuning:      true,
        AutoTuneSizeThreshold: 1024 * 1024,
        ZstdDictionary:        []byte("sample dictionary"),
        SkipPatterns: []string{
            `\.(jpg|png|zip)$`, // Skip already compressed
        },
        PreserveExtension: true,
        StripExtension:    true,
    }

    fs, _ := compressfs.New(memfs, config)

    // Each file is handled optimally
    fs.Create("app.log")   // LZ4 (rule)
    fs.Create("data.json") // Zstd level 6 (rule)
    fs.Create("large.txt") // Auto-tuned level
    fs.Create("photo.jpg") // Skipped (already compressed)

    fmt.Println("Combined features for optimal compression")
}
Output: Combined features for optimal compression
Example (HighPerformanceConfig) ¶
ExampleHighPerformanceConfig demonstrates high-throughput configuration
package main

import (
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    memfs := compressfs.NewMemFS()

    // Optimized for maximum speed
    // - LZ4 algorithm (fastest)
    // - Large buffers (256KB)
    // - Parallel compression enabled
    fs, _ := compressfs.NewWithHighPerformance(memfs)

    // Compress data at maximum speed
    file, _ := fs.Create("data.bin")
    file.Write(make([]byte, 1024*1024)) // 1MB
    file.Close()

    fmt.Println("Data compressed at high speed")
}
Output: Data compressed at high speed
Example (MinSize) ¶
package main

import (
    "fmt"
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    base := compressfs.NewMemFS()

    cfs, err := compressfs.New(base, &compressfs.Config{
        Algorithm:         compressfs.AlgorithmGzip,
        MinSize:           100, // Only compress files >= 100 bytes
        PreserveExtension: true,
        StripExtension:    true,
    })
    if err != nil {
        log.Fatal(err)
    }

    // Small file - won't be compressed
    f, _ := cfs.Create("small.txt")
    f.Write([]byte("tiny"))
    f.Close()

    // Large file - will be compressed
    f, _ = cfs.Create("large.txt")
    largeData := make([]byte, 200)
    for i := range largeData {
        largeData[i] = 'a'
    }
    f.Write(largeData)
    f.Close()

    stats := cfs.GetStats()
    fmt.Printf("Files compressed: %d\n", stats.FilesCompressed)
    fmt.Printf("Files skipped: %d\n", stats.FilesSkipped)
}
Output:

Files compressed: 1
Files skipped: 1
Example (SkipPatterns) ¶
package main

import (
    "fmt"
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    base := compressfs.NewMemFS()

    // Configure to skip already-compressed formats
    cfs, err := compressfs.New(base, &compressfs.Config{
        Algorithm:         compressfs.AlgorithmGzip,
        PreserveExtension: true,
        StripExtension:    true,
        SkipPatterns: []string{
            `\.(jpg|jpeg|png|gif)$`, // Images
            `\.(zip|gz|bz2)$`,       // Archives
        },
    })
    if err != nil {
        log.Fatal(err)
    }

    // This file will NOT be compressed (matches skip pattern)
    f, _ := cfs.Create("image.jpg")
    f.Write([]byte("fake image data"))
    f.Close()

    // This file WILL be compressed (doesn't match skip pattern)
    f, _ = cfs.Create("document.txt")
    f.Write([]byte("document content"))
    f.Close()

    fmt.Println("Files processed with skip patterns")
}
Output: Files processed with skip patterns
Example (SmartConfig) ¶
ExampleSmartConfig demonstrates using SmartConfig with intelligent algorithm selection
package main

import (
    "fmt"
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    // Create a memory filesystem for demo
    memfs := compressfs.NewMemFS()

    // Create filesystem with smart configuration
    // - Auto-selects algorithms based on file type
    // - LZ4 for logs (speed)
    // - Zstd for JSON/XML (balance)
    // - Snappy for temp files (very fast)
    fs, err := compressfs.NewWithSmartConfig(memfs)
    if err != nil {
        log.Fatal(err)
    }

    // Log files automatically use LZ4 (fast)
    logFile, _ := fs.Create("app.log")
    logFile.Write([]byte("2025-01-15 INFO: Application started\n"))
    logFile.Close()

    // JSON files automatically use Zstd level 6 (good compression)
    jsonFile, _ := fs.Create("config.json")
    jsonFile.Write([]byte(`{"setting": "value", "count": 42}`))
    jsonFile.Close()

    // Regular files use default Zstd level 3
    textFile, _ := fs.Create("readme.txt")
    textFile.Write([]byte("This is a readme file"))
    textFile.Close()

    fmt.Println("Files compressed with smart algorithm selection")
}
Output: Files compressed with smart algorithm selection
Example (Statistics) ¶
package main

import (
    "fmt"
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    base := compressfs.NewMemFS()

    cfs, err := compressfs.New(base, &compressfs.Config{
        Algorithm:         compressfs.AlgorithmGzip,
        PreserveExtension: true,
        StripExtension:    true,
    })
    if err != nil {
        log.Fatal(err)
    }

    // Write some files
    for i := 0; i < 3; i++ {
        f, _ := cfs.Create(fmt.Sprintf("file%d.txt", i))
        f.Write([]byte(fmt.Sprintf("Content for file %d", i)))
        f.Close()
    }

    // Check statistics
    stats := cfs.GetStats()
    fmt.Printf("Files compressed: %d\n", stats.FilesCompressed)
    fmt.Printf("Bytes written: %d\n", stats.BytesWritten)
}
Output:

Files compressed: 3
Bytes written: 54
Example (TransparentExtensions) ¶
package main

import (
    "fmt"
    "io"
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    base := compressfs.NewMemFS()

    cfs, err := compressfs.New(base, &compressfs.Config{
        Algorithm:         compressfs.AlgorithmGzip,
        PreserveExtension: true, // file.txt -> file.txt.gz
        StripExtension:    true, // access via "file.txt"
    })
    if err != nil {
        log.Fatal(err)
    }

    // Write to "data.txt" - actually stored as "data.txt.gz"
    f, _ := cfs.Create("data.txt")
    f.Write([]byte("transparent compression"))
    f.Close()

    // Read from "data.txt" - automatically finds "data.txt.gz"
    f, _ = cfs.Open("data.txt")
    content, _ := io.ReadAll(f)
    f.Close()

    fmt.Println(string(content))
}
Output: transparent compression
Example (ZstdDictionary) ¶
ExampleZstdDictionary demonstrates dictionary-based compression
package main

import (
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    memfs := compressfs.NewMemFS()

    // In practice, train dictionary from sample data
    // For demo, use a simple dictionary
    dictionary := []byte("common repeated pattern")

    config := &compressfs.Config{
        Algorithm:         compressfs.AlgorithmZstd,
        Level:             3,
        ZstdDictionary:    dictionary,
        PreserveExtension: true,
        StripExtension:    true,
    }

    fs, _ := compressfs.New(memfs, config)

    // Files with similar patterns compress better with dictionary
    file, _ := fs.Create("data.txt")
    file.Write([]byte("common repeated pattern appears common repeated pattern"))
    file.Close()

    // Read back - dictionary is used automatically
    file, _ = fs.Open("data.txt")
    data := make([]byte, 1024)
    n, _ := file.Read(data)
    file.Close()

    fmt.Printf("Read %d bytes with dictionary compression\n", n)
}
Output: Read 55 bytes with dictionary compression
Index ¶
- Variables
- func AddExtension(name string, algo Algorithm, preserveOriginal bool) string
- func CompressBytes(data []byte, algo Algorithm, level int) ([]byte, error)
- func DecompressBytes(data []byte, algo Algorithm) ([]byte, error)
- func GetCompressionPercentage(originalSize, compressedSize int64) float64
- func GetCompressionRatio(originalSize, compressedSize int64) float64
- func GetExtension(algo Algorithm) string
- func HasCompressionExtension(name string) bool
- func NewMemFS() absfs.Filer
- func TrainZstdDictionary(samples [][]byte, dictSize int) ([]byte, error)
- type Algorithm
- type AlgorithmRule
- type Config
- type FS
- func New(base interface{}, config *Config) (*FS, error)
- func NewWithArchival(base interface{}) (*FS, error)
- func NewWithBestCompression(base interface{}) (*FS, error)
- func NewWithFastestConfig(base interface{}) (*FS, error)
- func NewWithHighPerformance(base interface{}) (*FS, error)
- func NewWithRecommendedConfig(base interface{}) (*FS, error)
- func NewWithSmartConfig(base interface{}) (*FS, error)
- func (cfs *FS) Chdir(dir string) error
- func (cfs *FS) Chmod(name string, mode os.FileMode) error
- func (cfs *FS) Chown(name string, uid, gid int) error
- func (cfs *FS) Chtimes(name string, atime time.Time, mtime time.Time) error
- func (cfs *FS) Create(name string) (absfs.File, error)
- func (cfs *FS) GetStats() *Stats
- func (cfs *FS) Getwd() (string, error)
- func (cfs *FS) Mkdir(name string, perm fs.FileMode) error
- func (cfs *FS) MkdirAll(name string, perm os.FileMode) error
- func (cfs *FS) Open(name string) (absfs.File, error)
- func (cfs *FS) OpenFile(name string, flag int, perm fs.FileMode) (absfs.File, error)
- func (cfs *FS) ReadDir(name string) ([]fs.DirEntry, error)
- func (cfs *FS) ReadFile(name string) ([]byte, error)
- func (cfs *FS) Remove(name string) error
- func (cfs *FS) RemoveAll(path string) error
- func (cfs *FS) Rename(oldpath, newpath string) error
- func (cfs *FS) ResetStats()
- func (cfs *FS) SetAlgorithm(algo Algorithm) error
- func (cfs *FS) SetLevel(level int) error
- func (cfs *FS) Stat(name string) (fs.FileInfo, error)
- func (cfs *FS) Sub(dir string) (fs.FS, error)
- func (cfs *FS) TempDir() string
- func (cfs *FS) Truncate(name string, size int64) error
- type File
- type FileSystem
- type Stats
Examples ¶
- Package (AlgorithmRules)
- Package (ArchivalConfig)
- Package (AutoTuning)
- Package (Basic)
- Package (CombinedFeatures)
- Package (HighPerformanceConfig)
- Package (MinSize)
- Package (SkipPatterns)
- Package (SmartConfig)
- Package (Statistics)
- Package (TransparentExtensions)
- Package (ZstdDictionary)
Constants ¶
This section is empty.
Variables ¶
var (
    ErrUnsupportedAlgorithm = errors.New("compressfs: unsupported compression algorithm")
    ErrInvalidLevel         = errors.New("compressfs: invalid compression level")
    ErrSeekNotSupported     = errors.New("compressfs: seek not supported for compressed files")
    ErrAlreadyCompressed    = errors.New("compressfs: file already compressed")
    ErrCorruptedData        = errors.New("compressfs: corrupted compressed data")
)
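These sentinel errors can be tested with the standard errors.Is; a small sketch (the wrapped error here is fabricated for illustration):

package main

import (
    "errors"
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    // Hypothetical: a Seek on a compressed file may fail with ErrSeekNotSupported.
    err := fmt.Errorf("seek failed: %w", compressfs.ErrSeekNotSupported)
    if errors.Is(err, compressfs.ErrSeekNotSupported) {
        fmt.Println("compressed files do not support seeking")
    }
}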
Functions ¶
func AddExtension ¶
AddExtension adds the compression extension to a filename
func CompressBytes ¶
CompressBytes compresses a byte slice using the specified algorithm and level
func DecompressBytes ¶
DecompressBytes decompresses a byte slice using the specified algorithm
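A quick round trip through the two helpers, based on the signatures in the index:

package main

import (
    "fmt"
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    original := []byte("compress me without going through a filesystem")

    compressed, err := compressfs.CompressBytes(original, compressfs.AlgorithmGzip, 6)
    if err != nil {
        log.Fatal(err)
    }

    restored, err := compressfs.DecompressBytes(compressed, compressfs.AlgorithmGzip)
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("%d -> %d -> %d bytes\n", len(original), len(compressed), len(restored))
}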
func GetCompressionPercentage ¶
GetCompressionPercentage calculates the compression percentage. It returns the percentage of space saved (0-100); e.g., 50 means 50% space savings.
func GetCompressionRatio ¶
GetCompressionRatio calculates the compression ratio for given original and compressed sizes. It returns a value between 0 and 1, where lower is better; e.g., 0.5 means the compressed size is 50% of the original.
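For example, a hypothetical 1000-byte file compressed to 400 bytes yields complementary views of the same result:

package main

import (
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    // Hypothetical sizes: a 1000-byte file compressed to 400 bytes.
    ratio := compressfs.GetCompressionRatio(1000, 400)      // 0.4: compressed size is 40% of the original
    saved := compressfs.GetCompressionPercentage(1000, 400) // 60: 60% space savings
    fmt.Printf("ratio=%.2f saved=%.0f%%\n", ratio, saved)
}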
func GetExtension ¶
GetExtension returns the file extension for an algorithm
func HasCompressionExtension ¶
HasCompressionExtension checks if filename has a compression extension
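A sketch of the naming helpers (AddExtension, GetExtension, HasCompressionExtension). Whether GetExtension includes the leading dot is an assumption here; verify the exact return values against the source:

package main

import (
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    // Presumably ".zst" for zstd, matching the data.txt.zst naming in the overview.
    fmt.Println(compressfs.GetExtension(compressfs.AlgorithmZstd))

    // preserveOriginal=true keeps the .txt: report.txt -> report.txt.zst
    name := compressfs.AddExtension("report.txt", compressfs.AlgorithmZstd, true)
    fmt.Println(name)

    fmt.Println(compressfs.HasCompressionExtension(name)) // true
}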
func TrainZstdDictionary ¶
TrainZstdDictionary trains a zstd dictionary from sample data. samples should contain representative data similar to what will be compressed; dictSize is the target dictionary size in bytes (recommended: 100KB - 1MB). Returns the trained dictionary or an error.
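A sketch of training a dictionary and plugging it into a Config; a real corpus should contain far more (and more varied) samples than this:

package main

import (
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    // Representative samples of the data you expect to compress.
    samples := [][]byte{
        []byte(`{"level":"info","msg":"request served","status":200}`),
        []byte(`{"level":"warn","msg":"request slow","status":200}`),
        []byte(`{"level":"info","msg":"request served","status":404}`),
    }

    // 100KB target size, the low end of the recommended 100KB - 1MB range.
    dict, err := compressfs.TrainZstdDictionary(samples, 100*1024)
    if err != nil {
        log.Fatal(err)
    }

    // Use the dictionary for both writes and reads.
    fs, err := compressfs.New(compressfs.NewMemFS(), &compressfs.Config{
        Algorithm:      compressfs.AlgorithmZstd,
        Level:          3,
        ZstdDictionary: dict,
    })
    if err != nil {
        log.Fatal(err)
    }
    _ = fs
}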
Types ¶
type Algorithm ¶
type Algorithm string
Algorithm represents a compression algorithm
func DetectAlgorithm ¶
DetectAlgorithm detects compression algorithm from magic bytes
func DetectAlgorithmFromExtension ¶
DetectAlgorithmFromExtension detects the algorithm from file extension
func DetectCompressionAlgorithm ¶
DetectCompressionAlgorithm detects the compression algorithm from data
func IsCompressed ¶
IsCompressed checks if data appears to be compressed based on magic bytes
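The exact signatures of these detectors are not listed above. The sketch below assumes DetectAlgorithm and IsCompressed take a byte slice and DetectAlgorithmFromExtension takes a filename; treat the signatures as assumptions to check against the source:

package main

import (
    "fmt"

    "github.com/absfs/compressfs"
)

func main() {
    // Assumed signatures (verify against the package source):
    //   func DetectAlgorithm(data []byte) Algorithm
    //   func DetectAlgorithmFromExtension(name string) Algorithm
    //   func IsCompressed(data []byte) bool
    header := []byte{0x1f, 0x8b, 0x08, 0x00} // gzip magic bytes
    if compressfs.IsCompressed(header) {
        fmt.Println("magic bytes say:", compressfs.DetectAlgorithm(header))
    }
    fmt.Println("extension says:", compressfs.DetectAlgorithmFromExtension("backup.tar.gz"))
}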
type AlgorithmRule ¶
type AlgorithmRule struct {
    // Pattern to match file names (regex)
    Pattern string

    // Algorithm to use for matching files
    Algorithm Algorithm

    // Compression level override (-1 = use default, 0+ = specific level)
    Level int
}
AlgorithmRule defines algorithm selection based on file patterns
type Config ¶
type Config struct {
    // Algorithm to use for compression (default: zstd)
    Algorithm Algorithm

    // Compression level (algorithm-specific)
    //   gzip:   1-9  (6 default)
    //   zstd:   1-22 (3 default)
    //   lz4:    1-16 (1 default)
    //   brotli: 0-11 (6 default)
    //   snappy: ignored (no levels)
    Level int

    // Skip patterns - regex patterns for files to skip compression
    // Examples: []string{`\.jpg$`, `\.png$`, `\.mp4$`, `\.zip$`}
    SkipPatterns []string

    // Auto-detect already compressed content by magic bytes
    AutoDetect bool // default: true

    // Preserve original extension (e.g., file.txt.gz vs file.gz)
    PreserveExtension bool // default: true

    // Strip compression extensions on reads (transparent)
    StripExtension bool // default: true

    // Buffer size for streaming (default: 64KB)
    BufferSize int

    // Minimum file size to compress (skip smaller files)
    MinSize int64 // default: 0 (compress all)

    // AlgorithmRules defines file-specific algorithm selection.
    // Rules are evaluated in order; first match wins.
    AlgorithmRules []AlgorithmRule

    // EnableAutoTuning enables automatic compression level adjustment
    // based on file size and type.
    EnableAutoTuning bool

    // AutoTuneSizeThreshold is the file size threshold for auto-tuning (bytes).
    // Files larger than this may use lower compression levels for speed.
    AutoTuneSizeThreshold int64 // default: 1MB

    // ZstdDictionary is a pre-trained dictionary for zstd compression.
    // Improves compression ratio for similar files.
    ZstdDictionary []byte

    // EnableParallelCompression enables parallel compression for large files.
    // Only applies to files larger than ParallelThreshold.
    EnableParallelCompression bool

    // ParallelThreshold is the minimum file size for parallel compression
    ParallelThreshold int64 // default: 10MB

    // ParallelChunkSize is the chunk size for parallel compression
    ParallelChunkSize int // default: 1MB

    // AllowRecompression allows transparent re-compression when reading
    // files compressed with a different algorithm.
    AllowRecompression bool

    // RecompressionTarget is the target algorithm for re-compression
    RecompressionTarget Algorithm
}
Config holds compression filesystem configuration
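The parallel-compression and re-compression fields have no dedicated example above; a minimal sketch wiring them up, using only fields documented in the struct:

package main

import (
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    cfg := &compressfs.Config{
        Algorithm:                 compressfs.AlgorithmZstd,
        Level:                     3,
        EnableParallelCompression: true,
        ParallelThreshold:         10 * 1024 * 1024,         // only files larger than 10MB
        ParallelChunkSize:         1024 * 1024,              // compressed in 1MB chunks
        AllowRecompression:        true,                     // transparently re-compress on read...
        RecompressionTarget:       compressfs.AlgorithmZstd, // ...into zstd
    }

    fs, err := compressfs.New(compressfs.NewMemFS(), cfg)
    if err != nil {
        log.Fatal(err)
    }
    _ = fs
}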
func ArchivalConfig ¶
func ArchivalConfig() *Config
ArchivalConfig returns a configuration optimized for long-term storage: maximum compression with brotli, optimized for write-once/read-many.
func BestCompressionConfig ¶
func BestCompressionConfig() *Config
BestCompressionConfig returns a configuration optimized for maximum compression. Use it for static content or write-once/read-many scenarios.
func CompatibleConfig ¶
func CompatibleConfig() *Config
CompatibleConfig returns a configuration using gzip for maximum compatibility
func DefaultConfig ¶
func DefaultConfig() *Config
DefaultConfig returns a config with sensible defaults
func FastestConfig ¶
func FastestConfig() *Config
FastestConfig returns a configuration optimized for speed
func HighPerformanceConfig ¶
func HighPerformanceConfig() *Config
HighPerformanceConfig returns a configuration optimized for high throughput. It uses LZ4 for maximum speed with minimal CPU usage.
func LowCPUConfig ¶
func LowCPUConfig() *Config
LowCPUConfig returns a configuration optimized for low CPU usage
func RecommendedConfig ¶
func RecommendedConfig() *Config
RecommendedConfig returns the recommended configuration for general use. It uses Zstd level 3, which provides excellent compression with good speed.
func SmartConfig ¶
func SmartConfig() *Config
SmartConfig returns a configuration with intelligent defaults based on use case. It enables auto-tuning, algorithm rules for different file types, and skip patterns.
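Each preset returns a *Config, so it can be adjusted before being handed to New. A small sketch (the extra skip pattern is purely illustrative):

package main

import (
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    cfg := compressfs.RecommendedConfig()
    // Hypothetical tweak: also skip Parquet files, which are compressed internally.
    cfg.SkipPatterns = append(cfg.SkipPatterns, `\.parquet$`)

    fs, err := compressfs.New(compressfs.NewMemFS(), cfg)
    if err != nil {
        log.Fatal(err)
    }
    _ = fs
}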
type FS ¶
type FS struct {
// contains filtered or unexported fields
}
FS wraps a FileSystem with compression capabilities
func New ¶
New creates a new compressed filesystem wrapper. The base parameter can be:
- absfs.FileSystem
- absfs.Filer (will be extended to FileSystem)
- FileSystem (deprecated interface, will be adapted)
func NewWithArchival ¶
NewWithArchival creates a compressed filesystem optimized for maximum compression
func NewWithBestCompression ¶
NewWithBestCompression creates a new compressed filesystem optimized for compression ratio
func NewWithFastestConfig ¶
NewWithFastestConfig creates a new compressed filesystem optimized for speed
func NewWithHighPerformance ¶
NewWithHighPerformance creates a compressed filesystem optimized for speed
func NewWithRecommendedConfig ¶
NewWithRecommendedConfig creates a new compressed filesystem with recommended settings
func NewWithSmartConfig ¶
NewWithSmartConfig creates a compressed filesystem with intelligent algorithm selection
func (*FS) ReadFile ¶
ReadFile reads the named file and returns its contents. This reads and decompresses the file if it's compressed.
func (*FS) SetAlgorithm ¶
SetAlgorithm changes the compression algorithm
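SetAlgorithm and its companion SetLevel (see the index) change settings on a live filesystem. A short sketch:

package main

import (
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    fs, err := compressfs.New(compressfs.NewMemFS(), compressfs.DefaultConfig())
    if err != nil {
        log.Fatal(err)
    }

    // Switch subsequent writes to Brotli at level 9.
    if err := fs.SetAlgorithm(compressfs.AlgorithmBrotli); err != nil {
        log.Fatal(err)
    }
    if err := fs.SetLevel(9); err != nil {
        log.Fatal(err)
    }
}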
type File ¶
type File interface {
    io.Reader
    io.Writer
    io.Closer
    io.Seeker
    Stat() (fs.FileInfo, error)
    Sync() error
}
File interface for compressed files
type FileSystem ¶
type FileSystem interface {
    Open(name string) (File, error)
    OpenFile(name string, flag int, perm fs.FileMode) (File, error)
    Create(name string) (File, error)
    Mkdir(name string, perm fs.FileMode) error
    Remove(name string) error
    Stat(name string) (fs.FileInfo, error)
    ReadDir(name string) ([]fs.DirEntry, error)
}
FileSystem interface that compressfs wraps.

Deprecated: Use absfs.FileSystem instead. This interface is maintained for backward compatibility.
type Stats ¶
type Stats struct {
    FilesCompressed   int64
    FilesDecompressed int64
    FilesSkipped      int64
    BytesRead         int64
    BytesWritten      int64
    BytesCompressed   int64
    BytesDecompressed int64
    AlgorithmCounts   sync.Map // map[Algorithm]int64
}
Stats holds compression statistics
func (*Stats) GetAlgorithmCount ¶
GetAlgorithmCount returns the count for a specific algorithm
func (*Stats) IncrementAlgorithmCount ¶
IncrementAlgorithmCount increments the count for a specific algorithm
func (*Stats) TotalCompressionRatio ¶
TotalCompressionRatio returns the overall compression ratio
func (*Stats) TotalDecompressionRatio ¶
TotalDecompressionRatio returns the overall decompression ratio
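The return types of the Stats helpers are not shown above. The sketch below assumes GetAlgorithmCount takes an Algorithm and returns a count, and that the ratio methods return float64; verify against the source:

package main

import (
    "fmt"
    "log"

    "github.com/absfs/compressfs"
)

func main() {
    fs, err := compressfs.New(compressfs.NewMemFS(), compressfs.DefaultConfig())
    if err != nil {
        log.Fatal(err)
    }

    f, _ := fs.Create("notes.txt")
    f.Write([]byte("some compressible text, repeated: some compressible text"))
    f.Close()

    stats := fs.GetStats()
    fmt.Println("files compressed:", stats.FilesCompressed)
    fmt.Println("zstd files:", stats.GetAlgorithmCount(compressfs.AlgorithmZstd)) // assumed count return type
    fmt.Printf("overall ratio: %.2f\n", stats.TotalCompressionRatio())            // assumed float64
}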