Entroca

Next-Generation, Bare-Metal Caching for the Performance-Critical

Entroca is an experimental, high-performance key-value store designed for developers who demand maximum control over their caching layer. Built in Zig with a shared-nothing architecture, it prioritizes raw speed, minimal overhead, and compile-time customization—at the cost of convenience.

Link to GitHub.

Why Entroca Exists

Modern caches often abstract too much. Entroca flips the script: it’s a collaboration between developers and infrastructure. Clients handle hashing and port routing directly, while the server focuses on doing one thing exceptionally well: storing and retrieving data with near-zero runtime fluff.

Core Philosophy

Do as little as possible at runtime: push hashing and routing to the client, decide as much as possible at compile time, and keep the server's hot path free of abstraction.

What Makes It Different

✅ Thermodynamic Eviction

Forget LRU/LFU. An experimental temperature-based scheme, inspired by thermodynamics, uses probability to keep hot data resident and let cold data fall out, with minimal overhead.
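The exact algorithm is still experimental, so the snippet below is only a hypothetical illustration of the general idea, not Entroca's implementation: each slot carries a small temperature, hits warm it probabilistically, and eviction samples a few random slots and drops the coldest, so there is no global LRU/LFU bookkeeping. It assumes a recent Zig std where the RNG interface is `std.Random`.

```zig
const std = @import("std");

// Hypothetical sketch only; Entroca's real eviction logic may differ.
const Slot = struct {
    key: u64 = 0,
    temperature: u8 = 0,
};

fn onHit(rand: std.Random, slot: *Slot) void {
    // Warm with probability 1/4 so hot keys rise gradually instead of saturating.
    if (slot.temperature < 255 and rand.uintLessThan(u8, 4) == 0)
        slot.temperature += 1;
}

fn pickVictim(rand: std.Random, slots: []Slot) usize {
    // Sample a handful of random slots and evict the coldest of the sample.
    var coldest = rand.uintLessThan(usize, slots.len);
    var i: usize = 1;
    while (i < 8) : (i += 1) {
        const c = rand.uintLessThan(usize, slots.len);
        if (slots[c].temperature < slots[coldest].temperature) coldest = c;
    }
    return coldest;
}
```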

✅ Client-Driven Architecture

You control hashing and port routing. The server stays lean—no internal hashing, no dynamic thread management.
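In practice the client computes the hash and picks the shard's port itself. A minimal sketch, assuming a Wyhash hash and a contiguous port range; `base_port` and `shard_count` are illustrative values, not part of Entroca's protocol:

```zig
const std = @import("std");

// Illustrative assumptions: the deployment defines the real values.
const base_port: u16 = 7000;
const shard_count: u16 = 4;

// The client hashes the key and routes to a port; the server never hashes.
fn portForKey(key: []const u8) u16 {
    const h = std.hash.Wyhash.hash(0, key);
    return base_port + @as(u16, @intCast(h % shard_count));
}

pub fn main() void {
    std.debug.print("user:42 -> port {d}\n", .{portForKey("user:42")});
}
```

Because the server does no hashing of its own, every client must agree on the same hash function and port layout to see the same data.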

✅ Compile-Time Specialization (Planned)

Leverage Zig’s comptime to strip out unneeded logic. Want fixed-size keys? Preallocated buffers? Build a cache binary tailored to your data.
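Since this is still planned, the following is only a sketch of what such specialization could look like: key length and slot count become comptime parameters, so the emitted binary works on fixed-size keys in preallocated arrays with no runtime sizing logic.

```zig
const std = @import("std");

// Hypothetical sketch of the planned comptime specialization.
fn FixedCache(comptime key_len: usize, comptime slot_count: usize) type {
    return struct {
        const Self = @This();

        keys: [slot_count][key_len]u8 = undefined,
        values: [slot_count]u64 = undefined,
        used: [slot_count]bool = [_]bool{false} ** slot_count,

        // Direct-mapped for brevity: a colliding key simply overwrites the slot.
        pub fn put(self: *Self, key: [key_len]u8, value: u64) void {
            const idx: usize = @intCast(std.hash.Wyhash.hash(0, &key) % slot_count);
            self.keys[idx] = key;
            self.values[idx] = value;
            self.used[idx] = true;
        }

        pub fn get(self: *Self, key: [key_len]u8) ?u64 {
            const idx: usize = @intCast(std.hash.Wyhash.hash(0, &key) % slot_count);
            if (self.used[idx] and std.mem.eql(u8, &self.keys[idx], &key)) return self.values[idx];
            return null;
        }
    };
}
```

Instantiating, say, `var cache = FixedCache(16, 1024){};` bakes 16-byte keys and 1024 preallocated slots into the type at compile time.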

✅ Memory That Listens (Planned)

A custom multi-layered bitmap allocator is in development to reduce fragmentation and outpace general-purpose allocators.
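The allocator is still in development, so details may differ, but "multi-layered bitmap" generally means stacking summary words on top of the slot bitmap. A minimal two-level sketch (hypothetical, not the planned design): a 64-bit summary word marks which lower words are full, so finding a free slot among 4096 takes two trailing-zero scans instead of a linear walk.

```zig
const std = @import("std");

const TwoLevelBitmap = struct {
    summary: u64 = 0, // bit i set => words[i] is completely full
    words: [64]u64 = [_]u64{0} ** 64, // bit set => that slot is in use

    pub fn alloc(self: *TwoLevelBitmap) ?usize {
        const not_full = ~self.summary;
        if (not_full == 0) return null; // every word is full
        const w: u6 = @intCast(@ctz(not_full)); // first word with a free slot
        const b: u6 = @intCast(@ctz(~self.words[w])); // first free bit in it
        self.words[w] |= @as(u64, 1) << b;
        if (self.words[w] == ~@as(u64, 0)) // word just filled up
            self.summary |= @as(u64, 1) << w;
        return @as(usize, w) * 64 + b;
    }

    pub fn free(self: *TwoLevelBitmap, slot: usize) void {
        std.debug.assert(slot < 64 * 64);
        const w: u6 = @intCast(slot / 64);
        const b: u6 = @intCast(slot % 64);
        self.words[w] &= ~(@as(u64, 1) << b);
        self.summary &= ~(@as(u64, 1) << w); // word has a free slot again
    }
};
```

Each extra layer multiplies capacity by 64 while keeping allocation at one trailing-zero scan per layer.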


Current State

What works today:

- Temperature-based (thermodynamic) eviction
- Client-driven hashing and port routing against a lean, shared-nothing server

What's coming:

- Compile-time specialization via Zig's comptime
- A custom multi-layered bitmap allocator


Who It’s For

Consider Entroca if you:

- Want maximum control over your caching layer and are comfortable handling hashing and port routing in the client
- Are willing to trade convenience for raw speed, minimal overhead, and compile-time tailoring

Avoid if you:

- Want a batteries-included cache that handles hashing, routing, and eviction tuning for you
- Need something stable and production-proven rather than experimental


“For those who’d rather rebuild the wheel than carry spare tires.”