Tansu proposes rebuilding the Kafka model: removing state from the brokers and delegating reliability to external storage. This changes the system’s behavior under load and simplifies the operational model.
The problem manifests at the operational level. A classic Kafka broker is a stateful component: replication, leader election, persistent local state, long uptimes. Such nodes are hard to scale down, and they demand careful configuration and resources (gigabytes of heap, for example). The system's resilience comes from its internal complexity, and that works as long as the team is willing to pay the operational overhead and keep clusters "always on".
Tansu changes the premise. Instead of in-broker replication, it assumes that the storage already provides durability. Brokers become stateless: no leaders, no local state, with a memory footprint in the tens of megabytes. They can be scaled to zero and spun up in milliseconds. This is a trade-off: durability is no longer in the broker layer, but in the chosen backend storage. System reliability now depends on the properties of S3, Postgres, or another backend.
The implementation is built around pluggable storage. The backend is specified via a URL:
- S3-compatible storage — for a fully diskless mode
- SQLite — for local development and testing
- Postgres — for integration with streaming and transactions
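As a sketch of what the URL-based configuration looks like in practice (the exact flag name and syntax are assumptions and may differ between versions):

```shell
# Hypothetical invocations; the flag name and URL forms are illustrative only.
tansu broker --storage-engine s3://my-bucket/              # diskless, S3-compatible
tansu broker --storage-engine sqlite://tansu.db            # local development and testing
tansu broker --storage-engine postgres://localhost/tansu   # streaming plus transactions
```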
In the case of Postgres, the architecture reduces to database primitives: produce is an INSERT or COPY, fetch is a SELECT. The initial bottleneck was sequential INSERTs, which cost a network round-trip per record. This was replaced with COPY FROM: a single setup, a stream of CopyData messages, and a final CopyDone. This mode removes per-record acknowledgments and increases throughput for batched writes.
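The round-trip-elimination principle can be sketched without a running Postgres. The snippet below is not Tansu's code: it uses sqlite3, with a hypothetical `topic_records` table, to contrast one-statement-per-record writes with a single batched call, which is the same trade COPY FROM makes at the Postgres wire-protocol level.

```python
import sqlite3

# Hypothetical schema: one row per record, mirroring the produce-is-an-INSERT model.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE topic_records (record_offset INTEGER PRIMARY KEY, value BLOB)"
)

records = [(i, f"payload-{i}".encode()) for i in range(1000)]

# Naive produce: one statement (over a network, one round-trip) per record.
for off, value in records[:500]:
    conn.execute("INSERT INTO topic_records VALUES (?, ?)", (off, value))

# Batched produce: a single call carrying the whole batch, analogous to
# COPY's setup / CopyData stream / CopyDone with no per-record acknowledgment.
conn.executemany("INSERT INTO topic_records VALUES (?, ?)", records[500:])

# Fetch is a SELECT either way.
count = conn.execute("SELECT COUNT(*) FROM topic_records").fetchone()[0]
print(count)  # → 1000
```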
Additionally, the need for a transactional outbox disappears. A message and business data can be written atomically in a single transaction via a stored procedure. This eliminates the typical gap between the DB and the broker.
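A minimal sketch of the idea, using sqlite3 for portability (Tansu does this against Postgres, e.g. via a stored procedure); the `orders` and `topic_orders` tables and the event payload are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER);
    CREATE TABLE topic_orders (record_offset INTEGER PRIMARY KEY, payload TEXT);
""")

def place_order(order_id: int, amount: int) -> None:
    # One transaction covers both the business row and the "topic" record,
    # so there is no outbox table and no gap between the DB and the broker:
    # either both writes land or neither does.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))
        conn.execute(
            "INSERT INTO topic_orders (payload) VALUES (?)",
            (f'{{"event":"order_placed","id":{order_id}}}',),
        )

place_order(1, 250)
orders = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
events = conn.execute("SELECT COUNT(*) FROM topic_orders").fetchone()[0]
print(orders, events)  # → 1 1
```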
Schema validation moves to the broker. In Kafka, it usually lives on the client side and is optional. In Tansu, the broker validates every record (Avro, JSON, Protobuf) before writing it. This slows processing (each record must be unpacked and checked) but guarantees consistency regardless of the client. The same mechanism allows writing data directly into analytical formats (Iceberg, Delta, Parquet): through a “sink topic,” data can bypass intermediate storage and form tables directly.
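As a simplified illustration of broker-side validation (not Tansu's actual validator), a broker can check every JSON record against the topic's declared schema before accepting the write. The schema contents and the accept/reject interface here are assumptions:

```python
import json

# Hypothetical per-topic schema: required fields and their expected types.
SCHEMA = {"user_id": int, "action": str}

def validate_and_accept(raw: bytes) -> bool:
    """Return True only if the record parses and matches the schema."""
    try:
        record = json.loads(raw)
    except ValueError:
        return False
    if not isinstance(record, dict):
        return False
    return all(
        field in record and isinstance(record[field], expected)
        for field, expected in SCHEMA.items()
    )

ok = validate_and_accept(b'{"user_id": 7, "action": "login"}')
bad = validate_and_accept(b'{"user_id": "7"}')  # wrong type, missing field
print(ok, bad)  # → True False
```

Validation costs a parse per record, which is the slowdown the text mentions, but malformed data never reaches storage regardless of which client produced it.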
As a proxy, Tansu can also sit in front of an existing Kafka cluster. The project claims roughly 60k records per second with sub-millisecond P99 latency on modest hardware, while consuming about 13 MB of RAM. This suggests a stateless layer can stay lightweight once storage responsibilities are removed.
The bottom line is a shift in responsibility. The broker becomes a stateless compute layer, and storage becomes the source of truth. Operationally, this simplifies scaling and deployment (down to minimal containers with no OS userland). But new dependencies emerge: the backend's durability guarantees, latency, and constraints directly shape the system's behavior. Some Kafka features are currently missing (e.g., throttling, ACLs, compaction for S3), which limits use cases.
The approach looks like a pragmatic simplification for environments where external storage is already a reliable and managed component.