B2B Engineering Insights & Architectural Teardowns

Distributed sequence generation without bottlenecks

Distributed sequence generation replaces database sequences at scale. It removes central bottlenecks while keeping compatibility with existing systems.

The problem does not manifest until the organization attempts to migrate from a relational database to cloud-native storage. In this case, over a hundred services relied on database sequences for generating primary keys. These counters were deeply embedded in the logic, from sorting to backward compatibility of the API. A NoSQL store such as DynamoDB has no native sequences, so a naive replacement breaks contracts, data ordering, and performance. Scale adds pressure: with thousands of counters and high throughput, every network round-trip carries a real latency cost.

Standard alternatives were considered, but each broke under the constraints. UUIDs eliminate collisions but destroy order and degrade index performance. The Snowflake approach retains BIGINT and partial order but requires managing worker IDs and clock synchronization, complicating operations. A central coordinator becomes a bottleneck and a single point of failure. Timestamp approaches do not guarantee uniqueness under high contention. The key observation: most systems require neither strict global order nor gap-free numbering. This allowed the requirements to be relaxed, abandoning heavy coordination in favor of locality and caching.

The solution is a specialized sequence service with multi-level caching. The architecture is built around DynamoDB as the source of truth, a server-side cache, and “thick” clients within applications. Instead of generating a single value per request, the system allocates blocks (batches) of 500–1000 values through atomic increments. This reduces the load on the storage and removes it from the critical path. Most requests are served locally, without network calls. The trade-off is clear: in the event of failures, some values are lost, creating gaps, and global order is not guaranteed.
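The block-allocation idea can be sketched as follows. This is a minimal illustration, not the actual implementation: `SequenceClient`, `Block`, and the `allocate_block` callback are hypothetical names, and an in-memory dict stands in for the real store.

```python
from dataclasses import dataclass

@dataclass
class Block:
    next_value: int   # next id to hand out
    end: int          # exclusive upper bound of the allocated range

class SequenceClient:
    """Hands out ids from a locally cached block; contacts the store
    only when the current block is exhausted."""

    def __init__(self, allocate_block, block_size=500):
        self._allocate = allocate_block  # atomic increment against the store
        self._size = block_size
        self._block = None

    def next_id(self):
        if self._block is None or self._block.next_value >= self._block.end:
            start = self._allocate(self._size)  # one network call per block
            self._block = Block(next_value=start, end=start + self._size)
        value = self._block.next_value
        self._block.next_value += 1
        return value

# Usage with an in-memory stand-in for the store:
counter = {"value": 0}

def allocate_block(size):
    start = counter["value"]
    counter["value"] += size
    return start

client = SequenceClient(allocate_block, block_size=500)
ids = [client.next_id() for _ in range(1200)]
```

With a block size of 500, serving 1200 ids costs only three store round-trips; if the process dies mid-block, the unused tail of the range is simply lost, which is the gap trade-off described above.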

The implementation relies on atomic operations in DynamoDB with conditional updates. Each counter is stored as a separate object. When allocating a block, compare-and-set logic is used: if the value has changed, a retry occurs. This provides uniqueness without distributed locks. At the service level, each instance maintains its own in-memory cache with non-overlapping ranges. An external cache (e.g., Redis) is deliberately excluded to avoid adding unnecessary network hops and new points of failure.
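The compare-and-set loop can be illustrated with an in-memory stand-in for a DynamoDB item. In real DynamoDB this would be an `UpdateItem` with a condition expression; here `FakeCounterStore` and `allocate_block` are illustrative names, and a lock simulates the store's conditional write.

```python
import threading

class FakeCounterStore:
    """In-memory stand-in for a DynamoDB item with a conditional update:
    the write succeeds only if the stored value still equals `expected`."""
    def __init__(self):
        self._values = {}
        self._lock = threading.Lock()

    def get(self, key):
        return self._values.get(key, 0)

    def conditional_set(self, key, expected, new):
        with self._lock:
            if self._values.get(key, 0) != expected:
                return False  # conditional check failed; caller must retry
            self._values[key] = new
            return True

def allocate_block(store, key, size, max_retries=10):
    """Reserve the range [current, current + size) via compare-and-set,
    retrying when another instance won the race."""
    for _ in range(max_retries):
        current = store.get(key)
        if store.conditional_set(key, expected=current, new=current + size):
            return current  # first value of the reserved range
    raise RuntimeError(f"too much contention on counter {key!r}")
```

Because each instance reserves a whole block per write, contention on the counter item stays low even with many clients, and no distributed lock is needed.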

The key challenge is not value generation but the refill logic. If blocks are allocated too early, more unused values are wasted on restarts; if too late, cache misses and latency spikes occur. A sliding-window algorithm estimates the current consumption rate and derives the refill threshold dynamically. The formula is simple: the current rate multiplied by a time buffer. Refill is initiated asynchronously, before the cache is exhausted, so user requests are never blocked. This mechanism resides within the client, further reducing network load.
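The "rate times buffer" threshold can be sketched like this. `RefillGauge` and its parameters are hypothetical names chosen for illustration; the article does not disclose the actual window or buffer sizes.

```python
import time
from collections import deque

class RefillGauge:
    """Sliding-window rate estimator: signal a refill when the remaining
    ids would be consumed within `buffer_seconds` at the current rate."""

    def __init__(self, window_seconds=10.0, buffer_seconds=2.0):
        self._window = window_seconds
        self._buffer = buffer_seconds
        self._events = deque()  # timestamps of recent id requests

    def _evict(self, now):
        while self._events and self._events[0] < now - self._window:
            self._events.popleft()

    def record(self, now=None):
        now = time.monotonic() if now is None else now
        self._events.append(now)
        self._evict(now)

    def rate(self, now=None):
        now = time.monotonic() if now is None else now
        self._evict(now)
        return len(self._events) / self._window  # ids per second

    def should_refill(self, remaining, now=None):
        # Threshold = current consumption rate x time buffer.
        return remaining < self.rate(now) * self._buffer
```

In the client, `record` would be called on every `next_id`, and a background task would start fetching the next block as soon as `should_refill` turns true, keeping the refill off the request path.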

The result is a system where DynamoDB serves less than 0.1% of sequence requests. The main flow is handled by local caches, making identifier generation latency close to that of an ordinary in-memory operation. Services migrate without schema changes and with minimal code adjustments, and compatibility with existing contracts is maintained. Concrete performance metrics are not disclosed, but architecturally the reduction in latency and the elimination of a central bottleneck are evident.

This solution is a pragmatic compromise. It sacrifices strict guarantees for scalability and operational simplicity. In high-load systems, this often proves to be the right choice: not the ideal model, but one that does not break under load.
