Posts for: #Backend

Consolidating a Noisy Kafka Topic with a Debounced Aggregator

When an order is short on stock at a warehouse, the system creates an internal transfer request to move inventory from another location. The existing flow publishes one Kafka message per order, which triggers one transfer request per order.

During peak hours, a single warehouse can generate hundreds of these messages in minutes. Each one spawns a small transfer note. The warehouse team ends up processing dozens of tiny requests for the same destination when a single consolidated batch would do.
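The post's "Read more" covers the full design, but the core idea can be sketched as a small in-memory debouncer: buffer incoming orders per destination warehouse and flush one consolidated batch once the topic goes quiet for a short window (or the buffer hits a size cap). The class name, parameters, and the dict/list stand-ins for Kafka and the transfer-request service are illustrative assumptions, not the article's actual implementation.

```python
import time
from collections import defaultdict


class DebouncedAggregator:
    """Buffers transfer requests per destination warehouse and flushes one
    consolidated batch after `quiet_seconds` without a new message, or
    immediately once a buffer reaches `max_batch`. (Illustrative sketch;
    names and thresholds are assumptions.)"""

    def __init__(self, flush, quiet_seconds=5.0, max_batch=100):
        self.flush = flush                  # callback: (destination, [order_ids])
        self.quiet_seconds = quiet_seconds
        self.max_batch = max_batch
        self.buffers = defaultdict(list)    # destination -> pending order ids
        self.last_seen = {}                 # destination -> last arrival time

    def on_message(self, destination, order_id, now=None):
        """Called for each Kafka message; restarts that destination's timer."""
        now = time.monotonic() if now is None else now
        self.buffers[destination].append(order_id)
        self.last_seen[destination] = now
        if len(self.buffers[destination]) >= self.max_batch:
            self._flush(destination)

    def tick(self, now=None):
        """Called periodically; flushes destinations whose quiet window elapsed."""
        now = time.monotonic() if now is None else now
        for dest in list(self.buffers):
            if self.buffers[dest] and now - self.last_seen[dest] >= self.quiet_seconds:
                self._flush(dest)

    def _flush(self, destination):
        batch = self.buffers.pop(destination)
        self.last_seen.pop(destination, None)
        self.flush(destination, batch)
```

With this shape, a burst of hundreds of per-order messages for warehouse "WH-2" becomes a single flush call carrying all the order ids, which maps naturally onto one consolidated transfer request.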

Read more →

Dual Write

While working with backend systems, there is a common pattern for handling incoming data. A server waits to receive data from a message broker or an API. When the data arrives, the server processes it according to the business contract, inserts or updates it in the database, and then forwards it to the next server via a message broker or an API. This processing pattern is called a dual write, sometimes also known as a multi-write or sync-write.
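The pattern described above can be reduced to a few lines: two writes to two independent systems in one handler, with no transaction spanning both. The function and the dict/list stand-ins for the database and the broker are illustrative assumptions, chosen only to make the shape of the pattern visible.

```python
def handle_incoming(event, db, broker):
    """A dual write in its simplest form: persist the processed record,
    then forward it downstream. The two writes are not atomic -- if the
    process crashes between them, the database and the broker diverge.
    (Sketch only: `db` and `broker` are stand-ins for real systems.)"""
    record = {"order_id": event["order_id"], "status": "processed"}
    db[record["order_id"]] = record               # write 1: database upsert
    broker.append(("orders.processed", record))   # write 2: publish downstream
    return record
```

The gap between the two writes is exactly what makes the pattern risky in practice, which is typically the motivation for alternatives such as the transactional outbox.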

Read more →