When I started building Flux-Order, a high-concurrency ticketing system, the core requirement sounded almost embarrassingly simple: process flash-sale traffic without overselling a single ticket.
You have probably lived the other side of this problem. You click "Buy," the page loads for ten seconds, and somehow the show is still sold out by the time your cart loads. I wanted to build the backend that does not do that. What I did not expect was how many ways I would nearly break it before getting there.
## The race condition
A standard REST API hitting a relational database was never going to handle flash-sale volume. The moment I started stress-testing my initial endpoints, I ran straight into one of the hardest problems in concurrent systems: the race condition.
Picture two users, Alice and Bob, both clicking "Buy" on the very last ticket at the exact same millisecond.
| Thread | Action |
| --- | --- |
| Thread A | Checks the database. One ticket left. |
| Thread B | Checks the database. One ticket left. |
| Thread A | Decrements to zero. Alice's order confirmed. |
| Thread B | Decrements to -1. Bob's order confirmed. |
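The interleaving above is easy to reproduce. Here is a minimal sketch (illustrative names, not Flux-Order's actual code) that uses a `threading.Barrier` to force both threads to finish their stale read before either one writes:

```python
import threading

tickets = 1                      # one ticket left
barrier = threading.Barrier(2)   # both threads must read before either writes
confirmed = []

def buy(user):
    global tickets
    available = tickets          # 1. check: both threads see 1
    barrier.wait()               # hold here until both reads are done
    if available > 0:            # 2. decide based on the now-stale read
        tickets -= 1             # 3. decrement
        confirmed.append(user)   # 4. confirm the order

threads = [threading.Thread(target=buy, args=(u,)) for u in ("Alice", "Bob")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both orders were confirmed against a single ticket: the oversell.
print(tickets, confirmed)
```

The barrier just makes the unlucky timing deterministic; under real flash-sale load you don't need help to hit it.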
You just sold a ticket that does not exist. My first attempt to fix this was optimistic locking, which checks a row version before writing. It looked clean in isolation. But the moment I ran Locust to simulate a realistic burst of concurrent buyers, it fell apart. Threads were colliding, failing, and looping in retries. Latency spiked. The system got slower under the exact conditions it needed to hold up. Optimistic locking is not a solution for a real flash sale. It is just a slower version of the same problem.
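A minimal sketch of the optimistic-locking approach, using sqlite3 so it runs anywhere (table and helper names are illustrative): the `UPDATE` only lands if the row's version is unchanged since the read, and a mismatch sends the caller back around the retry loop.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, remaining INTEGER, version INTEGER)")
conn.execute("INSERT INTO tickets VALUES (42, 1, 0)")
conn.commit()

def buy_optimistic(conn, ticket_id, max_retries=3):
    """Optimistic locking: re-read and retry when the version check fails."""
    for _ in range(max_retries):
        remaining, version = conn.execute(
            "SELECT remaining, version FROM tickets WHERE id = ?", (ticket_id,)
        ).fetchone()
        if remaining <= 0:
            return False  # sold out
        # The write succeeds only if nobody bumped the version since our read.
        cur = conn.execute(
            "UPDATE tickets SET remaining = remaining - 1, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (ticket_id, version),
        )
        conn.commit()
        if cur.rowcount == 1:
            return True   # our version matched; purchase confirmed
        # Version mismatch: another buyer got there first. Loop and re-read —
        # under flash-sale contention, this retry loop is what melts down.
    return False

first = buy_optimistic(conn, 42)   # succeeds: last ticket sold
second = buy_optimistic(conn, 42)  # sold out
```

Correct in isolation, but every losing thread pays for a full read-retry cycle, which is exactly the latency spiral the Locust tests exposed.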
## The Redis mutex
What I actually needed was a way to serialize access to a specific ticket before the primary database was ever touched. That is where a Redis Distributed Lock came in.
I implemented a Redis mutex using the `SET` command with the `NX` option, which means "set only if the key does not exist." Here is what the flow looks like in practice:
- When Alice tries to buy Ticket #42, the backend writes a unique lock key in Redis (`ticket:42`) with a TTL attached.
- Because Redis executes commands on a single thread and runs entirely in memory, only one request can successfully set that key. Alice gets it.
- Alice's transaction proceeds to the database to finalize the purchase.
- A millisecond later, Bob tries the same ticket. Redis rejects the `SET NX` call because the lock exists. His request fails immediately. No database hit, no retry loop, just a clean instant response.
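The steps above can be sketched as follows. The `acquire_lock` helper is illustrative, not Flux-Order's actual code; the `client.set(..., nx=True, ex=...)` call matches redis-py's real signature, but a tiny in-memory stub stands in for Redis here so the sketch is self-contained:

```python
import time
import uuid

class FakeRedis:
    """In-memory stand-in for a Redis client, mimicking SET key value NX EX."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, nx=False, ex=None):
        entry = self._store.get(key)
        if nx and entry is not None and entry[1] > time.monotonic():
            return None  # key exists and is unexpired: SET NX fails
        expires_at = time.monotonic() + ex if ex else float("inf")
        self._store[key] = (value, expires_at)
        return True

def acquire_lock(client, ticket_id, ttl_seconds=5):
    """Try to take the per-ticket mutex. Returns a token on success, None on failure."""
    token = str(uuid.uuid4())  # unique value so only the owner can safely release it
    ok = client.set(f"ticket:{ticket_id}", token, nx=True, ex=ttl_seconds)
    return token if ok else None

client = FakeRedis()
alice = acquire_lock(client, 42)  # Alice wins the lock
bob = acquire_lock(client, 42)    # Bob is rejected instantly, before any DB hit
```

The TTL is the safety net: if the lock holder crashes mid-purchase, the key expires and the ticket becomes contestable again instead of being locked forever.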
## Failing fast is a feature
When I ran the Locust load tests again, hammering the API with 50+ concurrent requests per second on the same ticket, the result was clean: one purchase succeeded, every other request was rejected instantly, and the inventory never went negative.
Building Flux-Order taught me that localhost is deceptive. Code that runs perfectly in a linear, single-threaded environment will behave completely differently when real traffic hits it in the cloud.
More importantly, it taught me that failing fast is a genuine feature. Keeping a user waiting ten seconds while threads stubbornly retry is far worse UX than rejecting them in 20 milliseconds. The Redis lock makes the failure instant, deterministic, and intentional.