I am a big fan of the NoSQL datastore Redis (http://code.google.com/p/redis/).
Redis is an astonishingly fast datastore, but what people often overlook is that it performs amazingly well under concurrent loads.
I will write a later post on why being able to deliver low-latency requests at high concurrency levels will be the defining metric for databases powering the next generation of web servers.
For now, here are some rather ugly but accurate graphs.
Redis 2.0.0 latency tests from 1-64K concurrent connections
Standard Hardware: Standard NIC (A780GM-A motherboard integrated), Phenom X4 CPU @3.0GHz, 8GB RAM PC3200.
Two machines connected via a standard 1GigE Netgear Switch.
OS: Ubuntu 9.10 64bit.
Tests are run using a single core for the client and a single core (on the other machine) for the server.
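For reference, here is a minimal sketch of how that single-core pinning can be done on Linux with taskset. The config path, server IP, and benchmark flags below are assumptions for illustration, not the exact commands used for these runs:

```shell
# On the server machine: pin redis-server to CPU core 0
# (redis.conf path is an assumption)
taskset -c 0 ./redis-server redis.conf

# On the client machine: pin the benchmark client to core 0,
# pointing it at the server's IP (192.168.1.2 is a placeholder)
taskset -c 0 ./redis-benchmark -h 192.168.1.2 -n 100000 -c 1000
```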
This is how the server performs in terms of requests per second as concurrency goes up. This is such an impressive graph:
REQUEST PER SECOND (Y-Axis) VS CONCURRENCY (X-Axis)
COMMENTS: below 5 concurrent connections throughput is poor, by 10 it has already peaked, then a beautifully slow and smooth degradation out to 64K concurrent connections
NEXT: Latency will be analyzed as concurrency varies. Latency is measured in milliseconds, 100K requests are made per concurrency level, and the graphs show what percentage of those 100K requests completed within a given time. NOTES: For the following graphs, the axes are all the same:
THE BIG PICTURE 1-64K concurrent connections
COMMENTS: it is quick to see that even at high concurrency, if you stay below the 90th percentile, you are OK
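As a concrete example of how a percentile number can be read off a raw latency run, here is a sketch using the nearest-rank method. The file name and format (one latency in milliseconds per line) are assumptions, and the toy data is just for illustration:

```shell
# Generate a toy latency file: 100 requests with latencies 1..100 ms
seq 1 100 > latencies.txt

# 99th percentile (nearest rank): sort numerically, then take the
# value sitting at position NR*99/100 of the sorted list
sort -n latencies.txt | awk '{a[NR]=$1} END{print a[int(NR*99/100)]}'
# prints 99
```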
MOST IMPORTANT: CONCURRENCY 1 to 10K
COMMENTS: lots of L's … there are always going to be some anomalies, so this is very strong
THE BULK: 0-90% latency
COMMENTS: it's all fantastic until maybe 30K, which is absurdly high. Also, even in the very high ranges it is smooth in both the X and Z directions until 80%.
VERY GOOD: 90-99% latencies
COMMENTS: everything is kosher until 30K AND under 99%
THE END: 99% latencies
COMMENTS: Remember this is a closeup of the worst of the worst. The good news is we see the same L patterns at this resolution, until 30K. The 0-5K range is cluttered, so on to the next graph.
THE END: Concurrency 1 to 1000 – 99% latencies
COMMENTS: This is the SLA graph. At concurrency 400 and approximately the 99.5th percentile, there is a hop. This holds true until 900. The latency jump is always from below 20 ms directly to about 220 ms, and over the whole data set this hop hovers around the 99th percentile, mostly well to the right of it.
FINAL COMMENTS: I am presenting this as neutrally as possible. I am pretty impressed with these latencies, and the thing that strikes me about the graphs is their smoothness in all directions. That smoothness is, to me, as important as the numbers: it means the software is predictable.
- The data for these graphs is in this directory: http://allinram.info/redis/concurrency/test2/
- Two files were needed to create these benchmarks: redis-benchmark.c and Concurrency_test_redis.sh
- additional unix commands need to be run to allow greater than 28K concurrency, like: "ulimit -n 90000", "echo 1024 65000 > /proc/sys/net/ipv4/ip_local_port_range", "echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle", "echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse"
- in the Redis code, the following change must be made for concurrency higher than 28K: in ae.h, redefine AE_SETSIZE from (1024*10) to (1024*64)
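Collected in one place, the OS-level tuning steps above might look like this (a sketch — the sysctl writes require root, and the values simply mirror the commands listed above):

```shell
#!/bin/sh
# Raise the per-process file-descriptor limit
# (each concurrent connection consumes one fd)
ulimit -n 90000

# Widen the ephemeral port range available for outbound sockets
echo "1024 65000" > /proc/sys/net/ipv4/ip_local_port_range

# Recycle/reuse TIME_WAIT sockets so ports free up quickly between runs
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse
```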
I used gnuplot for these graphs, and this is the best way to look at them, because you can rotate the axes in three dimensions.
Here are the instructions:
1.) gnuplot> splot "FILE" u 1:2:3