OpO is two tools rolled into one: an HTTP server and a JSON database. Benchmarks were run for both, and the results are compared to an industry leader in each category.
The benchmark tool used is called perfer; it is fast enough to fully stress not only OpO but the other applications as well. All tests were run on a 4.00GHz i7-6700 with 4 cores (8 hyperthreads) running Ubuntu 16.04.
A simple, small 100-byte HTML file was used as the test file for fetching with both OpO and NGINX. This is a raw-performance benchmark; it is not intended to compare OpO against the many other features NGINX offers.
All tests were run on one machine, with each run using a different number of connections to see how well each server scaled. The OpO version was 0.9.0. The NGINX version was open source NGINX 1.10.3 on Ubuntu.
The file used was named 'index.html'.
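The exact contents of the file are not reproduced here; a minimal HTML file of roughly 100 bytes, shown only as an illustration of the kind of payload served, might look like:

```html
<!DOCTYPE html>
<html>
  <head><title>Test</title></head>
  <body>Just some text.</body>
</html>
```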
With regard to throughput, OpO is consistently faster than NGINX, starting at more than twice the throughput of NGINX and converging with it at 7000 simultaneous connections.
Latency with OpO starts out slightly lower than NGINX's but increases relative to NGINX at higher numbers of simultaneous connections.
| Connections | OpO Throughput | OpO Latency | NGINX Throughput | NGINX Latency |
|---|---|---|---|---|
| 100 | 159K GETs/sec | 0.5 msecs | 61K GETs/sec | 1.6 msecs |
| 200 | 169K GETs/sec | 1.0 msecs | 64K GETs/sec | 2.0 msecs |
| 300 | 148K GETs/sec | 1.4 msecs | 62K GETs/sec | 2.7 msecs |
| 400 | 133K GETs/sec | 2.1 msecs | 62K GETs/sec | 3.2 msecs |
| 500 | 138K GETs/sec | 2.9 msecs | 62K GETs/sec | 3.9 msecs |
| 600 | 140K GETs/sec | 3.3 msecs | 62K GETs/sec | 4.1 msecs |
| 700 | 148K GETs/sec | 3.2 msecs | 61K GETs/sec | 3.7 msecs |
| 800 | 136K GETs/sec | 3.6 msecs | 61K GETs/sec | 3.9 msecs |
| 900 | 125K GETs/sec | 4.6 msecs | 62K GETs/sec | 5.3 msecs |
| 1000 | 129K GETs/sec | 5.3 msecs | 61K GETs/sec | 4.7 msecs |
| 2000 | 122K GETs/sec | 9.8 msecs | 61K GETs/sec | 9.0 msecs |
| 3000 | 119K GETs/sec | 10.6 msecs | 61K GETs/sec | 7.1 msecs |
| 4000 | 117K GETs/sec | 14.6 msecs | 60K GETs/sec | 11.4 msecs |
| 5000 | 114K GETs/sec | 19.0 msecs | 60K GETs/sec | 11.9 msecs |
| 6000 | 106K GETs/sec | 18.2 msecs | 61K GETs/sec | 13.9 msecs |
| 7000 | 62K GETs/sec | 27.1 msecs | 60K GETs/sec | 17.8 msecs |
| 8000 | 79K GETs/sec | 30.6 msecs | 61K GETs/sec | 24.0 msecs |
The MongoDB HTTP API is deprecated as of version 3.2, and it does not support inserts. For the benchmarks, the following scenarios were used after doing a bulk load of simple three-attribute JSON records.
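The attribute names and values used in the benchmark are not listed here; purely as an illustration, a simple three-attribute record of the kind described might look like:

```json
{ "num": 123456, "name": "record-123456", "active": true }
```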
Attempting to connect with more than about 130 simultaneous connections causes MongoDB throughput to drop by about 100x and latencies to increase by about 100x. For this reason, runs were made with only 20 simultaneous connections to stay clear of that limitation. OpO, on the other hand, responded well even with over 4000 simultaneous connections.
The bulk load is ten million records, loaded with either the --import option of opod or with mongoimport. Other runs were made with both one million records and ten million records, as indicated.
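As a rough sketch of the two load commands (the file, database, and collection names are placeholders, and the exact opod argument form is assumed from the --import option mentioned above):

```sh
# Bulk load into OpO (argument form assumed; check opod --help for specifics).
opod --import records.json

# Bulk load the same records into MongoDB.
mongoimport --db bench --collection records --file records.json
```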
Graphs are not shown, as OpO is between 10 and 1000 times faster in almost all categories.
| | OpO | MongoDB (no indices) | MongoDB (with indices) |
|---|---|---|---|
| Bulk Load | 12.7 secs | 48 secs | 72 secs |
| Memory Use | 3.1 GB | 2.2 MB | 3.4 GB |

| Operation | OpO Throughput (Ops/sec) | OpO Latency (msecs) | MongoDB no indices Throughput (Ops/sec) | MongoDB no indices Latency (msecs) | MongoDB with indices Throughput (Ops/sec) | MongoDB with indices Latency (msecs) |
|---|---|---|---|---|---|---|
| Insert | 248K | 0.08 | ---- | ---- | ---- | ---- |
| 1M Fetch | 369K | 0.05 | 390 | 50.1 | 388 | 51.3 |
| 1M Query | 213K | 0.09 | 6 | 2103 | 21K | 0.94 |
| 10M Fetch | 341K | 0.06 | 379 | 51.5 | 371 | 52.8 |
| 10M Query | 166K | 0.11 | 0.25 | 2060 | 19K | 1.0 |