OpO FAST Triple Store, Web Server, and JSON Database

OpO is several tools rolled into one. Benchmarks were run for the HTTP server and the JSON database, and the results are compared to an industry leader in each category.

The benchmark tool used is called perfer, and it is fast enough to fully stress not only OpO but the other applications as well. All tests were run on a 4.00GHz i7-6700 with 4 cores (8 hyperthreads) running Ubuntu 16.04.

HTTP Server

A simple, small 100 byte HTML file was used as the test file for fetching with both OpO and NGINX. This is a performance benchmark and is not intended to compare OpO to the many other features NGINX offers.

All tests were run on one machine. Runs used a varying number of connections to see how well each scaled. The OpO version was 0.9.0. The NGINX version was Open Source NGINX 1.10.3 on Ubuntu.

The file used was named 'index.html' and has the following contents:

<!DOCTYPE HTML> <html> <head><title>Bench</title></head> <body> Benchmark </body> </html>
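perfer's exact invocation is not shown here, but the measurement it performs can be sketched: serve the file above, issue GETs over several concurrent connections, and report throughput and average latency. The sketch below uses Python's standard library for both sides; the connection count and request count are illustrative assumptions, not the benchmark's actual configuration.

```python
import threading
import time
import urllib.request
from http.server import ThreadingHTTPServer, BaseHTTPRequestHandler

# The 100 byte test page described above.
PAGE = (b"<!DOCTYPE HTML> <html> <head><title>Bench</title></head> "
        b"<body> Benchmark </body> </html>")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, fmt, *args):  # keep benchmark output clean
        pass

def bench(url, connections=4, requests_per_connection=50):
    """Issue GETs from `connections` threads; return (GETs/sec, mean msecs)."""
    latencies = []
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_connection):
            t0 = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                resp.read()
            with lock:
                latencies.append(time.perf_counter() - t0)

    start = time.perf_counter()
    threads = [threading.Thread(target=worker) for _ in range(connections)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    return len(latencies) / elapsed, 1000.0 * sum(latencies) / len(latencies)

# Stand-in server on an ephemeral port; the real runs hit OpO or NGINX.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/index.html" % server.server_address[1]
gets_per_sec, mean_msecs = bench(url)
print("%.0f GETs/sec, %.2f msecs mean latency" % (gets_per_sec, mean_msecs))
server.shutdown()
```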

With regard to throughput, OpO is consistently faster than NGINX, starting at more than twice the throughput and eventually converging at 7000 simultaneous connections.

Latency with OpO starts out slightly lower than NGINX but increases relative to NGINX at higher numbers of simultaneous connections.

Connections   OpO Throughput   OpO Latency   NGINX Throughput   NGINX Latency
100           159K GETs/sec    0.5 msecs     61K GETs/sec       1.6 msecs
200           169K GETs/sec    1.0 msecs     64K GETs/sec       2.0 msecs
300           148K GETs/sec    1.4 msecs     62K GETs/sec       2.7 msecs
400           133K GETs/sec    2.1 msecs     62K GETs/sec       3.2 msecs
500           138K GETs/sec    2.9 msecs     62K GETs/sec       3.9 msecs
600           140K GETs/sec    3.3 msecs     62K GETs/sec       4.1 msecs
700           148K GETs/sec    3.2 msecs     61K GETs/sec       3.7 msecs
800           136K GETs/sec    3.6 msecs     61K GETs/sec       3.9 msecs
900           125K GETs/sec    4.6 msecs     62K GETs/sec       5.3 msecs
1000          129K GETs/sec    5.3 msecs     61K GETs/sec       4.7 msecs
2000          122K GETs/sec    9.8 msecs     61K GETs/sec       9.0 msecs
3000          119K GETs/sec    10.6 msecs    61K GETs/sec       7.1 msecs
4000          117K GETs/sec    14.6 msecs    60K GETs/sec       11.4 msecs
5000          114K GETs/sec    19.0 msecs    60K GETs/sec       11.9 msecs
6000          106K GETs/sec    18.2 msecs    61K GETs/sec       13.9 msecs
7000          62K GETs/sec     27.1 msecs    60K GETs/sec       17.8 msecs
8000          79K GETs/sec     30.6 msecs    61K GETs/sec       24.0 msecs
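The convergence described above is easy to check from the table. The figures below are copied from three of its rows:

```python
# OpO vs NGINX throughput (K GETs/sec) at selected connection counts,
# taken from the benchmark table above.
rows = [(100, 159, 61), (1000, 129, 61), (7000, 62, 60)]
for connections, opo, nginx in rows:
    print("%4d connections: OpO is %.1fx NGINX" % (connections, opo / nginx))
```

At 100 connections OpO delivers roughly 2.6x NGINX's throughput; by 7000 connections the two are effectively even.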


JSON Database

The MongoDB HTTP API is deprecated as of version 3.2. It also does not support inserts. For the benchmarks the following scenarios were used after doing a bulk load of simple 3 attribute JSON records.

  • bulk load without indices
  • bulk load with indices on id and content
  • fetch by _id without indices
  • fetch by _id with indices
  • query by id without indices
  • query by id with indices
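The records are described only as "simple 3 attribute JSON" with indices on id and content; the exact shape is not given. A plausible generator for such a bulk load file might look like the following, where the third attribute name ('num') is invented for illustration:

```python
import json

def make_records(count):
    # Hypothetical 3-attribute records: 'id' and 'content' match the
    # indexed fields named in the scenarios; 'num' is an invented filler.
    for i in range(count):
        yield {"id": i, "content": "content-%d" % i, "num": i % 100}

# One JSON document per line, the layout bulk importers usually accept.
with open("records.json", "w") as f:
    for record in make_records(1000):
        f.write(json.dumps(record) + "\n")
```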

Attempting to simultaneously connect with more than about 130 connections causes MongoDB throughput to drop by about 100x and latencies to increase by about 100x. For this reason, runs were made with only 20 simultaneous connections to avoid those limitations. OpO, on the other hand, responded well even at over 4000 simultaneous connections.

The bulk load is of ten million records using either the --import option for opod or mongoimport. Other runs were made with both one million records and ten million records as indicated.

Graphs are not shown as OpO is between 10 and 1000 times faster in almost all categories.

              OpO                        MongoDB no indices         MongoDB with indices
Bulk Load     12.7 secs                  48 secs                    72 secs
Memory Use    3.1 GB                     2.2 MB                     3.4 GB
1M Fetch      369K ops/sec, 0.05 msecs   3905 ops/sec, 0.1 msecs    3885 ops/sec, 1.3 msecs
1M Query      213K ops/sec, 0.09 msecs   62 ops/sec, 103 msecs      21K ops/sec, 0.94 msecs
10M Fetch     341K ops/sec, 0.06 msecs   3795 ops/sec, 1.5 msecs    3715 ops/sec, 2.8 msecs
10M Query     166K ops/sec, 0.11 msecs   0.2 ops/sec, 52060 msecs   19K ops/sec, 1.0 msecs
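The ten million record bulk load times translate directly into per-record throughput, which makes the gap easier to see (times taken from the Bulk Load row above):

```python
RECORDS = 10_000_000  # the ten million record bulk load
for name, secs in [("OpO", 12.7),
                   ("MongoDB no indices", 48),
                   ("MongoDB with indices", 72)]:
    print("%-22s %4.0fK records/sec" % (name, RECORDS / secs / 1000.0))
```

That works out to roughly 787K records/sec for OpO versus about 208K and 139K records/sec for MongoDB without and with indices.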