This is part 7 of my series on calculating percentiles on streaming data.
In 2005, Graham Cormode, Flip Korn, S. Muthukrishnan, and Divesh Srivastava published a paper called Effective Computation of Biased Quantiles over Data Streams [CKMS05]. This paper took the Greenwald-Khanna algorithm [GK01] and made the following notable changes:
Generalized the algorithm to work with arbitrary targeted quantiles, where a targeted quantile is a combination of a quantile $\phi$ and a maximum allowable error $\epsilon$.
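A set of targeted quantiles induces a rank-dependent error allowance: the allowable error at rank $r$ is the minimum, over all targets, of an error that is tightest at the targeted rank $\phi_j n$ and loosens away from it. The sketch below is my paraphrase of the paper's targeted-quantiles invariant; the function and variable names are mine, not from any particular library.

```javascript
// Sketch of the [CKMS05] targeted-quantiles invariant f(r, n).
// Each target is a pair (phi, epsilon); the overall allowance at
// rank r is the minimum of the per-target allowances.
// (My paraphrase of the paper; names are mine.)
function targetedInvariant(targets, r, n) {
  let f = Infinity;
  for (const { phi, epsilon } of targets) {
    let fj;
    if (r >= phi * n) {
      // At or above the targeted rank: error grows with r.
      fj = (2 * epsilon * r) / phi;
    } else {
      // Below the targeted rank: error grows as r falls away from phi * n.
      fj = (2 * epsilon * (n - r)) / (1 - phi);
    }
    f = Math.min(f, fj);
  }
  return f;
}

// Example: loose error at the median, tight error at the 99th percentile.
const targets = [
  { phi: 0.5, epsilon: 0.05 },
  { phi: 0.99, epsilon: 0.001 },
];
```

Note how a tight target (small $\epsilon_j$) pulls the minimum down near its own rank while barely constraining distant ranks, which is what lets the summary spend memory only where accuracy is requested.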

This is part 6 of my series on calculating percentiles on streaming data.
For the past 10 days or so, I’ve been working on the build process of my C++ and JavaScript streaming analytics libraries. Using the magic of Emscripten, I have been able to combine both libraries into a single C++ codebase, from which I can compile both the C++ and JavaScript versions of the library. Furthermore, I was able to do this without breaking backwards compatibility of the JavaScript library.

This is part 5 of my series on calculating percentiles on streaming data.
I have created a reusable C++ library which contains my implementation of the streaming percentile algorithms found within this blog post and published it to GitHub. Here’s what using it looks like:
#include <stmpct/gk.hpp>

using namespace stmpct;

double epsilon = 0.1;
gk g(epsilon);
for (int i = 0; i < 1000; ++i)
    g.insert(rand());
double p50 = g.quantile(0.5); // Approx.

This is part 4 of my series on calculating percentiles on streaming data.
I have created a reusable JavaScript library which contains my implementation of the streaming percentile algorithms found within this blog post and published it to GitHub and NPM. Here’s what using it looks like:
var sp = require("streaming-percentiles");

// epsilon is allowable error. As epsilon becomes smaller, the
// accuracy of the approximations improves, but the class consumes
// more memory.

This is part 3 of my series on calculating percentiles on streaming data.
In an effort to better understand the Greenwald-Khanna [GK01] algorithm, I created a series of visualizations of the cumulative distribution functions of a randomly-generated, normally-distributed data set with $\mu$ = 0 and $\sigma$ = 1, as the number of random numbers $n$ increases from 1 to 1,000.
The way to read these visualizations is to find the percentile you are looking for on the y-axis, then trace horizontally to find the vertical line on the chart which intersects this location, then read the value from the x-axis.
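That chart-reading procedure is just inverting the empirical CDF: sort the samples and index by rank. A minimal sketch, using the nearest-rank convention (the function name and convention choice are mine, purely for illustration):

```javascript
// Invert an empirical CDF: return the value whose empirical
// cumulative probability first reaches phi (nearest-rank method).
// Illustrative only; names and conventions are mine.
function empiricalQuantile(samples, phi) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.min(
    sorted.length - 1,
    Math.max(0, Math.ceil(phi * sorted.length) - 1)
  );
  return sorted[rank];
}
```

This exact-but-memory-hungry computation is what the streaming algorithms in this series approximate without retaining every sample.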

This is part 2 of my series on calculating percentiles on streaming data.
The most famous algorithm for calculating percentiles on streaming data appears to be Greenwald-Khanna [GK01]. I spent a few days implementing the Greenwald-Khanna algorithm from the paper and I discovered a few things I wanted to share.
Insert Operation

The insert operation is defined in [GK01] as follows:

INSERT($v$)
Find the smallest $i$, such that $v_{i-1} \leq v < v_i$, and insert the tuple $t_x = (v_x, g_x, \Delta_x) = (v, 1, \lfloor 2 \epsilon n \rfloor)$, between $t_{i-1}$ and $t_i$.
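In code, with the summary held as a sorted array of tuples, the insert step can be sketched as follows. The names are mine; this omits the COMPRESS operation, and it applies the paper's boundary rule (as I understand it) that a value inserted as a new minimum or maximum gets $\Delta = 0$, since its rank is known exactly:

```javascript
// Sketch of GK01 INSERT over a sorted array of tuples {v, g, delta},
// where n is the number of observations seen so far. Names are mine;
// COMPRESS is omitted.
function gkInsert(tuples, v, epsilon, n) {
  // Find the smallest i such that v < tuples[i].v.
  let i = 0;
  while (i < tuples.length && tuples[i].v <= v) i++;

  // A new minimum or maximum has an exactly known rank, so delta = 0;
  // otherwise delta = floor(2 * epsilon * n) per the definition above.
  const atEdge = i === 0 || i === tuples.length;
  const delta = atEdge ? 0 : Math.floor(2 * epsilon * n);

  tuples.splice(i, 0, { v: v, g: 1, delta: delta });
}
```

Here `g = 1` records that the new tuple covers exactly one observation; `delta` bounds the uncertainty in its rank relative to its neighbors.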

This is part 1 of my series on calculating percentiles on streaming data.
Suppose that you are dealing with a system which processes one million requests per second, and you’d like to calculate the median response time over the last 24 hours.
The naive approach would be to store every response time, sort them all, and then return the value in the middle. Unfortunately, this approach would require manipulating 1,000,000 * 60 * 60 * 24 = 86,400,000,000 values.
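To put a number on why that's infeasible, here is the back-of-the-envelope arithmetic (the 8-bytes-per-sample figure is my assumption, corresponding to one double per response time):

```javascript
// Cost of the naive approach: retain every sample for 24 hours.
const valuesPerDay = 1000000 * 60 * 60 * 24; // 86,400,000,000 samples
const bytes = valuesPerDay * 8;              // assuming one 8-byte double each
const gigabytes = bytes / 1e9;               // ~691 GB of raw samples
```

Roughly 691 GB of raw data per day, before even paying for the sort — which is the motivation for the streaming, fixed-memory algorithms this series covers.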