This is my visualizing latency post series:

1. Introduction
2. What is Binning?
3. Rendering Event Data
4. Official D3 Latency Heatmap Page
This post is part 4 of my series about visualizing latency, which is very useful for debugging certain classes of performance problems. Allow me to wrap up my visualizing latency post series by noting that my official D3 latency heatmap repository is at https://github.com/sengelha/d3-latency-heatmap/. Watch this repository for future updates to the D3 latency heatmap chart.
This post is part 3 of my series about visualizing latency, which is very useful for debugging certain classes of performance problems. Now that I have introduced the D3 latency heatmap chart component and explained what binning is, I can discuss the primary use case of the chart: rendering event data. What is event data? For a fuller treatment, please read Analytics For Hackers: How To Think About Event Data, but allow me to summarize: event data describes actions performed by entities.
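To make "actions performed by entities" concrete, here is a minimal sketch of what a single event record might look like. The field names (`entity`, `action`, `timestamp`, `attributes`) are illustrative assumptions, not a schema from the post:

```python
from datetime import datetime, timezone

# One event: an action, performed by an entity, at a point in time,
# with arbitrary attributes attached. A latency heatmap is typically
# driven by the timestamp plus a duration-like attribute.
event = {
    "entity": "user-1234",       # who performed the action
    "action": "checkout",        # what they did
    "timestamp": datetime(2017, 3, 1, 12, 0, 5, tzinfo=timezone.utc),
    "attributes": {
        "duration_ms": 245,      # how long the action took
        "cart_items": 3,
    },
}
```

A stream of such records, reduced to (timestamp, duration) pairs, is exactly the input a latency heatmap consumes.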
This post is part 2 of my series about visualizing latency, which is very useful for debugging certain classes of performance problems. As mentioned on Brendan Gregg’s Latency Heat Maps page, a latency heat map is a visualization where each column of data is a histogram of the observations for that time interval (see Brendan Gregg’s visualization for an example). As with histograms, the key decision that needs to be made when using a latency heat map is how to bin the data.
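The column-of-histograms idea amounts to binning each observation along two axes at once: its timestamp picks the column, its latency picks the row, and the cell value is a count. A minimal sketch of that two-dimensional binning, with bin widths chosen purely for illustration:

```python
from collections import Counter

def bin_events(events, time_bin_width, latency_bin_width):
    """Bin (timestamp, latency) observations into a 2-D histogram.

    Each time bin is one column of the heat map; each latency bin is
    one row; the value is the number of observations in that cell.
    """
    counts = Counter()
    for timestamp, latency in events:
        t_bin = timestamp // time_bin_width      # which column
        l_bin = latency // latency_bin_width     # which row
        counts[(t_bin, l_bin)] += 1
    return counts

# (timestamp_seconds, latency_ms) observations
events = [(0, 12), (3, 14), (3, 95), (61, 13)]
hist = bin_events(events, time_bin_width=60, latency_bin_width=10)
# hist[(0, 1)] == 2: two events in the first minute with latency 10-19 ms
```

The choice of `time_bin_width` and `latency_bin_width` is exactly the binning decision the post describes: too fine and the map is sparse and noisy, too coarse and structure is smeared away.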
This post is part 1 of my series about visualizing latency, which is very useful for debugging certain classes of performance problems. A latency heatmap is a particularly useful tool for visualizing latency. For a great treatment of latency heatmaps, please read Brendan Gregg’s Latency Heat Maps page and the ACM Queue article Visualizing System Latency. On the right, you can see a latency heatmap generated from a job queueing system which shows a number of interesting properties, not least of which is that the system appears to be getting slower over time.
This post is part 5 of my series about data-driven code generation of unit tests. In the previous posts in this series, I walked through the idea of performing data-driven code generation for unit tests, as well as how I implemented it in three different programming languages and build systems. This post contains some final thoughts about the effort. Was it worth it? Almost certainly. Although it required substantial up-front effort to set up the unit test generators, this approach found numerous previously undetected bugs, both in my implementation of the calculation library and in legacy implementations.
This post is part 4 of my series about data-driven code generation of unit tests. This blog post explains how I used C#, MSBuild, T4 Text Templates, and the Microsoft Unit Test Framework for Managed Code to perform data-driven code generation of unit tests for a financial performance analytics library. If you haven’t read it already, I recommend starting with Part 1: Background. As mentioned in Part 2: C++, CMake, Jinja2, Boost, all performance analytics metadata is stored in a single file called metadata.