This post is part 5/5 of my Data-Driven Code Generation of Unit Tests series.
In the previous posts in this series, I walked through the idea of performing data-driven code generation for unit tests, as well as how I implemented it in three different programming languages and build systems. This post contains some final thoughts about the effort.
Was it worth it?
Almost certainly. Although it required substantial up-front effort to set up the unit test generators, this approach found numerous previously undetected bugs, both in my implementation of the calculation library and in legacy implementations. It is straightforward to write code generators that test all possible combinations of parameters to the calculations, ensuring excellent code coverage. Adding tests for a new calculation is as simple as adding a line to a single file.
That single file is metadata.csv; as described in Part 2 (C++, CMake, Jinja2, Boost), it stores all of the performance analytics metadata, drives all code generation, and helps ensure inter-platform consistency.
This post is part 3/5 of my Data-Driven Code Generation of Unit Tests series.
This blog post explains how I used Java, Apache Maven, StringTemplate, and JUnit to perform data-driven code generation of unit tests for a financial performance analytics library. If you haven’t read it already, I recommend starting with Part 1: Background.
As mentioned in Part 2 (C++, CMake, Jinja2, Boost), all performance analytics metadata is stored in a single file called metadata.csv. This file drives all code generation and helps ensure inter-platform consistency.
This post is part 2/5 of my Data-Driven Code Generation of Unit Tests series.
This blog post explains how I used CMake, Jinja2, and the Boost Unit Test framework to perform data-driven code generation of unit tests for a financial performance analytics library. If you haven’t read it already, I recommend starting with Part 1: Background.
All performance analytics metadata is stored in a single file, metadata.csv. This file contains the complete list of calculations and, for each calculation, its settings, i.e., the properties that distinguish it from the other calculations.
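To make this concrete, here is a minimal sketch of what a metadata-driven Jinja2 generator could look like. The CSV columns, the template contents, and the generated fixture function are hypothetical stand-ins, not the library's actual schema or templates:

```python
# Minimal sketch of a metadata-driven test generator (hypothetical
# schema): read rows from metadata.csv and render one Boost.Test
# source file per calculation.
import csv
import io
from jinja2 import Template

# Hypothetical excerpt of metadata.csv; the real columns differ.
METADATA_CSV = """calculation,annualized,requires_benchmark
sharpe_ratio,true,false
alpha,true,true
"""

# Hypothetical template for a generated Boost unit test.
TEST_TEMPLATE = Template("""\
#include <boost/test/unit_test.hpp>
#include "{{ calc }}.hpp"

BOOST_AUTO_TEST_CASE(test_{{ calc }})
{
    // Generated for {{ calc }} (annualized={{ annualized }},
    // requires_benchmark={{ requires_benchmark }})
    BOOST_CHECK(run_{{ calc }}_fixture());  // hypothetical fixture
}
""")

for row in csv.DictReader(io.StringIO(METADATA_CSV)):
    source = TEST_TEMPLATE.render(
        calc=row["calculation"],
        annualized=row["annualized"],
        requires_benchmark=row["requires_benchmark"],
    )
    with open(f"test_{row['calculation']}.cpp", "w") as f:
        f.write(source)
```

In the real build, a script along these lines would presumably be wired in as a CMake custom command, so the generated tests are refreshed automatically whenever metadata.csv changes.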
This post is part 1/5 of my Data-Driven Code Generation of Unit Tests series.
At Morningstar, I created a multi-language, cross-platform performance analytics library which provides both online and offline implementations of a number of common financial analytics, such as Alpha, Beta, R-Squared, Sharpe Ratio, Sortino Ratio, and Treynor Ratio (more on this library later). The library relies almost exclusively on a comprehensive suite of automated unit tests to validate its correctness. I quickly found that maintaining a nearly identical battery of unit tests in three different programming languages was a chore, and I had a hunch that I could use a common technique to deal with this problem: code generation.
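To illustrate the online/offline distinction with something smaller than the analytics above, here is a minimal sketch (not code from the library) using the mean of a return series: the offline variant needs the whole series up front, while the online variant folds in one observation at a time with constant memory.

```python
# Offline vs. online implementations of the same statistic (the mean),
# as a stand-in for the heavier financial analytics.
from typing import Iterable

def mean_offline(returns: Iterable[float]) -> float:
    """Offline: requires the full return series up front."""
    values = list(returns)
    return sum(values) / len(values)

class MeanOnline:
    """Online: consumes one observation at a time in O(1) memory."""
    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0

    def update(self, x: float) -> None:
        self.count += 1
        # Incremental (Welford-style) mean update.
        self.mean += (x - self.mean) / self.count

returns = [0.01, -0.02, 0.015]
online = MeanOnline()
for r in returns:
    online.update(r)
assert abs(mean_offline(returns) - online.mean) < 1e-12
```

Real analytics such as the Sharpe Ratio carry richer incremental state, but the pattern is the same: both variants should agree on the final result, which is exactly the kind of property an automated test can check.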