This is all fine for simple benchmarks, but sometimes it is useful to run setup code before the iterations are actually performed. In unit testing, the common approach to this is so-called fixtures, and hayai provides the same facility. A fixture is instantiated for every run; before the iterations begin, its virtual SetUp() method is invoked, and once the iterations for the run have completed, TearDown() is invoked. The upside is that only the iterations themselves are profiled, so costly setup and teardown do not count towards the benchmark results, which has a lot of uses.