Workflow

Beyond the library itself, the project includes a range of tools and tests. These aid development and provide a means of analysing the library's output. These extra components also have more requirements and dependencies than the library alone.

Installing LFS

Some of the files in the repository require Git Large File Storage (LFS), a common Git extension for handling large files. You can find installation instructions on the LFS website.

Nix development environment

Once the project is cloned, on NixOS or systems with Nix available, you can load a developer environment with all required dependencies:

nix develop

Docker development environment

On systems without Nix, a Docker solution is also provided to run a development environment as a service. This is done using Docker Compose:

docker compose run --rm --service-ports develop

From within the containerised service you can also install other tools as required from Nixpkgs, such as Helix for editing text:

nix profile install nixpkgs#helix

Alternatively, you can build the project with a local compiler, without using either provided development environment. This should work on most systems.

The project has a simple layout. As the library is header-only, its source code lives under the include directory. A large proportion of the project code is tooling and testing, found under the src directory.

The directories are:

  • cmake: CMake examples for adding OpenQMC as a dependency.
  • doxygen: Extra static pages for Doxygen API documentation site.
  • images: Documentation images generated using the tools and notebooks.
  • include: OpenQMC library source code.
  • python: Jupyter notebooks and Python wrapper for tooling.
  • scripts: Utility scripts for command line and CI usage.
  • src/tests: Unit and statistical hypothesis testing.
  • src/tools: Project tooling for analysis and offline optimisation.
  • tsc: Meeting notes and project process documentation.

Dependencies

Tools and tests have dependencies on external libraries. If these are already installed on the system they can be found automatically, such as when using one of the development environments. If a dependency isn't installed, then it will be downloaded and compiled along with the project.

The dependencies are:

  • GLM and oneTBB, used by the tools.
  • GoogleTest and Hypothesis, used by the tests.

Extra build options

There are a few extra options that go along with those listed in the Library build options section. These are specific to handling the tools and tests. The options are:

  • OPENQMC_BUILD_TOOLS: Enable targets for building the project tools. You may want to enable during development. Option values can be ON or OFF. Default value is OFF.
  • OPENQMC_BUILD_TESTING: Enable targets for building the project tests. You may want to enable during development. Option values can be ON or OFF. Default value is OFF.
  • OPENQMC_FORCE_DOWNLOAD: Force dependencies to download and build, even if they are installed. Useful for guaranteeing compatibility. Option values can be ON or OFF. Default value is OFF.

Build configuration

Pre-defined CMake presets configure the Extra build options for your operating system. The preset for Linux and macOS is named unix. Presets initialise the build config so that it's ready for development. Do this by running:

cmake --preset unix

You can also override the options using the CMake CLI while initialising the build config. If the build already exists, then CMake updates the option:

cmake --preset unix -D OPENQMC_FORCE_DOWNLOAD=ON

If you prefer to update the options using an interactive TUI rather than the command line, then there is also the CMake curses interface command:

ccmake --preset unix

Tools

The project comes with various tools that allow for offline optimisation of the sampler implementations, as well as evaluation of the library's results. These tools also serve as examples for users.

You can find the tools under the src/tools folder. These make use of the GLM and oneTBB external dependencies. You can build the tools using:

cmake --build --preset unix

Alternatively you can build a specific tool by specifying a target name:

cmake --build --preset unix --target benchmark

All tools link into a single library with a C API. You can find the source code for this in the src/tools/lib folder. The key tools are described in the CLI usage sections below.

You can access the library's C API via a Python CTypes wrapper. On Unix this just requires pointing the TOOLSPATH environment variable at the shared library binary and importing the python/wrapper.py module.
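As a sketch of what that wrapper boils down to (the load_tools_library helper below is hypothetical, not part of the project), CTypes can open the shared library directly from the path in TOOLSPATH:

```python
import ctypes
import os


def load_tools_library():
    """Load the tools shared library from the path in TOOLSPATH.

    Returns None when TOOLSPATH isn't set, so the sketch degrades
    gracefully on systems without a built copy of the tools.
    """
    path = os.environ.get("TOOLSPATH")
    if path is None:
        return None
    return ctypes.CDLL(path)


lib = load_tools_library()
if lib is None:
    print("TOOLSPATH not set; build the tools and export it first")
```

In practice the bundled python/wrapper.py also declares argument and return types for each C function, which a raw CDLL handle does not do for you.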

The library is also linked into multiple CLI commands found in the src/tools/cli folder. Once you have compiled the tools, you can run the CLI binaries from the command line as shown:

Benchmark CLI usage
The 'benchmark' tool measures the time for cache initialisation, as well as the
draw sample time, independently for each implementation. The results depend on
the hardware, as well as the build configuration.

USAGE: ./build/src/tools/cli/benchmark <sampler> <measurement>

ARGS:
  <sampler> Options are 'pmj', 'pmjbn', 'sobol', 'sobolbn', 'lattice', 'latticebn'.
  <measurement> Options are 'init', 'samples'.

Generate CLI usage
The 'generate' tool evaluates a specific implementation and outputs a table of
points from the sequence. Currently the CLI sets the table output options to 2
sequences, 256 samples, and 8 dimensions.

USAGE: ./build/src/tools/cli/generate <sampler>

ARGS:
  <sampler> Options are 'pmj', 'sobol', 'lattice'.

Trace CLI usage
The 'trace' tool is a CPU / GPU unidirectional path tracer that provides an
example of how you might use the API in a renderer. There are multiple scenes
embedded in the source code, including the Cornell box used on this page. This
also demonstrates how each sampler implementation practically performs.

USAGE: ./build/src/tools/cli/trace <sampler> <scene>

ARGS:
  <sampler> Options are 'pmj', 'pmjbn', 'sobol', 'sobolbn', 'lattice', 'latticebn'.
  <scene> Options are 'box', 'presence', 'blur'.

Optimise CLI usage
The 'optimise' tool targets a single base sampler implementation and produces
the data needed to construct a spatial blue noise variant. The output
is two tables, files named 'keys.txt' and 'ranks.txt'.

The table data can be mapped to a 3D array, with the axes representing 2D pixel
coordinates and 1D time. The optimisation process works toroidally, so you can
tile each table to cover a large spatial area.

Keys are 32-bit integers that seed unique sampler domains. Ranks are also
32-bit integers; XORing a rank with the sample index allows for spatial blue
noise with progressive and adaptive rendering.
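To make the rank mechanism concrete, here is a small illustration (with a made-up rank value, not real optimiser output) of why XORing a rank with the sample index reorders a power-of-two sample sequence without dropping or repeating any sample:

```python
# A hypothetical 8-bit rank value, standing in for one entry of ranks.txt.
rank = 0b10110101

# XOR with a fixed rank is its own inverse, so mapping each sample index
# through it is a permutation: every index in [0, 256) appears exactly
# once in the reordered sequence, just in a different (decorrelated) order.
reordered = [rank ^ index for index in range(256)]

print(sorted(reordered) == list(range(256)))
```

This is what makes the scheme compatible with progressive and adaptive rendering: however many samples a pixel draws, the reordered prefix still comes from the same underlying sequence.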

When optimising final quality results, it's advisable to use a GPU build of the
optimiser due to the computational cost. Testing with an NVIDIA RTX A6000 found
a speedup of ~400x over the CPU.

USAGE: ./build/src/tools/cli/optimise <sampler>

ARGS:
  <sampler> Options are 'pmj', 'sobol', 'lattice'.

Testing

The project aims to maintain full test coverage for all public library code. That includes unit tests for individual functions and components, as well as statistical hypothesis tests that validate the correctness of random distributions.

You can find the tests under the src/tests folder. These make use of the GoogleTest and Hypothesis external dependencies. You can build the tests using:

cmake --build --preset unix --target tests

Tests are then managed and run using CTest. You can run the tests using the CTest CLI from the project root directory. Run the command:

ctest --preset unix

CTest then aggregates the results of all tests from the GoogleTest framework and writes them to the terminal. These include the null-hypothesis tests built on Wenzel Jakob's hypothesis library.
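As a toy illustration of what a statistical hypothesis test checks (this is not one of the project's actual tests, just the general shape of a goodness-of-fit check), a chi-squared statistic over binned samples can flag a distribution that deviates from its uniform target:

```python
import random


def chi_squared_uniform(samples, bins=10):
    """Chi-squared goodness-of-fit statistic against a uniform [0, 1) target."""
    counts = [0] * bins
    for s in samples:
        counts[min(int(s * bins), bins - 1)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)


random.seed(0)
uniform = [random.random() for _ in range(10_000)]
skewed = [random.random() ** 2 for _ in range(10_000)]

# The skewed samples score far worse than the uniform ones; a real test
# would compare the statistic against a critical value for the chosen
# significance level rather than just printing it.
print(chi_squared_uniform(uniform), chi_squared_uniform(skewed))
```

The project's tests apply the same idea with proper significance thresholds via the hypothesis library, rather than this hand-rolled statistic.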

Running commands

If all those CMake commands sound laborious, there is an easier way: a command runner like just. You can find instructions to install just on their site.

In just, a command is called a 'recipe', and these recipes simplify the commands in the documentation above. If you run the just command without any arguments from the project root directory, it displays all available recipes with a description:

just

As a developer, you can set up the project using the setup recipe. This will run the commands described in Build configuration using the unix preset:

just setup

By default, just asks CMake to use Ninja as the build generator. Instructions on getting Ninja are on their site. If you would like to use a different generator, you can pass a parameter to setup:

just setup "Unix Makefiles"

Another useful recipe is clean which will remove the build directory and reset the project. You would want to call this before setup for a hard reset:

just clean

After this point you can use any of the other recipes just lists. Some of these take arguments, as you saw with setup, and they simplify the commands listed in the Build configuration, Tools and Testing sections.

Here is an example of a recipe building and running the tests:

just test

Notebooks

All Jupyter notebooks used to author the images on this page are under the python/notebooks directory, and you can also view them directly online. The notebooks use the tools' Python CTypes wrapper. You can launch them locally using:

just notebook

Images on this page are generated by building the tools library and then executing the notebooks from the command line. You can regenerate the images using:

just readme-images

Doxygen

API documentation is built using Doxygen, with extra pages under the doxygen directory. You can build the documentation site using:

just docs