Integrate Benchmark to Fastpath

This guide explains how to create and integrate a benchmark with Fastpath. A benchmark integration typically consists of:

  1. A benchmark YAML fragment under fastpath/benchmarks/<suite>/, which defines the benchmark identity & configuration

  2. A container Dockerfile under fastpath/containers/<suite>/, which defines the runtime environment

  3. A container entrypoint script (exec.py) under fastpath/containers/<suite>/, which handles benchmark execution & result processing

Once created, the container image must be built and referenced in the benchmark YAML fragment. The benchmark then becomes available for execution via fastpath plan exec.

See Define & Execute a Plan for a full description of plan creation & execution.

High-level Concepts

Fastpath executes benchmarks in Docker containers on the SUT node(s). Each benchmark container receives the benchmark configuration (benchmark.yaml) generated by Fastpath in a shared directory (/fastpath-share). This benchmark.yaml is constructed from the benchmark object in the plan, which may either be defined inline in the plan or (more commonly) included from a benchmark YAML fragment stored under fastpath/benchmarks/<suite>/ (see Plan Schema, Benchmark Library section). The container entrypoint (exec.py) then:

  • reads the benchmark params from benchmark.yaml

  • validates those input parameters

  • initiates the benchmark workload (directly or through helper scripts)

  • writes results.csv back into /fastpath-share for Fastpath to collect
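The validation step can be sketched in pure Python. This is an illustrative sketch: the suite/name/params fields follow this guide, but the required parameter names used below are hypothetical.

```python
# Sketch: validating benchmark parameters after benchmark.yaml has been
# loaded into a dict. The required parameter names are hypothetical examples.

def validate_benchmark(cfg, expected_suite, expected_name, required_params):
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    if cfg.get("suite") != expected_suite:
        errors.append(f"unexpected suite: {cfg.get('suite')!r}")
    if cfg.get("name") != expected_name:
        errors.append(f"unexpected name: {cfg.get('name')!r}")
    params = cfg.get("params") or {}
    for key in required_params:
        if key not in params:
            errors.append(f"missing required param: {key}")
    return errors

cfg = {
    "suite": "repro-collection",
    "name": "mysql-workload",
    "params": {"workload": "mysql", "sut_nr_cpus": 16},
}
print(validate_benchmark(cfg, "repro-collection", "mysql-workload", ["workload"]))
```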

A benchmark can be:

  • single role (Fastpath assigns the default executer role), or

  • multi role (multiple roles such as server, client, monitor).

For multi-role benchmarks, Fastpath starts a container per role on the node(s) (based on the single-node/multi-node SUT selection & the rolemap defined in the plan) and runs them in parallel.

1. Add Benchmark YAML

Location

fastpath/benchmarks/<suite>/<benchmark-name>.yaml

Purpose

Defines a benchmark as a reusable benchmark fragment in the Fastpath benchmark library. It declares the benchmark identity and how Fastpath should execute it. Plans can reference this fragment (via include) from the Benchmark Library instead of repeating the benchmark definition inline, while still allowing per-plan overrides (for example, overriding warmups, repeats or other fields).
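For example, a plan could include this fragment while overriding one of its fields (the repeats field and its value here are illustrative):

```yaml
benchmark:
- include: repro-collection/mysql-workload.yaml
  repeats: 5        # per-plan override of a fragment field
```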

Typical fields
  • suite: Benchmark suite name (folder name).

  • name: Benchmark/workload name within the suite.

  • type: Category of benchmark (e.g. scheduler, network, storage).

  • image: Full image path for the Docker container (registry or local).

  • params: Configuration keys supported by the benchmark (optional).

  • roles: Role names for multi-role benchmarks where each role performs a different task (optional).

Example

suite: repro-collection
name: mysql-workload
type: database
image: registry.gitlab.arm.com/tooling/fastpath/containers/repro_collection:v2.1
params:
  workload: mysql
  sut_nr_cpus: 16

If your benchmark requires multiple roles, include roles (example):

roles:
  - server
  - client

Fastpath will start one container per role and run them concurrently on the selected SUT node(s) as per the rolemap (the default mapping, or one defined in the plan).

Sample plan snippet for multi-role multi-node benchmark with rolemap defined:

benchmark:
- include: repro-collection/mysql-workload.yaml
  rolemap:
    server: 0   #0 is the first node in the SUT definition
    client: 1   #1 is the second node in the SUT definition

2. Add Container Dockerfile

Location

fastpath/containers/<suite>/Dockerfile

Purpose

Defines the runtime environment for the benchmark. The resulting image must be built and deployed according to your deployment choice (registry or local); the deployed image path is then used in the benchmark YAML.

Typical responsibilities
  • Install system dependencies (apt-get install ...).

  • Install Python and set up a Python venv for pip packages.

  • Install common Fastpath Python dependencies from fastpath/requirements.txt.

  • Install benchmark-specific dependencies.

  • Fetch the benchmark workload code (e.g. git clone and build).

  • Copy any helper files/scripts needed by the benchmark into the image.

  • Copy the Fastpath container entrypoint exec.py into the image (e.g. /fastpath).

  • Set the entrypoint/command to run exec.py.

Example structure

FROM registry.gitlab.arm.com/tooling/fastpath/containers/base:latest

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get install --assume-yes --no-install-recommends \
      python3 python3-pip python3-venv python3-dev \
      build-essential pkg-config

RUN python3 -m venv /pyvenv
ENV PATH="/pyvenv/bin:${PATH}"

COPY fastpath/requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt && rm -f /tmp/requirements.txt

# benchmark-specific dependencies
RUN apt-get update && \
    apt-get install --assume-yes --no-install-recommends git

RUN mkdir /fastpath

# benchmark workload source (example)
RUN git clone https://example.org/bench.git /fastpath/bench && \
    cd /fastpath/bench && make

ARG NAME
# Copy & compile helper scripts (example)
COPY containers/${NAME}/helper.c /tmp/helper.c
RUN gcc /tmp/helper.c -o /fastpath/helper
RUN rm -f /tmp/helper.c

# Setup the entrypoint.
COPY containers/${NAME}/exec.py /fastpath/.
RUN chmod +x /fastpath/exec.py
CMD ["/fastpath/exec.py"]

Versioning

The image tag used in the benchmark YAML (for example :v1.0) should match the container version you publish locally or in the registry. Update the tag whenever you change benchmark container behavior.

3. Implement exec.py

Location

fastpath/containers/<suite>/exec.py

Purpose

This script is the container entrypoint and the primary integration point between Fastpath and the benchmark. It must initiate the benchmark workload, parse its output, and write results in Fastpath’s Results schema to /fastpath-share/results.csv.

Container contract
  • Input file: /fastpath-share/benchmark.yaml

  • Output directory (recommended for logs): /fastpath-share/output/

  • Output file: /fastpath-share/results.csv

Typical structure for exec.py
  1. Read benchmark.yaml.

  2. Validate suite/name and validate required parameters from the yaml.

  3. Expand parameters into one or more test descriptors (if the benchmark runs a sweep).

  4. Execute the benchmark command(s) (or invoke helper scripts) based on rolemap.

  5. Parse output and convert to Fastpath results format (one resultclass per metric).

  6. Catch exceptions and convert them into error result rows (optional).

  7. Log the script output (debug logs) to a log file such as /fastpath-share/output/exec.log (optional).

  8. Write results to results.csv.
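The steps above can be sketched as a minimal, illustrative skeleton. The error code and metric name follow examples in this guide, but the load_config/run_workload helpers are assumptions, not part of any Fastpath API:

```python
#!/usr/bin/env python3
# Sketch of an exec.py entrypoint following the numbered steps above.
# The result field names match the "Minimal results format" section below;
# the metric name and fake helpers are illustrative assumptions.
import csv
import os

ERR_INVAL_BENCHMARK_FORMAT = 2  # example error code from this guide

RESULT_FIELDS = ["name", "unit", "improvement", "value", "error"]

def write_results(share_dir, rows):
    """Write result rows to results.csv in the shared directory."""
    with open(os.path.join(share_dir, "results.csv"), "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=RESULT_FIELDS)
        writer.writeheader()
        writer.writerows(rows)

def run(share_dir, load_config, run_workload):
    """Read the config, run the workload, and always write results.csv."""
    try:
        cfg = load_config(share_dir)           # steps 1-2: read + validate
        value = run_workload(cfg["params"])    # step 4: execute the workload
        rows = [{"name": "throughput", "unit": "ops/s",
                 "improvement": "bigger", "value": value, "error": 0}]
    except Exception:
        # On failure, still emit a result row with a non-zero error code
        # so Fastpath can report the failure consistently.
        rows = [{"name": "throughput", "unit": "ops/s",
                 "improvement": "bigger", "value": 0,
                 "error": ERR_INVAL_BENCHMARK_FORMAT}]
    write_results(share_dir, rows)             # step 8
```

In a real exec.py, run() would be called with /fastpath-share as the shared directory and the benchmark's own config loader and workload runner.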

Important requirements
  • Always write results.csv (even on failure). On failure, write a row with an error code so Fastpath can report the failure consistently (example: ERR_INVAL_BENCHMARK_FORMAT=2).

  • Prefer writing benchmark logs into /fastpath-share/output to make them available in Fastpath collected artifacts.

Minimal results format

Fastpath expects rows with fields like:

  • name (taken as resultclass)

  • unit

  • improvement (whether bigger or smaller values are better)

  • value

  • error (0 for success; non-zero for benchmark errors)
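Under these fields, a results.csv might look like the following (the metric names and values are illustrative):

```
name,unit,improvement,value,error
throughput,ops/s,bigger,15234.7,0
latency_p99,ms,smaller,4.2,0
```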

Helper files (optional)

You may add extra files under fastpath/containers/<suite>/ (shell scripts, patch files, configs). The Dockerfile can copy them into the image, and exec.py can invoke them.

4. Build & Publish Container Image

Once all benchmark files (YAML, Dockerfile, exec.py) are ready, build and deploy the container image.

For Arm internal users (Fastpath tool CI)

  1. Trigger the Build Container pipeline.

  2. Provide the container folder name (the <suite> directory under fastpath/containers/).

  3. Provide the container version/tag.

  4. The pipeline produces and pushes an image to the GitLab Container Registry.

For external users (manual Docker build)

You can build the container image manually using standard Docker commands:

  • Build the container image:

    docker build \
      --build-arg NAME=<suite> \
      -t <image-name>:<version> \
      -f fastpath/containers/<suite>/Dockerfile \
      .
    

    Replace <suite> with your benchmark suite name (e.g., repro_collection), <image-name> with your desired image name, and <version> with the version tag (e.g., v1.0).

  • Publish the container image:

    • If you have a registry: Push the image and update the benchmark YAML.

      docker tag <image-name>:<version> <registry>/<image-name>:<version>
      docker push <registry>/<image-name>:<version>
      

      Update the image field in the benchmark YAML:

      image: <registry>/<image-name>:<version>
      
    • If you do not have a registry: Use the local image name in the benchmark YAML. In this case, ensure that the image is available on the SUT before execution.

      image: <image-name>:<version>
      

5. Execute Benchmark

Benchmarks are executed through a plan (refer to Define & Execute a Plan).

fastpath plan exec --output <output-dir> full-plan.yaml

The final plan must include:
  • the benchmark YAML file path

  • role map of the benchmark (if applicable)

  • any benchmark parameters you want to override from the benchmark YAML

Verify that the benchmark runs successfully and that results are visible.

fastpath results show --benchmark <suite/name> <output-dir>/results

Note

For role-based benchmarks (server/client), ensure the final plan maps roles to the correct nodes (if a multi-node SUT is selected) and that exec.py implements role-specific behavior (for example, starting the server process on one node and the client process on another). See the Plan Schema benchmark object section for details about role mapping.
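Role-specific behavior inside exec.py can be sketched as a simple dispatch. How the assigned role reaches the container is deployment-specific and not covered here, so this sketch simply passes it in as a string; the commands are hypothetical:

```python
# Sketch of role-specific dispatch inside exec.py for a server/client
# benchmark. The role string and the commands are illustrative assumptions.

def command_for_role(role, server_cmd, client_cmd):
    """Return the command this container should run for its assigned role."""
    handlers = {
        "server": server_cmd,   # e.g. start the workload server and wait
        "client": client_cmd,   # e.g. drive load against the server node
    }
    if role not in handlers:
        raise ValueError(f"unknown role: {role}")
    return handlers[role]

print(command_for_role("server", ["mysqld"], ["sysbench", "run"]))
```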

Checklist for New Benchmarks

  • [ ] Add benchmark YAML under fastpath/benchmarks/<suite>/.

  • [ ] Add Dockerfile under fastpath/containers/<suite>/.

  • [ ] Add exec.py under fastpath/containers/<suite>/ with results.csv output.

  • [ ] Ensure logs are written under /fastpath-share/output.

  • [ ] Build and publish Docker image locally or to a registry.

  • [ ] Update benchmark YAML image tag to the published version.

  • [ ] Validate benchmark execution via fastpath plan exec and confirm results appear in the output/resultstore.