Dashboard

Note

The Fastpath Dashboard is under active development; its features and interface may change in future releases.

The Fastpath Dashboard provides a web-based interface for visualizing and comparing benchmark results stored in your result store. It is built with Streamlit and runs as a local web server.

Starting the Dashboard

To start the dashboard server:

fastpath dashboard start <resultstore url>

The dashboard will automatically open in your default web browser at http://localhost:8501.

If you have configured a default result store in your preferences (see Installation & Setup), you can omit <resultstore url> and the default will be used.
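For example, to start the dashboard against a specific result store (the URL below is purely illustrative; substitute the location of your own result store):

fastpath dashboard start file:///home/user/fastpath-results

With a default result store configured, the short form is simply:

fastpath dashboard start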

Using the Dashboard

Filter and View Results

The dashboard allows you to interactively filter and visualize your benchmark results:

  1. Select SUT(s): Choose one or more Systems Under Test

  2. Select SW Profile(s): Choose one or more SW Profiles

  3. Select Benchmark: Pick the benchmark to analyze

  4. Select Result Class: Choose the specific metric to visualize

Note

For comparative analysis, select either:

  • One SUT + Multiple SW Profiles: Compare different SW Profiles on the same hardware

  • Multiple SUTs + One SW Profile: Compare different hardware with the same SW Profile

Main Chart

The main chart displays:

  • 95% Confidence Interval (thick blue line): The statistical range where the true mean likely falls (see the sketch after this list)

  • Mean value (white tick mark): Average of all measurements

  • Raw data points (crosses): Individual measurement values from each test session

  • Hover tooltips: Detailed statistics including min, max, mean, CI bounds, standard deviation, and sample count
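For reference, here is a minimal sketch of how such a 95% confidence interval is commonly computed, using a t-distribution around the sample mean. This is illustrative only, not Fastpath's actual implementation:

import numpy as np
from scipy import stats

def ci95(samples):
    """Return (ci95min, mean, ci95max) for a set of measurements."""
    data = np.asarray(samples, dtype=float)
    mean = data.mean()
    sem = stats.sem(data)  # standard error of the mean
    # A t-interval suits the small sample counts typical of benchmark runs
    lo, hi = stats.t.interval(0.95, df=len(data) - 1, loc=mean, scale=sem)
    return lo, mean, hi

print(ci95([10.1, 10.4, 9.9, 10.2, 10.3]))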

Visualization Controls

  • Pan and zoom: Click and drag to pan; use the scroll wheel to zoom

  • Session colors: Each test session is shown in a different color (up to 20 colors)

Deviation Chart

The deviation chart shows performance relative to your baseline:

  • Baseline: The first selected item (SUT or SW Profile) serves as the reference

  • Comparison: All other selected items are compared against this baseline

  • Color coding:

    • Green bars: Performance improvement over baseline

    • Red bars: Performance regression from baseline

    • Gray bars: No significant change (within 1% noise threshold)

  • Toggle view: Use the sidebar checkbox to switch between relative (%) and absolute differences

Note

A change is classified as “improvement” or “regression” (green/red) only if both conditions are met: (1) the 95% confidence intervals do not overlap, and (2) the difference exceeds the 1% noise threshold. Otherwise, it is classified as “no significant change” (gray).
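A compact Python sketch of that classification rule follows. Field names mirror the aggregated results table below; the assumption that a higher mean is an improvement is illustrative, since the actual direction depends on the metric:

from dataclasses import dataclass

@dataclass
class Agg:
    mean: float
    ci95min: float
    ci95max: float

def classify(base: Agg, cand: Agg, noise=0.01):
    """Gray unless the 95% CIs are disjoint AND the relative
    difference exceeds the 1% noise threshold."""
    overlap = base.ci95min <= cand.ci95max and cand.ci95min <= base.ci95max
    rel = (cand.mean - base.mean) / base.mean
    if overlap or abs(rel) <= noise:
        return "gray"                     # no significant change
    return "green" if rel > 0 else "red"  # higher-is-better assumed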

View Metadata

Expand the metadata sections at the bottom to see detailed information about:

  • Selected SUT hardware specifications

  • Selected SW Profiles

  • Benchmark parameters

Aggregated Results Table

Click the “Aggregated Results Table” expander below the main chart to view:

  • SUT and SW Profile identifiers

  • Statistical summary: min, ci95min, mean, ci95max, max

  • Standard deviation and sample count

This table shows the same data as the chart in tabular format.
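A rough sketch of how such a summary can be derived with pandas. The raw-data layout is an assumption; the output columns mirror the table, and ci95min/ci95max would come from a confidence-interval computation like the earlier sketch:

import pandas as pd

raw = pd.DataFrame({
    "sut":        ["A", "A", "A", "B", "B", "B"],
    "sw_profile": ["p1"] * 6,
    "value":      [10.1, 10.4, 9.9, 12.0, 11.8, 12.1],
})

summary = raw.groupby(["sut", "sw_profile"])["value"].agg(
    min="min", mean="mean", max="max", std="std", count="count"
)
print(summary)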

Result Caching

The dashboard uses Streamlit caching with a 10-minute TTL (time-to-live) for result store data. This improves performance by avoiding repeated data loads.

To manually refresh the data before the cache expires, click the “🔄 Refresh Data” button in the sidebar. This clears the cache and reloads data from the result store.
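In Streamlit terms the pattern looks roughly like the following. st.cache_data, its ttl parameter, st.cache_data.clear(), and st.sidebar.button are real Streamlit API; the loader function and its return shape are hypothetical stand-ins:

import streamlit as st

@st.cache_data(ttl=600)  # keep result-store reads for 10 minutes
def load_results(resultstore_url: str) -> list[dict]:
    # Hypothetical placeholder for the actual result-store query.
    return [{"source": resultstore_url, "mean": 1.0}]

if st.sidebar.button("🔄 Refresh Data"):
    st.cache_data.clear()  # drop cached entries; the next call refetches

results = load_results("file:///tmp/results")  # hypothetical URL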

Stopping the Dashboard

Press Ctrl+C in the terminal where the dashboard is running to stop the Streamlit server.