This directory shows how to use github-action-benchmark with `cargo bench`.
Official documentation for usage of cargo bench:
https://doc.rust-lang.org/unstable-book/library-features/test.html
e.g.
```yaml
- name: Run benchmark
  run: cargo +nightly bench | tee output.txt
```

Note that `cargo bench` is available only with the nightly toolchain.
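For reference, a benchmark picked up by this step might look like the following sketch, written against the unstable `test` feature documented above. The XOR workload and the function name are illustrative only, not taken from this repository:

```rust
// A minimal sketch of a benchmark for the built-in, nightly-only harness.
// The workload and name below are illustrative.
#![feature(test)]

extern crate test;

use test::{black_box, Bencher};

#[bench]
fn bench_xor_1000_ints(b: &mut Bencher) {
    b.iter(|| {
        // `black_box` hides the values from the optimizer so the loop is
        // not eliminated as dead code (see the LTO note below).
        (0..1000).fold(0, |old, new| old ^ black_box(new))
    });
}
```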
Note that this example does not use LTO for benchmarking, because the entire code inside a
benchmark iteration would be eliminated as dead code. For normal use cases, please enable it in
`Cargo.toml` for production performance:
```toml
[profile.bench]
lto = true
```

Store the benchmark results with a step using the action. Please set `cargo` to the `tool` input:
```yaml
- name: Store benchmark result
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'cargo'
    output-file-path: output.txt
```

Please read the 'How to use' section for common usage.
As described in the previous section, both the built-in harness and criterion-rs can be used through the regular `cargo bench` facility, but there is also an additional crate and cargo extension named cargo-criterion.
The improvements in cargo-criterion match the goals of github-action-benchmark, so it makes sense to include support for it.
Official documentation for usage of cargo criterion:
https://bheisler.github.io/criterion.rs/book/cargo_criterion/cargo_criterion.html
e.g.
```yaml
- name: Run benchmarks
  run: cargo criterion 1> output.json
```

If you have a group of benchmarks, `cargo criterion` will output ndjson (newline-delimited JSON).
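For context, the benchmarks themselves are plain Criterion benchmarks. A minimal sketch, assuming a hypothetical `fibonacci` function and a bench target registered in `Cargo.toml` with `harness = false`:

```rust
// benches/my_benchmark.rs: a minimal Criterion benchmark sketch.
// `fibonacci` is a hypothetical function used only for illustration.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn criterion_benchmark(c: &mut Criterion) {
    // Each `bench_function` entry becomes one benchmark result in the output.
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```

The same bench target runs under both plain `cargo bench` and `cargo criterion`; only the way results are collected and reported differs.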
There are two notable differences in cargo-criterion:
- Since the output is machine-readable JSON, the extraction process only parses the result file and maps the required data into the github-action-benchmark plotting system. In fact, cargo-criterion only supports JSON as `message-format` (output format).
- cargo-criterion incorporates its own HTML benchmark report system, which can be stored alongside the action's data if desired through the `native-benchmark-data-dir-path` input:
```yaml
- name: Store benchmark result
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'cargo-criterion'
    output-file-path: output.json
    native-benchmark-data-dir-path: target/criterion
```

The native benchmark reports are simply copied from `target/criterion/reports` and pushed to the GitHub results repository so that they are available under:
https://YOUR_ORG.github.io/YOUR_REPO/dev/bench/native/criterion/reports/