# CSuite: Local benchmarking help for V8 performance analysis

CSuite helps you make N averaged runs of a benchmark, then compare against a different binary and/or different flags. It knows about the "classic" SunSpider, Kraken and Octane benchmarks, which are still useful for investigating peak-performance scenarios. Each benchmark has a default number of runs:

* SunSpider - 100 runs
* Kraken - 80 runs
* Octane - 10 runs
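
These defaults apply whenever you do not override the run count with `-r` (shown under Usage below). As a minimal sketch, a baseline invocation with no extra options uses the SunSpider default of 100 runs; the `d8` path is only illustrative:

```
# Performs the default 100 SunSpider runs; point this at your own d8 build.
./csuite.py sunspider baseline ~/src/v8/out/d8
```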

## Usage

Say you want to see how much optimization buys you:

```
./csuite.py kraken baseline ~/src/v8/out/d8 -x="--noturbofan"
./csuite.py kraken compare ~/src/v8/out/d8
```

Suppose you are comparing two binaries and want a quick look at the results. Octane should normally get about 10 runs, but 3 take only a few minutes:

```
./csuite.py -r 3 octane baseline ~/src/v8/out-master/d8
./csuite.py -r 3 octane compare ~/src/v8/out-mine/d8
```

You can run it from any directory:

```
../../somewhere-strange/csuite.py sunspider baseline ./d8
../../somewhere-strange/csuite.py sunspider compare ./d8-better
```

Note that all output files are created in the directory you run from: a `_benchmark_runner_data` directory is created to store run output, and a `_results` directory to store scores.
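
If you want a clean slate between unrelated experiments, you can simply delete those two directories. This sketch discards cached runs and previously recorded scores, so any baselines will have to be re-measured:

```
# Removes cached runner output and recorded scores; run this from the
# same directory you ran csuite.py from.
rm -rf _benchmark_runner_data _results
```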

For more detailed documentation, see:

```
./csuite.py --help
```

Output from the runners is captured into files and cached, so you can cancel and resume multi-hour benchmark runs with minimal loss of time. The `-f` flag forces benchmarks to be re-run even if these cached files still exist.
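
For example, mirroring the option placement in the `-r` examples above (the `d8` path is again illustrative), this forces a fresh Octane baseline even if cached output from an earlier run exists:

```
# -f ignores any cached output files and re-runs the benchmark.
./csuite.py -f octane baseline ~/src/v8/out/d8
```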