From 8eda9e84bee9b3bcf7c9e83f9e532da2cc98d483 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Sun, 10 Mar 2024 17:13:06 +0000 Subject: [PATCH] build based on 1cdfb9d --- previews/PR80/404.html | 2 +- previews/PR80/autoload.html | 2 +- previews/PR80/autoload/index.html | 6 +++--- previews/PR80/explanations.html | 2 +- previews/PR80/explanations/index.html | 6 +++--- previews/PR80/hashmap.json | 2 +- previews/PR80/index.html | 2 +- previews/PR80/migration.html | 2 +- previews/PR80/migration/index.html | 6 +++--- previews/PR80/reference.html | 2 +- previews/PR80/reference/index.html | 6 +++--- previews/PR80/regressions.html | 2 +- previews/PR80/regressions/index.html | 6 +++--- previews/PR80/tutorial.html | 2 +- previews/PR80/tutorial/index.html | 6 +++--- previews/PR80/why.html | 2 +- previews/PR80/why/index.html | 6 +++--- 17 files changed, 31 insertions(+), 31 deletions(-) diff --git a/previews/PR80/404.html b/previews/PR80/404.html index b0ee6fad..2e673787 100644 --- a/previews/PR80/404.html +++ b/previews/PR80/404.html @@ -15,7 +15,7 @@
Skip to content

404

PAGE NOT FOUND

But if you don't change your direction, and if you keep looking, you may end up where you are heading.
- + \ No newline at end of file diff --git a/previews/PR80/autoload.html b/previews/PR80/autoload.html index b77d8a55..29849a6a 100644 --- a/previews/PR80/autoload.html +++ b/previews/PR80/autoload.html @@ -48,7 +48,7 @@ pushfirst!(REPL.repl_ast_transforms, load_tools) end - + \ No newline at end of file diff --git a/previews/PR80/autoload/index.html b/previews/PR80/autoload/index.html index 5d2e29c9..52356297 100644 --- a/previews/PR80/autoload/index.html +++ b/previews/PR80/autoload/index.html @@ -1,5 +1,5 @@ -Redirecting to https://chairmarks.lilithhafner.com/autoload - - \ No newline at end of file +Redirecting to ../autoload + + \ No newline at end of file diff --git a/previews/PR80/explanations.html b/previews/PR80/explanations.html index 5d417826..d88525b5 100644 --- a/previews/PR80/explanations.html +++ b/previews/PR80/explanations.html @@ -18,7 +18,7 @@
Skip to content

Explanation of design decisions

This page of the documentation is not targeted at teaching folks how to use this package. Instead, it is designed to offer insight into how the internals work and why I made certain design decisions. That said, it certainly won't hurt your user experience to read this!

This is not part of the API

The things listed on this page are true (or should be fixed) but are not guarantees. They may change in future 1.x releases.

Why the name "Chairmarks.jl"?

The obvious and formulaic choice, Benchmarks.jl, was taken. This package is very similar to Benchmarks.jl and BenchmarkTools.jl, but has a significantly different implementation and a distinct API. When differentiating multiple similar things, I prefer distinctive names over synonyms or different parts of speech. The difference between the names should, if possible, reflect the difference in the concepts. If that's not possible, it should be clear that the difference between the names does not reflect the difference between concepts. This rules out most names like "Benchmarker.jl", "Benchmarking.jl", "BenchmarkSystem.jl", etc. I could have chosen "EfficientBenchmarks.jl", but that is pretty pretentious and also would become misleading if "BenchmarkTools.jl" becomes more efficient in the future.

Ultimately, I decided to follow Julia's package naming conventions and heed the advice that

A less systematic name may suit a package that implements one of several possible approaches to its domain.

How is this faster than BenchmarkTools?

A few reasons

  • Chairmarks doesn't run garbage collection at the start of every benchmark by default

  • Chairmarks has faster and more efficient auto-tuning

  • Chairmarks runs its arguments as functions in the scope that the benchmark was invoked from, rather than evaling them at global scope. This makes it possible to get significant performance speedups for fast benchmarks by putting the benchmarking itself into a function. It also avoids leaking memory on repeated invocations of a benchmark, which is unavoidable with BenchmarkTools.jl's design. (discourse, github)

  • Because Chairmarks does not use toplevel eval, it can run arbitrarily quickly, as limited by a user's noise tolerance. Consequently, the auto-tuning algorithm is tuned for low runtime budgets in addition to high ones, so that precision doesn't degrade too much when the budget is small.

  • Chairmarks tries very hard not to discard data. For example, if your function takes longer to evaluate than the runtime budget, Chairmarks will simply report the warmup runtime (with a disclaimer that there was no warmup). This makes Chairmarks a viable, complete substitute for the trivial @time macro and friends. @b sleep(10) takes 10.05 seconds (just like @time sleep(10)), whereas @benchmark sleep(10) takes 30.6 seconds despite only reporting one sample.

Is this as stable/reliable as BenchmarkTools?

When comparing @b to @btime with seconds=0.5 or more, yes: result stability should be comparable. Any deficiency in precision or reliability compared to BenchmarkTools is a problem and should be reported. When seconds is less than about 0.5, BenchmarkTools stops respecting the requested runtime budget and so it could very well perform much more precisely than Chairmarks (it's hard to compete with a 500ms benchmark when you only have 1ms). In practice, however, Chairmarks stays pretty reliable even for fairly low runtimes.

How does tuning work?

First of all, what is "tuning" for? It's for tuning the number of evaluations per sample. We want the total runtime of a sample to be roughly 30μs, which makes the noise of instrumentation itself (clock precision, the time it takes to record performance counters, etc.) negligible. If the user specifies evals manually, then there is nothing to tune, so we do a single warmup and then jump straight to the benchmark. In the benchmark, we run samples until the time budget or sample budget is exhausted.

If evals is not provided and seconds is (by default we have seconds=0.1), then we target spending 5% of the time budget on calibration. We have a multi-phase approach: we start by running the function just once, and use that first measurement to estimate how long the benchmark will take and how much additional calibration is needed. See https://github.com/LilithHafner/Chairmarks.jl/blob/main/src/benchmarking.jl for details.
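The core idea of picking evaluations per sample can be sketched as pure arithmetic. This is an illustration under the assumptions stated above (a ~30μs sample target), not Chairmarks' actual implementation; the real logic lives in src/benchmarking.jl, and the function name here is hypothetical:

```julia
# Hypothetical sketch of evals-per-sample calibration; NOT the actual
# Chairmarks implementation (see src/benchmarking.jl for the real logic).

# Target roughly 30μs of work per sample so instrumentation noise
# (clock precision, counter recording, etc.) is negligible.
const TARGET_SAMPLE_TIME = 30e-6

# Given one measured runtime of the benchmarked function (in seconds),
# choose how many evaluations to pack into each sample.
function calibrate_evals(single_run_time)
    # Functions at or above the target get one evaluation per sample.
    single_run_time >= TARGET_SAMPLE_TIME && return 1
    # Fast functions get enough evaluations to fill the target.
    max(1, round(Int, TARGET_SAMPLE_TIME / single_run_time))
end

calibrate_evals(1e-9)  # a ~1ns function gets many evals per sample
calibrate_evals(1.0)   # a ~1s function gets exactly one
```

In practice the real tuning also spends part of the time budget refining this estimate, since a single warmup measurement is noisy.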

Why Chairmarks uses soft semantic versioning

We prioritize human experience (both user and developer) over formal guarantees. Where formal guarantees improve the experience of folks using this package, we will try to make and adhere to them. Under both soft and traditional semantic versioning, the version number is primarily used to communicate to users whether a release is breaking. If Chairmarks had an infinite number of users, all of whom respected the formal API by only depending on formally documented behavior, then soft semantic versioning would be equivalent to traditional semantic versioning. However, as the user base differs from that theoretical ideal, so too does the most effective way of communicating which releases are breaking. For example, if version 1.1.0 documents that "the default runtime is 0.1 seconds" and a new version allows users to control this with a global variable, then that change does break the guarantee that the default runtime is 0.1 seconds. However, it still makes sense to release as 1.2.0 rather than 2.0.0 because it is less disruptive to users to have that technical breakage than to have to review the changelog for breakage and decide whether to update their compatibility statements or not.

- + \ No newline at end of file diff --git a/previews/PR80/explanations/index.html b/previews/PR80/explanations/index.html index 9912b7d6..995b72ca 100644 --- a/previews/PR80/explanations/index.html +++ b/previews/PR80/explanations/index.html @@ -1,5 +1,5 @@ -Redirecting to https://chairmarks.lilithhafner.com/explanations - - \ No newline at end of file +Redirecting to ../explanations + + \ No newline at end of file diff --git a/previews/PR80/hashmap.json b/previews/PR80/hashmap.json index e2977004..5617970f 100644 --- a/previews/PR80/hashmap.json +++ b/previews/PR80/hashmap.json @@ -1 +1 @@ -{"regressions.md":"DdzLvj8o","tutorial.md":"CMbHRg0P","explanations.md":"ieCh5sp2","autoload.md":"B5lpEgYs","why.md":"DjnXz4zM","migration.md":"CR7sLHUN","index.md":"6I2zjsqH","reference.md":"59__ebpL"} +{"index.md":"6I2zjsqH","autoload.md":"B5lpEgYs","explanations.md":"ieCh5sp2","migration.md":"CR7sLHUN","tutorial.md":"CMbHRg0P","why.md":"DjnXz4zM","regressions.md":"DdzLvj8o","reference.md":"59__ebpL"} diff --git a/previews/PR80/index.html b/previews/PR80/index.html index 4333553e..5abda704 100644 --- a/previews/PR80/index.html +++ b/previews/PR80/index.html @@ -27,7 +27,7 @@ julia> @b rand(1000) _.*5 # How long does it take to multiply it by 5 element wise? 
172.970 ns (3 allocs: 7.875 KiB) - + \ No newline at end of file diff --git a/previews/PR80/migration.html b/previews/PR80/migration.html index 8973f8e4..c2741cea 100644 --- a/previews/PR80/migration.html +++ b/previews/PR80/migration.html @@ -43,7 +43,7 @@ julia> @b x rand # put the access in the setup phase (most concise in simple cases) 15.507 ns (2 allocs: 112 bytes) - + \ No newline at end of file diff --git a/previews/PR80/migration/index.html b/previews/PR80/migration/index.html index bc032516..6ebe4872 100644 --- a/previews/PR80/migration/index.html +++ b/previews/PR80/migration/index.html @@ -1,5 +1,5 @@ -Redirecting to https://chairmarks.lilithhafner.com/migration - - \ No newline at end of file +Redirecting to ../migration + + \ No newline at end of file diff --git a/previews/PR80/reference.html b/previews/PR80/reference.html index 61a1edb0..128f0bac 100644 --- a/previews/PR80/reference.html +++ b/previews/PR80/reference.html @@ -132,7 +132,7 @@ julia> @be (x = 0; for _ in 1:5e8; x = hash(x); end; x) # This runs for a long time, so it is only run once (with no warmup) Benchmark: 1 sample with 1 evaluation 2.488 s (without a warmup)

source


- + \ No newline at end of file diff --git a/previews/PR80/reference/index.html b/previews/PR80/reference/index.html index 6a5179ed..92875cc2 100644 --- a/previews/PR80/reference/index.html +++ b/previews/PR80/reference/index.html @@ -1,5 +1,5 @@ -Redirecting to https://chairmarks.lilithhafner.com/reference - - \ No newline at end of file +Redirecting to ../reference + + \ No newline at end of file diff --git a/previews/PR80/regressions.html b/previews/PR80/regressions.html index 9105442e..2cf1142c 100644 --- a/previews/PR80/regressions.html +++ b/previews/PR80/regressions.html @@ -26,7 +26,7 @@ @testset "Regression tests" begin RegressionTests.test(skip_unsupported_platforms=true) end

See the RegressionTests.jl documentation for more information.

- + \ No newline at end of file diff --git a/previews/PR80/regressions/index.html b/previews/PR80/regressions/index.html index 9528b94c..c08c48f4 100644 --- a/previews/PR80/regressions/index.html +++ b/previews/PR80/regressions/index.html @@ -1,5 +1,5 @@ -Redirecting to https://chairmarks.lilithhafner.com/regressions - - \ No newline at end of file +Redirecting to ../regressions + + \ No newline at end of file diff --git a/previews/PR80/tutorial.html b/previews/PR80/tutorial.html index e113c1dc..81c79487 100644 --- a/previews/PR80/tutorial.html +++ b/previews/PR80/tutorial.html @@ -78,7 +78,7 @@ 129.294 ns (3 allocs: 7.875 KiB) 129.471 ns (3 allocs: 7.875 KiB) 130.570 ns (3 allocs: 7.875 KiB)

Setting the seconds parameter too low can cause benchmarks to be noisy. It's good practice to run a benchmark at least a couple of times no matter what the configuration is to make sure it's reasonably stable.

Advanced usage

It is possible to manually specify the number of evaluations, samples, and/or seconds to run benchmarking for. It is also possible to pass a teardown function or an initialization function that runs only once. See the docstring of @be for more information on these additional arguments.
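As a rough mental model of how those phase arguments fit together, each sample threads data through the phases in order. The helper below is purely illustrative (hypothetical name and simplified semantics, not Chairmarks internals; consult the @be docstring for the actual behavior):

```julia
# Illustrative sketch of the init/setup/benchmark/teardown pipeline that
# @be's extra arguments describe. Hypothetical helper, not Chairmarks code.
function run_sample(init, setup, f, teardown)
    state = init()      # runs only once per benchmark
    x = setup(state)    # prepares fresh input, untimed
    result = f(x)       # the portion that would be timed
    teardown(x)         # cleans up, untimed
    result
end

# e.g. sum the integers 1:100, with setup building fresh input each time:
run_sample(() -> 100, n -> collect(1:n), sum, x -> nothing)
```

Because setup runs outside the timed portion, mutation of its output by the benchmarked function does not contaminate later measurements.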


  1. Note that the samples are aggregated element wise, so the max field reports the maximum runtime and the maximum proportion of runtime spent in garbage collection (gc). Thus it is possible that the trial which had a 19.748 μs runtime was not the same trial that spent 96.95% of its time in garbage collection. This is in order to make the results more consistent. If half the trials spend 10% of their time in gc and runtime varies based on other factors, it would be unfortunate to report maximum gc time as either 10% or 0% at random depending on whether the longest running trial happened to trigger gc. ↩︎
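The element-wise aggregation described in the footnote can be sketched as taking each field's maximum independently across samples. The struct and function names here are illustrative, not Chairmarks' actual types:

```julia
# Sketch of element-wise "max" aggregation across samples, as described in
# the footnote. Field and function names are hypothetical illustrations.
struct SampleSketch
    time::Float64        # runtime in seconds
    gc_fraction::Float64 # proportion of time spent in garbage collection
end

# Each field's maximum is taken independently, so the reported max time and
# max gc fraction may come from different samples.
aggregate_max(samples) = SampleSketch(
    maximum(s.time for s in samples),
    maximum(s.gc_fraction for s in samples),
)

aggregate_max([SampleSketch(19.748e-6, 0.0), SampleSketch(12.0e-6, 0.9695)])
```

This is why a report can show both the slowest runtime and the worst gc fraction even when no single trial exhibited both.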

- + \ No newline at end of file diff --git a/previews/PR80/tutorial/index.html b/previews/PR80/tutorial/index.html index 66d58145..22ff71f6 100644 --- a/previews/PR80/tutorial/index.html +++ b/previews/PR80/tutorial/index.html @@ -1,5 +1,5 @@ -Redirecting to https://chairmarks.lilithhafner.com/tutorial - - \ No newline at end of file +Redirecting to ../tutorial + + \ No newline at end of file diff --git a/previews/PR80/why.html b/previews/PR80/why.html index cebcbfa8..da0dd49e 100644 --- a/previews/PR80/why.html +++ b/previews/PR80/why.html @@ -54,7 +54,7 @@ julia> @b 1.0 checksum=false 0 ns

You may experiment with custom reductions using the internal _map and _reduction keyword arguments. The default maps and reductions (Chairmarks.default_map and Chairmarks.default_reduction) are internal and subject to change and/or removal in the future.

Innate qualities

Chairmarks is inherently narrower than BenchmarkTools by construction. It also has more reliable back support. Back support is a defining feature of chairs while benches are known to sometimes lack back support.

- + \ No newline at end of file diff --git a/previews/PR80/why/index.html b/previews/PR80/why/index.html index ff016fc2..8c37533a 100644 --- a/previews/PR80/why/index.html +++ b/previews/PR80/why/index.html @@ -1,5 +1,5 @@ -Redirecting to https://chairmarks.lilithhafner.com/why - - \ No newline at end of file +Redirecting to ../why + + \ No newline at end of file