Making benchmarks useful
- Have a dedicated runner for them that is consistent and not so high-spec that a real performance change becomes too small to notice (see the `.gitlab-ci.yml` sketch after this list)
- Set up a Codespeed instance and report the benchmark results to it. See https://speed.pypy.org/ for an example of what this would allow us to do (a submission sketch in Python also follows the list)
- Run them on a real database, or at least cache the generated one. (No idea how to do that in GitLab CI; probably have an external DB server and create protected variables for connecting to it? The CI sketch below shows one possible shape.)
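
A minimal sketch of what the runner and database points could look like in `.gitlab-ci.yml`. This assumes a runner registered with a `benchmark` tag, a `benchmark` stage declared in `stages:`, and protected CI variables (`BENCHMARK_DB_HOST`, `BENCHMARK_DB_USER`, `BENCHMARK_DB_PASS` are hypothetical names) pointing at an external Postgres server; the script entry point is likewise a placeholder:

```yaml
# Sketch only: runner tag, stage name, variable names, and paths below
# are assumptions, not existing configuration.
benchmark:
  stage: benchmark
  tags:
    - benchmark                 # route the job to the dedicated runner only
  variables:
    DB_HOST: $BENCHMARK_DB_HOST # protected variables set in GitLab settings
    DB_USER: $BENCHMARK_DB_USER
    DB_PASS: $BENCHMARK_DB_PASS
  cache:
    key: benchmark-db-seed
    paths:
      - benchmark_seed/         # hypothetical path for the cached generated DB
  script:
    - mix deps.get
    - MIX_ENV=benchmark mix run benchmarks/run.exs   # hypothetical entry point
  only:
    - develop
```

Pinning the job to a single tagged runner also keeps the hardware constant between runs, which matters more for benchmark comparability than raw speed.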
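For the Codespeed point, results get in by POSTing form data to the instance's `/result/add/` endpoint (per the tobami/codespeed README). A minimal sketch in Python, where the instance URL and the project/executable/benchmark/environment names are hypothetical and would have to match what is configured in the Codespeed admin:

```python
# Sketch: submit one benchmark result to a Codespeed instance.
import os
import urllib.parse
import urllib.request

CODESPEED_URL = "https://speed.example.org"  # hypothetical instance URL

data = {
    # GitLab CI predefined variables identify the commit being measured
    "commitid": os.environ.get("CI_COMMIT_SHA", "unknown"),
    "branch": os.environ.get("CI_COMMIT_REF_NAME", "develop"),
    "project": "Pleroma",              # must exist in the Codespeed admin
    "executable": "pleroma",           # must exist in the Codespeed admin
    "benchmark": "timeline_render",    # hypothetical benchmark name
    "environment": "benchmark-runner", # must match the runner's entry
    "result_value": 42.0,              # the measured number, e.g. seconds
}

req = urllib.request.Request(
    CODESPEED_URL + "/result/add/",
    data=urllib.parse.urlencode(data).encode(),  # form-encoded POST body
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```
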
feld edit: marginally related to pleroma-meta#20