= Overview =
The goal of darcs-benchmark is to detect performance regressions in darcs
as they happen by comparing darcs against previous versions of itself
at specific tasks.

Looking to help?  See the TODO list below.

= Project Details =
Ways we can compare darcs against itself include:
1) Keeping special copies of darcs binaries around
2) Fetching the tags that correspond to stable releases and building those

The latter approach has the weakness that sometimes the slowness is
due to the tools used to build darcs rather than darcs itself.  To
simplify matters, we should ignore this case in the initial release.
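
Whichever approach we take, the core of a comparison is the same: run a
fixed darcs command several times under each binary and record
wall-clock times.  A minimal sketch in Python (the binary names and the
`whatsnew` task in the usage comment are placeholders, not a fixed part
of darcs-benchmark):

```python
import subprocess
import time

def time_command(argv, runs=5):
    """Run argv `runs` times and return a list of wall-clock durations."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(argv, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        durations.append(time.perf_counter() - start)
    return durations

# Hypothetical usage: compare two darcs binaries at the same task.
# old = time_command(["darcs-2.0", "whatsnew"])
# new = time_command(["darcs", "whatsnew"])
```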

= TODO =
* Track mmap usage:
  http://bugs.darcs.net/issue99
  Bake the tracking directly into darcs.  Darcs will print
  <<mmap: bytes used :map>> to stderr when passed --track-mmap
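
  If darcs gains such a --track-mmap flag, the harness only needs to
  scrape stderr for those markers.  A sketch, assuming the marker takes
  the literal form `<<mmap: <bytes> :map>>` with a numeric byte count
  (the exact format is not yet fixed):

```python
import re

# Assumed marker format; adjust once --track-mmap lands in darcs.
MMAP_MARKER = re.compile(r"<<mmap:\s*(\d+)\s*:map>>")

def mmap_bytes(stderr_text):
    """Extract every mmap byte count reported on stderr."""
    return [int(n) for n in MMAP_MARKER.findall(stderr_text)]
```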

* Run operations on a zoo of repositories.  Details here:
  http://wiki.darcs.net/DarcsWiki/StandardDarcsBenchmarks

* We're more concerned with regressions on known hard cases than
  a general sense of scalability.

* Automate the process so that we compare the current darcs against
  specific tags.
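
  One way to automate this is to generate, for each release tag, the
  commands needed to fetch and build that version before benchmarking
  it.  A sketch that only plans the commands; the repository URL and the
  cabal build step are assumptions about the build procedure, not
  settled choices:

```python
def plan_tag_build(tag, repo_url="http://darcs.net"):
    """Return the commands (as argv lists) to fetch and build one tag.

    The repo URL and build steps are assumptions; adjust for the
    real setup.
    """
    workdir = "darcs-%s" % tag
    return [
        ["darcs", "get", "--tag=%s" % tag, repo_url, workdir],
        ["cabal", "build"],  # to be run inside workdir
    ]
```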

* Build darcs-benchmark into the buildbot scripts.

* Add meaningful statistics.  This may require running each
  case 10 to 30 times depending on the variability we observe.
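
  One simple rule is to keep adding runs until the standard error of
  the mean falls below some fraction of the mean.  A sketch using only
  the standard library (the 5% threshold is an arbitrary illustration,
  not a project decision):

```python
import statistics
from math import sqrt

def summarize(durations, rel_err_target=0.05):
    """Return (mean, stdev, enough), where `enough` says whether the
    standard error of the mean is within rel_err_target of the mean."""
    mean = statistics.mean(durations)
    stdev = statistics.stdev(durations)  # needs at least 2 samples
    sem = stdev / sqrt(len(durations))
    return mean, stdev, sem <= rel_err_target * mean
```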

* Add support for tracking profiler statistics.

* Add support for prof2dot.

* Automatically generate repository data for scalability tests.
