if you have a system whose performance or data integrity you care that much about, you certainly have a battery-backed, line-conditioning UPS hooked to it, right? in which case you can safely disable the extra integrity protection.
Data integrity and performance are at opposite ends of a scale, and the correct "default" slides around between somewhat secure and somewhat fast. The main focus seems to be moving towards: you might lose data, but at least make sure you lose it in a consistent manner.
Barriers and the like go a long way towards "chunking" the written data, balancing between caching and synchronous writing (a barrier guarantees that the data before the barrier is written before the data after it, rather than forcing every individual write to hit the disk in order).
I don't think we'll ever get to a "right" balance point; it will always be wrong in some way.
As a user, you have two choices: either accept the default (and hence the "best judgement/best awareness" of the maintainers), or tune to your requirements (either performance at the risk of data, or integrity at the cost of speed).
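For example, on ext3 and ext4 this knob is exposed directly as a mount option (the device and mount point below are placeholders):

    mount -o barrier=1 /dev/sda1 /data   # favour integrity: write barriers on
    mount -o barrier=0 /dev/sda1 /data   # favour speed on a UPS-protected box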
git has the command 'git bisect run <script>' that will automate the bisect-and-test process (the script returns one exit code if the test passes, a different one if it fails, and a third if the test couldn't be run for some unrelated reason)
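A minimal sketch of such a script, where the build command and run-test.sh are placeholders for whatever actually distinguishes good from bad:

    #!/bin/sh
    # build the kernel at whatever commit git bisect has checked out
    make -j4 || exit 125      # 125 tells bisect to skip an untestable commit

    if ./run-test.sh; then    # placeholder test
        exit 0                # 0 = good
    else
        exit 1                # 1-124, 126, 127 = bad
    fi

After that it's just 'git bisect start', mark one good and one bad commit, and 'git bisect run ./test.sh' walks the rest of the history on its own.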
I'm curious whether you took advantage of this function, or whether you ended up recreating it (or most of it) in your own code.
what was it lacking? (so that I can pass it on to the git developers as a possible enhancement)
how do you handle the case where a kernel picked by the bisect can't compile, crashes on boot, etc.? (this is one thing that git bisect run does have a mechanism to handle)
what do you do if new compile options appear as you bisect?
one extreme case of this, a year or so ago, was when a bunch of compile options were moved into a submenu, with the menu itself needing to be selected before the options under it would work. (this broke a lot of people's processes, and as far as I know no good solution was ever found)
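The usual partial workaround is to auto-accept the defaults for any options that appeared since the .config was written (it doesn't help with the submenu case, since the default won't switch the new menu on):

    # answer every new kernel config question with its default
    yes "" | make oldconfig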
please do not take this the wrong way; I am not trying to attack you for building this feature. I am just trying to point out land mines that other people have discovered doing this, so that you can work on fixing them before they blow up on you.
Bisection is CM-agnostic; hell, you don't even need CM. You just need the following:
- An ordered list with identifiers
- A way to set up a system based on the identifier
- Something to run that generates a quantitative result
- A fulcrum (my term) that you want to detect
Assuming that the ordered list has a single transition, you can do all sorts of funky things.
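A minimal sketch of that recipe in shell, assuming integer results, a single transition with line 1 of the list on the passing side, and placeholder setup_system/measure commands standing in for your actual rig:

    #!/bin/sh
    # ids: the ordered list of identifiers, one per line
    # setup_system <id>: configure the system for that identifier
    # measure: print a quantitative result to compare against the fulcrum
    FULCRUM=100               # placeholder threshold
    lo=1
    hi=$(wc -l < ids)
    while [ $((hi - lo)) -gt 1 ]; do
        mid=$(( (lo + hi) / 2 ))
        setup_system "$(sed -n "${mid}p" ids)"
        if [ "$(measure)" -ge "$FULCRUM" ]; then
            lo=$mid           # still on the near side of the transition
        else
            hi=$mid           # past the transition
        fi
    done
    echo "transition between $(sed -n "${lo}p" ids) and $(sed -n "${hi}p" ids)"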
Determine optimum cluster size for a filesystem
- Cluster size (512,1k,2k,4k,8k)
- PTS doing a benchmark of some sort
- A performance value you can't go outside of
Somewhat contrived, but if you have hard performance criteria and want to balance them against size, the above would work: just set it up, and a bisection will tell you which cluster size meets your requirement.
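Concretely, plugging into the sketch above (the mkfs invocation, device, and benchmark command are all placeholders; the right cluster-size flag depends on the filesystem, and PTS would slot in where the benchmark runs):

    printf '%s\n' 512 1024 2048 4096 8192 > ids

    setup_system() {
        umount /mnt/test 2>/dev/null
        mkfs -t somefs -b "$1" /dev/sdb1    # placeholder fs and device
        mount /dev/sdb1 /mnt/test
    }

    measure() {
        run-benchmark /mnt/test             # placeholder: print one number
    }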
Work out when a driver slowed down a 2D operation
- Driver releases (CAT 9.1, 9.2, 9.3)
- curl to download, script to install, reboot
- PTS doing a benchmark of some sort
- A known before and after value for a benchmark
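The same loop with a different setup_system (the URL and installer flags are invented for the example, and the reboot means the bisect state would have to be saved and restored across restarts, which the sketch glosses over):

    printf '%s\n' 9.1 9.2 9.3 > ids

    setup_system() {
        curl -O "http://example.com/drivers/catalyst-$1.run"   # placeholder URL
        sh "catalyst-$1.run" --install                         # placeholder flags
        reboot
    }

    measure() {
        run-2d-benchmark    # placeholder: PTS 2D test printing one number
    }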
Of course defect tracking through bisection is the easiest to grok, and is usually the best value for a developer's time :).
The "setup" (i.e. download, build, install) stage can be rigged to do different things for different builds, e.g.: if after this commit, set up this way; otherwise, set up that way.
It does bring in the danger that you end up committing the cardinal sin of modifying two variables at once (the setup and the commit point), but in some cases you can't avoid it.
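In a git-based setup the script can at least ask git which side of the change it is on; a hedged sketch, where SWITCH_COMMIT and the two make targets are placeholders:

    # is the commit being tested a descendant of the build-system change?
    if git merge-base --is-ancestor "$SWITCH_COMMIT" HEAD; then
        make new-style    # placeholder: post-change setup
    else
        make old-style    # placeholder: pre-change setup
    fi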
Autonomous (btw, why not "automatic"?) regression testing is super useful.
How about extending it to regression-test Wine with a set of Windows programs? This could at least test for crashes. You could add automatic screenshots to get some of the functionality testing done too.
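A rough sketch of the crash-plus-screenshot idea using Xvfb and ImageMagick (notepad.exe stands in for the program under test, and reference.png is a hypothetical known-good capture):

    # run the Windows program on a scratch X display
    Xvfb :99 &
    xvfb_pid=$!
    DISPLAY=:99 wine notepad.exe &    # placeholder program under test
    app_pid=$!
    sleep 10                          # crude: give it time to draw something

    # if the process is already gone, count that as a crash
    kill -0 $app_pid 2>/dev/null || echo "crashed before the screenshot"

    # capture the screen and diff it against the known-good reference
    DISPLAY=:99 import -window root shot.png
    compare -metric AE shot.png reference.png diff.png

    kill $app_pid $xvfb_pid 2>/dev/null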