The source of the problem is quite obvious to anyone looking at the benchmarks:
The person doing the test screwed up.
The tests all need to be run again. You need to stop citing erroneous, spurious results until you can explain -exactly- what happened. No guessing, no conjecture, and certainly no blind acceptance.
Do the tests properly. Stop using the obviously faulty results. Stop. Really. Stop embarrassing the community you supposedly represent by showing that new distributions are half as fast as older ones.
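For what it's worth, here is a minimal sketch of what "properly" means: run the workload several times, fail loudly on errors instead of silently recording garbage, and report the spread alongside the mean so a 2x regression either reproduces cleanly or exposes itself as noise. The command and run count below are placeholders, not the setup from the actual benchmarks being criticized.

    import statistics
    import subprocess
    import time

    # Hypothetical workload; substitute the real benchmark command.
    CMD = ["./run_benchmark.sh"]
    RUNS = 10

    samples = []
    for _ in range(RUNS):
        start = time.perf_counter()
        # check=True aborts on failure rather than timing a broken run
        subprocess.run(CMD, check=True)
        samples.append(time.perf_counter() - start)

    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    print(f"runs={RUNS} mean={mean:.3f}s stdev={stdev:.3f}s")
    print("raw samples:", " ".join(f"{s:.3f}" for s in samples))
    # A real 2x slowdown should dwarf the stdev; if it doesn't,
    # the result is noise and the setup needs explaining first.

If a "half as fast" number survives that, you have something worth publishing. If it doesn't, you have your explanation.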
It's a problem seen throughout the OSS community: outside a handful of major projects, the kids don't want to finish their work. They don't want to debug, they don't want to write documentation, and they don't want to do proper benchmarking.