Why Software Defaults Are Important & Benchmarked


  • #31
    Well, Michael's approach isn't bad per se. The problem is that he tests different things than the ones he mentions.

    For example, "How does the BKL removal affect performance?" should be "Does the BKL removal influence everyday operations?", then jump to the conclusion "Not really", and then maybe a discussion for a little more depth, such as "While it doesn't have any real effect on performance, it does make this do that better or faster or worse."



    • #32
      Originally posted by mtippett View Post
      Incorrect. Benchmarks are a measure of a system. If the benchmarks are capped, there are other measures to use. Power consumption, sound, CPU or GPU utilization.

      FPS is _not_ the only measure.
      From THAT point of view, I absolutely agree!

      But if you're measuring pure performance, then capping the output to 60Hz is really stupid.
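For raw-performance runs, the cap can usually be lifted by disabling VSync through driver environment variables rather than in-game settings. A hedged sketch (the variable names are driver-specific, and whether they are honored depends on the driver version):

```shell
# Mesa/open-source drivers commonly honor vblank_mode:
export vblank_mode=0

# The NVIDIA binary driver uses its own variable:
export __GL_SYNC_TO_VBLANK=0

# Launch the benchmark from this same shell so it inherits the settings,
# e.g. openarena. Here we just confirm what was set:
echo "vblank_mode=$vblank_mode __GL_SYNC_TO_VBLANK=$__GL_SYNC_TO_VBLANK"
```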

      You're completely right that there are other important factors to benchmark, like power consumption, durability, noise, etc.



      • #33
        Originally posted by V!NCENT View Post
        Well, Michael's approach isn't bad per se. The problem is that he tests different things than the ones he mentions.

        For example, "How does the BKL removal affect performance?" should be "Does the BKL removal influence everyday operations?", then jump to the conclusion "Not really", and then maybe a discussion for a little more depth, such as "While it doesn't have any real effect on performance, it does make this do that better or faster or worse."
        Remember that the lay person is really getting the hype from the community and trying to parse that. The BKL has long been pushed as a problem for scalability and performance - with minimal details below that. Michael did put it to the test, and more or less as expected showed that BKL is a performance non-event. As David Airlie posted in the forum for that article, the BKL is a non-event due to work that has been done to get around it over the last few years.

        The assertion that removal of the BKL is going to have no meaningful impact for most people is a fair statement.

        Determining where the BKL has an impact and showing the performance delta there is a completely different article, but I'd expect there would be forum posts of "that's all well and good, but it's irrelevant to me". Again, it all depends on your perspective and the implicit questions you are looking to have answered in articles.

        PTS has lowered the bar to doing repeatable benchmarking, and OpenBenchmarking has created a collaborative forum around repeatable results. Anyone can look to show how it should be done: go forth and benchmark! (This is a general call, not something pointed at you, V!ncent.)



        • #34
          Originally posted by mtippett View Post
          The assertion that removal of the BKL is going to have no meaningful impact for most people is a fair statement.
          I'll say upfront that I'm on the "balanced" team: I do think that defaults are what matter on users' machines.
          On the other hand, in many benchmarks, tweaking (or not tweaking) leads to wrong conclusions.
          Here are some hypothetical cases, but they mirror many articles.
          Statement 1: An NVidia 8800GTX gives 400 FPS in OpenArena with the binary driver and 60 FPS with the open-source driver. This may lead to the benchmark "conclusion" that the NVidia driver is about 6.5 times faster than the classic one, when it is clear that the VSync option is enabled in the second case. Disabling it in the OSS driver would probably give 240 FPS or whatever, and would yield a proper measurement in a raw performance benchmark.
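The distortion from a frame cap is just arithmetic, and can be sketched with the hypothetical numbers from the example above (the exact capped ratio is 400/60, roughly 6.7x, a bit above the quoted 6.5x):

```python
# Hypothetical FPS figures: the binary driver renders uncapped, while the
# open-source driver is VSync-capped at the 60 Hz display refresh rate.
binary_fps = 400       # uncapped measurement
oss_fps_capped = 60    # capped at the refresh rate
oss_fps_uncapped = 240 # hypothetical result with VSync disabled

# Naively comparing against the capped number wildly inflates the gap...
naive_ratio = binary_fps / oss_fps_capped

# ...while comparing uncapped numbers gives the real raw-performance delta.
real_ratio = binary_fps / oss_fps_uncapped

print(f"naive: {naive_ratio:.1f}x, real: {real_ratio:.1f}x")
```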
          Statement 2: A compressed filesystem is very slow compared with a normal disk on burst writes, works decently in threaded writes and in reads, and is much faster when reading a zero-filled file. This should *always* be the case on a fast CPU, no matter the FS: compression adds extra CPU overhead for compressing and decompressing, but reduces disk usage (depending on file content). Those benchmarks also affect compilation times (because of the CPU spent on compression/decompression). If a paragraph explained how things work, the benchmarks would not be meaningless; they would give a measure of the CPU-usage impact.
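Why zero-filled files behave so differently can be seen with a minimal sketch, using Python's zlib as a stand-in for whatever compressor the filesystem actually uses:

```python
import os
import zlib

SIZE = 1 << 20  # 1 MiB test buffers

zero_data = bytes(SIZE)         # a zero-filled "file"
random_data = os.urandom(SIZE)  # incompressible "file" content

zero_compressed = zlib.compress(zero_data)
random_compressed = zlib.compress(random_data)

# Zeros collapse to almost nothing, so reads and writes move far less data
# to and from the disk; random data barely shrinks, so you pay the CPU cost
# of compression with no I/O savings.
print(f"zeros:  {len(zero_compressed)} bytes compressed")
print(f"random: {len(random_compressed)} bytes compressed")
```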
          Statement 3: What does GCC/LLVM AVX support mean? It depends a lot on the "nature" of the application. Those instructions are for parallel programming, somewhat like CUDA. A compiler itself may be optimized in the range of 1%, an application like Gimp may gain around 20% in (some) filters, and highly parallel scientific computations may see close to a 100% speedup. When you read GCC-related articles, the compilers mostly appear to be just a string of regressions, and in at least two benchmarks they show anomalies. Also, compilers are fairly mature today, so a 100% speedup (without hardware support like multi-core or AVX, just from applications being rewritten to support them) is fairly unattainable.
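The application-dependent speedups described above are essentially Amdahl's law: the overall gain is bounded by the fraction of the work the new instructions can touch. A sketch with illustrative, assumed numbers (the vectorizable fractions and the 8x AVX factor are not from any measurement):

```python
def amdahl_speedup(parallel_fraction: float, factor: float) -> float:
    """Overall speedup when only a fraction of the work is sped up by `factor`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / factor)

# Assume the vectorizable portion itself runs 8x faster with AVX (illustrative).
AVX_FACTOR = 8.0

workloads = {
    "typical compiled app (~1% vectorizable)": 0.01,
    "Gimp filter (~20% vectorizable)": 0.20,
    "scientific kernel (~99% vectorizable)": 0.99,
}

for name, fraction in workloads.items():
    print(f"{name}: {amdahl_speedup(fraction, AVX_FACTOR):.2f}x overall")
```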
          So I think that when people get frustrated, they complain about statements that appear uninformed, and a statement up front that sets expectations based on how things work is (I hope and think) the reason people will complain less.
