DragonFlyBSD 5.4 & FreeBSD 12.0 Performance Benchmarks, Comparison Against Linux


    Phoronix: DragonFlyBSD 5.4 & FreeBSD 12.0 Performance Benchmarks, Comparison Against Linux

    Coincidentally, the DragonFlyBSD 5.4 and FreeBSD 12.0 releases landed within a few days of each other, so for an interesting round of benchmarking here is a look at DragonFlyBSD 5.4 vs. 5.2.2 and FreeBSD 12.0 vs. 11.2 on the same hardware, as well as a comparison of those BSD results against Ubuntu 18.04.1 LTS, Clear Linux, and CentOS 7 for some Linux baseline figures.


  • #2
    I wonder what influence the development environment of the source code has on benchmark results. To clarify: most of the software used is probably developed primarily on Linux distributions, so optimization work would naturally be tailored toward what helps most on Linux rather than on the BSDs (or Solaris, Windows, OS X, etc.), which may result in more benchmark wins for the Linux distributions than for the various BSDs. This becomes particularly obvious when you dig down into code with various paths containing "GCCisms" and "Linuxisms" - syntax that is GCC- or Linux-specific. I wonder if there's even any way to properly quantify such biases in outcomes. Perhaps by timing the latency of common system calls and code paths on the various OSes and comparing the results as a baseline bias analysis - a rough sketch of that idea follows at the end of this post.

    This may not matter as much to the end user... right tool for the job and all that. But it could be useful to know how much of a performance difference is actually down to the underlying OS and how much is due to a lack of port completeness and OS-specific support.

    Note to Michael: I'm not requesting such numbers. I'm just wondering out loud about cross-platform performance biases between OSes in general and putting those thoughts out for discussion.
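
    To make that concrete, here is a minimal, hypothetical sketch of the kind of per-OS probe I mean (the call being timed and the iteration count are just assumptions): it measures the average cost of clock_gettime() in a tight loop, so the same source built with the same compiler and flags on Linux, FreeBSD and DragonFlyBSD could be compared. Note that on some systems this call is serviced from userspace (e.g. Linux's vDSO) rather than as a real syscall, which is itself part of the cross-OS difference being measured.

    /* Hypothetical sketch only: average cost of a common timing call, for
     * comparing per-call overhead across OSes built with identical flags.
     * Build with e.g.: cc -O2 call_cost.c -o call_cost */
    #include <stdio.h>
    #include <time.h>

    #define ITERATIONS 10000000L   /* arbitrary; large enough to average out noise */

    int main(void)
    {
        struct timespec start, end, scratch;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < ITERATIONS; i++) {
            /* The call under test; any other hot-path call could be substituted.
             * On Linux this is often a vDSO call rather than a true syscall. */
            clock_gettime(CLOCK_MONOTONIC, &scratch);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        double elapsed_ns = (end.tv_sec - start.tv_sec) * 1e9
                          + (double)(end.tv_nsec - start.tv_nsec);
        printf("average cost per call: %.1f ns\n", elapsed_ns / ITERATIONS);
        return 0;
    }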

    • #3
      Originally posted by stormcrow
      I wonder what influence the development environment of the source code has on benchmark results. [...]
      This could be a factor, but it doesn't make the comparisons biased or invalid. A benchmark like this is useful to the end user if it answers the question "what will I get if I install Linux, or BSD?" And if it turns out that these apps run faster on Linux, whatever the reason, then that's good information, because ultimately most users want to know how well their software will run when they simply install it and use it, not how far any given OS can be tuned or whether the software could be optimised better on some platforms.

      • #4
        The maximum speed of the storage device really should be included as a way to show whether the benchmarks are meaningful (a rough way to measure that is sketched at the end of this post). I consider compilebench to be utterly useless as a benchmark because it does not represent a realistic workload: it would only be realistic if the CPU compiled software instantaneously, which is about the most unrealistic thing you could possibly benchmark.

        Also, it would be interesting to see gaming benchmarks. FreeBSD can run Steam and Steam games via its Linux emulation support.
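
        As a purely illustrative sketch of the first point (nothing from the article; the 1 MiB buffer size and the usage are my own assumptions), sequential read throughput of the test file or device could be recorded alongside the filesystem results like this. If the target is already in the page cache it measures the cache rather than the disk, so it should be run against an uncached file or the raw device.

        /* Hypothetical sketch: sequential read throughput of a large file or raw
         * device, as a rough ceiling for storage-bound benchmark results.
         * Usage: ./readspeed /path/to/large_file_or_device */
        #include <fcntl.h>
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        #define BUF_SIZE (1 << 20)   /* 1 MiB per read; arbitrary assumption */

        static char buf[BUF_SIZE];

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <file-or-device>\n", argv[0]);
                return 1;
            }

            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) {
                perror("open");
                return 1;
            }

            struct timespec start, end;
            long long total = 0;
            ssize_t n;

            clock_gettime(CLOCK_MONOTONIC, &start);
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                total += n;
            clock_gettime(CLOCK_MONOTONIC, &end);
            close(fd);

            double secs = (end.tv_sec - start.tv_sec)
                        + (end.tv_nsec - start.tv_nsec) / 1e9;
            printf("read %.1f MiB in %.2f s -> %.1f MiB/s\n",
                   total / 1048576.0, secs, total / 1048576.0 / secs);
            return 0;
        }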
