DragonFlyBSD's Kernel Optimizations Are Paying Off - 3 BSDs & 5 Linux OS Benchmarks On Threadripper

  • #11
    Originally posted by stormcrow:
    I simply disagree with FreeBSD's problematic stance on conservative compatibility at the expense of the security of the entire platform.
    Linux is in the same boat but it's even worse.



    • #12
      Originally posted by stormcrow:

      No I haven't missed the point of HardenedBSD at all. In fact I purposely pointed out why it exists, which you apparently ignored. I simply disagree with FreeBSD's problematic stance on conservative compatibility at the expense of the security of the entire platform.
      I didn't ignore it; I specifically addressed it: 'innovative' security.

      By that statement, I take it you take exception to EVERY Linux distro then?

      The statement remains true: FreeBSD chose not to be as aggressive/innovative as HardenedBSD wanted, so they forked. Disagree all you want; it doesn't change the fact that they are correct and you are wrong. Why should FreeBSD, the basis for many other releases, suddenly become the radical innovator two developers want? Are you crazy?
      Of course, the implication that FreeBSD is somehow insecure because it doesn't adopt HardenedBSD's mantra is equally specious.
      Last edited by Bsdisbetter; 01 June 2019, 07:48 PM.



      • #13
        Not bad... mostly what I expected. In particular, that slight regression in pgbench: I saw it in my own testing, but what might not be apparent is that the scheduler heuristics were actually broken for a long time in 5.4 and resulted in horrendous pgbench numbers (half of what they were before). When I fixed it relatively recently in 5.5, I still couldn't get back to the original numbers, though I was able to get very close. pgbench is extremely sensitive to scheduler heuristics. For a same-machine client and server, if a client/server pair is not scheduled to the same CCX, a lot of performance is lost. Under medium loads, if the client and server are not scheduled to sibling hyper-thread pairs, performance is lost. And under the heaviest loads (say, 128 clients / 128 servers), if the client and server are not scheduled to the same logical CPU, performance is lost, because under extreme loads being able to avoid the sleep/wakeup IPIs becomes extremely important.
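The load-dependent pairing heuristic described above can be sketched as a toy placement function. Everything here is my own illustration, not scheduler code: it assumes a hypothetical topology in which logical CPUs 2n and 2n+1 are hyper-thread siblings and each CCX groups 8 consecutive logical CPUs.

```python
# Toy sketch of the load-dependent client/server placement heuristic.
# Hypothetical topology: logical CPUs 2n and 2n+1 are hyper-thread
# siblings, and each CCX groups 8 consecutive logical CPUs.

LOGICAL_CPUS = 32
SMT_WIDTH = 2    # logical CPUs per physical core
CCX_WIDTH = 8    # logical CPUs per CCX

def place_pair(pair_id, load):
    """Return (client_cpu, server_cpu) for one pgbench client/server pair."""
    if load == "heavy":
        # Heaviest loads: share one logical CPU so the sleep/wakeup
        # IPIs between client and server can be avoided entirely.
        cpu = pair_id % LOGICAL_CPUS
        return cpu, cpu
    if load == "medium":
        # Medium loads: put the pair on sibling hyper-threads of one core.
        core = pair_id % (LOGICAL_CPUS // SMT_WIDTH)
        return core * SMT_WIDTH, core * SMT_WIDTH + 1
    # Light loads: it is enough that both land in the same CCX.
    ccx = pair_id % (LOGICAL_CPUS // CCX_WIDTH)
    base = ccx * CCX_WIDTH
    return base, base + CCX_WIDTH - 1
```

The three branches mirror the three regimes above: same CCX under light load, sibling hyper-threads under medium load, and a shared logical CPU under the heaviest load.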

        The OpenMP stuff has always had problems on the BSDs. I believe this issue comes down to how individual page protections are handled. In Linux, if I understand the code properly, they are just flipping protection bits in the terminal PTEs, so the overhead for sporadic page protection strewn around the shared memory is low. On the BSDs, including DragonFly, mprotect() calls have a certain degree of kernel structural overhead in the vm_map/vm_map_entry handling code that makes sparse/sporadic page protection more expensive. At least we were able to remove the per-PTE 'struct pv_entry' overhead in the kernel, though.
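To make the sparse-protection pattern concrete, here is a small user-space sketch (my own illustration, not code from any of these kernels) that flips protection on scattered pages of an anonymous mapping via the libc `mprotect()` call. Each call is a separate syscall, and on the BSDs each one can also split or clip a vm_map_entry, which is where the extra structural overhead lives.

```python
# Illustration only: protect every `stride`-th page of an anonymous
# mapping -- the sparse/sporadic pattern discussed above. Each
# mprotect() below is a separate syscall touching a single page.
import ctypes
import ctypes.util
import mmap

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
PAGE = mmap.PAGESIZE
PROT_READ, PROT_WRITE = 0x1, 0x2  # POSIX values on Linux and the BSDs

def sparse_protect(npages, stride):
    """Flip every stride-th page to read-only; return pages changed."""
    buf = mmap.mmap(-1, npages * PAGE)  # anonymous, page-aligned mapping
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    changed = 0
    for i in range(0, npages, stride):
        if libc.mprotect(ctypes.c_void_p(addr + i * PAGE),
                         PAGE, PROT_READ) == 0:
            changed += 1
    # Restore read/write in one call so the mapping is usable again.
    libc.mprotect(ctypes.c_void_p(addr), npages * PAGE,
                  PROT_READ | PROT_WRITE)
    return changed

print(sparse_protect(64, 4))  # 16 single-page protection changes
```

On a kernel that only flips PTE bits, the per-call cost stays roughly flat; where each call manipulates map-entry structures, the cost of this scattered pattern grows with the number of protection islands.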

        The Java benches tend to be determined by the memory allocator in libc. We did testing a few years ago, and basically if we cached enormous amounts of memory we could get good scores (this is why FreeBSD does fairly well, and I believe also why Linux does very well)... but there is a huge cost to doing that, because an enormous amount of memory winds up being wasted. If you happen to be running a Java workload that needs a lot of memory, or one that is sharing resources with other applications on the machine, the whole thing can bog down and implode. So we opted to cache less free memory in libc (though we have opened it up a bit in the last year).
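The trade-off is easy to sketch with a toy free-list allocator: the cache cap below stands in for the libc policy being described (the names and numbers are mine, purely illustrative).

```python
# Toy sketch of the libc-cache trade-off described above: a capped
# free-list of chunks. A large cap means fast reuse on benchmarks but
# memory held back from the rest of the system; a small cap returns
# memory sooner at the cost of more trips to the kernel.
class ToyAllocator:
    def __init__(self, cache_cap):
        self.cache_cap = cache_cap   # max chunks kept after free()
        self.cache = []              # cached, reusable chunks
        self.sys_allocs = 0          # stand-in for syscalls to the kernel

    def alloc(self):
        if self.cache:
            return self.cache.pop()  # fast path: reuse a cached chunk
        self.sys_allocs += 1         # slow path: fetch fresh memory
        return bytearray(4096)

    def free(self, chunk):
        if len(self.cache) < self.cache_cap:
            self.cache.append(chunk)  # keep it cached (inflates RSS)
        # else: drop it, i.e. "return it to the kernel"

def churn(alloc, rounds, batch):
    """Allocate and free `batch` chunks, `rounds` times over."""
    for _ in range(rounds):
        chunks = [alloc.alloc() for _ in range(batch)]
        for c in chunks:
            alloc.free(c)
    return alloc.sys_allocs

# A big cache absorbs the churn; a tiny one keeps hitting the kernel.
big = churn(ToyAllocator(cache_cap=64), rounds=10, batch=32)
small = churn(ToyAllocator(cache_cap=2), rounds=10, batch=32)
```

With the big cache the benchmark loop only pays for the first round; with the small cache nearly every round goes back to the kernel, which is the score difference the post describes, bought at the price of idle cached memory.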

        The compiler benches seem kinda ridiculous, since the same compiler configuration is not being compared. There are trade-offs there too... different projects choose defaults that focus on different levels of safety and robustness, so there isn't much point in benchmarking the differences. Similarly, benchmarking predominantly user-space code is not really an operating-system test, just a test of the default 'cc'. And sometimes those bench programs get hung up in Linux-specific optimizations (literally #ifdef'd code) that make them unreasonable on other platforms. Care must be taken.

        -Matt
