Windows Server 2016 vs. FreeBSD 11.2 vs. 8 Linux Distributions Performance

  • #21
    Originally posted by Michael_S View Post
    I'm thrilled Windows consistently takes a beating, but surprised too. I wonder what Microsoft is doing wrong, technically speaking (without regard to their evil actions and business model), to perform so poorly.

    In the free time you don't have, Michael, it might be interesting to throw some kind of C# benchmark into the mix. Maybe Windows will take a lead there as the native platform for the code.
    Pure guessing here, but I would hazard that NTFS plays a big part in it.

    Comment


    • #22
      Originally posted by starshipeleven View Post
      ATTENTION PLEASE, general service announcement:

      FreeBSD's performance in these tests is very likely caused by ZFS (the default filesystem), which is a CoW filesystem and as such it does have worse performance than a journaled filesystem (unless you go out of your way to add SSD caching and such).

      Let's not start flaming FreeBSD plz, keep it classy, flame Windows server only.
      ZFS is definitely a factor, although in this case it's not that CoW is slow in general; ZFS specifically is.
      Another factor is probably the Clang compiler, which produces less efficient code than GCC in many cases.

      Comment


      • #23
        Originally posted by Weasel View Post
        On servers? Really?

        I see Windows everywhere, literally... on desktops.

        ...never once seen a server with Windows on it.

        Not doubting you but yeah.
        You need to get out more. Windows has ~33% of the server market overall and probably close to 100% in small & medium businesses.

        Comment


        • #24
          Originally posted by aht0 View Post
          You all also ignore the fact that performance is usually not even the priority; stability is. What use is a small performance advantage when you get downtime costing a million bucks an hour? Tuning for performance often means cutting corners somewhere. Can't have that when time literally means money.
          I understand that, but my impression is that, in terms of stability, Windows Server has improved dramatically since NT 3.51, yet Linux is still ahead.

          Comment


          • #25
            Originally posted by jacob View Post

            Pure guessing here, but I would hazard that NTFS plays a big part in it.
            Good point. I suppose that could be the entire problem. NTFS is slow. Git-for-Windows, which I use at work on a laptop with an SSD, has to do all sorts of caching tricks to get somewhere near the speed of git commands on my personal desktop on a spinning platter drive. And my spinning platter drive uses BTRFS for the filesystem.
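
            If anyone wants a feel for how much of that git slowness is filesystem metadata rather than raw disk speed, here is a minimal sketch, purely illustrative (the repository path is a made-up placeholder), that times the per-file lstat() walk that git status effectively performs over a working tree. Those per-file metadata calls are exactly what the Git-for-Windows caching tricks try to keep away from NTFS.

            Code:
import os
import time

# Hypothetical path; point this at any real checkout you want to measure.
TREE = os.path.expanduser("~/src/some-repo")

def stat_tree(root):
    """lstat every entry under root, roughly what 'git status' does per file."""
    count = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            try:
                os.lstat(os.path.join(dirpath, name))
                count += 1
            except OSError:
                pass  # ignore entries that vanish or are unreadable
    return count

start = time.perf_counter()
n = stat_tree(TREE)
elapsed = time.perf_counter() - start
print(f"{n} lstat calls in {elapsed:.3f}s ({n / max(elapsed, 1e-9):.0f} per second)")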

            Comment


            • #26
              Originally posted by Michael_S View Post

              Good point. I suppose that could be the entire problem. NTFS is slow. Git-for-Windows, which I use at work on a laptop with an SSD, has to do all sorts of caching tricks to get somewhere near the speed of git commands on my personal desktop on a spinning platter drive. And my spinning platter drive uses BTRFS for the filesystem.
              BTRFS is only really slow when you do random-access writes, which is not what git does. In all other usage scenarios, the CoW machinery imposes some overhead compared to a journaling FS like ext4 or NTFS, but not *that* much. It's definitely not the performance killer some people think (hint: synthetic benchmarks are not really representative here). With other tricks, like delayed allocation (which BTRFS uses) and on-the-fly compression, which can be enabled, it can actually be pretty good. I'm using it on my laptop on an SSD and couldn't be happier. My only complaint is that it seems to drain the battery quicker than ext4, but that is understandable.
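
              To put a number on the "random access writes" case, here is a rough, purely illustrative sketch (file name and sizes are arbitrary) of the workload where CoW hurts most: fsync'd random overwrites inside a pre-sized file, i.e. a database-style pattern. Run it on ext4 and on BTRFS and compare; if I recall correctly, marking the empty test file NOCOW (chattr +C) should shrink the gap considerably.

              Code:
import os
import random
import time

PATH = "cow_test.bin"            # arbitrary scratch file in the current directory
FILE_SIZE = 256 * 1024 * 1024    # 256 MiB test file
BLOCK = 4096                     # 4 KiB blocks, database-style writes
WRITES = 2000

# Pre-size the file so the random writes below land inside a fixed-size file;
# on a CoW filesystem each write allocates new blocks and updates metadata.
with open(PATH, "wb") as f:
    f.truncate(FILE_SIZE)

buf = os.urandom(BLOCK)
fd = os.open(PATH, os.O_WRONLY)
start = time.perf_counter()
for _ in range(WRITES):
    os.lseek(fd, random.randrange(FILE_SIZE // BLOCK) * BLOCK, os.SEEK_SET)
    os.write(fd, buf)
    os.fsync(fd)                 # force the data (and any CoW metadata) to disk
os.close(fd)
elapsed = time.perf_counter() - start
print(f"{WRITES} random 4 KiB fsync'd writes in {elapsed:.2f}s "
      f"({WRITES / elapsed:.0f} IOPS)")
os.remove(PATH)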

              Comment


              • #27
                Originally posted by Michael_S View Post
                I presume they run Bing and Cortana from Windows Server
                No, they aren't using Windows Server, but they aren't running Linux either.

                The servers running these services use some custom Windows kernel/OS where they removed all the random crap they don't need and optimized the kernel for the tasks it has to handle.

                While everyone else must use binary Windows releases, MS itself is free to hack around and make customized Windows versions optimized for this task or that, similar to Linux distros. The Xbox, for example, runs another customized Windows version.

                The network switches in their Azure infrastructure run Linux, though, and I think the developers said in a blog post or statement that using Linux gives them much freer rein over customization (i.e. it's easier to reach peak performance by hacking around) than working on Windows.

                Comment


                • #28
                  Originally posted by aht0 View Post
                  You all also ignore the fact that performance is usually not even the priority; stability is. What use is a small performance advantage when you get downtime costing a million bucks an hour? Tuning for performance often means cutting corners somewhere. Can't have that when time literally means money.
                  Not all servers have expensive downtime, especially web servers or jobs where many servers do the same thing.

                  That said, apart from Clear Linux, which is used as a "theoretical best-case Linux distro", all the distros tested are technically server-grade (I have my reservations about Fedora Server 28, but I'll leave it at that), so it is a fair comparison.

                  Comment


                  • #29
                    Originally posted by starshipeleven View Post
                    Not all servers have expensive downtime, especially web servers or jobs where many servers do the same thing.

                    That said, apart from Clear Linux, which is used as a "theoretical best-case Linux distro", all the distros tested are technically server-grade (I have my reservations about Fedora Server 28, but I'll leave it at that), so it is a fair comparison.
                    Yes, it depends on the situation and the organization/company. A car repair shop's web server going down is no big deal. A border guard agency's internal systems collapsing for an hour would affect a whole lot more people.

                    Comment


                    • #30
                      Originally posted by aht0 View Post
                      Yes, it depends on the situation and the organization/company. A car repair shop's web server going down is no big deal. A border guard agency's internal systems collapsing for an hour would affect a whole lot more people.
                      Not what I meant. I meant that for many jobs you have multiple servers doing the exact same thing, so if one server crashes and reboots, the service isn't disrupted and there is no costly downtime.

                      For example, if the car repair shop uses a hosting provider (which I assume is quite common for small and medium businesses), then its site is most likely replicated across the hosting provider's web-serving infrastructure: many servers answering requests behind load balancers that direct traffic to the least used server. That site is not going down unless all of their infrastructure goes offline at the same moment, and killing many racks of servers at once is no small feat.

                      In these types of jobs, losing some servers every once in a while isn't really an issue, and having better performance is more important.
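
                      For anyone who hasn't worked with such setups, here is a tiny, purely illustrative sketch (backend names invented, no real load balancer API) of the "send traffic to the least used server" idea: when one backend dies it simply drops out of the candidate list and requests keep flowing to the others.

                      Code:
import random
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    active: int = 0        # currently open connections
    healthy: bool = True

class LeastUsedBalancer:
    def __init__(self, backends):
        self.backends = backends

    def pick(self):
        """Choose the healthy backend with the fewest active connections."""
        candidates = [b for b in self.backends if b.healthy]
        if not candidates:
            raise RuntimeError("all backends are down")
        chosen = min(candidates, key=lambda b: b.active)
        chosen.active += 1
        return chosen

    def release(self, backend):
        backend.active -= 1

lb = LeastUsedBalancer([Backend("web-1"), Backend("web-2"), Backend("web-3")])
lb.backends[1].healthy = False          # one server "crashes"...
for _ in range(5):
    b = lb.pick()                       # ...but requests are still served
    print("request ->", b.name)
    if random.random() < 0.5:           # some requests finish, freeing a slot
        lb.release(b)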

                      Comment
