Windows Server 2016 vs. FreeBSD 11.2 vs. 8 Linux Distributions Performance
-
Originally posted by starshipeleven:
ATTENTION PLEASE, general service announcement:
FreeBSD's performance in these tests is very likely caused by ZFS (the default filesystem), which is a CoW filesystem and as such has worse performance than a journaled filesystem (unless you go out of your way to add SSD caching and such).
Let's not start flaming FreeBSD, please; keep it classy, flame Windows Server only.
Another factor is probably the Clang compiler, which is less efficient than GCC in many cases.
Comment
-
Originally posted by Weasel:
On servers? Really?
I see Windows everywhere, literally... on desktops.
...never once seen a server with Windows on it.
Not doubting you, but yeah.
- Likes 2
Comment
-
Originally posted by aht0:
You also all ignore the fact that performance is usually not even the priority; stability is. What use is a small performance advantage when you get downtime costing a million bucks an hour? Tuning for performance often means cutting corners somewhere. Can't have that when time literally means money.
Comment
-
Originally posted by jacob:
Pure guessing here but I would hazard that NTFS plays a big part in it.
Comment
-
Originally posted by Michael_S:
Good point. I suppose that could be the entire problem. NTFS is slow. Git-for-Windows, which I use at work on a laptop with an SSD, has to do all sorts of caching tricks to get somewhere near the speed of git commands on my personal desktop on a spinning platter drive. And my spinning platter drive uses BTRFS for the filesystem.
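For context on the "caching tricks" mentioned above: Git for Windows ships NTFS-specific workarounds that are exposed as config options. A minimal sketch, assuming `git` is on your PATH; note that `core.fscache` is specific to Git for Windows builds (other platforms' builds store the setting but ignore it), so this is illustrative rather than universal:

```shell
# Create a throwaway repo to demonstrate.
cd "$(mktemp -d)"
git init -q demo && cd demo

# Git for Windows-specific: cache lstat() results in memory to work
# around slow NTFS metadata operations (usually on by default there).
git config core.fscache true

# Cross-platform: preload the index in parallel threads, which also
# helps on high-latency filesystems.
git config core.preloadindex true

git config core.fscache   # prints: true
```

On a Linux box with BTRFS or ext4, the metadata operations are cheap enough that git needs none of this to feel fast.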
Comment
-
Originally posted by Michael_S:
I presume they run Bing and Cortana from Windows Server
The servers running these services use some custom Windows kernel/OS where they removed all the random crap they don't need and optimized the kernel for the tasks it has to perform.
While everyone else must use binary Windows releases, MS themselves are free to hack around and make customized Windows versions optimized for this task or the other, similar to Linux distros. The Xbox, for example, runs another customized Windows version.
The network switches in their Azure infrastructure run Linux, though, and I think the developers said in a blog post or statement that using Linux allows them much more free rein over customization (i.e. it's easier to reach peak performance by hacking around) than working on Windows.
- Likes 2
Comment
-
Originally posted by aht0:
You also all ignore the fact that performance is usually not even the priority; stability is. What use is a small performance advantage when you get downtime costing a million bucks an hour? Tuning for performance often means cutting corners somewhere. Can't have that when time literally means money.
That said, apart from Clear Linux, which is used as a "theoretical Linux distro best", all distros tested are technically server-grade (I have my reservations about Fedora Server 28, but I'll leave it at that), so they are a fair comparison.
- Likes 1
Comment
-
Originally posted by starshipeleven:
Not all servers have expensive downtime, especially webservers or jobs where you have many servers doing the same work.
That said, apart from Clear Linux, which is used as a "theoretical Linux distro best", all distros tested are technically server-grade (I have my reservations about Fedora Server 28, but I'll leave it at that), so they are a fair comparison.
Comment
-
Originally posted by aht0:
Yes, it depends on the situation and the organization/company. A car repair shop's web server going down is no big deal. A border guard agency's internal systems collapsing for an hour would affect a whole lot more people.
For example, if the car repair shop uses a hosting provider (which I assume is quite common for small and medium businesses), its site is most likely replicated across the hosting provider's web-serving infrastructure: many servers handling requests behind load balancers that direct traffic to the least used server. That site is not going down unless all their infrastructure goes offline at the same moment, and killing many racks of servers at once is no small feat.
In these types of jobs, losing a server every once in a while isn't really an issue, and having better performance is more important.
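The "direct traffic to the least used server" behaviour described above is essentially least-connections balancing. A toy sketch of the selection logic (hypothetical names, a model of the idea rather than any real load balancer, which would also handle health checks and failover):

```python
import random

class LeastConnBalancer:
    """Route each request to the backend with the fewest active connections."""

    def __init__(self, backends):
        # Track active connection count per backend.
        self.active = {b: 0 for b in backends}

    def acquire(self):
        # Pick a backend with the lowest active count; break ties randomly.
        low = min(self.active.values())
        candidates = [b for b, n in self.active.items() if n == low]
        backend = random.choice(candidates)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        # Called when the request finishes.
        self.active[backend] -= 1

lb = LeastConnBalancer(["web1", "web2", "web3"])
first = lb.acquire()   # all backends idle, so any may be chosen
second = lb.acquire()  # goes to one of the still-idle backends
```

Because a dead backend simply stops finishing requests and gets marked down by health checks, losing one machine just shifts its share of traffic to the rest, which is why downtime of an individual server is cheap in this setup.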
- Likes 1
Comment