Windows 10 vs. Eight Linux Distributions On The Threadripper 3970X
Originally posted by Spooktra: Here's what I would like to know, do all distros run the CPU at the same clock speed?
It could be that TLP applied some default settings that hinder maximum performance, or that TLP activated battery-saving settings because some USB device's battery made it think the machine is a laptop.
Another issue is the different CPU governors used by the different distros. As listed on the first page, some use ondemand and some use performance. Manjaro's setting strangely isn't listed; AFAIK it applies either ondemand or schedutil by default on AMD systems, not sure which one.
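If anyone wants to check what their distro actually set, the active governor is exposed through sysfs. A minimal sketch in C, assuming the standard Linux cpufreq sysfs layout (the program itself is mine, not from the article):

```c
/* Sketch, not from the thread: print the active cpufreq governor for each
 * CPU, assuming the standard Linux sysfs layout. */
#include <stdio.h>

int main(void)
{
    char path[128];
    char governor[64];

    for (int cpu = 0; ; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor", cpu);
        FILE *f = fopen(path, "r");
        if (!f)
            break; /* no more CPUs, or cpufreq not available */
        if (fgets(governor, sizeof(governor), f))
            printf("cpu%d: %s", cpu, governor); /* governor string ends in '\n' */
        fclose(f);
    }
    return 0;
}
```

Running that on each tested system would show directly whether a distro shipped ondemand, schedutil or performance.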
-
Originally posted by orangemanbad: No it doesn't. At least not on the order you're suggesting. I suggest you go back to the drawing board and actually learn how cacheline loads work. If CPU caches operated the way you're trying to suggest, there'd be literally no reason for them to even exist.
Take audio applications, for example. Some can process up to 24-bit/96kHz audio samples and as a result require significantly more memory and time to process them. This is considered a feature. Some applications don't have this feature and can only process up to 16-bit/48kHz samples. How do you propose a cache line changes that?
Last edited by sdack; 19 February 2020, 04:18 PM.
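To put rough numbers on that argument, here is a back-of-the-envelope sketch in C (mine, not from the thread) of the memory needed for one second of stereo audio at the two formats mentioned above; the working set triples regardless of how the CPU cache behaves:

```c
/* Sketch, not from the thread: bytes needed for one second of stereo audio
 * at the two sample formats discussed above (24-bit assumed packed). */
#include <stdio.h>

static long bytes_per_second(long rate_hz, int channels, int bits_per_sample)
{
    return rate_hz * channels * (bits_per_sample / 8);
}

int main(void)
{
    long hi = bytes_per_second(96000, 2, 24); /* 24-bit/96kHz */
    long lo = bytes_per_second(48000, 2, 16); /* 16-bit/48kHz */

    printf("24-bit/96kHz: %ld bytes/s\n", hi); /* 576000 */
    printf("16-bit/48kHz: %ld bytes/s\n", lo); /* 192000 */
    printf("ratio: %.1fx\n", (double)hi / lo); /* 3.0x */
    return 0;
}
```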
-
Really informative comparative benchmark. Although I run Gentoo, I can relate to the Clear Linux results due to my compilation configuration.
Although Windows 10 is starting to look better, I think it's slow compared to the Windows 95/XP era.
-
Originally posted by tildearrow: Does a Spigot server approximate the workload performed in the H2 test? If so, I will think of switching to Windows if I buy this processor.
Oh no, Windows beating Linux at x264... Does this mean I have to set it up as a Windows machine, keep using my current Linux one, transfer 4K frames at 60fps over 25GbE using RDMA, and find a way to mount my XFS drive on Windows to enjoy that FPS boost?
-
Originally posted by orangemanbad: You obviously don't understand how caches work. At all.
Features aren't only configured at run-time; many can be configured at compile-time, too. When a feature is disabled at compile-time, a compiler can leave its code out completely and optimise the remaining code more effectively. And even when a feature is compiled into an executable, it may not simply add a single code block but many code blocks throughout the entire executable. For example, a compiler may decide to inline a feature at every code location where it is used. This is something a cache has no influence on, because caches only have an effect at run-time.
A feature can also cause an application to require more memory for its data. This, too, is something a cache has no influence over. When a feature requires additional bytes per data record, say 1000 extra bytes per record, and your application manages millions of records, then a data cache has to carry those extra bytes as well.
Compilers often don't know the exact size and associativity of caches at compile-time, nor are all of a compiler's optimisations cache-aware. As a result, many applications don't perform as perfectly as one wants them to. Worse, even in cases where it would be possible, it often isn't done, because compilers are usually instructed to apply only general optimisations for a wide range of hardware.
Caches themselves are not perfect. They are finite in size, and so are their associativity and granularity. That makes them effective on average and for a large number of applications, but caches can be thrashed, and when that happens it can lead to regressions caused by a single feature, a function, a code block or even a single statement.
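To illustrate the record-size point, a small sketch of my own (the FEATURE flag and record layout are made up, not from the thread): compiled with -DFEATURE, every record grows by 1000 bytes, and a walk over a million records drags all of that through the cache whether or not the feature is ever exercised.

```c
/* Illustrative sketch, not from the thread: a compile-time feature flag that
 * grows every record. Build with -DFEATURE to enable it; the working set
 * grows by the extra bytes times N_RECORDS no matter how good the cache is,
 * because every byte of each record still has to be fetched through it. */
#include <stdio.h>
#include <stdlib.h>

#define N_RECORDS 1000000L

struct record {
    long id;
    double value;
#ifdef FEATURE
    char extra[1000]; /* the hypothetical +1000 bytes per record */
#endif
};

int main(void)
{
    /* ~16 MB without FEATURE, ~1 GB with it */
    struct record *records = calloc(N_RECORDS, sizeof(*records));
    if (!records)
        return 1;

    double sum = 0.0;
    for (long i = 0; i < N_RECORDS; i++)
        sum += records[i].value; /* walk the whole data set once */

    printf("record size: %zu bytes, total: %zu MB, sum: %f\n",
           sizeof(struct record),
           sizeof(struct record) * N_RECORDS / (1024 * 1024), sum);
    free(records);
    return 0;
}
```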
-
Originally posted by Imout0: Am I the only one who thinks that those last and first place finishers are completely irrelevant?
The overall performance is more important, but that doesn't always tell the complete story. Windows, for example, performs the worst on average, but there are moments where it was #1, or at least placed better than everything other than Clear (and Clear isn't exactly a good representative of the average Linux setup). That shouldn't be ignored; it means Linux has room for improvement.
-
Here's what I would like to know: do all distros run the CPU at the same clock speed? I am asking because with Ubuntu, depending on the governor setting I use, the clock speed will vary widely, especially during long encodes, as the processor heats up and thermally throttles.
What I'm wondering is whether Clear Linux, as a rule, simply runs the CPU flat out on all cores as often as possible, whereas other distros are more gentle on the CPU. This could mean that Clear would be great for short benchmarks but not good for long workloads or CPU longevity.
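One way to answer that for yourself is to log the reported clock while a long encode runs. A minimal sketch in C, assuming the standard cpufreq sysfs file (this is my own snippet, not something from the article): if the clock sags over the run, the system is throttling rather than holding the benchmark-friendly boost.

```c
/* Sketch, not from the thread: sample cpu0's current frequency once per
 * second via the standard cpufreq sysfs file, e.g. while an encode runs. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq";

    for (int t = 0; t < 600; t++) {           /* log for ten minutes */
        FILE *f = fopen(path, "r");
        long khz = 0;
        if (!f || fscanf(f, "%ld", &khz) != 1) {
            if (f) fclose(f);
            return 1;                          /* cpufreq not available */
        }
        fclose(f);
        printf("%4ds  %ld MHz\n", t, khz / 1000);
        sleep(1);
    }
    return 0;
}
```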
-
Originally posted by Danny3: Really interesting to see that Ubuntu 20.04 not only comes with an outdated kernel, but is also the slowest Linux distro of them all, even slower than their previous LTS.
I wonder what Canonical is doing in two years of development when most of the hard work is actually done by the kernel developers and Debian developers.
I wonder if Canonical's tight friendship with Microsoft has anything to do with this, like an artificial limitation or not enough optimisation, so that Windows 10 would not look so bad in benchmarks against Ubuntu.
Anyway, I think that Ubuntu 20.04 will be the most disappointing Ubuntu release ever.
On the other hand, and speaking from experience, performance is almost always addressed late in the product lifecycle (premature optimization being the root of all evil and all that), so it is quite possible it will improve by quite a bit in the following two months.