Linux Prepares For Next-Gen AMD CPUs With Up To 12 CCDs
Originally posted by Sonadow:
LibreOffice is still stuck on using only one CPU thread for everything except Calc, while MS Office has had proper multithreading and multicore support for who knows how long. Running LO on anything slower than an i3-grade processor is frustrating to the point of being just barely usable.

Mindblowing news, really.
Originally posted by Sonadow:
That, and the fact that much software still sucks at proper multithreading, makes the multicore race for anything above 8C16T practically pointless for general desktop computing.

And that's before considering any "public archetype" like the gamer (game plus, increasingly, streaming), the multimedia designer (file explorers, one to three instances of a music/photo/video editor), etc.
So, yeah, you probably don't benefit *enough* from going from 8 to 12 cores to justify a change, for now. But that's not a reason to dismiss the total number of cores under the assumption that only one application is active and under heavy load at any given time.
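Whether a given application actually scales beyond a few cores is easy to check before worrying about core counts: on Linux, every thread of a process appears as an entry under /proc/&lt;pid&gt;/task. A minimal sketch (the shell's own PID, $$, is used purely as a stand-in; substitute the PID of the application you want to inspect):

```shell
# Count how many threads a process is actually running.
# /proc/<pid>/task has one subdirectory per thread (Linux-specific).
pid=$$
nthreads=$(ls "/proc/$pid/task" | wc -l)
echo "PID $pid is running $nthreads thread(s)"
```

A single-threaded app will report 1 no matter how many cores the machine has, which is exactly the situation described above.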
Originally posted by Sonadow:
A 32C64T mainstream processor with >64 GB of non-ECC memory works great as a dedicated headless build machine for personal use. There is no reason to be limited to HEDT and server hardware for such configurations.

Anytime you're talking content creation (like compiling code), you want stability and reliability first and foremost, so ECC is a must. For content consumption, this is less important. You will not see a 32-core consumer CPU any time soon, for the simple reason that content-consumption (consumer) applications have no need for that many cores, and consumer software typically doesn't scale well across cores anyway. The price point of consumer-grade software is such that developers don't have the budget to implement proper multithreading. See the vast majority of PC games as an example.
Last edited by torsionbar28; 24 November 2021, 09:29 AM.
Originally posted by Sonadow:
...while MS Office has had proper multithreading and multicore support for who knows how long.

And I also have a hard time understanding what parts outside of Excel/Calc would benefit from using multiple threads.
Originally posted by F.Ultra:
They have? AFAIK the entire office suite except Excel (since Excel 2007) is single-threaded. Microsoft, in their Office developer docs: "Code in Office solutions always runs on the main UI thread."
And I also have a hard time understanding what parts outside of Excel/Calc would benefit from using multiple threads.
Originally posted by Sonadow:
cmake, make and ninja still do not know how to automatically scale jobs according to the number of CPU threads available and always default to building on a single thread unless -j or --parallel is passed to the build; this never happens when building .sln projects in Visual Studio, where the build is always spread across all available threads by default. rustc claims to be multithreaded, and yet it only occupies one CPU thread when invoked in a Firefox compile.

Note that configuring modern versions of top to actually show useful information takes some time; the default settings, with whatever you use for init sitting on the top line, are somewhat useless for watching what is actually running.
Back in the day, normal packages used up to 100% of CPU per thread (according to top). With Rust, some of the build can use all available CPUs on one item: top at times shows close to 800% used by one rust job on the 3400G, but more often several rust jobs using 100%, or a mix of one bigger rust job and some smaller ones. That is for Firefox; I don't know where you get your information that Rust only uses one CPU thread in a Firefox compile.
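For reference, the job-count flags argued about above can be wired to the machine's thread count: nproc reports it, and make and cmake accept an explicit job count (ninja, contrary to the quoted claim, already parallelizes by default, at roughly cores + 2 jobs). A sketch, assuming a GNU/Linux shell; the build directory name is just an example:

```shell
# Number of hardware threads available to this shell.
jobs=$(nproc)
echo "parallel build jobs: $jobs"

# Per-invocation:
#   make -j"$jobs"
#   cmake --build build --parallel "$jobs"
# Or set once, so later make/cmake invocations pick it up automatically:
export MAKEFLAGS="-j$jobs"
export CMAKE_BUILD_PARALLEL_LEVEL="$jobs"
# ninja needs no flag; it runs jobs in parallel by default.
```

MAKEFLAGS is honored by GNU make, and CMAKE_BUILD_PARALLEL_LEVEL by CMake 3.12 and later, so exporting both from a shell profile gives the "automatic scaling" the quoted post asks for.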
Originally posted by torsionbar28:
Why wouldn't you want ECC? Given how inexpensive the HEDT and server stuff is these days, I see no reason to step down to consumer-grade stuff.
Originally posted by Sonadow:
LibreOffice is still stuck on using only one CPU thread for everything except Calc, while MS Office has had proper multithreading and multicore support for who knows how long. Running LO on anything slower than an i3-grade processor is frustrating to the point of being just barely usable.

I'm intrigued that these sorts of WYSIWYG document editors have been with us since the mid-to-late '80s (on the Mac), running on CPUs with literally a thousandth of today's single-thread performance, or less. And yet you do still sometimes see performance problems (even in the vaunted MS Office suite). I know it's not exactly fair to compare '80s-era word processors with their modern descendants, but it does make you think.
Originally posted by Citan:
So, yeah, you probably don't benefit *enough* from going from 8 to 12 cores to justify a change, for now. But that's not a reason to dismiss the total number of cores under the assumption that only one application is active and under heavy load at any given time.

But the main thing we need is better OS support, so that the OS effectively maintains and manages application-level work queues instead of each process having its own pool of worker threads. You don't want a worker thread to start on a task only to get preempted right after and not run again until long in the future. That bottlenecks any attempt at multi-core scaling, except when a single app gets the entire CPU to itself and none of the libraries it uses have their own private worker-thread pools.
Originally posted by Sonadow:
And why would someone building FOSS for personal use as a hobby require ECC?

Where I consider ECC a must is for work on high-value data and in servers.
Speaking specifically of FOSS, I'd say anyone building packages for redistribution should consider the cost to downstream users if they produce a bad build due to memory errors. For that reason, it's probably also a good idea to use a filesystem with checksums, like Btrfs. That said, it seems most distros have their own build service, which presumably runs on appropriately spec'd server hardware.