BTW, "make -jN" is a feature I use many times each day. Sure "make -j32" is quick, but plain old default single-threaded "make -j1" stops much closer to the actual command-line compiler error. I usually do "make -j32; make" ftw.
Linux Prepares For Next-Gen AMD CPUs With Up To 12 CCDs
-
Originally posted by pipe13 View Post
BTW, "make -jN" is a feature I use many times each day. Sure "make -j32" is quick, but plain old default single-threaded "make -j1" stops much closer to the actual command-line compiler error. I usually do "make -j32; make" ftw.
And, at least when CMake is used to drive Ninja, it has the nice property of echoing the failed command line + errors at the end. This eliminates the need to use a serial build or go searching through a log file to find the cause of a failed build.
Prior to using CMake, the GNU Make buildsystem I wrote used extensive metaprogramming techniques to achieve most of the same properties as CMake (e.g. public/private dependencies, with public dependencies' include paths being automatically inherited), the advantage being that it's single-pass. The main benefit we got from switching to CMake is that it's standardized and fairly well-documented vs. my ad hoc buildsystem.
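For anyone curious, the general shape of that kind of $(eval $(call ...)) metaprogramming looks roughly like this. This is only an illustrative sketch with made-up variable names, not the actual buildsystem: each module declares its public include dirs and public deps, and a per-module template computes the inherited include paths in one pass (modules just have to be declared before their dependents).

# Per-module template: own public includes plus the already-resolved
# includes of each public dependency.
define declare_module
$(1)_ALL_INCS := $$($(1)_PUBLIC_INCS) $$(foreach d,$$($(1)_PUBLIC_DEPS),$$($$(d)_ALL_INCS))
$(1)_CPPFLAGS := $$(addprefix -I,$$($(1)_ALL_INCS))
endef

libfoo_PUBLIC_INCS := libfoo/include
$(eval $(call declare_module,libfoo))

libbar_PUBLIC_INCS := libbar/include
libbar_PUBLIC_DEPS := libfoo
$(eval $(call declare_module,libbar))

# libbar_CPPFLAGS now expands to: -Ilibbar/include -Ilibfoo/include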
Something I wish Ninja would do is record the previous (user + sys) time to perform each step, so that subsequent builds of the same sources could be more optimally scheduled.
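Ninja does at least record wall-clock start/end times for every edge in .ninja_log (not user + sys, and it doesn't feed them back into scheduling). As a rough workaround, and assuming the usual v5 log format (tab-separated start ms, end ms, mtime, output path, command hash), something like this ranks the slowest steps:

# rank build edges by recorded duration; note the log accumulates across builds
awk -F'\t' 'NR > 1 { printf "%8d ms  %s\n", $2 - $1, $4 }' .ninja_log | sort -rn | head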
-
Originally posted by skeevy420 View Post
Y'all can't see the forest for the trees. All these "clouds" need lots of high-performance CPU threads. It doesn't matter if LibreOffice, Call of Duty, compilers, or anything else isn't optimized for 128 cores. What matters to them is being able to sell off some cores for some time and knowing that any task on any thread performs just as well. The end-user running poorly optimized software isn't the concern of the cloud providers. It's not their fault you didn't set up CMake or used an inferior solution during premium, paid-for time...shit, they want you to run unoptimized solutions so you have to pay for extended runtime.
On the desktop side, AMD can start selling better APUs since it's not like most non-workstation desktops need more than 6C12T. I'd go with 8C16T to mirror game consoles. Instead of giving desktops more computing cores, they can give them more graphics cores where the removed 120 computing cores would otherwise be.
Or do something similar to Intel, with an 8C16T high-performance CCD, an 8C16T low-performance CCD, and more graphics cores where the removed 112 computing cores would otherwise be.
-
Originally posted by smitty3268 View Post
It's actually the exact opposite. Power efficiency is king in most of the big data centers.
It's not about the raw price of the power, it's about the actual power and cooling systems installed in their buildings. That's the limiting factor - the more efficient the processors are, the more of them they can pack into the same building in 1 datacenter, rather than having to build a dozen different datacenters across hundreds of miles.
It's the workstation and HEDT markets that don't care about power use.
Seriously, I don't think you understand just how much better in power efficiency Raptor Lake is going to be (with the rumor of many more e-cores). Alder Lake is already great. It sports both the best IPC in the business for the p-cores and numerous efficient e-cores. AMD's approach of just adding more p-cores that are slightly weaker than Intel's won't be better in efficiency in any multi-threaded use case. It is just a matter of time until schedulers get optimized, and by then AMD's Zen 4 is going to be a dinosaur. AMD's "plan" is to just add 50% more cores and some more cache.
The reason AMD came back from the dead was not that they had the better architecture; they never had it. Ryzen has always been a me-too copycat of Intel's designs. It is just that Intel's fabs failed unexpectedly and they got stuck at 14nm for too long while AMD exploited TSMC. That gave AMD their supposed "efficiency". But this is coming to an end, I am afraid, and AMD is going back to where it belongs: the budget bin. That will teach them to price their products sky-high every time in history they have a competitive product while pretending to be the pro-consumer company.
-
Originally posted by TemplarGR View Post
E-cores of Alder Lake ARE high performance. They are the equivalent in performance of Core i10xxx.
Originally posted by TemplarGR View Post
So it makes more sense to have more of THOSE when you need 128-256 cores etc.
-
Originally posted by TemplarGR View Post
Seriously, I don't think you understand just how much better in power efficiency Raptor Lake is going to be
Alder Lake and Raptor Lake are client-only (i.e. laptops and desktops). If you buy one in the form of a Xeon E-series part (these are just rebranded desktop chips with a few extra features), you could put it in a small server, but that's a niche market. Mainstream servers use Xeon Scalable (i.e. Sapphire Rapids).
If Intel announced a true cloud CPU based on E-cores, I sure haven't heard about it. To my knowledge, they only offer Atom-branded server chips for embedded server applications, like 5G base stations and enterprise NAS boxes.
Last edited by coder; 25 November 2021, 02:38 AM.
-
Originally posted by TemplarGR View Post
That will teach them to price their products sky-high every time in history they have a competitive product while pretending to be the pro-consumer company.
They gotta make money while they can. They've always offered decent value for money, but mo' cores gonna cost mo' money. It's not their fault Intel couldn't scale up to as many cores.
P.S. when did they ever do "pretending to be the pro-consumer company"? Some people like them because underdog, but I think you're projecting.
- Likes 1
-
Originally posted by coder View Post
In terms of scalar integer IPC, only. Not in absolute performance (because they clock lower) and definitely not in floating-point or integer vector performance.
Depends on what you're doing. Vectorized workloads would benefit more from big cores, especially those supporting AVX-512.
2) Alder Lake's successor will more than likely have official AVX-512 again. They probably cut it out from Alder Lake because they weren't ready to have it enabled at the same time as e-cores. So instead of a theoretical 32 AMD p-cores, imagine you get 16 p-cores AND 64 e-cores. Even in highly vectorized workloads, this is probably going to be the most efficient solution, don't you think?
-
Originally posted by TemplarGR View Post
So, apparently, "power efficiency is king" (which is correct), but Intel having that power efficiency due to big.little is a failure cause we are all AMD fanbois here and we just have to push the red team, amirite?
It doesn't really matter on the desktop, because nobody cares if they use more power. It will matter for Raptor Lake, and it's a very big question exactly how that will look.
Intel's e-cores are really efficient compared to their p-cores. It's not nearly as impressive versus Zen 3 cores, though. How high will they be clocked on Raptor Lake, and what kind of efficiency will they get there? I have no idea. Maybe you are tuned into the Intel rumors more than I am, but I don't think there's currently much out there.
Raptor Lake isn't coming out much before Zen 4, by the way, so that is its competition. Not the current Zen 3-based Epyc systems.
AMD's approach of just adding more p-cores that are slightly weaker than Intel's won't be better in efficiency in any multi-threaded use case. It is just a matter of time until schedulers get optimized, and by then AMD's Zen 4 is going to be a dinosaur. AMD's "plan" is to just add 50% more cores and some more cache.
The reason AMD came back from the dead was not that they had the better architecture; they never had it. Ryzen has always been a me-too copycat of Intel's designs.
That will teach them to price their products sky-high every time in history they have a competitive product while pretending to be the pro-consumer company.
4 e-cores equal 1 p-core in die area. That means that instead of a 128-core Ryzen you can, in a theoretical scenario, get 512 Intel e-cores.
Sure, lower clocked and slightly lower IPC, but you get 4 times the cores. You need a heavily multi-core system, remember? Which is going to be more efficient?
Last edited by smitty3268; 25 November 2021, 03:05 AM.
- Likes 3
-
Originally posted by coder View Post
U mad, bro?
They gotta make money while they can. They've always offered decent value for money, but mo' cores gonna cost mo' money. It's not their fault Intel couldn't scale up to as many cores.
P.S. when did they ever do "pretending to be the pro-consumer company"? Some people like them because underdog, but I think you're projecting.
Intel, simply put, has the best architecture, period. Their big.little approach is going to dominate all kinds of workloads in the future, and AMD is going to -again- have to copycat Intel in order to survive.
As for Intel not having big.little in the server space, it is only a matter of time, pal. Of course it got introduced in the desktop/mobile space first; it is brand new, and even on Linux they can't fix the schedulers just yet. They wouldn't introduce something that needs much testing and refinement on the server side just yet. But don't delude yourself that Intel is not targeting the server space with this move. Also, I am pretty sure at some point Intel will introduce e-core-only CPUs to replace the Atoms.