Linus Torvalds Switches To AMD Ryzen Threadripper After 15 Years Of Intel Systems
Originally posted by torsionbar28 View Post
Ryzen is a CPU. These products are APUs. An APU is a CPU+GPU in the same package. It sounds like you don't understand the market for APUs. The added cost of a dGPU, even a low-end basic one, is significant and prohibitive in many markets. Corporate desktops, for example.
Again, being forced to buy a dGPU is not a benefit. It's a strong negative in many markets.
If you want a dGPU, there is regular Ryzen. If you want an iGPU, there are these APUs. You have the choice to pursue whatever solution suits you best. I'm not sure how having the choice can be interpreted as a negative to complain about.
Do you really think you understand the global APU market better than AMD? And how about Intel, who offers even weaker iGPUs on many of their products: did they get it wrong too? Everyone got it wrong, except for you?
Lower power requirements, small form factor, and often cheaper. If the GPU dies, the CPU likely dies as well, because the most probable cause is that cooling stopped working, which makes the failure easy to spot. Fewer power connectors that might fail. A failing dGPU causes a freeze once in a while and is difficult to investigate, especially if you are not sitting in front of the affected system for days or weeks.
Originally posted by TemplarGR View Post
1) The point is, 8 core ryzen cpus already exist, also tiny dgpus for those who just need a basic output already exist. Why introduce this product?
2) This product fits no niche at all. People can already buy 8 cores ryzens today, IIRC with better clocks and obviously cheaper. And they get the benefit of picking whatever cheap dgpu they fancy to pair with it. And they don't need a specific mobo with graphical outputs.
2) Yes, it is not for a niche, but for the majority of earth-dwellers: most of them buy or use a computer for things other than gaming (maybe just light gaming).
Originally posted by TemplarGR View Post
AMD has lost their minds if this leak is true. This means they just copied Intel on every single thing and just compete on price. So now the G parts just have a weak iGPU, good only for the UI and pixel-art gaming, just like Intel was doing all along. What is the point of getting an 8-core with such a weak iGPU? Boring, trash product; only a tiny niche might be interested in it.
I'm not sure what you are upset about - the iGPU in Renoir is at least as powerful as anything we have ever offered in an APU, and for someone wanting a more powerful GPU we sell regular Ryzen CPUs. The cost benefit from integrating a GPU comes mostly with smaller GPUs, and we have always provisioned our mainstream APUs with as much GPU performance as can work efficiently while sharing CPU memory.
Our low-end APUs have always had smaller GPUs than this, typically 2 or 3 CUs... now that is a tiny GPU.
Is it just the move from 11 CUs in Raven/Picasso to 8 faster CUs in Renoir that is concerning you? If you check reviews, you'll see that Renoir runs maybe 50% faster than Picasso in most games, so it seems to have worked out OK.
EDIT - it occurred to me that the "Graphics Cores" header might be causing confusion, since people sometimes call CUs "graphics cores". It's debatable, but IMO the closest equivalent to a "core" in the GPU world would be each of the 4 SIMDs in a Vega CU, so an 8 CU part would have 32 10-thread graphics cores, with each core having 16 FP32 ALUs, for a total of 512 SPs.
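The CU/SIMD/ALU arithmetic in the post can be sanity-checked in a few lines. This is only a sketch of the numbers as stated in the post (4 SIMDs per Vega CU, 16 FP32 ALUs per SIMD); the variable names are illustrative, not AMD terminology.

```python
# Sanity check of the CU -> SIMD -> ALU counting from the post above.
# Figures are taken from the post itself, not independently verified.
cus = 8                  # Renoir iGPU compute units
simds_per_cu = 4         # the "graphics cores" in this analogy
alus_per_simd = 16       # FP32 ALUs (lanes) per SIMD

graphics_cores = cus * simds_per_cu                  # 8 * 4 = 32
stream_processors = graphics_cores * alus_per_simd   # 32 * 16 = 512

print(graphics_cores, stream_processors)  # 32 512
```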
Anyways, bottom line is that Renoir has the fastest GPU we have ever offered in an APU, other than custom game console designs which have MUCH higher memory bandwidth than the dual DIMM channels in a typical PC.
Originally posted by TemplarGR View Post
1) The point is, 8 core ryzen cpus already exist, also tiny dgpus for those who just need a basic output already exist. Why introduce this product?
2) This product fits no niche at all. People can already buy 8 cores ryzens today, IIRC with better clocks and obviously cheaper. And they get the benefit of picking whatever cheap dgpu they fancy to pair with it. And they don't need a specific mobo with graphical outputs.
My understanding was that AM4 motherboards include graphical outputs, so no constraint there other than making sure you have a connector you like (I have seen an HDMI-only board, for example).
Last edited by bridgman; 27 May 2020, 12:27 AM.
Originally posted by Dedale View Post
Gates most probably never said that famous 640k quote.
Last edited by torsionbar28; 26 May 2020, 01:49 PM.
Originally posted by AmericanLocomotive View Post
HEDT and "Workstation" are more or less synonymous these days.
TR is in a class of its own, sort of an intermediate step between the two, or perhaps we can call it "HEDT+". There is no comparable Intel product, so it's pretty unique in that regard. TR has vastly more cores and cache than any i9, but it's not quite an EPYC. Certainly I can see the argument that TR is blurring the lines between HEDT and Workstation, i.e. the product provides overlap between these traditionally distinct market segments. But the market segments are still distinct, even if a given product overlaps them.
Originally posted by AmericanLocomotive View Post
EPYC isn't ideal for workstation use for a variety of reasons. A big one is the TDP difference between Threadripper and EPYC. TR chips have a much higher TDP of 280 W (vs 225 W), allowing them to have higher base/boost speeds.
Originally posted by AmericanLocomotive View Post
Then there is the price difference. The EPYC 7F42 costs $3,600 vs the $2,000 3970X. This is even more extreme on the 64-core side, where the 3990X is $3,990 and the 7H12 is $8,600. Even for professional workstation users, an extra $4,600 is a serious chunk of change.
Originally posted by AmericanLocomotive View Post
The selection of EPYC ATX boards is extremely limited. Many of those boards lack a lot of useful features that HEDT and workstation users can utilize, and are more "server" oriented. Server boards are a real pain to use for everyday things. For example, most server boards typically take 1-2 (or more) minutes to complete POST, which gets annoying if you frequently have to reboot for testing software/hardware, etc.
Originally posted by AmericanLocomotive View Post
I'm not sure which Xeons have a 32GB memory limit? I just checked Intel's site, and all of the "real" Xeons have at least a 1TB memory capacity, with the higher-end SKUs going up to 3TB. The embedded and low-end desktop Xeons (which fit in standard LGA1151 sockets) have a 128GB UDIMM limit, though.
Last edited by torsionbar28; 26 May 2020, 04:00 PM.
Originally posted by torsionbar28 View Post
I don't think this is true at all. If we look at the Intel side, i9 = HEDT and Xeon = workstation. Home "power users" buy i9, while professional workstations universally opt for Xeon with ECC memory. The delineation in the market segments is very clear. With AMD, Ryzen 9 = HEDT, and EPYC = workstation.
i9 CPUs are now a "consumer" CPU, available on the LGA1200 platform. Xeons are also now available on the LGA2066 platform, supporting up to 1TB of memory. Intel offers very high clocked Xeons (The W-2295 will do 4.6 GHz) on their 2066 HEDT platform. Likewise, there are high-performance "Workstation" Xeons available for LGA3647. Intel has a lot of product overlap between its platforms.
$4600 is really not a serious chunk of change in the workstation market. The previous place I worked spent around $40k per workstation for just hardware. Another $20k in software per machine.
Originally posted by discordian View Post
You want a picture of Linus giving the finger to AMD?
Originally posted by duby229 View Post
At least as far as building a Gentoo system goes, I've found that 2 GB of RAM per core seems to be the sweet spot.
Android Studio + Gradle daemon + emulator + multiple browser tabs and windows exceed 8 GB + ZRAM.
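The "2 GB per core" rule of thumb above can be sketched as a small helper that caps a parallel-build job count (make -jN) by available RAM as well as by core count. This is an illustrative sketch only: the function name and the exact 2 GB ratio come from the comment in this thread, not from any official Gentoo recommendation.

```python
def suggested_jobs(total_ram_gb: float, cpu_cores: int,
                   gb_per_job: float = 2.0) -> int:
    """Pick a make -jN value limited by both core count and RAM.

    Illustrative helper based on the '2 GB per core' rule of thumb;
    not an official Gentoo formula.
    """
    by_ram = int(total_ram_gb // gb_per_job)  # how many 2 GB jobs fit in RAM
    return max(1, min(cpu_cores, by_ram))     # never fewer than one job

# 16 GB on an 8-core machine: RAM and cores balance out at -j8
print(suggested_jobs(16, 8))   # 8
# 8 GB on a 16-core machine: RAM is the bottleneck, so -j4
print(suggested_jobs(8, 16))   # 4
```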