Linus Torvalds Switches To AMD Ryzen Threadripper After 15 Years Of Intel Systems


  • Originally posted by torsionbar28 View Post
    "640 kb ought to be enough for anybody" -- Bill Gates
    "8 gb ought to be enough for anybody" -- Vistaus
    I'm sorry Vistaus, it's a good one, but at your expense... but good.



    • Originally posted by torsionbar28 View Post
      Ryzen is a CPU. These products are APU's. An APU is CPU+GPU in the same package. It sounds like you don't understand the market for APU's. The added cost of a dGPU, even a low-end basic one, is significant and prohibitive in many markets. Corporate desktops, for example.


      Again, being forced to buy a dGPU is not a benefit. It's a strong negative in many markets.

      If you want a dGPU, there is regular Ryzen. If you want an iGPU, there are these APU's. You have the choice to pursue whatever solution suits you best. I'm not sure how having the choice can be interpreted as a negative to complain about.

      Do you really think you understand the global APU market better than AMD? And how about intel, who offers even weaker iGPU's on many of their products, did they get it wrong too? Everyone got it wrong, except for you?
      If I had to provide IT service for a company, I would really love iGPUs.
      Lower power requirements and small form factor. Often cheaper. If the GPU dies, the CPU dies as well, because it's very likely the cooling stopped working - so the failure is easy to spot. Fewer power connectors that might fail, cause the occasional freeze, and are difficult to investigate - especially if you aren't sitting in front of the affected system for days or weeks.



      • Originally posted by TemplarGR View Post

        1) The point is, 8 core ryzen cpus already exist, also tiny dgpus for those who just need a basic output already exist. Why introduce this product?

        2) This product fits no niche at all. People can already buy 8 cores ryzens today, IIRC with better clocks and obviously cheaper. And they get the benefit of picking whatever cheap dgpu they fancy to pair with it. And they don't need a specific mobo with graphical outputs.
        1) Tiny (GPU) for you, not for others. But go ahead, tell that to OEMs like Dell, HP, and Lenovo that supply pre-built PCs with iGPUs. Or you can go to AnandTech and read the comments on their Renoir articles. The majority want a powerhouse CPU with a good-enough integrated GPU (read: not Intel) - hence the Renoir APUs. Other points: compactness, reliability, power consumption.

        2) Yes, not for a niche, but for the majority of earth-dwellers, as most of them buy/use a computer for things other than gaming (maybe just light gaming).
        Last edited by t.s.; 26 May 2020, 01:28 PM. Reason: semantic



        • Gates most probably never said that famous 640k quote.



          • Originally posted by TemplarGR View Post
            AMD has lost their minds if this leak is true. This means they just copied Intel on every single thing and just compete on price. So now the G parts just have a weak igpu just for the UI and pixel art gaming, just like Intel was doing all along. What is the point of getting an 8core with such a weak igpu? Boring, trash product, only a tiny niche might be interested in it.
            I can't comment on unannounced products, but the numbers in the table look like typical Renoir configurations so just responding in the context of Renoir in general.

            I'm not sure what you are upset about - the iGPU in Renoir is at least as powerful as anything we have ever offered in an APU, and for someone wanting a more powerful GPU we sell regular Ryzen CPUs. The cost benefit from integrating a GPU comes mostly with smaller GPUs, and we have always provisioned our mainstream APUs with as much GPU performance as can work efficiently while sharing CPU memory.

            Our low-end APUs have always had smaller GPUs than this, typically 2 or 3 CU's... now that is a tiny GPU.

            Is it just the move from 11 CU's in Raven/Picasso to 8 faster CU's in Renoir that is concerning you? If you check reviews, you'll see that Renoir is running maybe 50% faster than Picasso on most games, so it seems to have worked out OK.

            EDIT - it occurred to me that the "Graphics Cores" header might be causing confusion since people sometimes call CU's "graphics cores". It's debatable, but IMO the closest equivalent to a "core" in the GPU world would be each of the 4 SIMDs in a Vega CU - so an 8 CU part would have 32 10-thread graphics cores with each core having 16 FP32 ALUs, for a total of 512 SP's.
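
            If it helps, here is a minimal sketch of that counting scheme in Python (purely illustrative; the 4-SIMDs-per-CU and 16-FP32-ALUs-per-SIMD figures are simply the Vega numbers quoted above):

            ```python
            # Back-of-the-envelope counts for a Vega-style CU layout (assumed:
            # 4 SIMDs per CU, 16 FP32 ALUs per SIMD, as described above).
            def vega_counts(cus: int) -> dict:
                simds_per_cu = 4       # the "graphics cores" under this reading
                alus_per_simd = 16     # FP32 lanes per SIMD
                simds = cus * simds_per_cu
                stream_processors = simds * alus_per_simd
                return {"CUs": cus, "SIMDs": simds, "SPs": stream_processors}

            print(vega_counts(8))    # {'CUs': 8, 'SIMDs': 32, 'SPs': 512} -> the 8 CU example above
            print(vega_counts(11))   # Raven/Picasso's 11 CU's -> 44 SIMDs, 704 SPs
            ```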

            Anyways, bottom line is that Renoir has the fastest GPU we have ever offered in an APU, other than custom game console designs which have MUCH higher memory bandwidth than the dual DIMM channels in a typical PC.

            Originally posted by TemplarGR View Post
            1) The point is, 8 core ryzen cpus already exist, also tiny dgpus for those who just need a basic output already exist. Why introduce this product?

            2) This product fits no niche at all. People can already buy 8 cores ryzens today, IIRC with better clocks and obviously cheaper. And they get the benefit of picking whatever cheap dgpu they fancy to pair with it. And they don't need a specific mobo with graphical outputs.
            Remember that APUs are primarily developed for the laptop market, although the socketed parts have also been very popular for compact and/or inexpensive desktop systems. You can generally build a system with an APU for less $$ than comparable discrete CPU+GPU, unless you are comparing new APU price with used CPU/GPU prices.

            My understanding was that AM4 motherboards include graphical outputs, so no constraint there other than making sure you have a connector you like (I have seen an HDMI-only board, for example).
            Last edited by bridgman; 27 May 2020, 12:27 AM.



            • Originally posted by Dedale View Post
              Gates most probably never said that famous 640k quote.
              His shitty operating systems have plagued an entire planet for a generation. A greedy industrialist, Gates made his billions forcing sub-par operating software on the world. The quality of Microsoft Windows ranges from adequate (XP, Win7) to terrible (Win9x, Vista, Win8) to malicious (Win10). His software has caused tens of billions of man-hours lost across the globe. Every single day people lose hours trying to fix Bill's mistakes. Microsoft has been nothing but mediocre from day one. They have copied or stolen, but they have never innovated. Whether Bill said that quote or not is immaterial; his legacy speaks for itself.
              Last edited by torsionbar28; 26 May 2020, 01:49 PM.



              • Originally posted by AmericanLocomotive View Post
                HEDT and "Workstation" are more or less synonymous these days.
                I don't think this is true at all. If we look at the intel side, i9 = HEDT and Xeon = workstation. Home "power users" buy i9, while professional workstations universally opt for Xeon with ECC memory. The delineation in the market segments is very clear. With AMD, Ryzen9 = HEDT, and EPYC = workstation.

                TR is in a class of its own. Sort of an intermediate step between the two, or perhaps we can call it "HEDT+". There is no comparable intel product, so it's pretty unique in that regard. TR has vastly more cores and cache than any i9, but it's not quite an EPYC. Certainly I can see the argument that TR is blurring the lines between HEDT and Workstation, i.e. the product provides overlap between these traditionally distinct market segments. But the market segments are still distinct, even if a given product overlaps them.

                Originally posted by AmericanLocomotive View Post
                EPYC isn't ideal for workstation use for a variety of reasons. A big one is the TDP difference between Threadripper and EPYC. TR chips have a much higher TDP of 280W (vs 225W), allowing them to have higher base/boost speeds.
                We see the same with the intel products, where i9 has much higher clocks than the comparable Xeon. By definition, the HEDT pushes the limits, while server/workstation chips err on the side of reliability and caution.

                Originally posted by AmericanLocomotive View Post
                Then there is the price difference. The EPYC 7F42 costs $3600 vs the $2000 3970X. This is even more extreme on the 64c side, where the 3990x is $3990 and the 7H12 is $8,600. Even for professional workstation users, an extra $4600 is a serious chunk of change.
                $4600 is really not a serious chunk of change in the workstation market. The previous place I worked spent around $40k per workstation for just hardware. Another $20k in software per machine. This is pretty typical in the industry. I don't think most home HEDT users spend more than $6k or so on their rig, which is well above a standard PC but is nowhere near the budget of professional workstations. Just as consumer gamer GPU's are a few hundred bucks compared with $4k+ for a big workstation GPU. Again, HEDT and Workstation are two very different markets.

                Originally posted by AmericanLocomotive View Post
                The selection of EPYC ATX boards is extremely limited. Many of those boards lack a lot of useful features that HEDT and Workstation users can utilize, and are more "server" oriented. Server boards are a real pain to use for everyday things. For example, most server boards typically take 1-2 (or more) minutes to complete POST, which gets annoying if you frequently have to reboot for testing software/hardware, etc...
                The POST time is due to memory checking and BMC init, both of which can be disabled in the BIOS or via jumpers. This is true at least for all the Xeon, Opteron, and EPYC boards from Supermicro, as those are what I have experience with.

                Originally posted by AmericanLocomotive View Post
                I'm not sure which Xeons have a 32GB memory limit? I just checked Intel's site, and all of the "real" Xeons have at least a 1TB memory capacity, with the higher-end SKUs going up to 3TB. The embedded and low-end desktop Xeons (which fit in standard LGA1151 sockets) have a 128GB UDIMM limit, though.
                Sounds like intel has stepped up their game in recent years in an attempt to compete with AMD. Until fairly recently, all the E3 Xeons had a 32 GB limit.
                Last edited by torsionbar28; 26 May 2020, 04:00 PM.



                • Originally posted by torsionbar28 View Post
                  I don't think this is true at all. If we look at the intel side, i9 = HEDT and Xeon = workstation. Home "power users" buy i9, while professional workstations universally opt for Xeon with ECC memory. The delineation in the market segments is very clear. With AMD, Ryzen9 = HEDT, and EPYC = workstation.
                  Intel has 3 distinct platforms: Consumer, HEDT and Server. LGA1200 for mainstream consumer, LGA2066 for HEDT/Workstation and LGA3647 for Workstation/Server.

                  i9 CPUs are now a "consumer" CPU, available on the LGA1200 platform. Xeons are also now available on the LGA2066 platform, supporting up to 1TB of memory. Intel offers very high clocked Xeons (The W-2295 will do 4.6 GHz) on their 2066 HEDT platform. Likewise, there are high-performance "Workstation" Xeons available for LGA3647. Intel has a lot of product overlap between its platforms.
                  $4600 is really not a serious chunk of change in the workstation market. The previous place I worked spent around $40k per workstation for just hardware. Another $20k in software per machine.
                  Money is still money. I've worked at companies that would haggle over $1000 on $2.4 million CNC machines. If I were a purchasing officer at a company needing to buy 30 workstations, you can bet I'd go Intel if it meant saving $4600 per PC while keeping the ability to run 512GB of RAM.
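
                  Just to make the scale concrete, a trivial worked example using the hypothetical figures from this post (30 seats, $4600 per seat):

                  ```python
                  # Illustrative arithmetic only, using the hypothetical figures quoted above.
                  seats = 30               # workstations to purchase
                  savings_per_seat = 4600  # quoted per-PC price difference, in dollars
                  print(f"${seats * savings_per_seat:,} total")  # -> $138,000 total
                  ```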



                  • Originally posted by discordian View Post
                    You want a picture of Linus giving the finger to AMD?
                    He should have waited for the Intel Xe dGPU. I look forward to Linus ranting about AMD drivers constantly breaking. Maybe they'll stop doing that? Or maybe he'll stop merging patches that break things.




                    • Originally posted by duby229 View Post

                      At least as far as building a Gentoo system goes, I've found that 2gigs per core seems to be the sweet spot.
                      It depends on what you're doing in the first place. Even for Android development, 8 GB is not enough.

                      Android Studio + Gradle daemon + emulator + multiple browser tabs and windows exceed 8 GB + ZRAM.
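
                      For what it's worth, a minimal sketch of that sizing rule of thumb (assuming the ~2 GB of RAM per compile job heuristic quoted above; the numbers are illustrative, not a recommendation):

                      ```python
                      # Hypothetical helper: pick a parallel build job count from the
                      # "~2 GB of RAM per job" rule of thumb mentioned above.
                      import os

                      def suggested_jobs(total_ram_gb: float, gb_per_job: float = 2.0) -> int:
                          cores = os.cpu_count() or 1
                          ram_limited = int(total_ram_gb // gb_per_job)
                          return max(1, min(cores, ram_limited))

                      # On an 8 GB machine this caps you at 4 jobs regardless of core count,
                      # which is one reason 8 GB feels tight once an IDE, an emulator and a
                      # browser are also resident.
                      print(suggested_jobs(8))    # -> min(cores, 4)
                      print(suggested_jobs(64))   # -> usually limited by core count instead
                      ```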

