AMD Ryzen Threadripper PRO 7995WX Linux Performance Benchmarks


  • #31
    The Threadrippers have always bordered on being a scam, preying on the gullible.

    AMD has been trying to compete with "more cores" since the Phenom II X6 days, and they finally sort of got it right with Ryzen, where, thanks to a superior manufacturing process, they were able to cram more substandard execution units into a better power envelope than Intel.

    That's the real secret sauce behind AMD's offerings: a superior manufacturing process. If Intel were using the same process, AMD would not be able to come close to Intel's CPUs.

    To see what a waste of money AMD's Threadrippers are, here's what an AMD engineer said:



    Hallock explained that the bottleneck "suddenly shifts to really weird places once you start to increase the core counts." Some areas that typically aren't a restriction, like disk I/O, can hamper performance in high core-count machines. For instance, development houses exclude project files from Windows Defender to reduce the impact of disk I/O during compile workloads.
    If you read between the lines, you realize that most benchmarks being done by every reviewer are grossly misleading. They are all structured in such a way that they never actually touch the drive; they rely on storing everything in ram.

    The problem is that eventually the data has to be copied from volatile memory to nonvolatile and this takes time.

    If you were to factor in the time it takes to complete this copy, you would find that the AMD Threadripper and AMD's consumer processors, as well as Intel's consumer processors, have the same effective overall performance.

    Intel's Xeon Max processors are the same way: when running without any system ram, using only the massive 64gb of onboard memory, they are 20% faster than when using system ram. But take compiling code, 3D rendering, or video encoding as an example: eventually the final product has to be permanently stored, and this time needs to be factored into the benchmarks.

    There is no such thing as a free lunch; AMD's Threadrippers are masking I/O limitations with massive amounts of memory, but eventually the results need to be written to disk.

    Now if you have a workload where I/O can be ignored completely, for instance where the results will simply be displayed on a screen from ram and then discarded, the Threadrippers, and Intel's Xeon Max CPUs, can be beneficial.

    But for most uses, they are a scam.



    • #32
      Originally posted by sophisticles View Post
      To see what a waste of money AMD's Threadrippers are, here's what an AMD engineer said:
      ...
      If you read between the lines, you realize that most benchmarks being done by every reviewer are grossly misleading...
      But of course, no benchmark is a good benchmark unless intel says it is a good benchmark.

      Also, some friendly advice: first learn to read the lines, then try to read between them. You are clearly still stuck and struggling with the former and many years from the latter.

      All antivirus implementations have a performance impact on every cpu. Every system has bottlenecks. None of that illustrates "what a waste of money TR is", as you put it. It is your general cluelessness that's getting in your way.

      Originally posted by sophisticles View Post
      That's the real secret sauce behind AMD's offerings, a superior manufacturing process. If Intel was using the same process...
      Not according to intel; according to them, their process is tops.

      But also, everyone is free to book tsmc capacity, and in fact, intel has booked plenty for its inferior products just to keep that capacity from amd. And let me guess: amd will trash intel even at literally the same process node.

      Originally posted by sophisticles View Post
      Now if you have a workload where I/O can be ignored completely, for instance where the results will simply be displayed on a screen from ram and then discarded, the Threadrippers, and Intel's Xeon Max CPUs, can be beneficial.

      But for most uses, they are a scam.
      Displaying things on a screen actually has a higher latency than writing to a high performance nvme device. Most servers don't really write all that much data to disk, most of the data is actually served over network. The notion that "eventually the data has to be copied from volatile memory to nonvolatile and this takes time" and that this is somehow detrimental to HPC CPUs is outright ridiculous.

      Have you considered migrating your screen name to something more befitting your reality, like for example simplisticles?

      I've seen desperate fangirls, and sure, team intel doesn't really have a lot to work with, but you've gotta be the most anemic and inept instance of fanboyism I've seen in a while.

      The only scam here is your clumsy attempt at simulating tech competence, and you are fooling yourself more than anyone else.



      • #33
        Intel 10SF is very close to TSMC N7, but AMD had very competitive products using it despite that. Furthermore, the Zen 3 core is much smaller than Tiger Lake's but has comparable IPC; it just lacks frequency.

        Attributing all the AMD achievements to lithography is the biggest pot of copium by Intel fanboys I have seen in some time now.



        • #34
          Originally posted by drakonas777 View Post
          Intel 10SF is very close to TSMC N7, but AMD had very competitive products using it despite that. Furthermore, the Zen 3 core is much smaller than Tiger Lake's but has comparable IPC; it just lacks frequency.
          True. I would also say Intel's process is actually better in terms of achievable frequencies but a bit worse in power efficiency. If Intel didn't clock to 6 GHz and were fine with 5 GHz, they could also achieve much better efficiency, but they would lose the single-core race.



          • #35
            Originally posted by sophisticles View Post

            Yeah, it is hysterical.

            Even more hysterical is that you are ignoring that in tests where Intel's accelerators can be used, the power consumption and heat generation of the Intel offerings will be significantly lower.

            You are also ignoring the fact that Michael's tests do not come anywhere near to fully exploiting Intel's accelerators.

            Intel built those processors to take on IBM's z15/16 family of mainframes, not AMD's Threadrippers and EPYCs.
            Brainwashed Intel fanboy says what? 🤣 Without the accelerators the 56-core Sapphire Rapids flagship was ≈40-50% slower than the ancient 64-core Threadripper 5995WX (aka Milan/Zen 3), let ALONE this thing!

            And those accelerators are only useful for VERY specific workloads that in most cases involve completely rolling your own code, limiting their usefulness to mostly niche use-cases (AI workloads are better run on GPUs w/ CUDA/ROCm than on CPUs 99 times out of 100) for only the very biggest of clients.

            Sure, they very much do have a unique market basically locked up thanks to the unique on-die accelerators, but it's absolutely freaking SMAAAAAAAAAAAAAAALL compared to the market for general purpose CPU compute, HPC, & data center compute (which AMD has 100% locked up w/ Genoa, Genoa-X, & Bergamo) or GPU acceleration (where Intel's stupidly late & overly complicated Ponte Vecchio simply cannot compete). 🤷

            And considering just how much performance you lose in everything BUT those few accelerated tasks, your overall performance is basically GUARANTEED to be worse unless your specific workload literally ONLY uses the accelerators!
            Last edited by Cooe; 21 November 2023, 08:42 AM.



            • #36
              Originally posted by sophisticles View Post
              The Threadrippers have always bordered on being a scam, preying on the gullible.

              To see what a waste of money AMD's Threadrippers are, here's what an AMD engineer said:



              If you read between the lines, you realize that most benchmarks being done by every reviewer are grossly misleading. They are all structured in such a way that they never actually touch the drive, they rely on storing everything in ram.

              The problem is that eventually the data has to be copied from volatile memory to nonvolatile and this takes time.

              If you were to factor in the time it takes to complete this copy, you find that the AMD Threadripper and AMD's consumer processors, as well as intel's consumer processors, have the same effective overall performance.

              Intel's Xeon Max processors are the same way, you see that when running without any system ram, using only the massive onboard 64gb of ram, they are 20% faster than using the system ram, but take compiling code, or 3d rendering, or video encoding as an example, eventually the final product has to be permanently stored and this time needs to be factored in with the benchmarks.

              There is no such thing as a free lunch, AMD's Threadrippers are masking I/O limitations with massive amounts of memory, but eventually the results need to be written to disk.

              Now if you have a workload where I/O can be ignored completely, for instance where the results will simply be displayed on a screen from ram and then discarded, the Threadrippers, and Intel's Xeon Max CPUs, can be beneficial.

              But for most uses, they are a scam.
              It is fairly crazy that anyone would spend so much money on such a powerful processor and then use a toy OS in the first place. But even if you have to use Windows, it takes a special kind of brainless setup to be doing massive multi-processor work on a system crippled by Windows Defender or other pointless, unnecessary roadblocks.

              If people are trying to do massive software builds, from disk, to disk, with anti-virus junk running, then I fully agree that spending more than about $500 on the cpu is a waste of money - AMD or Intel.

              You do your software builds without anti-virus junk, with your source code cached in ram, and if you are using a decent OS then most of your build objects never leave ram. The same applies to video encoding, simulations, or whatever else you are doing. Along with the fast cpu you need plenty of ram and good NVMe SSDs for the bits that have to be in non-volatile storage.

              Sure, this might involve some change to old habits. But it's not rocket science - it's all very obvious.
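A minimal sketch of the "build objects never leave ram" setup described above, using a tmpfs (RAM-backed) filesystem; the mount point, size, and file names are hypothetical examples:

```shell
# Mounting a dedicated tmpfs for build output needs root (path/size hypothetical):
#   sudo mount -t tmpfs -o size=32G tmpfs /mnt/rambuild
#
# Without root, most Linux distros already ship a tmpfs at /dev/shm,
# so anything written there never touches the disk:
mkdir -p /dev/shm/build_demo
echo "stand-in for a build object" > /dev/shm/build_demo/main.o
# Show the filesystem type backing the directory:
df --output=fstype /dev/shm | tail -1
rm -rf /dev/shm/build_demo
```

On a typical Linux system the `df` line reports `tmpfs`, confirming the artifacts live in RAM until you explicitly copy the final result to persistent storage.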



              • #37
                Originally posted by F.Ultra View Post

                A 7800x3d will run your games better by a wide margin than any of the Threadrippers.
                I wonder if there are any games that are highly multithreaded enough to make use of such CPUs.
                Perhaps things like Anno 1800, Civilization VI, or the Total War series?



                • #38
                  Originally posted by Terr-E View Post

                  I wonder if there are any games that are highly multithreaded enough to make use of such CPUs.
                  Perhaps things like Anno 1800, Civilization VI, or the Total War series?
                  Most of those are fine with 8-core CPUs; only some games gain performance with a few more cores, and it is mostly single-digit improvements.

                  Anything over 16 cores will not give you better performance, possibly the opposite, because of the lower clocks the high-core-count CPUs reach.



                  • #39
                    Originally posted by sophisticles View Post
                    The Threadrippers have always bordered on being a scam, preying on the gullible.

                    AMD has been trying to compete with "more cores" since the Phenom II X6 days, and they finally sort of got it right with Ryzen, where, thanks to a superior manufacturing process, they were able to cram more substandard execution units into a better power envelope than Intel.

                    That's the real secret sauce behind AMD's offerings: a superior manufacturing process. If Intel were using the same process, AMD would not be able to come close to Intel's CPUs.

                    To see what a waste of money AMD's Threadrippers are, here's what an AMD engineer said:





                    If you read between the lines, you realize that most benchmarks being done by every reviewer are grossly misleading. They are all structured in such a way that they never actually touch the drive, they rely on storing everything in ram.

                    The problem is that eventually the data has to be copied from volatile memory to nonvolatile and this takes time.

                    If you were to factor in the time it takes to complete this copy, you find that the AMD Threadripper and AMD's consumer processors, as well as intel's consumer processors, have the same effective overall performance.

                    Intel's Xeon Max processors are the same way, you see that when running without any system ram, using only the massive onboard 64gb of ram, they are 20% faster than using the system ram, but take compiling code, or 3d rendering, or video encoding as an example, eventually the final product has to be permanently stored and this time needs to be factored in with the benchmarks.

                    There is no such thing as a free lunch, AMD's Threadrippers are masking I/O limitations with massive amounts of memory, but eventually the results need to be written to disk.

                    Now if you have a workload where I/O can be ignored completely, for instance where the results will simply be displayed on a screen from ram and then discarded, the Threadrippers, and Intel's Xeon Max CPUs, can be beneficial.

                    But for most uses, they are a scam.
                    That article (and common practice) doesn't say that high core counts are a problem when disk IO happens; it simply says that Windows Defender is not compatible with a HEDT workload (to no one's surprise).



                    • #40
                      Originally posted by ddriver View Post
                      Displaying things on a screen actually has a higher latency than writing to a high performance nvme device. Most servers don't really write all that much data to disk, most of the data is actually served over network. The notion that "eventually the data has to be copied from volatile memory to nonvolatile and this takes time" and that this is somehow detrimental to HPC CPUs is outright ridiculous.

                      Have you considered migrating your screen name to something more befitting your reality, like for example simplisticles?
                      First things first, I kind of like Simplisticles. I might have to steal it; maybe I can use it as a signature.

                      It's funny that you claim I am not able to "read the lines", as I believe you stated, but then demonstrate that you clearly did not read what I said, only what you wanted me to say so that you could take me to task over it.

                      Reread what I said and try to understand what I mean: I am saying that in many ways these HPC CPUs are a scam, because they rely on vast amounts of memory to mask the limitations of I/O.

                      I don't know how old you are, but back in the day, there was a very well known benchmark for Windows that was widely used. It was so widely used that for all practical purposes a CPU review would not be taken seriously if it were not included and AMD, Via and Intel would tout the results in their advertising if their product happened to be on top.

                      I don't remember the name of the benchmark but it relied on int and float math operations and this application was also used to "burn in", i.e. test stability of new builds, especially overclocked systems.

                      The reason I bring this up is that there was one specific benchmark within this app where the P4 could not be touched; it would smoke everything. This was because that specific test fit entirely within its cache and was run from there. If the sample size was increased so that it had to access system ram, then performance tanked, and you could see the performance drop over time if you plotted the results on a graph.

                      Fast forward about 15 years to when I had my Ryzen 1600 desktop with 16gb of ddr4 ram and an i7 4790 with 32gb of ddr3.

                      Of these 2, which would you say should be faster?

                      In every canned benchmark I ran, the Ryzen was about 20% faster. But because I used to do a lot of consulting work for a video production company, I routinely had to work with very large mov files that were created on Macs.

                      Some of the work I did involved editing, some fixing, color correction, other filtering, etc. Many of these tasks would eat up ram like it was going out of style, and while the Ryzen was faster than the Intel initially, eventually, as the system ran out of ram and started hitting the swap, performance would plummet and the Devil's Canyon would outpace it.

                      Here's the funny thing, though: assume the source was a 6k ProRes 4444 mov and I had to apply a few filters and scale it to 4k for a final render to h264. If I created a script that invoked ffmpeg, applied the filters, and encoded, and used the time command to monitor the execution of the script, the results were very interesting.

                      What would happen is that the Ryzen system would be ahead, then as it crossed the 16gb ram usage mark the i7 would catch up and overtake it, and then, depending on the length of the video, if the project crossed the 32gb mark, the Ryzen would overtake the i7 again.

                      The interesting part was using time to monitor the time spent in user mode and kernel mode on each system, since disk I/O is always kernel mode and the rest was user mode, and then comparing the total time.
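The timing setup described above can be sketched as follows; the ffmpeg filenames and filter chain are hypothetical, and a generic `dd` write stands in for the encode so the user/sys split is visible on any system:

```shell
# 'time' breaks execution into wall-clock ("real"), user-mode CPU ("user"),
# and kernel-mode ("sys") time; disk I/O is accounted under "sys".
#
# A hypothetical transcode run like the one described would look like:
#   time ffmpeg -i source_6k_prores.mov \
#        -vf "scale=3840:-2" -c:v libx264 output_4k.mp4
#
# Generic stand-in: push 64 MB through the kernel so "sys" time is non-zero.
time dd if=/dev/zero of=/tmp/io_probe.bin bs=1M count=64 2>/dev/null
rm -f /tmp/io_probe.bin
```

Comparing the "user" and "sys" totals of two runs is a rough way to see how much of a workload's wall-clock time is spent in kernel-side I/O rather than computation.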

                      The main benefit of these HPC processors is the ability to use lots of ram but eventually all that data needs to be written to disk, whether a disk within the system or a disk over the network, it's getting written somewhere.

                      Even that AMD engineer admitted that they get bottlenecked by I/O, and this is because eventually the results need to be stored somewhere other than ram.

                      Are you able to understand the words that are being typed by my fingers?

                      -Simplisticles, always keeping things simple.

                      Now if you will excuse me I am late for my desperate fan girl meeting, because i am a big fan of big fans.

