
AMD Ryzen 9 7900X3D Linux Gaming Performance


  • #31
    Originally posted by Weasel View Post
    Welp, I've been holding off from buying a new rig, but this cements my position to get Zen 4, since it'll be the last sane system for a while. I'm not an AMD fan; I've been mainly using Intel CPUs until recently.
    Fuck this big.LITTLE garbage.
    big.LITTLE is only ARM right now. Intel's split is fast vs. efficient cores, and AMD's right now is high-clock vs. high-cache.

    I agree that many people have no use for any of this and simply want no big.LITTLE stuff at all.

    But of these three options, the only sane one right now is the AMD solution: even a dumb scheduler can detect a task's cache-miss rate and then move it to the high-cache cores (see the sketch below).

    The Intel and ARM versions need a smart scheduler, which is nearly impossible to develop.
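
    To make that concrete, here is a minimal userspace sketch of the idea, not anything the Linux kernel actually does today: it samples a task's cache-miss ratio with perf and, when the ratio is high, migrates the task onto the high-cache cores. The CCD mapping (CPUs 0-7) and the 5% threshold are illustrative assumptions.

    Code:
    #!/usr/bin/env python3
    # Hypothetical sketch: migrate cache-hungry tasks to the high-cache CCD.
    # Assumptions (illustrative only): CPUs 0-7 form the V-Cache CCD and
    # `perf` is installed with permission to profile the target PID.
    import os
    import subprocess
    import sys

    VCACHE_CCD = set(range(0, 8))   # assumed CPU IDs of the high-cache CCD
    MISS_RATIO_THRESHOLD = 0.05     # assumed cutoff: 5% of references miss

    def cache_miss_ratio(pid: int, seconds: int = 1) -> float:
        """Sample cache-misses / cache-references for `pid` via perf stat."""
        out = subprocess.run(
            ["perf", "stat", "-x", ",", "-e", "cache-misses,cache-references",
             "-p", str(pid), "--", "sleep", str(seconds)],
            capture_output=True, text=True).stderr
        counts = {}
        for line in out.splitlines():
            fields = line.split(",")   # CSV: value,unit,event,...
            if len(fields) > 3 and fields[0].isdigit():
                counts[fields[2]] = int(fields[0])
        refs = counts.get("cache-references", 0)
        return counts.get("cache-misses", 0) / refs if refs else 0.0

    if __name__ == "__main__":
        pid = int(sys.argv[1])
        if cache_miss_ratio(pid) > MISS_RATIO_THRESHOLD:
            # High miss rate: pin the task onto the big-cache cores.
            os.sched_setaffinity(pid, VCACHE_CCD)
            print(f"pinned {pid} to high-cache CPUs {sorted(VCACHE_CCD)}")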


    • #32
      Originally posted by Weasel View Post
      Most of your post makes sense except for Apple. They're far from successful in terms of tech. Their userbase has almost no clue whatsoever and is technically illiterate. They never excelled at hardware specs, just propaganda and a brainwashed userbase that will buy anything Apple no matter how cringingly overpriced it is. That's how they rake in the money.
      You attack the userbase as brainless and clueless, and you're right: Apple customers are garbage people.

      But that has nothing to do with the engineers at Apple.

      I would say the Apple M1/M2 are, technically speaking, successful products.

      They are successful in terms of tech...


      • #33
        Originally posted by Linuxxx View Post
        Could you please use your autism for anything other than spoiling Michael's upcoming articles and therefore crapping all over his work?
        Michael
        You might want to delete that user's post...
        Why do you think his open-source intelligence research is invalid in terms of merit?...

        How can he steal Michael's test results if Michael himself uploaded them?


        • #34
          Wow, the improvements over my 5950X are certainly there; however, I was expecting more. These results further assure me that my 5950X will remain relevant for a long time, considering I only care about a smooth 60 FPS; anything more is a waste. I can't wait to get my 7900 XTX to upgrade from my 5700 XT and really open things up. I will most likely build again with early-gen DDR6 or very last-gen DDR5. Wake me up around 2030.



          • #35
            Why on earth would they optimize their engine specifically for 1st gen. Ryzens?
            Because IO Interactive managed to get specs and data before Zen was even published. The first modern Hitman engine dates to 2012, but they updated it in 2016 for the reboot "Hitman", even though the first Zen CPU only came out in 2017. Then over time they made small updates to the engine, with optimisations to support new versions of the AMD architecture. The Glacier 2 engine is now almost 12 years old, but it was always designed and built with modern hardware in mind and has very powerful optimisations around synchronisation of parallel workloads.

            Outside of modern Hitman and modern Tomb Raider, there are not many devs doing such a great job of building their engines.
            Last edited by Jahimself; 03 March 2023, 03:24 PM.



            • #36
              Originally posted by stormcrow View Post

              Linux is not unaware. It's mostly that the Intel hybrid topology is problematic regardless of the OS. Windows isn't getting it right, either. AMD's approach makes the performance/efficiency split transparent to the OS: the OS doesn't know and doesn't care beyond the ACPI tables. Intel requires the OS not only to be aware of the difference, but also to have its process scheduler efficiently balance them. The only OS that's doing that correctly is macOS, which doesn't help Intel any in trying to market its CPUs, since the new hybrid-core Macs are all ARM based. Intel is pursuing a dead-end approach (their approach is backwards) to keep from having to retool their entire production line, but it's resulting in pigs with lipstick. You can't start with a historically burdened, inefficient design and shoehorn it into magically efficient packages (Intel's usual screwup, even with the Itanium). You have to start with something that's already efficient and map only what you need for backwards compatibility into temporary emulation extensions (Apple's repeatedly successful approach).
              As someone who actually uses a 13700KF on Windows 11 Pro: no, Intel has no real scheduling problems with P and E cores. It might have early on when Alder Lake came out, but today Intel and Microsoft have largely ironed it out. It schedules extremely well.

              Another problem on Linux with the 13900K may simply be that Linux doesn't have preferred-core (CPPC) support for it. Not many people know that Alder Lake and now Raptor Lake implement something similar to CPPC. On Windows you can easily see it: the two best P-cores, which are set at the factory, get prioritized for programs. They boost the highest, and Windows uses the CPPC data to put as much as it can onto those two cores. I don't think Linux has support for this either. Hell, most AMD CPUs don't even have proper CPPC support, and AMD's P-state driver is still not ready for prime time.

              On Windows 11 you can see which cores are the best; for me it's cores 5 and 6 of my P-cores. Monitoring CPU usage with HWiNFO, Windows 11 does an amazing job keeping important threads on those two cores 95% of the time. Those two cores do 5.4 GHz while the rest of the P-cores bounce between 5.1 and 5.2 GHz. E-cores are really only used for minimized/background stuff and when under heavy load. Windows 11 has been doing a great job scheduling, and you can watch it in real time: start downloading a game with Steam while it's focused, and it runs on the two highest-performing P-cores. Minimize Steam, and it's tossed onto the E-cores. Play a game and decide to randomly run a virus scan with Windows Defender? It's on the E-cores. Discord running in the background? It's on the E-cores.
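
              For what it's worth, where firmware does expose CPPC on Linux, the per-core performance rankings are visible from userspace. A minimal sketch, assuming your kernel populates the acpi_cppc sysfs directory at all (on many systems it's absent, which is exactly the support gap described above):

              Code:
              #!/usr/bin/env python3
              # Sketch: rank cores by their ACPI CPPC "highest_perf" value.
              # Assumes /sys/devices/system/cpu/cpu*/acpi_cppc/ exists, which
              # it only does when firmware advertises CPPC to the kernel.
              from pathlib import Path

              def cppc_rankings():
                  ranks = []
                  for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
                      perf_file = cpu / "acpi_cppc" / "highest_perf"
                      if perf_file.exists():
                          ranks.append((int(perf_file.read_text()), cpu.name))
                  return sorted(ranks, reverse=True)

              if __name__ == "__main__":
                  for perf, cpu in cppc_rankings():
                      print(f"{cpu}: highest_perf={perf}")  # higher = preferred core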

              Also, Michael, something is terribly wrong with your 7950X; these results clearly show that. It's barely above a 5950X, and a 7900X with only 6 cores per CCD beating an 8-core-per-CCD part is horribly wrong. Check your 7950X system. The 7900X and 7950X should be very similar, with the 7950X pulling slightly ahead because its two extra cores make it much more likely that threads stay on one CCD, avoiding an Infinity Fabric bottleneck.
              Originally posted by rob-tech View Post
              Wow, the improvements over my 5950X are certainly there; however, I was expecting more. These results further assure me that my 5950X will remain relevant for a long time, considering I only care about a smooth 60 FPS; anything more is a waste. I can't wait to get my 7900 XTX to upgrade from my 5700 XT and really open things up. I will most likely build again with early-gen DDR6 or very last-gen DDR5. Wake me up around 2030.
              I'm not surprised, since Linux doesn't have the software support that Windows has for the 7000-series X3D. AMD uses the Xbox Game Bar to know when a game is running and, through the chipset drivers, interfaces with the Windows scheduler to steer everything from the game onto the CCD with the X3D cache in real time. Linux has no such thing; it all depends on the Linux scheduler not screwing up and being smart enough to know which CCD has the cache and to keep the program solely on that CCD. Luckily it's always the same CCD on the 7000 series that carries the cache, so it's possible for Linux kernel developers to optimize for the 7000-series X3D. (In the meantime you can do the pinning by hand; see the sketch below.)

              Seeing these results, I'm led to believe even the 7950X3D on Linux is not doing as well as it should, since clearly something is terribly wrong with Michael's 7950X given the 7900X and 7700X results.
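
              A hand-rolled stopgap is possible today: find which CCD reports the larger L3 and pin the game's PID onto those cores. This sketch assumes a dual-CCD X3D part where sysfs exposes the stacked cache as a bigger index3 (L3) size, and that you run it with enough privileges to change another process's affinity.

              Code:
              #!/usr/bin/env python3
              # Sketch: pin a game onto the V-Cache CCD by hand on Linux.
              # Assumes index3 is the L3 and that the stacked-cache cores
              # report a larger L3 size than the plain CCD does.
              import os
              import sys
              from pathlib import Path

              def l3_size_kb(cpu: Path) -> int:
                  size = (cpu / "cache" / "index3" / "size").read_text().strip()
                  return int(size.rstrip("K"))  # sysfs reports e.g. "98304K"

              cpus = sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*"),
                            key=lambda p: int(p.name[3:]))
              biggest = max(l3_size_kb(c) for c in cpus)
              vcache = {int(c.name[3:]) for c in cpus if l3_size_kb(c) == biggest}

              pid = int(sys.argv[1])             # PID of the running game
              os.sched_setaffinity(pid, vcache)  # keep it on the V-Cache CCD
              print(f"pinned {pid} to CPUs {sorted(vcache)}")
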
              Last edited by pieman; 03 March 2023, 07:55 PM.



              • #37
                I'm more interested in how the 7800X3D performs, since all 8 cores sit under the 3D cache (I think) and processes shouldn't end up on the wrong cores.

                Also, under Windows 11 the X3D parts do better with the correct BIOS, drivers, and software installed; games then run on the V-Cache cores only.
                Last edited by theriddick; 03 March 2023, 08:48 PM.



                • #38
                  Originally posted by Jahimself View Post

                  Because IO Interactive managed to get specs and data before Zen was even published. The first modern Hitman engine dates to 2012, but they updated it in 2016 for the reboot "Hitman", even though the first Zen CPU only came out in 2017. Then over time they made small updates to the engine, with optimisations to support new versions of the AMD architecture. The Glacier 2 engine is now almost 12 years old, but it was always designed and built with modern hardware in mind and has very powerful optimisations around synchronisation of parallel workloads.

                  Outside of modern Hitman and modern Tomb Raider, there are not many devs doing such a great job of building their engines.
                  That still doesn't explain why their engine should have optimizations which are Zen-exclusive, though.

                  If a game engine is properly multi-threaded, then it will take advantage of multiple cores on any such CPU, regardless of vendor (in this case Intel and AMD, since both execute the same x86 instructions).



                  • #39
                    Originally posted by muncrief View Post
                    I'd love to buy a Zen 4 CPU, but since I just bought a new X570 motherboard, 32 GB of DRAM, and a 5700X CPU at the end of 2019, I can't justify the expense of buying a whole new system again. Especially with prices as they are.

                    But wow, the Zen 4 CPUs are just awesome! And in another 2 or 3 years, if prices come down, it might be worth it. On the other hand Zen 3 works so well, and Zen 4 requires a completely new system, so it's certainly put AMD users in a bind as far as upgrades go.
                    Many of today's CPUs are too good for most users. The CPU performance of a 5700X or my 5700G is absurdly high. You could probably hold out until a year or two into AM6 (2028-2030?), or build a budget system with Zen 6 if that launches on AM5. (Or Intel, but I'm sticking with AMD in this post. Ignore ARM.) Those who need the best sooner know who they are.

                    One thing to look out for soon is accelerators, starting with XDNA in the Phoenix APUs, which could be a sneak preview of what we see included with Zen 5 desktop CPUs (Granite Ridge).

                    AMD also has the chance to bridge the multi-threaded gap (e.g. 7600X vs. 13600K) with Granite Ridge. They can make the 7600X look quaint in a single generation if they feel like it.

                    Originally posted by drakonas777 View Post
                    Only for APU family of products.
                    You don't know that. Hybrid Zen 5 + Zen 4c could come to Granite Ridge desktop CPUs, Strix Point APUs, both, or neither.

                    Hopefully the "hybrid" 7000X3D would help prepare AMD for Zen 5 + Zen 4c on desktop. But instead of clocks vs. cache, it could be clocks + IPC + cache vs. a lot of lower-clocked cores with less cache. Essentially, AMD's version of Alder/Raptor Lake, which would lead to the usual complaints.

                    I think we'll get good leaks for these later in 2023. Should be fun.



                    • #40
                      Originally posted by pieman View Post
                      As someone who actually uses a 13700KF on Windows 11 Pro: no, Intel has no real scheduling problems with P and E cores. It might have early on when Alder Lake came out, but today Intel and Microsoft have largely ironed it out. It schedules extremely well.

                      Another problem on Linux with the 13900K may simply be that Linux doesn't have preferred-core (CPPC) support for it. Not many people know that Alder Lake and now Raptor Lake implement something similar to CPPC. On Windows you can easily see it: the two best P-cores, which are set at the factory, get prioritized for programs. They boost the highest, and Windows uses the CPPC data to put as much as it can onto those two cores. I don't think Linux has support for this either. Hell, most AMD CPUs don't even have proper CPPC support, and AMD's P-state driver is still not ready for prime time.

                      On Windows 11 you can see which cores are the best; for me it's cores 5 and 6 of my P-cores. Monitoring CPU usage with HWiNFO, Windows 11 does an amazing job keeping important threads on those two cores 95% of the time. Those two cores do 5.4 GHz while the rest of the P-cores bounce between 5.1 and 5.2 GHz. E-cores are really only used for minimized/background stuff and when under heavy load. Windows 11 has been doing a great job scheduling, and you can watch it in real time: start downloading a game with Steam while it's focused, and it runs on the two highest-performing P-cores. Minimize Steam, and it's tossed onto the E-cores. Play a game and decide to randomly run a virus scan with Windows Defender? It's on the E-cores. Discord running in the background? It's on the E-cores.
                      You're wrong about the optimal CPU core part on Linux:

                      My Intel i7-11700F (Rocket Lake) has a single core which boosts up to 4.9 GHz, whereas all the other ones go up to 4.8 GHz.

                      I can see this by simply running a command (sudo cpupower -c all frequency-info). When running any game that's mostly single-threaded (e.g. the original CRYSiS), I can observe that Linux puts the game thread onto the fastest core, only bouncing it onto another core for a short period when the fastest core gets too hot (remember that it was manufactured on Intel's aging 14 nm node).

                      Then, after a very short cool-down period, the game thread is immediately put back onto the best performing core.

                      So no, Linux's CPU scheduler is pretty smart already; it's just lacking awareness of Intel's hybrid architecture, for now...
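
                      If you want to reproduce that observation yourself, here's a small sketch using only standard sysfs/procfs interfaces: it reports which core has the highest rated boost clock, then polls which CPU a given PID is running on.

                      Code:
                      #!/usr/bin/env python3
                      # Sketch: show the highest-boost core, then watch where a PID runs.
                      # Uses only standard cpufreq sysfs files and /proc/<pid>/stat.
                      import sys
                      import time
                      from pathlib import Path

                      def max_freqs():
                          freqs = {}
                          for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
                              f = cpu / "cpufreq" / "cpuinfo_max_freq"
                              if f.exists():
                                  freqs[cpu.name] = int(f.read_text())  # kHz
                          return freqs

                      def current_cpu(pid: int) -> int:
                          # Field 39 of /proc/<pid>/stat is the CPU the task last ran on.
                          # Split after the closing ')' so spaces in the comm name are safe.
                          stat = Path(f"/proc/{pid}/stat").read_text()
                          return int(stat.rsplit(")", 1)[1].split()[36])

                      if __name__ == "__main__":
                          freqs = max_freqs()
                          best = max(freqs, key=freqs.get)
                          print(f"fastest core: {best} ({freqs[best] / 1e6:.1f} GHz)")
                          pid = int(sys.argv[1])
                          while True:  # watch the game thread stick to (or leave) that core
                              print(f"pid {pid} on cpu{current_cpu(pid)}")
                              time.sleep(1)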

