AMD Announces Ryzen 7000 Series "Zen 4" Desktop CPUs - Linux Benchmarks To Come


  • Originally posted by Dukenukemx View Post
It's been done without DLSS and FSR. Also, FSR is really easy to enable in Linux or Windows and works on both AMD and Nvidia cards. It probably works on Intel, but I haven't tried.
Just add these to your .profile and reboot. When you then set a below-native resolution in the game, you should get FSR working.
export WINE_FULLSCREEN_FSR=1
export WINE_FULLSCREEN_FSR_STRENGTH=2
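If you'd rather not enable it globally, a per-game sketch (assuming a Proton-GE/Wine-GE build, which is where these variables are supported) is to set them in that game's Steam launch options instead:
WINE_FULLSCREEN_FSR=1 WINE_FULLSCREEN_FSR_STRENGTH=2 %command%   # strength 0-5, lower is sharper; 2 is the common default
That way only that one game gets upscaled.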
The claim that the AMD 6900 XT is much more efficient than the Nvidia 3090 becomes a joke as soon as you discover that the AMD 6950 XT lost all the efficiency benefits.

Also, activating FSR 1.0 globally really is not the same as activating it in the game engine.

    "Native implementation produce much higher fidelity image as the upscale is applied before any UI elements and post-processing effects are applied. Third-party implementation such as Steam Deck's apply the upscaling at the very end of the rendering pipeline, blurring UI and producing significantly much worse image."
    https://www.pcgamingwiki.com/wiki/Fi...per_Resolution

    WINE: "
    Wine Can be applied to any Vulkan game (even dxvk and vkd3d-proton), but will look worse than a "real" implementation as it upscales the final image, instead of upscaling before post-effects.
    "

If you activate it globally, the game's 2D overlay GUI/HUD is damaged; if you activate it in the game engine, the GUI/HUD is not reduced in resolution.

And Nvidia's DLSS has more games with in-engine support compared to FSR 1.0/2.0...

    https://www.pcgamingwiki.com/wiki/Li...lity_upscaling
    Last edited by qarium; 06 September 2022, 09:46 PM.
    Phantom circuit Sequence Reducer Dyslexia



    • Originally posted by qarium View Post

The claim that the AMD 6900 XT is much more efficient than the Nvidia 2090 becomes a joke as soon as you discover that the AMD 6950 XT lost all the efficiency benefits.
Anux said 3090, not 2090. There's a big difference.
Also, activating FSR 1.0 globally really is not the same as activating it in the game engine.

      "Native implementation produce much higher fidelity image as the upscale is applied before any UI elements and post-processing effects are applied. Third-party implementation such as Steam Deck's apply the upscaling at the very end of the rendering pipeline, blurring UI and producing significantly much worse image."
      https://www.pcgamingwiki.com/wiki/Fi...per_Resolution

      WINE: "
      Wine Can be applied to any Vulkan game (even dxvk and vkd3d-proton), but will look worse than a "real" implementation as it upscales the final image, instead of upscaling before post-effects.
      "

If you activate it globally, the game's 2D overlay GUI/HUD is damaged; if you activate it in the game engine, the GUI/HUD is not reduced in resolution.
Better than nothing. World of Warcraft surprisingly supports FSR.
And Nvidia's DLSS has more games with in-engine support compared to FSR 1.0/2.0...

      https://www.pcgamingwiki.com/wiki/Li...lity_upscaling
Yeah, and DLSS is Nvidia-exclusive, plus earlier implementations kinda sucked.



      • Oct. 2022 UPDATE: Welp, if Anandtech's Ryzen 7950X review is to be believed, then desktop Zen4 / Ryzen 7000 does indeed have Pluton:
        ————————————————————————————————

ORIGINAL POST

        Originally posted by Anux View Post
I'm not sure that it will be 100% monolithic; is there any source for that (it could very well split CPU/GPU)?
Current rumors are that it's at least partially monolithic, with at least the CPU cores and I/O on the same chiplet.

        Originally posted by Anux View Post
But what leads you to assume they would drop Pluton on a notebook chip that will be used with Win 11 99% of the time?
First off, I was differentiating between Dragon Range and Phoenix. If we assume that server/desktop truly does lack Pluton then, with Dragon Range basically being a derivative of desktop AM5, I would expect it to also lack Pluton.

        Basically I see server/desktop/dragon range as all being in the same boat.

        But Phoenix? All bets are off considering that Ryzen 6000, Phoenix's direct predecessor, had Pluton.



        Originally posted by Anux View Post
I don't think it's because of shared architecture but because of the need for Windows Server; the webserver space is >80% Linux, as is cloud, but many companies run Windows clients, and those are typically surrounded by many Windows servers. To have two different server CPU lineups would not be clever.
And that's why I lean towards Pluton not being present across the server/desktop/Dragon Range, uh, range being more likely than Pluton being present, especially since it'd take more work for AMD to implement Pluton than to leave it out for this market segment.

        This also kind of goes back to what I said about USB4 not even being integrated into AM5 Ryzen 7000's SOC despite USB4 having much more obvious benefits.


        Originally posted by Anux View Post
What kind of compatibility would be needed if Pluton does nothing different from TPM?
I would think that Pluton is at least doing something different from TPM if Pluton straight-up requires 3rd-party certificates to be disabled.


        Originally posted by Anux View Post
The compute die could just ask the I/O die "is Pluton available?" and report that back to the OS.
        Maybe, maybe not. Honestly, just like the above regarding differences with TPM, it's pretty difficult to say for sure if it even works like that or not since, by its nature, Pluton is very much a "black box".


        Originally posted by Anux View Post
        Is it really? If I had to implement it, I would just make 2 different APIs for the same hardware.
Again, the black-box nature makes things a little difficult to know for absolute 100% sure, but if the PSP and Pluton were on the same hardware then I really would have expected AMD to have labeled that part of the Ryzen 6000 diagram as one larger rectangle, something like "Pluton Platform Security Processor" or "Platform Security Pluton Processor" (e.g. a "PPSP" or "PSPP").

Also, the PSP actually takes care of part of the boot sequence, which is why disabling it hasn't really been a thing all this time despite disabling Pluton already being a thing (if the BIOS options are exposed, of course).
        Last edited by NM64; 02 October 2022, 01:55 AM.



        • Originally posted by qarium View Post
If you watch sales, it looks like Nvidia does well on 8nm compared to AMD on 7nm...
It also looks like Nvidia is able to earn more money per 1 mm² of chip die compared to AMD...
If you benchmark with raytracing, it also looks like Nvidia is faster...
          What does that have to do with process node?

I know only one group of people who would prefer AMD instead of Nvidia, and that's the Linux people who want open-source drivers.
Something doesn't add up there: Linux market share is 1%, AMD's is 20%. I'm not sure why Nvidia is still that dominant, because their most-sold cards are ones that don't support RT or are much too slow for RT; that rules RT out as its selling point. I guess it's just like with Intel: consumers need time to realize AMD is competitive again.

My argument also still stands, because AMD 7nm cards beat Intel 6nm cards...
That could very well be the shitty drivers.

          Originally posted by qarium View Post

The claim that the AMD 6900 XT is much more efficient than the Nvidia 2090 becomes a joke as soon as you discover that the AMD 6950 XT lost all the efficiency benefits.
It's even much more efficient than the 3090, and yes, all 6*50 models are overclocked and have faster RAM, making them less efficient, but the 6950 XT is still more efficient than the 3090 Ti. https://www.techpowerup.com/review/m...x-trio/37.html

And Nvidia's DLSS has more games with in-engine support compared to FSR 1.0/2.0...
Yes, and those that don't have DLSS can still use RSR, so it's better to compare native and not skew the results with upscalers.



          • Originally posted by NM64 View Post
And that's why I lean towards Pluton not being present across the server/desktop/Dragon Range, uh, range being more likely than Pluton being present, especially since it'd take more work for AMD to implement Pluton than to leave it out for this market segment.
They already implemented it in the 6000 series; what is the problem with just copy-and-pasting that IP block?

            This also kind of goes back to what I said about USB4 not even being integrated into AM5 Ryzen 7000's SOC despite USB4 having much more obvious benefits.
USB4 in a server? It would be more flexible to implement that over PCIe.

I would think that Pluton is at least doing something different from TPM if Pluton straight-up requires 3rd-party certificates to be disabled.
It "requires" it about as much as Win 11 requires TPM: https://www.tomshardware.com/how-to/...pm-requirement

            Maybe, maybe not. Honestly, just like the above regarding differences with TPM, it's pretty difficult to say for sure if it even works like that or not since, by its nature, Pluton is very much a "black box".
That we can definitely agree on.



            • Oct. 2022 UPDATE: Welp, if Anandtech's Ryzen 7950X review is to be believed, then desktop Zen4 / Ryzen 7000 does indeed have Pluton:
              ————————————————————————————————

              ORIGINAL POST

              Originally posted by Anux View Post
They already implemented it in the 6000 series; what is the problem with just copy-and-pasting that IP block?
This starts getting more into the "black box" thing; we really have no idea if it's that simple or not.


              Originally posted by Anux View Post
USB4 in a server? It would be more flexible to implement that over PCIe.
              Which could very well be why, on AM5, USB4 isn't present on the SOC and why it requires an extra chip on the motherboard.

This lends further credence to desktop Ryzen being more of a "trickle-down" from server than server being a "trickle-up" from desktop.


              Originally posted by Anux View Post
              It "requires" it as much as Win 11 requires TPM https://www.tomshardware.com/how-to/...pm-requirement
              On a Ryzen 6000 device, you straight-up can't boot Linux unless allowing 3rd party certificates is enabled in the BIOS which is the corresponding Pluton BIOS option I was referring to (in theory, enabling that option disables Pluton).
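For what it's worth, a quick way to check from a running Linux system whether that 3rd-party CA is actually trusted (a sketch, assuming shim's mokutil is installed):
mokutil --sb-state                                       # reports whether Secure Boot is enabled
mokutil --db | grep -i 'Microsoft Corporation UEFI CA'   # is the 3rd-party UEFI CA enrolled in the db?
If the grep comes back empty while Secure Boot is on, only Microsoft's first-party Windows certificates are trusted, and an ordinary shim-signed distro won't boot.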
              Last edited by NM64; 02 October 2022, 01:55 AM.



              • Originally posted by Dukenukemx View Post
Anux said 3090, not 2090. There's a big difference.
Man, that was a typo, 2090... and there is no 2090, so it is clear that it means 3090...

So what is the point, if the 6900 XT is just more efficient because of low clock speeds and a low TDP power limit?

And the 6950 XT clearly loses all efficiency advantages compared to the 3090... because of higher clocks and a higher power limit.

And UserBenchmark claims the 3090 is +27% faster than the 6950 XT...
                https://gpu.userbenchmark.com/Compar...4081vsm1843533

Also, just remember I have a Vega 64, and I would buy the 6950 XT because I like open-source drivers...

But I would not claim the 6950 XT on 7nm beats an Nvidia 3090 on 8nm... as soon as there is FSR 2.0/DLSS 2.x and raytracing in the game...

So for me it still stands: Nvidia on 8nm > AMD on 7nm > Intel on 6nm...

This means Intel can still lose even if they have tons of 3nm/4nm/5nm TSMC wafers...

Intel is very good at producing bullshit.

                Originally posted by Dukenukemx View Post
                Better than nothing. World of Warcraft surprisingly supports FSR.
Looks like on the list of games that support this natively, Nvidia is the clear winner:
                https://www.pcgamingwiki.com/wiki/Li...lity_upscaling

But again, I would buy the 6950 XT, and I don't care what Nvidia does or doesn't do.

But I know that as soon as you benchmark 4K or 5K and put FSR/DLSS and raytracing in the benchmark, Nvidia wins.

So technically, for me this is true: Nvidia on 8nm > AMD on 7nm > Intel on 6nm.

This means Intel can still do a lot of bullshit even if they have a 3nm/4nm/5nm node.

                Originally posted by Dukenukemx View Post
Yeah, and DLSS is Nvidia-exclusive, plus earlier implementations kinda sucked.
You know what the joke is about the word "earlier" in your sentence? The joke is: earlier, AMD had no solution at all.
FSR 1.0 came years later, and FSR 2.0 was yet another year after that.
                Last edited by qarium; 06 September 2022, 10:41 PM.
                Phantom circuit Sequence Reducer Dyslexia



                • Originally posted by Anux View Post
                  What does that have to do with process node?

If you factor in that comparing the 6950 XT instead of the 6900 XT leaves AMD with no real efficiency advantage...
The 6900 XT only has lower power consumption because of low clocks and a low maximum TDP.
This means this is still true:

Nvidia on 8nm > AMD on 7nm > Intel on 6nm

This means even if Intel has 3nm or 4nm or 5nm wafers from TSMC, they can still lose.

                  Originally posted by Anux View Post
Something doesn't add up there: Linux market share is 1%, AMD's is 20%. I'm not sure why Nvidia is still that dominant, because their most-sold cards are ones that don't support RT or are much too slow for RT; that rules RT out as its selling point. I guess it's just like with Intel: consumers need time to realize AMD is competitive again.

Normal people just watch these websites:

Based on 164,130 user benchmarks for the AMD RX 6950-XT and the Nvidia RTX 3090, we rank them both on effective speed and value for money against the best 713 GPUs.

Product comparison for Zotac Gaming GeForce RTX 3090 Trinity OC, 24GB GDDR6X, HDMI, 3x DP (ZT-A30900J-10P) and Sapphire Nitro+ Radeon RX 6950 XT, 16GB GDDR6, HDMI, 3x DP, lite retail (11317-02-20G)
As soon as someone wants 4K/5K resolution and FSR/DLSS and raytracing, they will buy the 3090.

Even if you say raytracing in gaming is just an impossibility, people will still claim they can use the raytracing units for OptiX and will still go with Nvidia.

Also, ROCm/HIP is just a replacement for CUDA, and OptiX is still faster; there is also 1000 times more CUDA software than ROCm/HIP-ready software.
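To be fair, the porting itself is often mechanical; a sketch (assuming ROCm's hipify-perl and hipcc are installed, and a hypothetical vector_add.cu source file) of converting a CUDA program to HIP:

hipify-perl vector_add.cu > vector_add.hip.cpp   # rewrites cuda*/cu* API calls to their hip* equivalents
hipcc vector_add.hip.cpp -o vector_add           # compile the result for an AMD GPU

The gap is less the language and more the ecosystem of libraries already built on CUDA.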

                  "consumers need time to realize AMD is competitive again"

Outside of Linux/open source, AMD is only competitive in the FPS-per-dollar game. But I did send you a comparison; the difference between the 6950 XT and the 3090 is 228€.

On Linux, open-source drivers are the strongest selling point.

                  Originally posted by Anux View Post
That could very well be the shitty drivers.
The consumer doesn't care about the reason this happens. They see 6nm Intel and it is shit.

                  Originally posted by Anux View Post
It's even much more efficient than the 3090, and yes, all 6*50 models are overclocked and have faster RAM, making them less efficient, but the 6950 XT is still more efficient than the 3090 Ti. https://www.techpowerup.com/review/m...x-trio/37.html
I am sure the consumer doesn't care about a fair comparison without raytracing and without DLSS...

And I am sure that the 3090 beats the 6950 XT in efficiency in 4K/5K + FSR/DLSS + raytracing benchmarks.

As soon as you do this, Nvidia is 27% faster (per the UserBenchmark comparison above).

If the 6900 XT beats the 3090 in efficiency without raytracing and without DLSS, some people like you care; most people don't.

                  Originally posted by Anux View Post

Yes, and those that don't have DLSS can still use RSR, so it's better to compare native and not skew the results with upscalers.
Nvidia with 8nm beats AMD with 7nm if you put in all the features... only people like you, who exclude features like raytracing and DLSS, come to a different conclusion.

                  Phantom circuit Sequence Reducer Dyslexia



                  • Originally posted by NM64 View Post
Heck, there is even AM4 server hardware now, as seen by the following being from ASRock Rack rather than plain old ASRock:
                    https://www.asrockrack.com/general/p...?Model=X470D4U
                    ASRock Rack pretty much has the market cornered on AM4 server boards. That tells me it's a very niche market, and not something AMD thinks about very much.

BTW, I'm still waiting for availability and prices to improve on this board's successor, the X570D4U (preferably the 10 Gig version). I've had my eye on it pretty much since it launched, but I foolishly assumed its price would drop since then; instead it has only gone up. At least I can find them in stock again recently.



                    • Originally posted by NM64 View Post
This lends further credence to desktop Ryzen being more of a "trickle-down" from server than server being a "trickle-up" from desktop.
This is definitely the case if you compare the overall desktop non-APU market size vs. the server CPU market that AMD is going for. I think that has several implications, including AMD's perf/W advantage over Intel, and it even potentially explains why Zen2 and Zen3 CPUs haven't clocked as high (hint: server CPUs limit clock speeds for better scalability).

This makes it very interesting to see how high Zen4 is reportedly clocking. I wonder how much potential IPC they sacrificed in order to achieve that.

