AMD Announces Radeon RX 7900 XTX / RX 7900 XT Graphics Cards - Linux Driver Support Expectations

  • #71
    Originally posted by Khrundel View Post
    As a longtime AMD user (well, not so long, I had 580 and 5700 XT cards), I was heavily disappointed. These cards show overkill performance in rasterization (who needs 200+ FPS at 4K?) but little progress on RT support. According to AMD they show decent performance in current RT games, but remember that these are games which use RT for some limited effect like shadows or reflections, and some of them have a dedicated "AMD RT" render mode with the lowest possible settings. I expected that AMD, being the underdog in RT implementation, would try to reach at least some low-hanging fruit and bump their RT performance at least 2-2.5x gen over gen. Instead, according to AMD's own figures, they get up to +70% in rasterization and up to +60% in RT.
    Worry less about the implementation of real-time ray tracing in games; it will take several generations of video cards to reach wide adoption. Visually, skilled game developers can get close with rasterization alone, without ray tracing. Let's wait and see before drawing a conclusion.

    One has to be an idiot to buy a 7900 XT*. For $900-1000 the buyer will get 300+ FPS in Counter-Strike: GO and 3060 Ti-like performance in modern games with a heavy RT workload.
    The same applies to someone spending more than $2000 (the price of an entire mid-to-high-end PC system) to get an RTX 4090 for ~30% more performance than the 6950 XT at $1000 more.



    • #72
      Originally posted by finalzone View Post
      Worry less about the implementation of real-time ray tracing in games; it will take several generations of video cards to reach wide adoption. Visually, skilled game developers can get close with rasterization alone, without ray tracing. Let's wait and see before drawing a conclusion.


      The same applies to someone spending more than $2000 (the price of an entire mid-to-high-end PC system) to get an RTX 4090 for ~30% more performance than the 6950 XT at $1000 more.
      I’m not sure why anyone wouldn’t want a GPU with better ray tracing performance. It takes less time for developers to implement ray tracing, which is why we see it so often in indie titles. The reason we don’t see it as often in triple-A titles is that they also target older consoles, which makes ray tracing an additional task they need to complete.

      An RTX 4090 is on average 64% faster than an RX 6950 XT at 4K in rasterization and 100% faster in 4K ray-traced titles, while using the same power as an RX 6950 XT: https://tpucdn.com/review/asus-gefor...wer-gaming.png



      • #73
        Originally posted by Mahboi View Post
        This post is a perfect description of how ridiculously out of touch the Linux community is with its own software.

        Linux is a million times less convenient than Windows, and it is full of gotchas, config files, tweakable things, and configuration requirements.
        I dropped Windows to come to Linux, and I sincerely miss the simplicity of use and not having to read documentation every few days. My PC used to "just work", provided I accepted that it was owned by MS. Now I get to spend hours every week figuring out why X doesn't work out of the box and learning tons of stuff I don't care about and never wanted to learn. I get to see glitches and bugs in things that have been staples of Windows forever. I have to mind tons of little things and learn a billion little cogs in the machine.

        Linux isn't getting popular because it's full of options that require hours of learning.
        Windows is still dominating because despite having almost no options, it requires almost no learning.
        The day the Linux community realises that its problem is its refusal to look at how inconvenient using Linux is, Linux will actually take a step forward.

        Linux offers all the choice in the universe with no oversight: it's a mess of programs and features that do not mesh together and each require their own little world of config. You can find posts online saying "Wayland is the future, but it's not for right now" that are 15 years old.
        Some of the "advanced" features that just work decently enough anywhere else have a 50% chance of failing for some reason on Linux. VRR, multi-monitor, an external HDD that you unplug and forget to plug back in before you restart the PC... all of this is seamless on Windows and a pain on Linux: crashes all over, glitches of all types, the machine won't even boot because /etc/fstab expects an HDD that I unplugged. So now I have to rely on the DE to mount my external HDD, because the kernel is designed in a 1970s server fashion where a missing drive is somehow a critical failure that won't let the thing come out of hibernation without some endless timeout.
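        For reference, the stock answer to the fstab complaint is a pair of mount options rather than anything DE-level; a minimal sketch (the UUID and mount point below are made up):
        ```
        # /etc/fstab entry for a removable HDD (UUID and mount point are placeholders)
        # nofail: don't fail the boot if the drive is absent
        # x-systemd.device-timeout=5s: don't wait the default ~90s for it to show up
        UUID=0000-DEAD-BEEF  /mnt/external  ext4  defaults,nofail,x-systemd.device-timeout=5s  0  2
        ```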
        I am still baffled that programs like "find", which are the most obvious of the obvious, require a syntax like "find . -name name". It's so stuck in the 80s, with zero self-criticism about its impracticality, that it's just shocking. So, because Linux is the Land of Freedom, you have cool things like "fd" that replace it. Great. But now we keep a useless POS program in the core that will never be removed, and we have a better one that people have to actually go and find.
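        For anyone who hasn't seen the two side by side, this is the contrast being made (the file name is just an example; fd is a third-party tool, not part of the base install):
        ```
        # POSIX find: path first, then a predicate flag for the name
        find . -name 'config.toml'
        # fd, the replacement mentioned above: pattern first, recursive by default
        fd config.toml
        ```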

        Linux is FULL of extremely deprecated programs and principles, design ideas that have nothing to do with modern desktop usage. It's full of stuff that requires you to understand a particular config's syntax, language and expectations. Your GRUBs, your Xs, your DEs, your weird awk, sed, find, grep and all their million flags.
        If Linux had any sense of "convenience with no adjustments", there are a thousand things that should've been done in the past 20 years. Unfortunately, none of them will happen because they'd require an actual authority. They'd require forcibly replacing X with Wayland, kicking out old 80s programs for modern replacements, standardizing ALL config files on something modern like TOML, YAML or JSON, and having clear-cut definitions of responsibilities for each program. Dumb but pure example: my notifications on Linux sometimes don't disappear. They don't go away even after hours. Why? Because the spec says it's the sender's job to give a timeout. If it doesn't, the notification stays until I manually click it. Is there a way to automate this away? Sure. You just need to read however many online pages and explanations and you'll eventually script something. Meanwhile on Windows the thing stays 15 seconds and goes away, and MS doesn't care about the sender or the spec.
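        The spec detail being complained about here is real: the Desktop Notifications specification leaves expire_timeout to the sender, and a value of 0 means "never expire". A quick sketch with notify-send (whether the hint is honoured depends on the notification daemon):
        ```
        # -t / --expire-time is in milliseconds; 0 would mean the notification never expires
        notify-send -t 5000 "Backup finished" "Snapshot completed without errors"
        ```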

        The absolute problem with Linux is that they refuse to see the inadequacy of their OS for the common user's case. The common user wants simplicity and no surprises. A nice config program that's bloated and pretty-looking and offers buttons with labels that do things instantly. A "find name" that finds the thing called name. A config system that is quick to read, universal, and doesn't demand schooling to use. A freaking "I shut down my external DAC AFTER I shut down the PC, and when I start it again, it doesn't hang on boot like a moron because it can't find the DAC anymore, and just lets me restart the DAC in 10 seconds".

        Linux is anything but convenient. And as someone who actually made the big jump and refused to have even a Windows VM on my machine, I find the inconvenience of Linux to be a recurring problem that pops back up pretty much every 2-3 weeks. Uninstall a game? Good luck getting it to start when you reinstall it. Your mouse pointer somehow won't let your character change direction in a game? Just Alt-Tab in and out till it fixes itself. Hardware encoding won't work after you spent literal hours trying to install all the VA-API and extra stuff? Tough luck, read more and maybe you'll figure it out, or maybe it will never work.

        I usually wouldn't be so pissed off about Linuxians being so self-obsessed with the Righteousness of Their Mighty OS, but talking about "convenience with no adjustments" is just pushing it. Linux is the most inconvenient big OS. I must regrettably say that I jumped to Linux to own my PC instead of MS owning it, and it is a far more complicated, annoying and time-consuming experience than Windows. It's getting to the point where every bug I find, every inconvenience that demands more reading about specifics due to poor design decisions made in the 1990s, is pushing me back towards just burning Win10 back onto my SSD and ignoring Linux as a daily OS.

        When your OS is so much more inconvenient to use than the competition, and so incredibly full of specifics that demand insane amounts of time sunk into them instead of "just working without adjustments", that people seriously consider going back to being owned by Microsoft rather than dealing with your shit, then at the very least don't have the arrogance to give lectures about convenience and how things should work without adjustments!

        Oh, and also: if people want useless bling-bling, it's their choice. Especially when the bling-bling is easy, practical and nice-looking, and doesn't demand 3 hours of reading man pages, Stack Overflow, or Arch Linux forums.
        Posts like this make my day. It is so funny...



        • #74
          Originally posted by drakonas777 View Post
          During the presentation Lisa said "AM4 is going to continue for a long, long time". I find it interesting to mention this just after emphasizing the longevity of AM5. It feels like either they are planning to maintain AM4 and ZEN3 as a budget platform far longer than I was expecting, or some new SKUs are still coming for AM4. I'd love to see some ZEN3 IO + ZEN4c CCD stuff, for example, with sane power limits and an affordable price.
          Your post made me think of a "Moore's Law is Dead" post, in which several of his sources claimed an AM4 CPU based on Zen 4 chiplets was developed but never actually released. I could see AMD deciding to go ahead and release it, since the economic situation has changed quite a bit in the last year (end of COVID, less CPU demand, inflation leading to fewer upgrades, etc.). Note also that AMD jumped from 5000 to 7000, leaving 6000 as a convenient naming scheme. It would explain all the fire sales on 5000 CPUs, and if the excess stock is eliminated by the end of the first quarter next year, then it could be "introduced". I am sure they would not order many new 7nm Zen 3 chiplets. Zen 4 with DDR5 would still be faster and the AM5 socket upgradeable, and the "budget" option would still be a worthwhile final upgrade for AM4, especially if they only produced the upper-tier models (with and without V-Cache). The 7nm Zen 3 could still be used for the real budget options, and the 5nm chiplets could go into either Socket AM4 or Socket AM5 depending on demand.

          IMHO, it would make a lot of sense.



          • #75
            Originally posted by piotrj3 View Post
            It doesn't count that much,
            It does, though. If you go through the Hot Chips presentations of the purpose-built AI accelerators, they all feature lots of super-fast on-die SRAM. As fast as 5.3 TB/s seems, they get even faster speeds, with one company devoting over half its die space for 900 MB @ ~10 TB/sec, IIRC.

            Originally posted by piotrj3 View Post
            Cache more counts on stuff you reiterate multiple times,
            Like deep learning, with batching.

            Originally posted by piotrj3 View Post
            But cache doesn't count in GPUs that much,
            Sure it does. Frame buffer, Z-buffer, etc. For like 2 generations of XBox consoles, Microsoft went out of their way to incorporate in-package memory for that stuff, because you hit it so hard.

            Originally posted by piotrj3 View Post
            GPU if it needs to read 8GBs to make a single frame, for rtx 4090 it will take 8ms.
            You don't though. Textures have MIP-maps, so that you only need to access them at the resolution which is visible. Texture compression further reduces bandwidth requirements. With geometry, you can do on-the-fly tessellation, but I'd bet that game engines still compute & store multiple LoD models, to further reduce bandwidth.

            It's worth noting that MIP-maps & multiple LoD models reduce bandwidth requirements at the expense of increasing memory footprint. Not a huge footprint increase (only 33% overhead for MIP maps), but it's an example of where you're not having to read the majority of GDDR memory contents to compute each frame.
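            As a sanity check on that 33% figure: each MIP level is a quarter the size of the one above it, so the total overhead is a geometric series:
            ```
            \sum_{k=1}^{\infty} \left(\tfrac{1}{4}\right)^{k} = \frac{1/4}{1 - 1/4} = \frac{1}{3} \approx 33\%
            ```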

            There are also engines which do their own software-driven caching. I think AoTS pioneered a technique of caching projected texture maps, for instance.

            Originally posted by piotrj3 View Post
            in a grand big scene, 96MB of cache is pitiful compared to the 8-10GBs of VRAM usage you have in game.
            The mistake I think you're making is to assume memory accesses are fairly uniform in memory space. I already gave some examples of hot spots, but others arise from acceleration structures used in ray tracing, physics, and perhaps even visibility & scene traversal.

            The proof of the pudding is in the eating. It seems to me that Infinity Cache is what enabled RDNA2 to catch Nvidia on rasterization performance. And they did it with lower GDDR bandwidth. So, that proves its worth. That they doubled-down on it and sped it up by 3.5x tells us how much AMD truly believes in it!

            You can also look at how Nvidia is upping the cache in its GPUs. According to this, the RTX 4090 has 72 MB of L2 cache, up from a mere 6 MB in the RTX 3090!

            Last edited by coder; 04 November 2022, 12:44 PM.



            • #76
              Originally posted by trueblue View Post
              Note also that AMD jumped from 5000 to 7000, leaving 6000 as a convenient naming scheme.
              Actually, they are using Ryzen 6000 branding for Zen 3+ laptop CPUs, made on TSMC's N6 process node. Michael even reviewed one:


              It would be interesting to see a Zen 4 "backport" to AM4, but they have to weigh the potential sales volume * profit margin against the engineering, support, and marketing/channel costs. And I'm skeptical that there are enough additional DIY-ers with an old (but not too old!) AM4 board who would upgrade to a Zen 4 CPU but wouldn't otherwise upgrade to a Ryzen 5000 or do a full-system upgrade to Ryzen 7000. Plus, there would be a small number of people buying a new AM4 board + one of these CPUs, although that number is going to shrink as the new boards & DDR5 get cheaper.
              Last edited by coder; 04 November 2022, 12:57 PM.



              • #77
                Originally posted by bachchain View Post
                So where's the not-$900 versions

                Sometime early next year, I assume. The flagships are always out first so they can battle it out in the performance/gaming benchmark arena and stake out their market share. They probably won't be that cheap, I expect. The mainstream price bracket has shifted, so I wouldn't be surprised at $400 "1080p ready" cards. As posted already, your best bet right now is with higher end 6xxx cards (6700, 6700XT, 6750XT, 6800XT) as those will keep dropping in price and performance should be close to what the "mainstream" 7xxx cards will offer (unless AMD does their shenanigans again). Of course, there are always the next-gen features that you will be missing out on (we have yet to see how much AMD will cut down the mainstream offerings in this regard).
                Last edited by Melcar; 04 November 2022, 01:04 PM.



                • #78
                  Originally posted by Melcar View Post
                  As posted already, your best bet right now is with higher end 6xxx cards (6700, 6700XT, 6750XT, 6800XT) as those will keep dropping in price and performance should be close to what the "mainstream" 7xxx cards will offer
                  The big footnote is that the new generation is allegedly 54% more efficient*. So, while you can probably get more perf/$ on a discounted old-gen card (near when the overlapping cards launch), users looking for best energy efficiency will probably prefer the 7000-series.

                  * I'm not clear on exactly how AMD measured this, or how it will scale down to their lower-end models.
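                  Taking the 54% figure at face value, it cuts both ways: roughly 54% more performance at the same board power, or the same performance at about a third less power:
                  ```
                  \frac{1}{1.54} \approx 0.65 \;\Rightarrow\; \text{same performance at roughly } 35\% \text{ lower power}
                  ```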



                  • #79
                    Originally posted by coder View Post
                    The big footnote is that the new generation is allegedly 54% more efficient*. So, while you can probably get more perf/$ on a discounted old-gen card (near when the overlapping cards launch), users looking for best energy efficiency will probably prefer the 7000-series.

                    * I'm not clear on exactly how AMD measured this, or how it will scale down to their lower-end models.
                    Yes, we still don't know how AMD is cutting down the lower-end parts. They barely tried with the 6xxx series, and anything below $400 was a joke. One of my main concerns is how they will slot PCIe bandwidth. I expect them to go PCIe 5.0 x8 (and hopefully not smaller) on their mainstream parts, so we shall see how much it will hurt budget consumers still on PCIe 4.0 platforms.
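                    For rough numbers on why the lane count matters (assuming the nominal ~1.97 GB/s per lane for PCIe 4.0 and ~3.94 GB/s per lane for PCIe 5.0), an x8 card loses nothing on a 5.0 board but gets half the bandwidth on a 4.0 board:
                    ```
                    \text{PCIe 5.0 x8: } 8 \times 3.94 \approx 31.5 \text{ GB/s} \qquad \text{PCIe 4.0 x16: } 16 \times 1.97 \approx 31.5 \text{ GB/s} \qquad \text{PCIe 4.0 x8: } 8 \times 1.97 \approx 15.8 \text{ GB/s}
                    ```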
                    Last edited by Melcar; 04 November 2022, 01:18 PM.



                    • #80
                      Originally posted by coder View Post
                      The big footnote is that the new generation is allegedly 54% more efficient*. So, while you can probably get more perf/$ on a discounted old-gen card (near when the overlapping cards launch), users looking for best energy efficiency will probably prefer the 7000-series.

                      * I'm not clear on exactly how AMD measured this, or how it will scale down to their lower-end models.
                      The RTX 4090 also gets 50%+ more performance per watt compared to an RX 6950 XT: https://www.guru3d.com/articles-page...review,30.html

                      The only special feature the 7000-series has over Nvidia is DP 2.1, which is useless unless people use FSR to get more than 240 FPS at 4K, since DSC is already capable of 4K/240Hz and 8K/120Hz. Looking at actual power consumption when not overclocked, the RTX 4090 competes normally: https://tpucdn.com/review/nvidia-gef...wer-gaming.png
                      Last edited by WannaBeOCer; 04 November 2022, 01:38 PM.

