
AMD Announces Radeon RX 7900 XTX / RX 7900 XT Graphics Cards - Linux Driver Support Expectations


  • #81
    Originally posted by Melcar View Post
One of my main concerns is how they will slot PCIe bandwidth. I expect them to go PCIe 5.0 x8 (and hopefully not smaller) on their mainstream parts, so we shall see how it will hurt budget consumers still on PCIe 4.0 platforms.
I think it would make more sense for them to drop to PCIe 4.0 x16 before going to x8. However, maybe the cost savings are greater by doing 5.0 x8 instead.

For the sake of older platforms, I think the 7800 tier will be x16, in either 4.0 or 5.0. It's only at the 7700 tier where I think they'd go to x8. At that level, there shouldn't be a significant difference between PCIe 4.0 x8 and x16. TechPowerUp has regularly revisited the subject of PCIe scaling, and while I disagree with some of their conclusions, it's true that the impact (on any but the newest, top-tier cards) tends to be limited to a handful of outliers. And even then, we're still talking about single-digit % slowdowns.
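To put rough numbers on that (my own back-of-the-envelope math, not TechPowerUp's), usable link bandwidth scales with transfer rate and lane count:

```python
# Rough one-direction PCIe bandwidth: transfer rate x 128b/130b encoding x lanes.
# Higher-level protocol overhead is ignored, so treat these as upper bounds.
GT_PER_LANE = {"3.0": 8, "4.0": 16, "5.0": 32}  # GT/s per lane
ENCODING = 128 / 130                            # line-encoding efficiency (gen 3 and up)

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Approximate one-direction bandwidth of a PCIe link, in GB/s."""
    return GT_PER_LANE[gen] * ENCODING * lanes / 8  # 8 bits per byte

for gen, lanes in [("4.0", 8), ("4.0", 16), ("5.0", 8), ("5.0", 16)]:
    print(f"PCIe {gen} x{lanes:<2} ~ {link_bandwidth_gbs(gen, lanes):.1f} GB/s")
```

So a 5.0 x8 card matches 4.0 x16 (~31.5 GB/s) on a new platform, but drops to ~16 GB/s when plugged into a 4.0 board, which is exactly the case where those outliers tend to show up.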



    • #82
      Originally posted by coder View Post
Actually, they are using Ryzen 6000 branding for Zen 3+ laptop CPUs, made on TSMC's N6 process node. Michael even reviewed one:


It would be interesting to see a Zen 4 "backport" to AM4, but they have to weigh the potential sales volume * profit margin against the engineering, support, and marketing/channel costs. And I'm skeptical that enough additional DIYers with an old (but not too old!) AM4 board would upgrade to a Zen 4 CPU who wouldn't otherwise upgrade to a Ryzen 5000 or do a full-system upgrade to a Ryzen 7000. Plus, there would be a small number of people buying a new AM4 board + one of these CPUs, although that number is going to shrink as the new boards & DDR5 get cheaper.
I'd say it mostly depends on how much effort is required to couple the Zen 3 IOD with a Zen 4/Zen 4c CCD. A Zen 4c CCD is going to contain 16 cores (2x 8C CCXes) while most likely not being much larger than a regular Zen 4 CCD. For starters, I would imagine AM4 could be a nice "vehicle" to commercialize Zen 4c CCDs with defects, since the regular 7000 series is expected to be Zen 4, and adding stripped-down cores to that line would be weird. Also, AMD could just use a different 8C Zen 4c CCD, which would be very small, maybe ~0.6x the size of a Zen 4 die. That should be economical enough, and fast enough to hold its ground in low/mid-range products against Intel's DDR4-based builds.

Of course, all this is just wishful thinking, but it would be interesting to have such SKUs, or some others along those lines.



      • #83
        Originally posted by middy View Post
In a perfect world, things would just magically work and read people's minds. But that's a perfect world, utopian thinking; in the real world, things don't work that way. The control panel is about basic control: it's about tweaking, it's about monitoring, it's about enabling and disabling stuff, because not everything is universal. And this may come as a surprise, but there are people who use their video cards as more than just a display adapter.

        ...

If Linux ever wants to grow in the enthusiast market, then without a doubt, stuff like AMD GPUs needs a control panel with, at bare minimum, overclocking support. The overclocking scene is big, and there are a lot of people who don't even game; they just love overclocking and running benchmarks. I never understood why Linux didn't really try to focus on that market. You have people who love tweaking hardware and tweaking their system to extract every last drop of performance. With Linux's open nature, it is a dream in that regard, where you can go as far as messing with your kernel and running your own custom kernel. Linux SHOULD be the de facto overclocking and benchmarking platform.

But ranting aside, I would love for AMD to finally have an officially supported control panel on Linux that provides similar functionality.
This lack of hardware control, plus the X11/Wayland woes, is why I'm very happy with VFIO for productivity/gaming. It's not perfect, but it's the best compromise I've found until the various graphical issues stabilize. That said, I've had a lot of issues with Windows WHQL drivers where new games would lag badly (144 fps dropping to a constant 10 fps). Then, when I update to the latest (non-WHQL) driver, games crash or show artifacts. Developers are not testing before releasing, oof. Finally, a quick rant: I cannot for the life of me find out how to disable the Shift+Backspace Radeon Chill hotkey in AMD's GPU drivers on Windows. I disabled all the global hotkeys, but it's still enabled and makes sticky-keys-like sounds... I also cannot remap it to something else, because there's no Shift+Backspace mapping under the hotkeys section.

        https://www.reddit.com/r/AMDHelp/com..._a_sound_from/



        • #84
          Originally posted by WannaBeOCer View Post
The RTX 4090 also gets 50%+ more performance per watt compared to an RX 6950 XT: https://www.guru3d.com/articles-page...review,30.html
          Thanks for the link! According to that*, relative efficiency is:

GPU           Relative efficiency (J/frame)
RX 6900 XT    1.38
RTX 3090      1.40
RX 6800 XT    1.52
RTX 3090 Ti   1.58
RX 6950 XT    1.78

          So, you're right that it's quite a bit more efficient than the RX 6950 XT, but you're comparing it against one of AMD's least-efficient models. That said, it is comparing flagships, and if it's what AMD compared their RX 7900 XTX against, then it makes sense.

To Melcar's point about the efficiency of lower-end models, the RX 6600 is AMD's most efficient, bested only by the RTX 3070, within that generation. However, the XT variants and especially the mid-gen 6x50 refresh models are quite a bit less efficient. Check out the link, for that data. I picked only the most relevant comparisons, for my table.

          * I really wish they separated it out by resolution, ray-tracing, DLSS/FSR, etc. I don't really know how much stock to put in those gross metrics.
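For context, my working assumption about that metric (I'm guessing at Guru3D's exact methodology) is that it's simply energy per frame: average board power divided by average frame rate.

```python
# Energy per frame, assuming the metric is just board power / frame rate.
# The example numbers are hypothetical placeholders, not Guru3D's measurements.
def joules_per_frame(avg_power_watts: float, avg_fps: float) -> float:
    return avg_power_watts / avg_fps

# e.g. a hypothetical card drawing 300 W while averaging 150 fps:
print(f"{joules_per_frame(300, 150):.2f} J/frame")  # 2.00 J/frame
```

That's also part of why the aggregate figure is hard to interpret: changing the resolution or settings moves both the power draw and the frame rate.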



          • #85
When I saw that the Ryzen 7000 series has a small GPU in the I/O die, I had the thought that their higher-performance APUs will have a GPU die next to the CCD.
With the introduction of the RDNA 3 cards, I am more certain about this:

The new AM5 APU will be a GCD plus a CCD plus an I/O die:
- the separation of functionality follows that of their CPUs
- they don't have to create a different CCD to integrate the graphics
- APUs usually don't have 2 CCDs anyway
- Infinity Fabric exists for exactly this purpose

            It seems logical to me.
            What are your thoughts?



            • #86
              Originally posted by luno View Post
- DisplayPort 2.1
              - AV1
              - lower power
              - good price

              didn't know that was possible
              "lower" power 350 watts.
              AV1 - probably 5bucks on the bill of materials
              2.1: fair enough if you're going 8k anytime soon. I'm going to relax my scepticism and say this is the only actually noteworthy thing here.
              "good price" 1000 bucks. Got it. Need to adjust my expectations.

This is a perfectly decent card _only in the context of the $1699 4090_. It's otherwise a total rip-off, especially since the modern stuff (ray tracing, decent inter-frame DLSS-style interpolation) is completely MIA.

Honestly, this thing is just the baseline you'd expect from the new process node, with absolutely nothing worth talking about other than that, and I include in that assessment the off-chip cache, which shows that AMD has NOT been able to Ryzen-ify the GPU, because only the memory cache has gone off the main die. Actual compute stays very much on the main die.

              This is not a noteworthy card.
              Last edited by vegabook; 04 November 2022, 09:17 PM.



              • #87
                Originally posted by vegabook View Post
This is a perfectly decent card _only in the context of the $1699 4090_. It's otherwise a total rip-off, especially since the modern stuff (ray tracing, decent inter-frame DLSS-style interpolation) is completely MIA.
They're promising FSR 3 will have some form of temporal super-sampling. It's due in Q1 of 2023, IIRC.

                Originally posted by vegabook View Post
                Honestly, this thing is just the baseline you'd expect from the new process node, with absolutely nothing worth talking about other than that,
                Don't agree. The dual-issue change is the biggest ALU change they've done since introducing Wave32. That's largely responsible for the significant increase in fp32 throughput.
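To put a number on it, here's the usual peak-FP32 arithmetic, as a sketch using the publicly quoted RX 7900 XTX shader count and an approximate boost clock (the 2x dual-issue factor assumes the compiler can actually pair instructions):

```python
# Peak FP32 = shaders x 2 FLOPs per FMA x dual-issue factor x clock.
shaders    = 6144   # RX 7900 XTX stream processors (96 CUs x 64)
fma_flops  = 2      # a fused multiply-add counts as two FLOPs
dual_issue = 2      # RDNA 3's second FP32 issue slot per SIMD lane
clock_ghz  = 2.5    # approximate boost clock

tflops = shaders * fma_flops * dual_issue * clock_ghz / 1000
print(f"~{tflops:.0f} TFLOPS peak FP32")  # ~61 TFLOPS, vs ~23 for an RX 6950 XT
```

In practice, the second issue slot only helps when independent instructions can be paired, so real-game gains land well short of the headline 2x.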

                Then, you have the new "AI" units, the infinity cache bandwidth that's 3.5x as high, and the new BVH-construction hardware. All new features.

                Originally posted by vegabook View Post
and I include in that assessment the off-chip cache, which shows that AMD has NOT been able to Ryzen-ify the GPU, because only the memory cache has gone off the main die. Actual compute stays very much on the main die.
                That part was a little anti-climactic, but when you consider the infinity cache is 40% of the die area, it probably makes a difference that they can fab it on a cheaper node. There's also a rumor that they can dual-source it from either TSMC or Samsung.

                As for the practical implications of the compute die remaining monolithic, I think that's only hurting their flagship on price. It's the big dies where you stand to gain the most by segmenting them. For the smaller die sizes, it should make very little difference whether they're monolithic or chiplets. Also, GPUs should be a lot less sensitive to defects, so long as you don't require 100% enablement of all shaders. And chiplets are mainly about defect management.
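To illustrate the defect argument with a toy example, here's a simple Poisson yield model; the defect density and die areas below are illustrative guesses, not AMD's or TSMC's numbers:

```python
import math

# Toy Poisson yield model: fraction of defect-free dies = exp(-D0 * area).
D0 = 0.1  # assumed defects per cm^2, purely illustrative

def perfect_die_yield(area_mm2: float) -> float:
    return math.exp(-D0 * area_mm2 / 100.0)  # convert mm^2 to cm^2

for name, area_mm2 in [("500 mm^2 monolithic die", 500),
                       ("150 mm^2 compute chiplet", 150)]:
    print(f"{name}: {perfect_die_yield(area_mm2):.0%} defect-free")
```

The gap closes quickly as dies shrink, and since GPUs routinely ship with a few CUs fused off anyway, perfect-die yield matters less than those numbers suggest, which is roughly the point.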

                I think the only real disappointment is the small degree of ray tracing improvement, considering how far behind they were. I was expecting/hoping this gen to at least have ray tracing performance on par with the RTX 3000-series. I guess I'm also somewhat underwhelmed by the relatively small improvement provided by their AI cores.
                Last edited by coder; 04 November 2022, 10:02 PM.



                • #88
                  Originally posted by Mahboi View Post
This post is a perfect description of how ridiculously out of touch the Linux community is with its own software.

Linux is a million times less convenient than Windows, and it is full of gotchas, config files, tweakable things, and configuration requirements.
I dropped Windows to come to Linux, and I sincerely miss the simplicity of use and not having to read documentation every few days. My PC used to "just work", provided I accepted that it was owned by MS. Now I get to spend hours every week figuring out why X doesn't work out of the box and learning tons of stuff I don't care about and never wanted to learn. I get to see glitches and bugs in things that have been staples on Windows forever. I have to mind tons of little things and learn a billion little cogs in the machine.

Linux isn't getting popular because it's full of options that require hours of learning.
Windows is still dominating because, despite having almost no options, it requires almost no learning.
The day the Linux community realises that its problem is that it can't look at how inconvenient using Linux is, Linux will actually take a step forward.

Linux offers all the choice in the universe with no oversight: it's a mess of programs and features that don't mesh together, each requiring its own little world of config. You can find posts online saying "Wayland is the future, but it's not for right now" that are 15 years old.
Some of the "advanced" features that just work decently enough anywhere else have a 50% chance of failing for some reason on Linux. VRR, multi-monitor, an external HDD that you unplug and forget to plug back in when you restart the PC... all of this is seamless on Windows and a pain on Linux: crashes all over, glitches of all types, the thing won't even boot because /etc/fstab expects an HDD that I unplugged. So now I have to rely on the DE to mount my external HDD, because the kernel is thought up in a 1970s server fashion where a missing drive is somehow a critical failure that won't let the thing come out of hibernation without some endless timeout.
I am still baffled that programs like "find", which are the most obvious of the obvious, require a syntax like "find . -name name". It's so stuck in the 80s, with zero self-criticism about its impracticality, that it's just shocking. So, because Linux is the Land of Freedom, you have cool things like "fd" that replace it. Great. But now we get a useless POS program in the core that will never be removed, and a better one that people have to actually go and find.

Linux is FULL of extremely deprecated programs and principles, design ideas that have nothing to do with modern desktop usage. It's full of stuff that requires you to understand a certain config's syntax, language, and expectations: your GRUBs, your Xs, your DEs, your weird awk, sed, find, grep, and all their million flags.
If Linux had any sense of "convenience with no adjustments", there are a thousand things that should have been done in the past 20 years. Unfortunately, none of them will happen, because they'd require an actual authority. They'd require forcibly removing X for Wayland, kicking out old 80s programs for modern replacements, standardizing ALL config files on something modern like TOML, YAML, or JSON, and having clear-cut definitions of responsibilities for each program. Dumb but pure example: my notifications on Linux sometimes don't disappear. They don't disappear even after hours. Why? Because the spec says it's the sender's job to give a timeout. If it doesn't, the notification stays until I manually click it. Is there a way to automate this away? Sure. You just need to read however many online pages and explanations, and you'll eventually script something together. Meanwhile, on Windows the thing stays 15 seconds and goes away, and MS doesn't care about the sender or the spec.

The absolute problem with Linux is that its community refuses to see how inadequate the OS is for the common user's case. The common user wants simplicity and no surprises: a nice config program that's bloated and pretty-looking and offers buttons with labels that do things instantly; a 'find name' that finds the thing called name; a config system that is quick to read, universal, and doesn't demand schooling to use; a freaking "I shut down my external DAC AFTER I shut down the PC, and when I start it again, it doesn't hang on boot like a moron because it can't find the DAC anymore, it just lets me restart the DAC in 10 seconds".

Linux is anything but convenient. And as someone who actually made the big jump and refused to have even a Windows VM on my machine, I find the inconvenience of Linux to be a recurring problem that pops back up pretty much every 2-3 weeks. Uninstall a game? Good luck getting it to start when you reinstall it. Your mouse pointer somehow won't let your character change direction in-game? Just Alt-Tab in and out until it fixes itself. Hardware encoding won't work after you spent literal hours trying to install VA-API and all the extra stuff? Tough luck; read more and maybe you'll figure it out, or maybe it just won't work.

I usually wouldn't be so pissed off about Linuxians being so self-obsessed with the Righteousness of Their Mighty OS, but talking about "convenience with no adjustments" is just pushing it. Linux is the most inconvenient of the big OSes. I must regrettably say that I jumped to Linux to own my PC instead of MS owning it, and it is an incredibly more complicated, annoying, and time-consuming experience than Windows. It's getting to the point where every bug I find, every inconvenience that demands more reading about specifics due to poor design decisions made in the 1990s, pushes me closer to just burning Win10 back onto my SSD and ignoring Linux as a daily OS.

When your OS is so much more inconvenient to use than the competition, and so incredibly full of specifics that demand insane amounts of time sunk into them instead of "just working without adjustments", that people seriously consider going back to being owned by Microsoft rather than dealing with your shit, then at the very least don't have the arrogance to give lectures about convenience and how things should work without adjustments!

Oh, and also: if people want useless bling-bling, it's their choice. Especially when the bling-bling is easy, practical, and nice-looking, and doesn't demand three hours of reading man pages, Stack Overflow, or the Arch Linux forums.
Mahboi, you have some good observations in your post, but either your system is something special or your choice of distro is messing with you.
I was lucky to grow up with a C64, because when I got a computer it was DOS 6.0 and Windows 3.1, and it was a mess: so much tweaking of autoexec.bat and config.sys just to get those extra kB of memory to play a certain game. Win95 I detested because it used so much memory, but eventually I had to migrate, though not before Win98 was out. It was a challenge; most of the stuff worked, but every new fancy thing (networking, graphics, you name it) was a hassle. But boy, did I learn how to manipulate my OS... Moving forward through Win2000, XP, Vista, and 7, I felt that I lost control of the inner workings of my OS, and when a problem occurred, I had to "google it".

I decided that maybe it was time to check out Linux (since we are using it as our target platform at work). And after going through quite a steep learning curve, I was back in control. It is amazing to see that every bit of your hardware is exposed to you as a user, and all you need is a terminal. I understand most of what is going on with my OS, just like back in the pre-Win95 days. I have tried so many distros: Ubuntu first, of course, which was a mess that gave me an error message the moment I moved my mouse, and after trying so, so many different distros, in the end I landed on "the-distro-who-must-not-be-named" and Yocto (because of work), because it works with my system, it is bleeding edge, and I have control. I've installed it on many different machine configurations and have had none of the problems you describe. So many times at work, with Windows machines, I've thought: two seconds in the terminal and this would be solved, but alas, there is no easy way to fix it on that platform. There are some hoops to jump through, but that is the fun of it. I still use Windows 11, but I feel like a guest in the OS, not the master, and if something breaks, shit... www.google.com.

This is not an appeal for "the-distro-who-must-not-be-named", but check out some distros other than the one you are using, or figure out what is special about your system. Because from what I can read in your post, I'd conclude that your system requires special attention.

                  Best,
                  torjv

Don't give up just yet, because we are in a time when Linux is turning into something amazing!



                  • #89
                    Originally posted by torjv View Post
when I got a computer it was DOS 6.0 and Windows 3.1, and it was a mess: so much tweaking of autoexec.bat and config.sys just to get those extra kB of memory to play a certain game.
                    Same.

                    Originally posted by torjv View Post
I decided that maybe it was time to check out Linux (since we are using it as our target platform at work). And after going through quite a steep learning curve, I was back in control. It is amazing to see that every bit of your hardware is exposed to you as a user, and all you need is a terminal. I understand most of what is going on with my OS, just like back in the pre-Win95 days.
Pretty much the same, for me. When I started really digging into Linux, I loved the sense I got that it wasn't trying to hide anything from me. Most things I wanted to know about my system could be seen in /proc/ or with some fairly simple commands.

                    I loved the way I could compose interesting functions by chaining together commands. For instance, in the days before rsync, you could copy a directory tree over the network by piping tar into rsh or ssh, and having it execute the complementary tar, on the remote machine.

                    I also loved how filesystems were a separate abstraction from devices, and you could trivially see either, if you wanted. Even text-based config files were something I quickly came to appreciate, since Linux didn't dictate what tools you used to edit or manage them (in a time before git, I started using RCS to track my config file changes).

                    It changed the way I thought about software development, as well. I started my career around the time MS launched Visual Studio and I even used it for a few years. Although I missed some of its conveniences, I began to like the sense that I wasn't "trapped" inside an IDE. If I'm working in a codebase I know, then I find the IDE features have almost zero value for me.
                    Last edited by coder; 04 November 2022, 11:28 PM.



                    • #90
                      Originally posted by WannaBeOCer View Post
I'm not sure why anyone wouldn't want a GPU with better ray tracing performance. It takes less time for developers to implement ray tracing, which is the reason we see it often in indie titles. The reason we don't see it often in triple-A titles is that they also target older consoles, making the addition of ray tracing an extra task that needs to be completed.
Current implementations of real-time ray tracing in gaming are unimpressive, to the point that a well-done rasterized version can match them with hardly any visual difference and a smaller performance hit. A high-quality game on an old console like the Nintendo Switch, compared to, say, its PlayStation 5 version, shows what a skilled game developer can do with the available tools. Consoles and mid-to-low-end systems are the larger market, where game developers make their money.

An RTX 4090 is on average 64% faster than an RX 6950 XT at 4K in rasterization, and 100% faster in 4K ray-traced titles, while using the same power as an RX 6950 XT: https://tpucdn.com/review/asus-gefor...wer-gaming.png
The site basically compares an older-generation card (MSRP $1,100) against a newer one (MSRP $1,600) for just 1.7x the performance, while seemingly overlooking the fact that the $1,000 MSRP Radeon 7900 XTX is claimed to come within about 20% of the latter. AMD basically outsmarted Nvidia in terms of performance per price, as the latter has yet to release a similarly priced card.
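Putting the figures cited in these posts into perf-per-dollar form (the MSRPs and the "64% faster" / "within ~20%" numbers are taken from the thread and AMD's claims, not independently verified):

```python
# 4K rasterization performance per dollar, normalized to an RX 6950 XT = 1.0.
# The performance figures are the ones cited in this thread, not my own measurements.
cards = {
    "RX 6950 XT":  (1100, 1.00),
    "RTX 4090":    (1600, 1.64),         # "64% faster", per the TPU average quoted above
    "RX 7900 XTX": (1000, 1.64 * 0.80),  # assuming it lands ~20% behind the 4090
}

for name, (msrp_usd, rel_perf) in cards.items():
    print(f"{name:12s} {rel_perf / msrp_usd * 1000:.2f} relative perf per $1000")
```

On those assumptions, the XTX comes out well ahead on rasterization value, though the picture changes if ray tracing performance is what you're paying for.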

