AMD Announces Radeon RX 7900 XTX / RX 7900 XT Graphics Cards - Linux Driver Support Expectations


  • #51
    Originally posted by Hibbelharry View Post
    Originally posted by Danny3 View Post
    For the same reasons a Windows user needs it!

    Except that next to no one uses any of those panels, even on Windows.

    Mostly useless bling bling.



    Things should just work with no adjustments needed; that's convenience.

    Most of that information is there. Someone just needs to write a GUI for it... except no one seems to care at all.
    In a perfect world, things would just magically work and read people's minds. But that's utopian thinking; the real world doesn't work that way. The control panel is about basic control: tweaking, monitoring, enabling and disabling features, because not everything is universal. And this may come as a surprise, but there are people who use their video cards as more than just a display adapter.

    Here is my Radeon control panel on Windows 11:

    The control panel gives a stupidly easy way to configure fan profiles and other things like overclocking, underclocking, and boost behavior. I use it because I hate zero-RPM mode and always disable it, and while the stock firmware fan control maxes out at 50% fan speed, I set it to max out at 100%. The cooler the card runs, the higher it boosts itself. But sadly, some people care more about noise than performance:

    It provides basic stats of the GPU and even the rest of the system:

    It has the ability to record gameplay:

    It has per-game settings and even records basics like average FPS per game and hours of gameplay:

    You DO NOT want Radeon Chill enabled universally as part of "just works": that is subjective and per-game, and NOT EVERY SINGLE PERSON WILL WANT IT ENABLED. The same goes for super resolution, image sharpening, and other features like Enhanced Sync, or enabling/disabling FreeSync per game, because on rare occasions you might have flickering problems or crashes with it enabled.


    If Linux ever wants to grow in the enthusiast market, AMD GPUs will without a doubt need a control panel with, at bare minimum, overclocking support. The overclocking scene is big, and there are a lot of people who don't even game; they just love overclocking and running benchmarks. I never understood why Linux didn't really try to focus on that market. You have people who love tweaking hardware and tweaking their systems to extract every last drop of performance, and with Linux's open nature it is a dream in that regard: you can go as far as messing with your kernel and running your own custom build. Linux SHOULD be the de facto overclocking and benchmarking platform.

    But ranting aside, I would love for AMD to finally have an officially supported control panel on Linux that provides similar functionality.
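
    Incidentally, most of the telemetry shown above is already exposed on Linux by the amdgpu kernel driver through sysfs/hwmon; what's missing is an official GUI. A minimal monitoring sketch in Python, assuming a single AMD GPU at card0 (the hwmon index and available files vary by kernel and card, and writing the fan files requires root):

    ```python
    import glob
    import os

    def read(path):
        with open(path) as f:
            return f.read().strip()

    # amdgpu publishes telemetry via the standard hwmon interface;
    # the hwmon index varies per boot, so glob for it.
    hwmon = glob.glob("/sys/class/drm/card0/device/hwmon/hwmon*")[0]

    temp_c = int(read(os.path.join(hwmon, "temp1_input"))) / 1000  # millidegrees -> C
    fan_rpm = int(read(os.path.join(hwmon, "fan1_input")))         # current fan speed
    duty = int(read(os.path.join(hwmon, "pwm1")))                  # fan duty cycle, 0-255
    busy = int(read("/sys/class/drm/card0/device/gpu_busy_percent"))

    print(f"GPU: {temp_c:.0f} C | fan {fan_rpm} rpm ({duty / 255:.0%} duty) | {busy}% busy")

    # Manual fan control goes through the same files (root required, use with care):
    #   echo 1   > .../pwm1_enable    # 1 = manual, 2 = automatic
    #   echo 255 > .../pwm1           # 100% duty, i.e. no 50% firmware cap
    ```

    Overclocking is exposed the same way through pp_od_clk_voltage once the overdrive bit of amdgpu.ppfeaturemask is set on the kernel command line; third-party tools like CoreCtrl already wrap much of this in a GUI, it just isn't an official AMD one.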
    Last edited by middy; 04 November 2022, 03:09 AM.

    Comment


    • #52
      Originally posted by coder View Post
      • AMD and Nvidia likely didn't design their latest-gen GPUs to sell at the pre-pandemic/pre-mining price points. That could mean a higher price floor, no matter how weak demand gets.
      • TSMC N5 is a more complex & therefore expensive process. N6 is a little better.
      • Shipping costs & parts costs are down from the peak, but still significantly elevated vs. pre-pandemic.
      • TSMC is still backlogged.
      Items 1 and 2 - I'd argue that the chiplet design is meant precisely to drive down costs by saving die space on the expensive node. At least AMD realized that they cannot push the price floor beyond a level most households can no longer afford and still expect great revenue to impress Wall Street. Let's wait and see if Nvidia learns that lesson soon; otherwise they won't sell as much to consumers as they hoped, tanking their stock in the process.

      You might have seen the news about new GPU vendors from China (albeit the US government might force them out of business to protect its own companies, disguised as a "security measure"). While Chinese GPUs might take a while to become interesting for Western markets, I'd jump right to them if they offered a decent experience for a fair price. The current price floor and the neglect of any substantial value in the sub-500 EUR market (and no, the 6600 or 6600 XT is not much better than what I already got with my Vega) show that we still need more competition to hold Nvidia, AMD and Intel in check, and I strongly oppose any form of technology nationalism that hurts our wallets.

      Item 3 - You are not quite up to date on shipping costs: recent figures show rates to the US west coast plunged 20% in a single week in early October alone, to $2,361, compared with $20,000 a year ago [Source]. A decline from over $20,000 to $2,361 in a year is roughly an 88% drop, a dramatic change! If I remember the BOM breakdown compiled by MLID earlier this year for an entry-level AMD card, shipping cost as much as the actual silicon and made up a large chunk of the overall cost. However, I don't see this change for the better reflected in current pricing.

      Item 4 - No, TSMC just cut orders to their suppliers by 50% [Source]. I think that figure speaks for itself as to how much backlog is left. In fact, most of their customers face inventory issues right now and are cutting new orders.
      Last edited by ms178; 04 November 2022, 04:43 AM.

      Comment


      • #53
        The main reason I buy AMD is that their Linux support is great. I would buy an Intel card, but their availability is limited here in AU and their current cards are kinda wonky.

        Comment


        • #54
          Vegas were sold at close to a loss, so comparing with them and/or wanting similar prices for new AMD GPUs is absolutely irrational. Almost as irrational as comparing used server CPUs and no-name garbage motherboards from China with newly released mid-range products and making the point that prices are "bad".

          I think it's fair to say that prices of PC components grew a lot faster than the BOMs for those components, but you can forget about "historical low" (inflation-adjusted) prices for the foreseeable future.

          Comment


          • #55
            As a longtime AMD user (well, not so long; I had 580 and 5700 XT cards), I was heavily disappointed. These cards show overkill performance in rasterization (who needs 200+ FPS at 4K?) but too little progress on RT support. According to AMD they show decent performance in current RT games, but remember that these are games which use RT for limited effects like shadows or reflections, and some of them have a dedicated "AMD RT" render mode with the lowest possible settings. I expected that AMD, being the underdog in RT implementation, would try to reach at least some low-hanging fruit and bump their RT performance at least 2-2.5x gen-over-gen. Instead, according to AMD's own figures, they have up to +70% in rasterization and up to +60% in RT.

            One would have to be an idiot to buy a 7900 XT(X). For $900-1000 the buyer gets 300+ FPS in Counter-Strike: GO and 3060 Ti-like performance in modern games with heavy RT workloads.

            Comment


            • #56
              Well, RT performance does seem to have improved nicely. The only problem is that raster & compute improved just as much or more, thanks to features like dual-issue...

              What RDNA3 would have needed is a radically new RT accelerator, not the incremental improvement we got.

              Comment


              • #57
                Originally posted by coder View Post
                Infinity cache bandwidth is like 3.5x, though. That's what counts, if you can use batching. 96 MB isn't huge, but it's big enough.

                In fact, when I saw that 5.3 TB/sec figure, I immediately thought AMD did it for the express purpose of AI inferencing. But, it seems like the tensor horsepower isn't there to truly take advantage of it.


                I doubt it. That's a pretty small market, compared with gaming.


                RDNA3's ray tracing performance didn't even improve as much as the RTX 4000s' did, meaning they're even further behind than RDNA2 was.
                It doesn't count for that much, and you forget RDNA2 already had a pretty large Infinity Cache.

                Cache matters more for data you iterate over multiple times, and ray tracing is indeed that kind of workload. But cache doesn't count for that much in GPUs: at 4K you commonly have scenes with 10 GB of VRAM usage, and I mean actual usage, and you need to access almost everything in it to produce the image.

                Dual-channel DDR5 reads at around 70 GB/s. A top-of-the-line GPU like the 4090 does 1008 GB/s.

                Cache in a CPU mostly serves to avoid the big latency of reading directly from RAM.

                If a GPU needs to read 8 GB to produce a single frame, on an RTX 4090 that takes about 8 ms, and 8 ms means at best 125 FPS. Of course, the GPU tries to be clever and avoid reading unnecessary data from VRAM (if you load a character and all its textures into VRAM, you often don't need to read the textures on its back during processing). 96 MB of cache helps when you need to iterate over something multiple times, but compared with the 8-10 GB of VRAM usage in a big scene, 96 MB is pitiful.
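
                A quick sketch of that arithmetic, using the published peak bandwidth (effective bandwidth is lower in practice, so this is a best case):

                ```python
                # Back-of-envelope: if a GPU had to stream its whole working set
                # from VRAM every frame, memory bandwidth alone would cap the frame rate.
                working_set_gb = 8      # assumed VRAM read per frame (from the post)
                bandwidth_gbs = 1008    # RTX 4090 peak: 21 Gbps GDDR6X x 384-bit bus / 8

                frame_time_ms = working_set_gb / bandwidth_gbs * 1000
                print(f"{frame_time_ms:.1f} ms/frame -> at most {1000 / frame_time_ms:.0f} FPS")
                # ~7.9 ms -> ~126 FPS before any shading cost; caches and culling
                # exist precisely so the GPU does not re-read everything each frame.
                ```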

                Comment


                • #58
                  Originally posted by middy View Post
                  [...] But ranting aside, I would love for AMD to finally have an officially supported control panel on Linux that provides similar functionality.
                  I strongly agree, and I'd vote for it with both hands. It is even ironic that we have such a tool for Nvidia (GreenWithEnvy) despite Nvidia's closed-source driver, yet we don't have one for AMD.

                  Comment


                  • #59
                    Originally posted by albatorsk View Post
                    I'm a long-time GeForce user and I'm seriously considering getting a new RX 7900, as I especially like AMD's stance on open source, but I am utterly confused about the driver situation. I'm using Ubuntu and I'm used to only having one driver to install (nvidia-driver-###) and then I'm all set. What's messing with my mind is the bit below:



                    To an outsider like me it seems like there are several different drivers, or combinations of drivers. Will I (most likely) need to upgrade to a newer kernel than what's included in Ubuntu 22.10 by default? What is "the RADV Vulkan driver"? How does it relate to "RadeonSI Gallium3D", if at all? How do I figure out what I should use? Can both be installed at the same time? Do they provide the same functionality? Is RADV required for Vulkan? Does that driver also support OpenGL for all non-Vulkan titles? There's also something called AMDGPU and AMDGPU-PRO. How do they fit in with all this?

                    Or am I just overthinking all this, and all I have to do is plop in an AMD graphics card and it'll just work?
                    It's easy: for early support, run the latest kernel (packages are easily available for every distro) plus a Mesa PPA; later on, just use whatever your distro ships by default and forget about it. Much easier than anything Nvidia.
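
                    To make the naming concrete: amdgpu is the kernel driver, RadeonSI is Mesa's OpenGL driver, RADV is Mesa's Vulkan driver, and all three coexist; AMDGPU-PRO is the optional proprietary userspace stack most gamers don't need. A small Python sketch to check what a system actually uses, assuming the GPU is card0 and that glxinfo (mesa-utils) and vulkaninfo (vulkan-tools) are installed:

                    ```python
                    import os
                    import subprocess

                    # The kernel driver bound to the GPU is visible as a sysfs symlink.
                    drv = os.path.realpath("/sys/class/drm/card0/device/driver")
                    print("kernel driver:", os.path.basename(drv))  # expect "amdgpu"

                    # RadeonSI (OpenGL) shows up in the glxinfo renderer string.
                    gl = subprocess.run(["glxinfo", "-B"],
                                        capture_output=True, text=True).stdout
                    print([l.strip() for l in gl.splitlines() if "OpenGL renderer" in l])

                    # RADV (Vulkan) shows up in vulkaninfo's device summary.
                    vk = subprocess.run(["vulkaninfo", "--summary"],
                                        capture_output=True, text=True).stdout
                    print([l.strip() for l in vk.splitlines() if "driverName" in l])
                    ```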
                    ## VGA ##
                    AMD: X1950XTX, HD3870, HD5870
                    Intel: GMA45, HD3000 (Core i5 2500K)

                    Comment


                    • #60
                      Originally posted by Khrundel View Post
                      These cards show overkill performance in rasterization (who needs 200+ FPS at 4K?)
                      The same people that buy high-end cards at ridiculous prices and 240+ Hz monitors, or those who want to play above 4K. Shouldn't you be happy that you can stay in your price bracket and just get a 7700 to play at 4K 60 FPS?

                      I expected that AMD, being the underdog in RT implementation, would try to reach at least some low-hanging fruit and bump their RT performance at least 2-2.5x gen-over-gen.
                      If you're willing to spend $2000 on a graphics card, you can still buy the 4090. AMD also has a much smaller user base and can't sell overpriced GPUs to as many users. Someone has to pay for a roughly 20B difference in transistor count and the bad yields and energy consumption that it brings.

                      I was a fan of ray/path tracing long before Nvidia came along with it. But to be realistic, even the 4090 runs Quake II RTX at 4K only slightly above 60 FPS, and it still looks shitty. We will stay at "some RT effects" for a very long time, and everything I've seen in videos so far isn't really worth the performance hit.

                      Comment
