Radeon Vega 12 Support Called For Pulling Into Linux 4.17 Kernel


  • #21
    Originally posted by TemplarGR View Post
    TDP is not power consumption. It is ridiculous that across the whole internet we keep reading the same misconception, and even "tech sites" and youtubers make this mistake.

    TDP is a specification for the cooling solution: how much heat it has to be able to dissipate for the chip to be usable. Most chips rarely, if ever, reach the TDP value in energy consumption - especially AMD ones, since AMD in general tends to use higher TDP values for its products than, say, Intel and Nvidia.
    First, you clearly haven't ever seen the power dissipation measurements for Vega under load, or you wouldn't be striking this tone. Second, theriddick's only mistake was using the term TDP instead of "power consumption".

    The power efficiency of Vega is a problem. It's significantly worse than that of Nvidia's Pascal products. It's not simply due to AMD being more conservative - it's due to how their GPUs actually behave under load. This is an area where AMD has lagged significantly behind Nvidia since Maxwell. Do your homework.



    • #22
      Originally posted by oooverclocker View Post
      The main reason you often don't get Titan Xp performance with Vega is that many parts of it remain "unused" in gaming situations but still consume energy.
      First, I'm sure you don't have the data to substantiate this claim. Second, they use clock gating to mitigate such issues. Third, we can only judge them on the end result, regardless of the underlying reason. There's no shortage of usage data to support complaints that Vega has much worse power-efficiency in games than comparable Nvidia products.

      Originally posted by oooverclocker View Post
      In my experience, you can usually get very big GPUs like an R9 390X to consume just half the energy with optimal settings when you relinquish about 10-20% of the performance.
      This is why it could make sense for them to replace some Polaris GPUs with larger Vegas running lower clocks. Perhaps there are some efficiency gains to be had that will deliver better performance at comparable price & power envelopes to current Pascal products.

      What I know is that Nvidia will release new gaming GPUs sometime this year. If AMD does nothing to its lineup, Polaris will be left in the dust.



      • #23
        I'm waiting for Vega 12; I hope either the old GPUs will get cheaper or Vega 12 will significantly reduce its power draw. I want to buy a 4K monitor and upgrade my GPU to a Vega 56 or above. The current prices don't justify this purchase, even if I have the money. I currently have a GTX 1060 and want to 'upgrade' to AMD because of their open source drivers - one step further toward open source. When I bought my GTX 1060 6GB, it cost 254,55€, so going to something like Vega at these prices...



        • #24
          Originally posted by bridgman View Post

          The product has not been announced yet, so there should not be any good info out there about what it will be.

          Sometimes stuff leaks out, but I haven't seen any for Vega12 yet.
          Thanks



          • #25
            Originally posted by oooverclocker View Post
            Yes, gaming on Windows does, for example, lead to higher measured FPS. But compared to how smoothly games run on Linux with our wonderful Mesa drivers at 25 to 30 FPS, on Windows they just run like crap, even at 40 FPS. It matters much more how much time there is between each frame than how many frames you get in a given time span - one second, in this case.
            There are some good points in your post, but I'm not sure what you mean in the quoted paragraph above. The FPS value implies a fixed average time between frames; e.g. a framerate of 40 FPS implies that on average there was one frame every 25ms. Are you saying that the frames come at more regular intervals on AMD with Mesa than on Windows with Nvidia or AMD? Or are you saying that the minimum interval on AMD + Mesa is higher, so you get fewer micro-stutters? If so, do you have any data to support this claim, or is it just your personal experience - a difference in feel?
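
            Just to make that distinction concrete, here is a small Python sketch (the frame times are invented for illustration, not benchmark data): two runs with the same 40 FPS average, one perfectly regular and one alternating between fast and slow frames, end up with very different worst-case frame times.

              # Hypothetical frame times in milliseconds; same average, different pacing.
              steady  = [25.0] * 40              # 40 frames at 25 ms each -> 40 FPS, no jitter
              jittery = [15.0, 35.0] * 20        # also averages 25 ms, but alternates 15/35 ms

              def stats(frame_times_ms):
                  total_s = sum(frame_times_ms) / 1000.0
                  avg_fps = len(frame_times_ms) / total_s
                  worst_ms = max(frame_times_ms)
                  return avg_fps, worst_ms, 1000.0 / worst_ms   # FPS implied by the slowest frame

              for name, run in (("steady", steady), ("jittery", jittery)):
                  avg_fps, worst_ms, min_fps = stats(run)
                  print(f"{name:8s} avg {avg_fps:.1f} FPS, worst frame {worst_ms:.1f} ms (~{min_fps:.1f} FPS)")

            Both runs report 40 FPS on average, but the jittery one never does better than about 29 FPS at its worst frame - exactly the kind of difference an average hides.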

            I've been paying attention to the minimum FPS of AMD+Mesa in Michael's benchmarks, and although the AMD+Mesa combo is competitive with AMD+closed_driver & Nvidia+closed_driver, I haven't noticed it being a clear winner. It's better in some games and worse in others.



            • #26
              Originally posted by cybertraveler View Post
              I've been paying attention to the minimum FPS of AMD+Mesa in Michael's benchmarks, and although the AMD+Mesa combo is competitive with AMD+closed_driver & Nvidia+closed_driver, I haven't noticed it being a clear winner. It's better in some games and worse in others.
              I don't think anyone is claiming it is a clear winner in all cases - but it is an open-source driver that is competitive with the best of the closed source drivers, which is pretty interesting in its own right.



              • #27
                Originally posted by bridgman View Post

                I don't think anyone is claiming it is a clear winner in all cases - but it is an open-source driver that is competitive with the best of the closed source drivers, which is pretty interesting in its own right.
                I get that and I love it! My comment was in regard to oooverclocker seeming to claim that the AMD+Mesa driver is superior to the AMD+closed_driver and Nvidia+closed_driver with respect to minimum frame rate or the regularity of intervals between frames. I would love that to be the case, and I've been looking for evidence of it, but haven't seen any yet.

                IIRC, one of the selling points I saw promoted when the AMD Vega cards were announced was that they were capable of achieving higher minimum frame rates than equivalent Nvidia cards. If they really can do that, it would be a compelling reason to buy AMD over Nvidia even if the average FPS were slightly higher with Nvidia. I certainly agree with oooverclocker's sentiment that tech journalists often put far too much emphasis on the raw average FPS. There's a lot of other important stuff to consider.



                • #28
                  Originally posted by bridgman View Post
                  The product has not been announced yet, so there should not be any good info out there about what it will be.

                  Sometimes stuff leaks out, but I haven't seen any for Vega12 yet.
                  Eh, I would have bet it was that Fenghuang Raven APU.
                  So we'll have to keep guessing.



                  • #29
                    Originally posted by cybertraveler View Post
                    I get that and I love it! My comment was in regard to oooverclocker seeming to claim that the AMD+Mesa driver is superior to the AMD+closed_driver and Nvidia+closed_driver with respect to minimum frame rate or the regularity of intervals between frames. I would love that to be the case, and I've been looking for evidence of it, but haven't seen any yet.
                    Got it. My impression was that Mesa was generally better than the AMD closed driver for minimum frame rate, although that was based more on anecdotal user comments about smoothness and perceived consistency of frame rate than on benchmarks.

                    I don't remember seeing recent comparisons with NVidia.
                    Last edited by bridgman; 25 March 2018, 02:48 AM.



                    • #30
                      Originally posted by cybertraveler View Post
                      The FPS value implies a fixed average time between frames; e.g. a framerate of 40 FPS implies that on average there was one frame every 25ms. Are you saying that the frames come at more regular intervals on AMD with Mesa than on Windows with Nvidia or AMD? Or are you saying that the minimum interval on AMD + Mesa is higher, so you get fewer micro-stutters?
                      I would say it's mostly both - when the average interval is a fixed number, e.g. 25ms, the two statements imply each other. When the average frame times differ between Windows and Linux, the image can of course appear smoother to the eye when it is refreshed more regularly, but the FPS number must still stay above a certain limit; below that there simply aren't enough frames for a more regular refresh to help.

                      It's an impression I also had when comparing (Windows) measurements of the Vega GPUs with the 1080 Ti and 1080. Vega pretty often has about the same minimum FPS, or only slightly less, which might explain why so many people can't tell a difference in the posted blind test video. But the Nvidia GPUs do have higher spikes. Both things might have to do with HBM2 vs. GDDR5X, but that's just my own impression of a possible explanation.
                      However, when there are not that many FPS - probably below 100 - I think it can actually look less smooth with frame times of 11-20ms than with 17-21ms, even though 11-20ms seems clearly better on paper, since that corresponds to 50-90 FPS vs. 48-60 FPS. And a frame rate with high spikes or deep dips, say +/- 40 FPS, can lead to even more issues on a monitor that can dynamically change its refresh rate (Adaptive Sync [FreeSync]/G-Sync): when you drop below a certain number of images per second, the monitor must double, triple... its refresh rate in order to keep the same color depiction and prevent streaking, which means it switches output modes quite often, and that can look pretty erratic as well.
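
                      To put rough numbers on that, here is a small Python sketch of both points - converting those frame-time ranges to FPS, and the frame repetition a variable-refresh monitor has to do below its minimum rate. The 48-144 Hz window and the repetition logic are just assumptions for illustration, not a description of any particular monitor or driver:

                        def fps(frame_time_ms):
                            return 1000.0 / frame_time_ms

                        print(fps(20.0), fps(11.0))   # 11-20 ms -> roughly 50-90 FPS
                        print(fps(21.0), fps(17.0))   # 17-21 ms -> roughly 48-59 FPS

                        VRR_MIN_HZ = 48.0   # lower edge of an assumed 48-144 Hz Adaptive Sync/FreeSync window

                        def repeats_needed(game_fps):
                            """How often each frame must be repeated to stay inside the assumed VRR window."""
                            n = 1
                            while game_fps * n < VRR_MIN_HZ:
                                n += 1
                            return n

                        for game_fps in (90, 50, 40, 24):
                            n = repeats_needed(game_fps)
                            print(f"{game_fps} FPS -> each frame shown {n}x, panel runs at {game_fps * n} Hz")

                      A frame rate that keeps bouncing across that 48 FPS boundary makes the panel jump between the 1x and 2x modes, which is the erratic switching I mean.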

                      Originally posted by cybertraveler View Post
                      If so, do you have any data to support this claim, or is it just your personal experience - a difference in feel?
                      I made measurements with the Afterburner OSD on Windows some years ago, and at least I can tell you that gaming on Windows is far from optimal, apart from the fact that you can choose from more games. Sometimes there are massive spikes of 500ms, for example, that don't influence the average FPS count at all. There is now an OSD in the Radeon Software for Windows itself, so Afterburner wouldn't be necessary anymore.
                      But since I only buy Linux games and usually just play them on Linux when I have time, I haven't had much desire to analyze the frame times and so on - I usually only do that when I have issues, and on Linux there was mostly no behavior I would have wanted to look at.
                      So for Linux it's just my visual observation that the same games run more smoothly, and that can only happen when the frame times are more regular.
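
                      As a made-up example of the 500ms-spike point above (the frame times are invented, not taken from any of those measurements): a single long frame in a ten-second run barely moves the average FPS, even though it is the only thing you actually notice.

                        smooth = [16.7] * 600                  # ~10 s at ~60 FPS
                        hitchy = [16.7] * 599 + [500.0]        # same run with one 500 ms hitch

                        for name, run in (("smooth", smooth), ("hitchy", hitchy)):
                            avg_fps = len(run) / (sum(run) / 1000.0)
                            print(f"{name}: avg {avg_fps:.1f} FPS, worst frame {max(run):.1f} ms")

                      The average drops by only a few FPS while the worst frame is roughly thirty times longer, which is why a frame-time graph in an OSD shows this kind of problem far better than an FPS counter does.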

                      I wouldn't say there is a big difference between GPU manufacturers on Windows in this regard. But I am sure there are several reasons why the open source drivers on Linux tend to work more smoothly, and I think those reasons lie in the nature of open source, in the way the community looks at the drivers, and in the fact that Valve supports them as well, aiming for a more console-like gaming experience - one potentially oriented more toward smoothness than toward measurements. The audience you reach with "consoles" that run either SteamOS or other distributions with Steam installed is pretty different from the people who are mostly interested in benchmark numbers.
                      Edit: But I think the Windows CPU scheduler is another reason for some games to run very badly on this platform.
                      Last edited by oooverclocker; 25 March 2018, 02:50 PM.

