Radeon's Open-Source Linux GPU Driver Has Nearly Caught Up With Windows' Driver


  • #71
    Originally posted by agd5f View Post

    Waiting for what?
    Support, of course.

    I mean, proper, reviewed and approved support. Not someone's personal repository with stuff that may work as long as I don't do something yet untested. Call me spoiled, but that's what I've come to expect, not having time to continuously build my own OS.

    Comment


    • #72
      Originally posted by bug77 View Post
      I mean, proper, reviewed and approved support. Not someone's personal repository with stuff that may work as long as I don't do something yet untested. Call me spoiled, but that's what I've come to expect, not having time to continuously build my own OS.
      Everything that goes into the staging trees is reviewed and approved first, generally via the amd-gfx list, so if lack of review was your main concern you have that proper, reviewed and approved support today. Is the problem just that we are pushing the trees to agd5f's personal repo on FDO (the usual convention for open source development) rather than something on amd.com ?

      Agree that you shouldn't have to continuously build your own OS, and again the usual practice is that third parties (distros or individuals) do that building for you and maintain package repos you can use. I'll check in the morning (going zzz now) but I believe user mystro256 here is maintaining a repo of staging tree builds, as an example.

      You may also be able to pick up the kernel driver portion of the AMDGPU-PRO driver packages today, of course, and we are planning to extend the AMDGPU-PRO install model to simplify updating the all-open stack as well. That said, all the time we spend on interim measures is time we are not spending on getting the display code upstreamed, so there is a bit of a tradeoff there.

      Which distro are you using these days ?
      Last edited by bridgman; 08 June 2017, 04:51 AM.
      Test signature

      Comment


      • #73
        Originally posted by Mike Frett View Post
        Now if only most of the Games would work with and support MESA...
        They do today AFAIK... there are a few exceptions but the list is shrinking fast. Is there something specific you have in mind ?
        Test signature

        Comment


        • #74
          Too bad there are a ton of game ports with less than half the performance.
          Other than that (mostly game developers'/porters' fault), great work by AMD. Thank you for the good quality drivers we currently have.

          I can't wait for an open-source graphics control panel.

          Comment


          • #75
            Originally posted by bridgman View Post

            Everything that goes into the staging trees is reviewed and approved first, generally via the amd-gfx list, so if lack of review was your main concern you have that proper, reviewed and approved support today. Is the problem just that we are pushing the trees to agd5f's personal repo on FDO (the usual convention for open source development) rather than something on amd.com ?

            Agree that you shouldn't have to continuously build your own OS, and again the usual practice is that third parties (distros or individuals) do that building for you and maintain package repos you can use. I'll check in the morning (going zzz now) but I believe user mystro256 here is maintaining a repo of staging tree builds, as an example.

            You may also be able to pick up the kernel driver portion of the AMDGPU-PRO driver packages today, of course, and we are planning to extend the AMDGPU-PRO install model to simplify updating the all-open stack as well. That said, all the time we spend on interim measures is time we are not spending on getting the display code upstreamed, so there is a bit of a tradeoff there.

            Which distro are you using these days ?
            You're on to something. I just don't have the time to chase kernels around or to mix and match.

            I have KDE Neon at home. I'm not exactly thrilled, because I'm stuck with a rather old kernel, but since I can't name any functionality I'm missing because of that, it will do for now.
            For over ten years now I've been installing Kubuntu, adding Nvidia's blob and bam: OpenGL acceleration, OpenCL/CUDA support, hw accelerated video, working HDMI audio. All updated on the fly to work with the latest kernel (and X). It's hard to move away from all that, because my main hobby is not OS tinkering.

            Edit: Zzz tight.

            Comment


            • #76
              Originally posted by bridgman View Post

              We wrote the Vulkan driver a couple of years ago and use it across multiple OSes and environments, not just Linux. We started open sourcing the Vulkan driver a year or so ago so we could have the kind of cooperation you suggest.

              The RADV driver is more recent, started a bit less than a year ago.
              OK, I can actually understand how you are thinking here. I just don't like the situation we seem to be heading into: a driver in Mesa (RADV) and your official driver outside of Mesa. But better several good drivers than none...

              Originally posted by bridgman View Post
              Rather than what - open sourcing the Vulkan driver ? Remember that we have to maintain our Vulkan driver for other OSes anyways, so it's not a case of "if we don't do X we free up resources for Y". On the other hand if we stop using our Vulkan driver and shift to RADV we have to *add* developers since we would now need to maintain two Vulkan driver code bases.
              What you are saying actually makes sense... it is easy to believe that you guys would have less to do if you just supported RADV instead, but in reality it generates more work for you, since it means two different code bases and probably two (slightly?) different ways of doing things.

              Originally posted by bridgman View Post
              We already have other developers working on upstreaming KFD - if you remember my posts over the last year or so we had a lot of other work to do first (primarily implementing eviction logic so that hard-pinned ROCm buffers would not interfere when the graphics stack needed memory); we now have that running and so are able to resume upstreaming efforts.

              Next steps were integrating KFD into the hybrid driver stack - (done, you'll see it in upcoming driver releases) and shifting the ROCm & internal trees onto newer kernel versions (WIP, next ROCm stack release should be 4.11-based). At that point we have all the foundation work done and can start pushing patches upstream.
              I can't wait for the upcoming releases... then it will be time to make the missus very unhappy by spending nights with the HCC compiler.
              Ah, I wish I could work with something exciting like this instead of trying to get this damn install4j shit to work...

              Comment


              • #77
                Originally posted by leipero View Post
                On topic, great work from the Mesa devs. I wanted to add shader loop unrolling to r600, only to realize I'm clueless and it's too big of a bite for me.
                I feel your pain. I'm trying to fix an interpolation bug ATM (mostly for the experience, I don't think that bug annoys anyone), and I can tell you, the sb code is quite frustrating. I mean, the sb algorithms might stand on the shoulders of giants; its "notes.markdown" mentions at least 6 papers the creator used. But the implementation is a bunch of functions with uninformative names, many functions just return a value, some functions don't even have a body; and yesterday I saw a 10-15 line function that initializes variables and then runs a loop conditioned on those variables, yet the variables themselves are never changed, i.e. the do-while only ever runs a single iteration. And there are a bunch of functions that could have been deleted without any harm.

                The author clearly had some cool pattern in mind, but I didn't find a single comment explaining the architecture, so whatever it was, it's lost in the sands of time.

                It reminds me of the first serious project I did at work; I initially got there doing a practicum from my institute. Being a newbie, I tried to be clever anyway. And I thought: the more abstractions, the easier to maintain; so I blindly put in a bunch of them, hoping the abstractions would handle every possible change. It could have worked. But what an irony: the next change I was asked to make was probably the one situation I didn't foresee, and I either had to rework everything or make an ugly hack. And then one more on top of it. And another one. That said, at least my code was rich in comments.

                So, the problem is that you can't understand the sb compiler by visual inspection, but I've found a way to deal with that: I get a debug output, then run everything in gdb, and by stepping through the code I figure out more or less where the logic is and how it works.
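
                To give a concrete picture of the do-while I mean, here is a minimal hypothetical sketch (not actual sb code): the loop condition depends on a variable that nothing inside the loop ever changes, so the body runs exactly once and the loop is just noise.

                Code:
                #include <iostream>
                #include <vector>

                // Hypothetical illustration of the pattern described above (not real sb code):
                // 'done' controls the do-while, but nothing inside the loop ever updates it,
                // so the body executes exactly once and the loop adds nothing but confusion.
                static int sum_values(const std::vector<int> &values) {
                    bool done = true;   // set once here...
                    int total = 0;
                    do {
                        for (int v : values)
                            total += v;
                        // ...and never changed, so '!done' below is always false
                    } while (!done);
                    return total;
                }

                int main() {
                    std::cout << sum_values({1, 2, 3}) << "\n";  // prints 6, same as a plain loop
                    return 0;
                }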
                Last edited by Hi-Angel; 08 June 2017, 06:50 AM.

                Comment


                • #78
                  Originally posted by Hi-Angel View Post
                  I feel your pain. I'm trying to fix an interpolation bug ATM (mostly for the experience, I don't think that bug annoys anyone), and I can tell you, the sb code is quite frustrating. I mean, the sb algorithms might stand on the shoulders of giants; its "notes.markdown" mentions at least 6 papers the creator used. But the implementation is a bunch of functions with uninformative names, many functions just return a value, some functions don't even have a body; and yesterday I saw a 10-15 line function that initializes variables and then runs a loop conditioned on those variables, yet the variables themselves are never changed, i.e. the do-while only ever runs a single iteration. And there are a bunch of functions that could have been deleted without any harm.

                  The author clearly had some cool pattern in mind, but I didn't find a single comment explaining the architecture, so whatever it was, it's lost in the sands of time.

                  It reminds me of the first serious project I did at work; I initially got there doing a practicum from my institute. Being a newbie, I tried to be clever anyway. And I thought: the more abstractions, the easier to maintain; so I blindly put in a bunch of them, hoping the abstractions would handle every possible change. It could have worked. But what an irony: the next change I was asked to make was probably the one situation I didn't foresee, and I either had to rework everything or make an ugly hack. And then one more on top of it. And another one. That said, at least my code was rich in comments.

                  So, the problem is that you can't understand the sb compiler by visual inspection, but I've found a way to deal with that: I get a debug output, then run everything in gdb, and by stepping through the code I figure out more or less where the logic is and how it works.
                  Looks to me like you're getting proper experience. Most production code is like that. Sadly.

                  Comment


                  • #79
                    Originally posted by Mike Frett View Post
                    Now if only most of the Games would work with and support MESA...
                    There's a maintained list of games broken on Mesa on GamingOnLinux's wiki @ https://www.gamingonlinux.com/wiki/Games_broken_on_Mesa
                    As you can see, many of them have already been fixed.

                    Personally I would like to see some love for the non-native ports (id engine based games especially)... I've got a copy of Doom in my Steam library that I've been waiting to play for months now; I could install the Pro drivers, but it's too much hassle for one game.

                    Comment


                    • #80
                      Originally posted by Kakarott View Post

                      How do I check or test?
                      You'll notice that the fans stop spinning when the screen goes off. Please note that I'm not talking about suspend, just the screen turning off after a certain time.

                      Also, if you have a wattmeter, you'll notice a decrease in power consumption (my GPU goes from about 17 W at idle to 3 W in ZeroCore mode).
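
                      If you'd rather check from software than with a wattmeter, here is a rough, hypothetical sketch that prints the driver's reported power draw. It assumes the driver exposes hwmon's power1_average attribute (in microwatts) under /sys/class/drm/card0/device/hwmon/, which not every kernel/GPU combination does.

                      Code:
                      #include <filesystem>
                      #include <fstream>
                      #include <iostream>

                      // Hypothetical sketch: print the GPU's self-reported power draw.
                      // Assumes hwmon exposes power1_average (microwatts) for card0; adjust the
                      // card index/path for your system, and note older GPUs may not report it.
                      int main() {
                          namespace fs = std::filesystem;
                          const fs::path base = "/sys/class/drm/card0/device/hwmon";
                          if (!fs::exists(base)) {
                              std::cerr << "no hwmon directory found for card0\n";
                              return 1;
                          }
                          for (const auto &hwmon : fs::directory_iterator(base)) {
                              std::ifstream in(hwmon.path() / "power1_average");
                              long long microwatts = 0;
                              if (in >> microwatts)
                                  std::cout << microwatts / 1'000'000.0 << " W\n";
                          }
                          return 0;
                      }

                      Running it before and after the screen blanks (e.g. over SSH, or with a sleep in front) should show the same kind of drop, assuming the attribute is there.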

                      Comment
