The State Of Open-Source Radeon Driver Features

  • Originally posted by bridgman View Post
    Other than power management, which was a whole lot simpler when we kicked this off back in 2007, I imagine they're pretty pleased with the features and performance. Launch-time support (buy new HW, install a recent distro, use the system) was a higher priority than features and performance.

    The common thread among the customers was that (a) they were building big compute farms with our CPUs, (b) they were running Linux on those farms, (c) they did most of their related SW development on Linux, and (d) they wanted in-box support for the systems used for SW development and related activities.
    Actually, it seems that the OSS drivers are even better than the fglrx blobs at the moment.

    I've got a 7750 Low Profile ( http://www.sapphiretech.com/presenta...n=&lid=1&leg=0 ) for which the latest Ubuntu beta driver still says 'unsupported' in the watermark. I guess that's fair enough with the rendering glitches it still has atm. It's kinda sad though, as this chipset has been out for well over a year now.

    Comment


    • Originally posted by bridgman View Post
      My favorite post from there is:



      Things really are that bad.

      The next one is a recurring source of pain. Most GPUs have a display controller where the width is programmed as an integer number of bytes. For some bizarre reason the 1366x768 panel (something like 170 and 3/4 bytes) managed to become a standard anyways.
      No offense to the Chinese/Koreans, but that IS how it works. They push out hardware with drivers so horribly bad it makes you cringe. Software to them (be it 'firmware' in microcontrollers or drivers) is just an afterthought that requires minimal effort, as that 'costs' time/money. After a year the product has long been forgotten and users are left hanging, often with outdated, insecure systems.

      Comment


      • Originally posted by bridgman View Post
        Please don't be waiting for us to "release PM". There's a lot of PM info out there already, enough to make major improvements in the existing code.
        So, was Airlied's complaint about the atombios interface not being well tested correct? As I recall (and mentioned in an earlier post to you), he said that the documented path has problems and that, to make further progress, they'd have to figure out how Catalyst is doing it. Now, this may have applied only to dynamic power management, so there may be other areas that could be worked on with the given docs.
        Would you mind clarifying this a bit?

        Thanks/Liam

        Comment


        • AFAIK it was correct, although a couple of posts later we realized we were talking about slightly different "next steps" anyways. Airlied was talking more about full DPM; I was talking more about "finer grained static PM" especially in the case where power tables had very few entries.

          We agree on where we want to end up; I had just guessed that there would be some community-driven interim steps along the way and I guessed wrong.

          Comment


          • Originally posted by oliver View Post
            Actually, it seems that the OSS drivers are even better than the fglrx blobs at the moment.

            I've got a 7750 Low Profile ( http://www.sapphiretech.com/presenta...n=&lid=1&leg=0 ) for which the latest Ubuntu beta driver still says 'unsupported' in the watermark. I guess that's fair enough with the rendering glitches it still has atm. It's kinda sad though, as this chipset has been out for well over a year now.
            It launched mid-Feb 2012, so ~11 months ago.

            Comment


            • And the card in question was announced...

              http://www.sapphiretech.com/presenta...articleID=4667

              So it is today less than six months old.

              Comment


              • From the article
                Christian thought that compute shaders (with their lower overhead) might be..
                Which compute shaders? The ones announced in GL 4.3 or the usual shaders (from GL 2.0+) used (somehow) for computing?

                Comment


                • Dreaming of the day when my integrated and discrete cards can be used together.

                  Comment


                  • Originally posted by mark45 View Post
                    From the article

                    Which compute shaders? The ones announced in GL 4.3 or the usual shaders (from GL 2.0+) used (somehow) for computing?
                    The ones we added in HD5xxx -- used for DX11, OpenCL and GL 4.3 among other things.

                    Comment


                    • So it's about the new compute shader stage introduced in GL 4.3. I think I now get why "(with their lower overhead)" was mentioned: compared to the usual vertex/fragment shaders, the GL 4.3 compute shaders are probably suited (a lot) better for video decoding and probably allow lower overhead when shuttling data back and forth.

                      Comment


                      • Kano hit the nail on the head concerning power management. If AMD can't get it right in open-source (whether it's their "fault" or not), they should update their legacy fglrx driver to support newer kernels/Xservers so that it remains an option going forward. Actually, they should update fglrx legacy anyway, because some users are willing to put up with the blob's drawbacks to get better 3D performance. I understand if AMD doesn't want to update really old legacy drivers (like 8.28), but Catalyst 12-6 legacy should definitely be updated, and Catalyst 9-3 legacy would be nice too.

                        It's really hard to recommend AMD graphics to prospective buyers right now. For the user who's more interested in HTPC functions and doesn't care so much about 3D, Intel is the obvious choice because of their open driver, superior power management on mobile devices, and support for VA-API. For the desktop gamer, Nvidia is the obvious choice because of their better blob driver (and much longer support for legacy blobs) as well as VDPAU support.

                        As much as I appreciate AMD's open-source efforts and recognize the incredible progress they've made (especially in terms of 3D features/performance), the AMD KoolAid is a bittersweet drink at the moment.

                        Comment


                        • Originally posted by mark45 View Post
                          So it's about the new compute shader stage introduced in GL 4.3. I think I now get why "(with their lower overhead)" was mentioned: compared to the usual vertex/fragment shaders, the GL 4.3 compute shaders are probably suited (a lot) better for video decoding and probably allow lower overhead when shuttling data back and forth.
                          Yep. Key point though is that the hardware capabilities have been around for a few years, and you don't need to wait for GL 4.3 to use them in a video driver.

                          Comment


                          • There's no logic to fglrx and cost saving

                            Bridgman, you know, I'd think for AMD's sake they could save a lot of money by getting to the point of dropping fglrx and giving the open-source driver a plugin mechanism to load a DRM shared library.

                            Why would AMD spend millions on a binary driver, costing the company time, effort, and lots of money, when they could just have open-source contributors plus some AMD developers work on one driver?

                            Maybe I'm not the only one to see the cost savings for AMD here...

                            Comment


                            • That's been discussed many times before.

                              The deal as I understand it is that for DRM to be effective it must be in control from beginning to end. You can't have open code working with a data set and then switch to closed code working on the same data set.

                              Comment


                              • Originally posted by spstarr View Post
                                I'd think for AMD's sake they could save lots of money in getting to a point of dropping fglrx
                                If they were only developing fglrx/Catalyst for Linux, then maybe you'd have a point, but a lot of code and labor time is shared between Windows Catalyst and Linux Catalyst. There are also other upsides to fglrx, like same-day support for newly released products (in theory) and pleasing corporate ($$$) customers who demand a proprietary driver for various reasons.

                                Comment
