The State Of Open-Source Radeon Driver Features


  • Kano hit the nail on the head concerning power management. If AMD can't get it right in open source (whether it's their "fault" or not), they should update their legacy fglrx driver to support newer kernels/X servers so that it remains an option going forward. Actually, they should update fglrx legacy anyway, because some users are willing to put up with the blob's drawbacks to get better 3D performance. I understand if AMD doesn't want to update really old legacy drivers (like 8.28), but Catalyst 12-6 legacy should definitely be updated, and Catalyst 9-3 legacy would be nice too.

    It's really hard to recommend AMD graphics to prospective buyers right now. For the user who's more interested in HTPC functions and doesn't care so much about 3D, Intel is the obvious choice because of their open driver, superior power management on mobile devices, and support for VA-API. For the desktop gamer, Nvidia is the obvious choice because of their better blob driver (and much longer support for legacy blobs) as well as VDPAU support.

    As much as I appreciate AMD's open-source efforts and recognize the incredible progress they've made (especially in terms of 3D features/performance), the AMD KoolAid is a bittersweet drink at the moment.



    • Originally posted by mark45 View Post
      So it's about the new compute shader stage introduced in GL 4.3. I think I now get why "(with their lower overhead)" was mentioned later: compared to the usual (vertex/fragment) shaders, the compute shaders from GL 4.3 are probably suited much better for video decoding and probably allow for lower overhead from computing back and forth.
      Yep. Key point though is that the hardware capabilities have been around for a few years, and you don't need to wait for GL 4.3 to use them in a video driver.



      • There's no logic to fglrx and cost saving

        Bridgman, you know, I'd think for AMD's sake they could save lots of money by getting to the point of dropping fglrx and giving the open-source driver a plugin mechanism to load a DRM rights shared library.

        Why would AMD spend millions focused on a binary driver, costing the company time, effort, and lots of money, when they could just have open-source people work on a driver alongside some AMD developers?

        Maybe I'm not the only one to see the cost savings for AMD here...



        • That's been discussed many times before.

          The deal as I understand it is that for DRM to be effective it must be in control from beginning to end. You can't have open code working with a data set and then switch to closed code working on the same data set.



          • Originally posted by spstarr View Post
            I'd think for AMD's sake they could save lots of money in getting to a point of dropping fglrx
            If they were only developing fglrx/Catalyst for Linux, then maybe you'd have a point, but lots of code and labor time are shared between Windows Catalyst and Linux Catalyst. There are also other upsides to fglrx, like same-day support for newly released products (in theory) and pleasing corporate ($$$) customers who demand a proprietary driver for various reasons.



            • Originally posted by spstarr View Post
              Why would AMD spend millions focused on a binary driver, costing the company time, effort, and lots of money, when they could just have open-source people work on a driver alongside some AMD developers?
              Because that binary driver runs on Windows.

              The Linux-specific parts of fglrx are very few; most of the blob is the same as the Windows driver. And they can't stop making the Windows driver, obviously.



              • Right. If we had been writing a separate Linux driver (rather than code sharing) it would be open source.

                One thing I've said many times is that the ability to share code across multiple OSes is the primary reason for proprietary drivers to exist.

                You pretty much have to take the secrecy requirements of all the OSes sharing the code and follow the most restrictive of them, which is how shared-code drivers like Catalyst Linux (and the corresponding NVidia drivers) end up being proprietary. We are the only vendor willing to support both proprietary drivers (for workstation and other performance-critical markets) AND open-source drivers.

                Note that the 3D workstation drivers have historically been associated with high end GPUs, which we make but not everyone does. If we hadn't made big-ass GPUs we probably wouldn't have needed a workstation driver in the past. Things are getting interesting now because high end APUs (eg Trinity/Richland) have enough HW performance to be viable at the low end of the workstation market, given the right drivers.
                Last edited by bridgman; 17 January 2013, 12:50 PM.



                • I know what the responses will be, but I'm gonna say it anyway.

                  I think AMD needs to hire more OSS developers. The ones they have are doing a fantastic job, but there just aren't enough of them to tackle the number of issues. You guys claim that with more developers you'll start having issues with them stepping on each other's toes, but I disagree. If it were managed in such a way that each was assigned a task in something resembling a pipeline, with quality control and legal review incorporated into that pipeline, it would be a better solution. You could add as many developers as you need.

                  Even if you don't hire more developers you DO need to start implementing timeline based feature releases. You guys of all people know how processing works. Your development cycles could benefit from a process that resembles a pipeline.
                  Last edited by duby229; 17 January 2013, 01:12 PM.



                  • Originally posted by duby229 View Post
                    You guys claim that with more developers you'll start having issues with them stepping on each other's toes, but I disagree. If it were managed in such a way that each was assigned a task in something resembling a pipeline, with quality control and legal review incorporated into that pipeline, it would be a better solution.
                    Don't think we have ever said that.

                    Remember the idea was "we provide info, community writes the driver", not "AMD writes the driver".

                    Originally posted by duby229 View Post
                    Even if you don't hire more developers you DO need to start implementing timeline based feature releases. You guys of all people know how processing works. Your development cycles could benefit from a process that resembles a pipeline.
                    Don't understand this. We follow the common kernel and Mesa development cycles. The X driver releases are aligned with kernel and Mesa, but the X driver doesn't change much these days since most of the code is now in the kernel.



                    • Originally posted by bridgman View Post
                      Right. If we had been writing a separate Linux driver (rather than code sharing) it would be open source.

                      One thing I've said a lot of times is that ability to share code across multiple OSes is the primary reason for proprietary drivers to exist.

                      You pretty much have to take the secrecy requirements of all the OSes sharing code and follow the most restrictive of them all, which is how shared-code drivers like Catalyst Linux (and the corresponding NVidia drivers) end up being proprietary. We are the only vendor willing to support both proprietary drivers (for workstation and other performance-critical markets) AND open source drivers.

                      Note that the 3D workstation drivers have historically been associated with high end GPUs, which we make but not everyone does. If we hadn't made big-ass GPUs we probably wouldn't have needed a workstation driver in the past. Things are getting interesting now because high end APUs (eg Trinity/Richland) have enough HW performance to be viable at the low end of the workstation market, given the right drivers.
                      I've seen some numbers recently on the workstation GPU market, and AMD isn't doing very well there. fglrx isn't stable, and most folks who use Linux know that. They stay away from it like the plague. I think you'd be much better off putting your resources into the OSS side of the equation. At the very minimum it is stable; it just needs support for the features that the hardware provides.

                      And despite what you say, I don't think DRM on Linux is even an issue at all. I'm convinced that anybody who uses Linux in the first place doesn't care one tiny little bit about DRM; otherwise they wouldn't be using Linux. The only things that implement DRM are the proprietary drivers, and EVERYTHING else circumvents it. I can't name a single proprietary video player that works well on Linux. All of the OSS players work by circumventing DRM.

                      I think DRM is nothing more than an excuse.

