The State Of Open-Source Radeon Driver Features


  • Originally posted by bridgman View Post
    Other than power management, which was a whole lot simpler when we kicked this off back in 2007, I imagine they're pretty pleased with the features and performance. Launch-time support (buy new HW, install a recent distro, use the system) was a higher priority than features and performance.

    The common thread among the customers was that (a) they were building big compute farms with our CPUs, (b) they were running Linux on those farms, (c) they did most of their related SW development on Linux, and (d) they wanted in-box support for the systems used for SW development and related activities.
    Actually, it seems that the OSS drivers are even better than the fglrx blobs at the moment.

    I've got a 7750 Low Profile ( http://www.sapphiretech.com/presenta...n=&lid=1&leg=0 ) for which the latest Ubuntu beta driver still says 'unsupported' in the watermark. I guess that's fair enough given the rendering glitches it still has atm. It's kinda sad though, as this chipset has been out for well over a year now.



    • Originally posted by bridgman View Post
      My favorite post from there is:



      Things really are that bad.

      The next one is a recurring source of pain. Most GPUs have a display controller where the width is programmed as an integer number of bytes. For some bizarre reason the 1366x768 panel (something like 170 and 3/4 bytes) managed to become a standard anyways.
      No offence to the Chinese/Koreans, but that IS how it works. They push out hardware with crappy drivers that are so horribly bad it makes you cringe. Software, to them (be it 'firmware' in microcontrollers or drivers), is just an afterthought that requires minimal effort, as that 'costs' time/money. After a year the product has long been forgotten and users are often left hanging with outdated, insecure systems.
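      To put a number on the pitch-alignment point quoted above: whatever the exact unit is on a given controller, 1366 is not a multiple of 8, which is where the quoted "170 and 3/4" comes from, so the driver has to round the programmed width up. A small illustrative C snippet (the 8-pixel granularity is assumed purely for the example):

      /* Illustration only: assume the scanout width has to be programmed in
       * whole units of 8 pixels, which is where "170 and 3/4" comes from
       * (1366 / 8 = 170.75).  The driver then has to round up. */
      #include <stdio.h>

      static unsigned align_up(unsigned value, unsigned unit)
      {
          return (value + unit - 1) / unit * unit; /* next multiple of unit */
      }

      int main(void)
      {
          const unsigned panel_width = 1366; /* pixels */
          const unsigned hw_unit = 8;        /* assumed granularity */

          printf("%u / %u = %.2f units (not an integer)\n",
                 panel_width, hw_unit, (double)panel_width / hw_unit);
          printf("programmed width: %u pixels\n", align_up(panel_width, hw_unit));
          return 0;
      }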



      • Originally posted by bridgman View Post
        Please don't be waiting for us to "release PM". There's a lot of PM info out there already, enough to make major improvements in the existing code.
        So, was Airlied's complaint about the atombios interface not being well tested correct? As I recall (and as I mentioned in an earlier post to you), he said that the documented path has problems and that in order to make further progress they'd have to figure out how Catalyst is doing it. Now, this may have been only for the case of dynamic power management, so there may be some other areas that could be worked on with the given docs.
        Would you mind clarifying this a bit?

        Thanks/Liam



        • AFAIK it was correct, although a couple of posts later we realized we were talking about slightly different "next steps" anyway. Airlied was talking more about full DPM; I was talking more about "finer grained static PM", especially in the case where the power tables had very few entries.

          We agree on where we want to end up; I had just guessed that there would be some community-driven interim steps along the way and I guessed wrong.
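          To make the distinction concrete, here is a purely illustrative C sketch of what "finer grained static PM" amounts to: walk the power table read from the BIOS and pick the lowest-clocked state that still meets the current requirement. The struct and function names are made up for the example (they are not the radeon driver's actual interfaces), and with a sparse table there may simply be no useful intermediate state to pick, which is the limitation mentioned above.

          /* Illustrative only: hypothetical types and names, not the radeon
           * driver's real interfaces.  Given a (possibly sparse) power table
           * from the BIOS, pick the lowest-clocked state that still satisfies
           * the requested engine clock. */
          #include <stddef.h>

          struct power_state {
              unsigned engine_clock_khz;
              unsigned memory_clock_khz;
              unsigned voltage_mv;
          };

          static const struct power_state *
          pick_static_state(const struct power_state *table, size_t count,
                            unsigned required_engine_khz)
          {
              const struct power_state *best = NULL;
              size_t i;

              for (i = 0; i < count; i++) {
                  /* skip states too slow for the current workload */
                  if (table[i].engine_clock_khz < required_engine_khz)
                      continue;
                  /* of the remaining states, keep the lowest-clocked one */
                  if (!best || table[i].engine_clock_khz < best->engine_clock_khz)
                      best = &table[i];
              }
              /* a sparse table may offer nothing suitable; fall back to the
               * last table entry rather than failing outright */
              return best ? best : &table[count - 1];
          }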



          • Originally posted by oliver View Post
            Actually, it seems that the OSS drivers are even better than the fglrx blobs at the moment.

            I've got a 7750 Low Profile ( http://www.sapphiretech.com/presenta...n=&lid=1&leg=0 ) for which the latest Ubuntu beta driver still says 'unsupported' in the watermark. I guess that's fair enough given the rendering glitches it still has atm. It's kinda sad though, as this chipset has been out for well over a year now.
            It launched mid-Feb 2012, so ~11 months ago.



             • And the card in question was announced...



               So today it is less than six months old.



              • From the article
                Christian thought that compute shaders (with their lower overhead) might be..
                Which compute shaders? The ones announced in GL 4.3 or the usual shaders (from GL 2.0+) used (somehow) for computing?



                 • Dreaming of the day when my integrated and discrete cards can be used together.



                  • Originally posted by mark45 View Post
                    From the article

                    Which compute shaders? The ones announced in GL 4.3 or the usual shaders (from GL 2.0+) used (somehow) for computing?
                    The ones we added in HD5xxx -- used for DX11, OpenCL and GL 4.3 among other things.



                     • So it's about the new compute shader stage introduced in GL 4.3. I think I now get why "(with their lower overhead)" was mentioned: compared to the usual (vertex/fragment) shaders, the GL 4.3 compute shaders are probably much better suited to video decoding, and probably allow for lower overhead when moving data back and forth.
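                      For reference, a minimal sketch of what the GL 4.3 compute path looks like from the application side, assuming a current 4.3 context and a loader such as GLEW; the shader body is placeholder work, not the video-decode kernel being discussed. The point is that a dispatch needs no vertex setup, rasterizer state or framebuffer, which is where the lower overhead relative to (ab)using vertex/fragment shaders comes from.

                      /* Minimal GL 4.3 compute dispatch.  Assumes a current 4.3
                       * context and an already-initialised loader such as GLEW;
                       * the shader just doubles some words as placeholder work. */
                      #include <GL/glew.h>
                      #include <stdio.h>

                      static const char *cs_src =
                          "#version 430\n"
                          "layout(local_size_x = 64) in;\n"
                          "layout(std430, binding = 0) buffer Data { uint words[]; };\n"
                          "void main() {\n"
                          "    words[gl_GlobalInvocationID.x] *= 2u;\n"
                          "}\n";

                      GLuint build_compute_program(void)
                      {
                          GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
                          GLuint prog = glCreateProgram();
                          GLint ok;

                          glShaderSource(shader, 1, &cs_src, NULL);
                          glCompileShader(shader);
                          glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
                          if (!ok) {
                              fprintf(stderr, "compute shader failed to compile\n");
                              return 0;
                          }
                          glAttachShader(prog, shader);
                          glLinkProgram(prog);
                          glDeleteShader(shader);
                          return prog;
                      }

                      void run_pass(GLuint prog, GLuint ssbo, unsigned num_words)
                      {
                          glUseProgram(prog);
                          glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
                          /* one 64-wide work group per 64 items; no draw call,
                           * no vertex buffers, no framebuffer object involved */
                          glDispatchCompute((num_words + 63) / 64, 1, 1);
                          glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
                      }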

