AMD Linux Catalyst: Hardware Owners Screwed?


  • Originally posted by yaji View Post
    http://www2.ati.com/drivers/hotfix/c...x86.x86_64.zip

    12.6

    edit.

    Installing them right now, let's see if they work with my HD4850.

    edit2.

    Well, they do not work with my HD4850. Buying nvidia next time.
    I would actually suggest avoiding nvidia, simply because AMD actively supports Open Source driver development. Sure, the Open Source driver does not have its performance issues fixed yet, but that is constantly changing, and the driver is improving by leaps and bounds. If anything, if you have an ATI/AMD graphics card and want to use Wine, your best bet is to use the Open Source driver and keep a close eye on it.



    Also, look at Mesa with HyperZ added for performance:
    https://bugs.freedesktop.org/show_bug.cgi?id=36602

    Originally posted by evolution View Post
    Well, I think and agree that using FOSS ATI drivers is the way to go on GNU/Linux, but I'd use FOSS drivers more often if:

    - I had PROPER H.264 VA-API/VDPAU support. It's unacceptable that such a basic 2D feature is still unsupported by FOSS ATI drivers. (btw, VDPAU on MPEG2 videos already works well...)
    The OpenCL work that has been going into the Mesa driver recently is actually key to this. Once there is reasonable OpenCL support, plus an encoder/decoder implemented in OpenCL that prefers the GPU over the CPU by default, you can expect video decoding to work properly.

    Originally posted by evolution View Post
    - OpenGL games were not LIMITED BY THE CPU / MESA DRIVER, but rather BY THE GPU (on slow systems, you'll suffer performance penalties even if you use a High-End card).
    I can somewhat agree with this, but the Mesa driver is improving. If anything, I'd like reasonable OpenGL performance on an Athlon 64 X2 (Socket 939 version, so DDR1 memory) with a Radeon HD 4550 or Radeon X1900 GT.

    Originally posted by evolution View Post
    - BETTER POWER MANAGEMENT (at least on laptop graphics cards). The only option we currently have if we buy a laptop with an AMD/ATI graphics card is using it with Catalyst. Otherwise, you can expect to send your computer back to the manufacturer before the warranty expires (due to heating issues)...
    Power management has not been an issue with the Radeon HD 6000 series (an A6-series CPU with an HD 6650 mobile graphics card): I can easily get about 4 hours of battery life on my laptop.

    Originally posted by evolution View Post
    With this news, I'm feeling a little happier that I switched to Intel+nVidia (Bumblebee)... Maybe my next desktop system will also suffer a "change"...

    Cheers

    p.s.: I hope that Catalyst 12.5 (if it gets released) supports r600/r700-generation cards...
    I actually believe that the cards being dropped from the 12.6 driver are a sign that the Open Source driver is gaining enough steam. Sure, the performance is not where it needs to be yet, and features are missing, but at least you are getting active support from AMD on this front (initial support, and then of course the documentation for the various graphics registers of each card)... Actually, the only reason the Open Source driver has poor performance and feature support is that there are not enough developers actively working on it. There are probably fewer than 10 developers actively working on the Open Source driver (Mesa and KMS), and oftentimes these developers are working on at least 5 different graphics cards at the same time.

    Comment


    • Originally posted by Dandel View Post
      I would actually suggest avoiding nvidia, simply because AMD actively supports Open Source driver development.
      You are right. I can really see the advances in Open Source drivers when they still do not work correctly with hardware released in 2008 (and still sold), while I can use Nvidia's proprietary drivers with hardware released in 2004, using the whole chip and not only a part of it.

      The OpenCL work that has been going into the Mesa driver recently is actually key to this. Once there is reasonable OpenCL support, plus an encoder/decoder implemented in OpenCL that prefers the GPU over the CPU by default, you can expect video decoding to work properly.
      As bridgman posted in a different thread, shader-based video acceleration is NOT the solution. They stopped working on that and are working on UVD again, without knowing if they will ever be able to release that code.

      Originally posted by Dandel View Post
      I actually believe that the cards being dropped from the 12.6 driver are a sign that the Open Source driver is gaining enough steam. Sure, the performance is not where it needs to be yet, and features are missing, but at least you are getting active support from AMD on this front (initial support, and then of course the documentation for the various graphics registers of each card)...
      So dropping support for so-called "legacy" cards - which, surprisingly, are the top-of-the-line cards when it comes to integrated video solutions in chipsets for their top-of-the-line CPUs, and which are still sold - and then putting that support into free drivers that only support half of the chip you just paid for, and with bad performance compared to the proprietary driver: that is a good sign? Then I would like to know what counts as a bad sign for you.



      • Originally posted by Dandel View Post
        I would actually suggest avoiding nvidia, simply because AMD actively supports Open Source driver development. Sure, the Open Source driver does not have its performance issues fixed yet, but that is constantly changing, and the driver is improving by leaps and bounds. If anything, if you have an ATI/AMD graphics card and want to use Wine, your best bet is to use the Open Source driver and keep a close eye on it.
        Well, I also don't much like the fact that nVidia doesn't support Optimus/FOSS drivers on Linux, but the fact is that they seem to work at least to some extent (with Bumblebee). And one thing I've learned in engineering is that some things can be "beautiful" (such as the FOSS ATI driver project), but if they don't work as expected, they're (almost) useless.

        Also, look at Mesa with HyperZ added for performance:
        https://bugs.freedesktop.org/show_bug.cgi?id=36602
        It seems the support is still buggy. Btw, my Intel HD 3000 IGP has already been doing that for quite a while... But maybe I'll give it a try on my r700 card when it gets more stable...

        The OpenCL work that has been going into the Mesa driver recently is actually key to this. Once there is reasonable OpenCL support, plus an encoder/decoder implemented in OpenCL that prefers the GPU over the CPU by default, you can expect video decoding to work properly.
        But it seems most of the OpenCL work has been done on Evergreen-generation cards. Currently, I don't own any card from that generation (I have an r700-generation one). It seems OpenCL on the HD 4xxx series will be treated as a second-class citizen... OpenCL can be effective for encoding, but I'm afraid it might not work so well for decoding... I just don't know how, for instance, Intel can offer a proper video-acceleration API with support for full-HD H.264 whereas AMD can't do the same with UVD...

        Power management has not been an issue with the Radeon HD 6000 series (an A6-series CPU with an HD 6650 mobile graphics card): I can easily get about 4 hours of battery life on my laptop.
        For me, it might not be a complete "show-stopper", but it would be nice to have an easier way of handling power management than typing commands in a terminal every time we want to change radeon's power profile (I know how to do it with SysV systems, but I'm already using systemd, so I can't use rc.local, and I'm stuck with the maximum frequency from the "default" profile at boot). I also suspect the PM features are more optimized for all-AMD platforms...
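        For reference, under systemd a small oneshot unit can stand in for rc.local here. This is only a sketch: the unit name is made up, the sysfs paths are the standard radeon KMS power-management interface (select the "profile" method, then one of the default/low/mid/high profiles), and card0 may not be the right card on a multi-GPU system.

```ini
# /etc/systemd/system/radeon-low.service  (hypothetical unit name)
[Unit]
Description=Force the radeon "low" power profile at boot

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo profile > /sys/class/drm/card0/device/power_method; echo low > /sys/class/drm/card0/device/power_profile'

[Install]
WantedBy=multi-user.target
```

        Enable it once with "systemctl enable radeon-low.service" and the profile gets applied on every boot, no terminal needed afterwards.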

        For the "average-joe" Linux users, changing things on terminals is an easy way to make them return to WinBlows/Mac OS X (aka paid Linux)...

        I actually believe that the cards being dropped from the 12.6 driver are a sign that the Open Source driver is gaining enough steam. Sure, the performance is not where it needs to be yet, and features are missing, but at least you are getting active support from AMD on this front (initial support, and then of course the documentation for the various graphics registers of each card)... Actually, the only reason the Open Source driver has poor performance and feature support is that there are not enough developers actively working on it. There are probably fewer than 10 developers actively working on the Open Source driver (Mesa and KMS), and oftentimes these developers are working on at least 5 different graphics cards at the same time.
        We can consider it "support", but at the moment it's not usable for the "average Joe" user... If AMD really wanted to make a REAL effort on FOSS drivers, they should focus more on the FOSS AMD drivers and less on Catalyst, and give us some of the features I've mentioned in earlier posts... Why can't they provide a fully FOSS driver (that works as expected) like Intel does?

        Cheers
        Last edited by evolution; 06-07-2012, 09:51 AM.



        • Originally posted by Dandel View Post


          Power management has not been an issue with the Radeon HD 6000 series (an A6-series CPU with an HD 6650 mobile graphics card): I can easily get about 4 hours of battery life on my laptop.
          Power management with a dedicated mobile card is close to unusable in my experience, though. My older laptop has a Mobility HD 2600 512 MB. With the OSS drivers, even with the profile forced to low, I get VERY high temperatures compared to Catalyst, and very low battery life. The fan howls like a wounded bear if I so much as move a window. This is the main dealbreaker for me with the OSS drivers.



          • Originally posted by TobiSGD View Post
            As bridgman posted in a different thread, shader-based video acceleration is NOT the solution. They stopped working on that and are working on UVD again, without knowing if they will ever be able to release that code.
            Actually, my post got cut off somehow and I didn't have time to retype the whole thing. There are still some shader options which seem attractive but haven't been pursued yet (e.g. using compute shaders, which have lower overhead), but the timing seemed right to push ahead on UVD.

            Originally posted by TobiSGD View Post
            So dropping support for so-called "legacy" cards - which, surprisingly, are the top-of-the-line cards when it comes to integrated video solutions in chipsets for their top-of-the-line CPUs, and which are still sold - and then putting that support into free drivers that only support half of the chip you just paid for, and with bad performance compared to the proprietary driver: that is a good sign? Then I would like to know what counts as a bad sign for you.
            ??? The idea of putting more work into the open drivers is to, in your words, "support the other half of the chip" (actually more like 10%) and improve performance. How can that be a Bad Thing?
            Last edited by bridgman; 06-07-2012, 10:08 AM.



            • Basically there is nothing against open source drivers, but if you are not an OSS purist you just want a fully functional driver, no matter whether it is open or not. But when you hear that binary driver support is dropped for hardware that is still being sold - you can still buy 880G boards for AM3+, and Trinity is slower than the CPUs for those boards - then it really hurts your customers. As you noticed yourself, the performance and features are still far away from fglrx.



              • Originally posted by TobiSGD View Post
                You are right. I can really see the advances in Open Source drivers when they still do not work correctly with hardware released in 2008 (and still sold), while I can use Nvidia's proprietary drivers with hardware released in 2004, using the whole chip and not only a part of it.
                You have to remember that AMD acquired ATI Technologies in 2006. This means that the current generation of hardware (the Radeon HD 5000 and up) is the first true design for which AMD has had enough time to fully develop the hardware.

                Originally posted by TobiSGD View Post
                As bridgman posted in a different thread, shader-based video acceleration is NOT the solution. They stopped working on that and are working on UVD again, without knowing if they will ever be able to release that code.
                OpenCL is not shader-based. OpenCL is an actual programming language for general-purpose computing, where everything can run on the CPU, the graphics card, or any other device, as long as support for it has been implemented.

                Originally posted by TobiSGD View Post
                I actually believe that the cards being dropped from the 12.6 driver are a sign that the Open Source driver is gaining enough steam. Sure, the performance is not where it needs to be yet, and features are missing, but at least you are getting active support from AMD on this front (initial support, and then of course the documentation for the various graphics registers of each card)...
                So dropping support for so-called "legacy" cards - which, surprisingly, are the top-of-the-line cards when it comes to integrated video solutions in chipsets for their top-of-the-line CPUs, and which are still sold - and then putting that support into free drivers that only support half of the chip you just paid for, and with bad performance compared to the proprietary driver: that is a good sign? Then I would like to know what counts as a bad sign for you.
                I actually do not think this is as bad as you make it out... Yes, it is not exactly nice to remove support early, but look at both ATI's and AMD's history of video card support: the total time each generation gets supported ranges from 5 to 7 years, usually directly linked to when Microsoft releases DirectX API changes and when Windows-specific changes are made.

                - The Rage series of cards was released around 1995, and the last driver update was in 2001/2002 (Windows only, D3D 3 to 6).
                - The Radeon R100 to R200 set was initially released around 2001, and the last driver was released in 2006 (Windows only, D3D7/D3D8).
                - The Radeon R300 to R500 set was released around 2002, and support ended with the 9.3 driver after about 7 years (Linux and Windows, D3D9).
                - Now it is the Radeon R600 to R700's turn: the initial release was in 2007, and support was removed this last month; this driver supported both Linux and Windows (D3D10/10.1).

                This time the support removal is a year ahead of the historical pattern, but the overall support for this generation of cards is still 5 to 6 years.
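                Taking the years in that history at face value, the claimed spans check out as a quick back-of-the-envelope calculation (the dictionary below just restates the dates from the post, using 2002 for the Rage end date):

```python
# Release year and last-driver year for each generation, as quoted above.
support = {
    "Rage":      (1995, 2002),
    "R100-R200": (2001, 2006),
    "R300-R500": (2002, 2009),
    "R600-R700": (2007, 2012),
}

for gen, (first, last) in support.items():
    print(f"{gen}: {last - first} years of driver support")
```

                That works out to 5 to 7 years per generation, consistent with the claim above.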



                Originally posted by Kano View Post
                Basically there is nothing against open source drivers, but if you are not an OSS purist you just want a fully functional driver, no matter whether it is open or not. But when you hear that binary driver support is dropped for hardware that is still being sold - you can still buy 880G boards for AM3+, and Trinity is slower than the CPUs for those boards - then it really hurts your customers. As you noticed yourself, the performance and features are still far away from fglrx.
                I would recommend looking at an A-series APU or an affordable Radeon HD 6670 video card. Either way you are looking at anywhere between 3 and 5 more years of support, with the added benefit of a few more years for the open source drivers to mature.



                • All AMD Fusion chips lack L3 cache. With a few exceptions, the AM3(+) Phenoms all have L3 on the chip and are therefore faster. Basically, the A-series CPUs are only Athlons (the brand name without L3 for AM3) combined with a GPU, and that is bad for speed records. If you want speed, the Fusion chips are too slow, even if you are an AMD fan. But if you mainly need CPU speed and not GPU speed, you usually get something like a 780G- or 880G-based board; there are no newer chipsets with graphics, and there is currently no replacement, so they should not stop supporting those boards, or there have to be CPUs with L3 cache combined with a GPU. I definitely do NOT think that everybody who wants to buy an AMD CPU (for whatever reason) will go for the slower Fusion FM1 CPUs...



                  • Kano, we only have mobile Trinity benchmarks AFAIK. Do you have a desktop Trinity around, being able to say for sure it is slower?



                    • No, but I have got an i7-3770S; I prefer that one.



                      • Originally posted by Kano View Post
                        All AMD Fusion chips lack L3 cache. With a few exceptions, the AM3(+) Phenoms all have L3 on the chip and are therefore faster. Basically, the A-series CPUs are only Athlons (the brand name without L3 for AM3) combined with a GPU, and that is bad for speed records. If you want speed, the Fusion chips are too slow, even if you are an AMD fan. But if you mainly need CPU speed and not GPU speed, you usually get something like a 780G- or 880G-based board; there are no newer chipsets with graphics, and there is currently no replacement, so they should not stop supporting those boards, or there have to be CPUs with L3 cache combined with a GPU. I definitely do NOT think that everybody who wants to buy an AMD CPU (for whatever reason) will go for the slower Fusion FM1 CPUs...
                        I do agree that the Fusion chips currently lack L3 cache. However, that does not necessarily mean the processor is a complete loss. For specific tasks, like compiling software, the L3 cache is a must; for most end users, though, there is no need for L3 cache, and you will not notice much difference. There are other desktop processor lines from AMD currently in production that do have L3 cache:

                        AMD FX series - all models currently have L3 cache; no exceptions so far.
                        AMD Phenom series - all models have L3 cache. The only exceptions are models released under the "Propus" and "Regor" codenames (as you mentioned).
                        Phenom II mobile - no models currently have L3 cache.



                        • It is not the point whether the chips have L3 or not. The point is that AMD is dropping support for the only integrated video solution they have for their top-of-the-line CPU (which is unarguably the AMD FX), calling it legacy hardware despite the fact that it is still sold and that they have not delivered even one successor to this "legacy" hardware. Even worse, they recommend using the open driver, which is not able to use all the units (UVD), lacks essential functions (proper power management), and does not have the same performance as the proprietary driver.



                          • That's what I said, but I did not specifically mention the FX series, because many do not think they are really faster than the Phenom II CPUs before them. AMD combined 2 integer units together with 1 FPU into one functional part (a module), then used 4 of them for the so-called 8-core chips. But you only get 4 FPUs; compare that to the older Phenom X6, where you got 6 FPUs. So if you assume the same frequency and the same efficiency, you get a 33% increase in integer performance and a 33% decrease in FPU speed. Whether the FX CPUs are better or not depends on your workload. But what is certainly lacking is the on-chip GPU; that needs more space, which means shrinking the chip. Most likely AMD could easily add the GPU if they used a 22nm process like Intel does instead of 32nm. Maybe AMD should ask Intel to build the CPUs for them; they do not need their old AMD factory anyway, because they paid to have a free choice of where to produce now...
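                          The 33% figures follow directly from counting execution units (assuming, as above, equal clocks and equal per-unit throughput, which is a simplification, since the FPU designs are not identical between the two chips):

```python
# FX "8-core": 4 modules, each with 2 integer cores sharing 1 FPU.
fx_int_cores = 4 * 2
fx_fpus = 4 * 1

# Phenom II X6: 6 full cores, each with its own FPU.
x6_int_cores = 6
x6_fpus = 6

int_change = fx_int_cores / x6_int_cores - 1  # change in integer units
fpu_change = fx_fpus / x6_fpus - 1            # change in FPU units
print(f"integer units: {int_change:+.0%}, FPUs: {fpu_change:+.0%}")
```

                          So the FX trades roughly a third more integer throughput for a third less FPU throughput relative to the Phenom II X6, which is exactly why the answer depends on the workload.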



                            • Originally posted by Kano View Post
                              That's what I said, but I did not specifically mention the FX series, because many do not think they are really faster than the Phenom II CPUs before them. AMD combined 2 integer units together with 1 FPU into one functional part (a module), then used 4 of them for the so-called 8-core chips. But you only get 4 FPUs; compare that to the older Phenom X6, where you got 6 FPUs. So if you assume the same frequency and the same efficiency, you get a 33% increase in integer performance and a 33% decrease in FPU speed. Whether the FX CPUs are better or not depends on your workload. But what is certainly lacking is the on-chip GPU; that needs more space, which means shrinking the chip. Most likely AMD could easily add the GPU if they used a 22nm process like Intel does instead of 32nm. Maybe AMD should ask Intel to build the CPUs for them; they do not need their old AMD factory anyway, because they paid to have a free choice of where to produce now...
                              You are right. I own a Phenom II X6, and I wouldn't even consider buying that Bulldozer crap, at least not unless they increase the number of modules to 6 or 8. Nonetheless, AMD decided to end the life of the Phenom II, so the AMD FX is now the best CPU they have, even if we know that their so-called "first 8-core desktop CPU" is just a marketing lie.



                              • AMD Vs. Nvidia

                                I have just decided, after much thought, to buy an Nvidia card to replace my aging ATI 3870. It took me quite a while to decide this. I want to believe in, and use, open source graphics, but they left me behind. Screw that! Nvidia may be blob-only, but they never left me hanging! I truly hate to say what I just said, but my mind is now made up. I am not even sure what I will do for my mid-2013 system upgrade. I may just skip it entirely; after all, my Q6600 is still chugging along pretty well. Once I get a graphics card that will have current drivers, I will be pretty much covered. Damn, I hate saying this...

