AMD Linux Catalyst: Hardware Owners Screwed?


  • Originally posted by TobiSGD View Post
    As bridgman posted in a different thread, shader-based video acceleration is NOT the solution. They stopped working on that and are working on UVD again, without knowing whether they can ever release that code.
    Actually my post got cut off somehow and I didn't have time to retype the whole thing. There are still some shader options which seem attractive but haven't been pursued yet (e.g. using compute shaders, which have lower overhead), but the timing seemed right to push ahead on UVD.

    Originally posted by TobiSGD View Post
    So dropping support for so-called "legacy" cards - which, surprisingly, are the top-of-the-line integrated video solutions in the chipsets for their top-of-the-line CPUs and are still being sold - and then putting that support into free drivers that only support half of the chip you just paid for, and with bad performance compared to the proprietary driver, is a good sign? Then I would like to know what a bad sign is for you.
    ??? The idea of putting more work into the open drivers is to, in your words, "support the other half of the chip" (actually more like 10%) and improve performance. How can that be a Bad Thing?
    Last edited by bridgman; 07 June 2012, 10:08 AM.



    • Basically I have nothing against open source drivers, but if you are not an OSS purist you just want a fully functional driver, no matter whether it is open or not. But when you hear that binary driver support is dropped for hardware that is still being sold - you can still buy 880G boards for AM3+, and Trinity is slower than the CPUs that go in those boards - then it really hurts your customers. As you noticed yourself, the performance and features are still far away from fglrx.



      • Originally posted by TobiSGD View Post
        You are right, I can really see the advances in open source drivers when they still do not work correctly with hardware released in 2008 (and still sold), while I can use Nvidia's proprietary drivers with hardware released in 2004, using the whole chip and not only a part of it.
        You have to remember that AMD acquired ATI Technologies in 2006. This means that the current generation of hardware (the Radeon HD 5000 series and up) contains the first true designs where AMD has had enough time to fully develop the hardware.

        Originally posted by TobiSGD View Post
        As bridgman posted in a different thread, shader-based video acceleration is NOT the solution. They stopped working on that and are working on UVD again, without knowing whether they can ever release that code.
        OpenCL is not shader-based. OpenCL is an actual programming language for general-purpose computing, where everything can run on the CPU, the graphics card, or any other device, as long as support for it is implemented.
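        To illustrate that point, here is a minimal host-side sketch (hypothetical, not from this thread, error handling omitted for brevity): the same OpenCL kernel source can run on the CPU or the GPU, and switching between them is a one-flag change.

```c
/* Minimal OpenCL 1.x sketch: one kernel, device chosen by a single flag. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v) {"
    "    size_t i = get_global_id(0);"
    "    v[i] *= 2.0f;"
    "}";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    /* Swap CL_DEVICE_TYPE_GPU for CL_DEVICE_TYPE_CPU and the very same
       kernel runs on the processor instead of the graphics card. */
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    float data[4] = {1, 2, 3, 4};
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);
    clSetKernelArg(k, 0, sizeof(buf), &buf);

    size_t n = 4;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);
    printf("%f %f %f %f\n", data[0], data[1], data[2], data[3]);
    return 0;
}
```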

        Originally posted by TobiSGD View Post
        I actually believe that the cards that were dropped with the 12.6 driver are a sign that the open source driver is getting up enough steam. Sure, the performance is not where it needs to be yet, and features are missing, but at least you are getting active support from AMD on this front (initial support, and then of course the documentation for the various graphics registers of the card) [...]
        So dropping support for so-called "legacy" cards - which, surprisingly, are the top-of-the-line integrated video solutions in the chipsets for their top-of-the-line CPUs and are still being sold - and then putting that support into free drivers that only support half of the chip you just paid for, and with bad performance compared to the proprietary driver, is a good sign? Then I would like to know what a bad sign is for you.
        I actually do not think this is as bad as you make it out. Yes, it is not exactly nice to remove support early, but look at ATI's and AMD's history of video card support: the total time each generation gets supported ranges from 5 to 7 years, usually tied directly to Microsoft's DirectX API changes and Windows-specific changes.

        - Rage series: released around 1995, last driver update in 2001/2002 (Windows only, D3D 3 to 6).
        - Radeon R100 to R200: initially released around 2001, last driver released in 2006 (Windows only, D3D7/D3D8).
        - Radeon R300 to R500: released around 2002, support ended with the 9.3 driver after about 7 years (Linux and Windows, D3D9).
        - Radeon R600 to R700: initially released in 2007, support removed this last month (Linux and Windows, D3D10/10.1).

        This time the support removal is a year ahead of that history, but at least the overall support for this generation of cards is still 5 to 6 years.



        Originally posted by Kano View Post
        Basically I have nothing against open source drivers, but if you are not an OSS purist you just want a fully functional driver, no matter whether it is open or not. But when you hear that binary driver support is dropped for hardware that is still being sold - you can still buy 880G boards for AM3+, and Trinity is slower than the CPUs that go in those boards - then it really hurts your customers. As you noticed yourself, the performance and features are still far away from fglrx.
        I would recommend looking at an A-series APU or an affordable Radeon HD 6670 video card. Either way you are looking at anywhere between 3 and 5 more years of support, with the added benefit of a few more years for the open source drivers to mature.



        • All AMD Fusion chips lack L3 cache. With a few exceptions, all AM3(+) Phenoms have L3 on the chip and are therefore faster. Basically the A-series CPUs are only Athlons (the AM3 brand name for chips without L3) combined with a GPU, and that's bad for speed records. If you want speed, the Fusion chips are too slow, even if you are an AMD fan. But if you mainly need CPU speed and not GPU speed, you usually get something like a 780G or 880G based board - there are no newer chipsets with integrated graphics, and there is no replacement currently. So they should not stop supporting those boards, or there have to be CPUs with L3 cache combined with a GPU. I definitely do NOT think that everybody who wants to buy an AMD CPU (for whatever reason) will go for the slower Fusion FM1 CPUs...



          • Kano, we only have mobile Trinity benchmarks AFAIK. Do you have a desktop Trinity around, so that you can say for sure it is slower?



            • No, but I have got an i7-3770S; I prefer that one.



              • Originally posted by Kano View Post
                All AMD Fusion chips lack L3 cache. With a few exceptions, all AM3(+) Phenoms have L3 on the chip and are therefore faster. Basically the A-series CPUs are only Athlons (the AM3 brand name for chips without L3) combined with a GPU, and that's bad for speed records. If you want speed, the Fusion chips are too slow, even if you are an AMD fan. But if you mainly need CPU speed and not GPU speed, you usually get something like a 780G or 880G based board - there are no newer chipsets with integrated graphics, and there is no replacement currently. So they should not stop supporting those boards, or there have to be CPUs with L3 cache combined with a GPU. I definitely do NOT think that everybody who wants to buy an AMD CPU (for whatever reason) will go for the slower Fusion FM1 CPUs...
                I do agree that the current Fusion chips lack L3 cache. However, that does not necessarily mean that the processor is a complete loss. For specific tasks, like compiling software, the L3 cache is a must; most end users, though, have no need for L3 cache and will not notice much difference in those cases (see the sketch after the list below). There are other desktop processor lines from AMD currently in production that do have L3 cache:

                AMD FX series - all models currently have L3 cache, with no exceptions.
                AMD Phenom series - all models have L3 cache; the only exceptions are models released under the "Propus" and "Regor" codenames (as you mentioned).
                Phenom II mobile - currently no models have L3 cache.
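                A rough way to see what L3 actually buys you (a hypothetical microbenchmark sketch, not from this thread): chase pointers through a randomly ordered buffer and time the average access as the working set grows. On a chip without L3, latency jumps straight from L2-sized working sets to DRAM speed; with L3 there is an intermediate step.

```c
/* Hypothetical sketch: pointer-chasing latency vs. working-set size.
   Error handling and timer precision are kept minimal for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand(1); /* fixed seed for repeatable runs */
    for (size_t kb = 256; kb <= 32 * 1024; kb *= 2) {
        size_t n = kb * 1024 / sizeof(size_t);
        size_t *next = malloc(n * sizeof(size_t));

        /* Sattolo's algorithm: a single-cycle random permutation,
           so the chain visits every slot and defeats the prefetcher. */
        for (size_t i = 0; i < n; i++)
            next[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = rand() % i;
            size_t t = next[i]; next[i] = next[j]; next[j] = t;
        }

        size_t idx = 0;
        const size_t steps = 20 * 1000 * 1000;
        clock_t t0 = clock();
        for (size_t s = 0; s < steps; s++)
            idx = next[idx];
        double ns = (double)(clock() - t0) / CLOCKS_PER_SEC * 1e9 / steps;

        /* printing idx keeps the chase loop from being optimized away */
        printf("%6zu KiB: %5.1f ns/access (end=%zu)\n", kb, ns, idx);
        free(next);
    }
    return 0;
}
```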



                • It is not the point whether the chips have L3 or not. The point is that AMD is dropping support for the only integrated video solution they have for their top-of-the-line CPU, which is unarguably the AMD FX; that they declared this hardware "legacy" despite the fact that it is still being sold and that they have not delivered even one successor to it; and, even worse, that they recommend the open driver, which cannot use all the units (UVD), lacks essential functions (proper power management), and does not reach the performance of the proprietary driver.



                  • That's what I said, but I did not specifically mention the FX series because many do not think that they are really faster than the Phenom II CPUs before them. AMD combined 2 integer units together with 1 FPU into a functional part (a module), then used 4 of them for the so-called 8-core chips. But you only get 4 FPUs; compare that to the older Phenom X6, where you got 6 FPUs. So if you assume the same frequency and the same efficiency, you get a 33% increase in integer performance and a 33% decrease in FPU speed; whether the FX CPUs are better or not depends on your workload. But what is certainly lacking is the on-chip GPU. That needs more space, which means shrinking the chip: most likely AMD could easily add the GPU if they used a 22nm process like Intel does instead of their 32nm one. Maybe AMD should ask Intel to build the CPUs for them; they do not need their old AMD factory anyway, because they paid to have a free choice of where they produce now...
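                  The unit counts behind that estimate, as a quick back-of-the-envelope check (assuming equal clocks and per-unit efficiency, which real workloads will not show exactly):

```c
/* Quick sanity check of the FX vs. Phenom II X6 unit math above. */
#include <stdio.h>

int main(void)
{
    int fx_int = 4 * 2;  /* FX: 4 modules x 2 integer cores each */
    int fx_fpu = 4;      /* FX: 1 shared FPU per module          */
    int x6_int = 6;      /* Phenom II X6: 6 full cores           */
    int x6_fpu = 6;      /* Phenom II X6: 1 FPU per core         */

    printf("integer throughput: %+.0f%%\n", 100.0 * fx_int / x6_int - 100.0); /* +33% */
    printf("FPU throughput:     %+.0f%%\n", 100.0 * fx_fpu / x6_fpu - 100.0); /* -33% */
    return 0;
}
```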



                    • Originally posted by Kano View Post
                      That's what I said, but I did not specifically mention the FX series because many do not think that they are really faster than the Phenom II CPUs before them. AMD combined 2 integer units together with 1 FPU into a functional part (a module), then used 4 of them for the so-called 8-core chips. But you only get 4 FPUs; compare that to the older Phenom X6, where you got 6 FPUs. So if you assume the same frequency and the same efficiency, you get a 33% increase in integer performance and a 33% decrease in FPU speed; whether the FX CPUs are better or not depends on your workload. But what is certainly lacking is the on-chip GPU. That needs more space, which means shrinking the chip: most likely AMD could easily add the GPU if they used a 22nm process like Intel does instead of their 32nm one. Maybe AMD should ask Intel to build the CPUs for them; they do not need their old AMD factory anyway, because they paid to have a free choice of where they produce now...
                      You are right. I own a Phenom II X6 and I wouldn't even consider buying that Bulldozer crap, at least not unless they increase the number of modules to 6 or 8. Nonetheless, AMD decided to end the life of the Phenom II, so the AMD FX is now the best CPU they have, even if we know that their so-called "first 8-core desktop CPU" is just a marketing lie.

