AMD To Drop Radeon HD 2000/3000/4000 Catalyst Support


  • Originally posted by Qaridarium
    It's only the discrepancy between what is and what could be.
    Not really. It's sad that AMD doesn't have more money to spend on managing the Catalyst drivers; they could probably hire one or two more people to work full-time on porting the legacy-branch Catalyst drivers to new kernel/Xorg versions. But since 2006-2007 AMD have been paying the open source team (including all the lawyers and non-coding staff) to set up an infrastructure to support cards that are considered legacy. They could have spent that same money on keeping fglrx closed and supporting multiple branches for multiple generations of cards, which is exactly what NVIDIA does, but they chose a different option.

    AMD hoped the free drivers would eventually reach 80% of the performance of fglrx; that's the only real failure, because the performance isn't there. Bridgman has said the free driver development will happen faster with future generations of cards, and it'll have to get quicker if AMD don't have the resources to maintain fglrx for longer than three years for an older generation of cards.

    I don't know what I'd buy if I had to buy a new Linux desktop system now. Free drivers are important to me, but so is performance, so I think it'd be a toss-up between an integrated Intel system and a discrete AMD system with a mid-range Radeon card. I still don't think I need the performance of the closed-source NVIDIA options, but their support for games under Wine is always tempting.


    • Originally posted by grantek View Post
      AMD hoped the free drivers would eventually reach 80% of the performance of fglrx; that's the only real failure, because the performance isn't there.
      I actually said we expected 60-70% based on an estimate of how much community development would happen. IIRC what we are seeing now is a bit higher on 3xx-5xx (especially 5xx) and a bit lower on 6xx and higher.

      Originally posted by grantek View Post
      Bridgman has said the free driver development will happen faster with future generations of cards, and it'll have to get quicker if AMD don't have the resources to maintain fglrx for longer than three years for an older generation of cards.
      It *has* been starting earlier each generation and will continue to do so. For the current generation we started development well before launch but not as early as the proprietary driver. For the next generation we are more-or-less aligned with proprietary driver development.

      • Originally posted by Qaridarium
        You claim 40-50% with a Radeon HD 6550D?

        It's more like 10-20%.
        At the same clocks, yes, I think so (maybe a bit lower because there's been no tuning for the memory controller). At default clocks, obviously not.

        • Me too, but I'm still angry...

          Originally posted by chrisr View Post
          I dumped fglrx ages ago and not regretted it. Ordinary Open Source users simply aren't its target audience. The driver was always behind new kernels and X servers, bugs remained unfixed, no-one was interested in receiving bug reports and its stability/usability fluctuated wildly from month to month anyway.

          The Open Source drivers may not have all the benchmark performance, but in my experience they have a stability and consistency that fglrx could never achieve in its darkest dreams.
          I mean, my warranty just expired and the card basically goes EOL. I bought this card (a 4890) because of AMD's commitment to open source, but fglrx has been one long, painful childbirth. My patience has really been tested, and around Catalyst 10.9 I decided to drop fglrx for good and rely on the open-source driver; it just wasn't worth the hassle. Even if Linus doesn't seem to be that interested in graphics, he should point the "eye" this way, because I believe graphics is one of the last excuses for not using Linux.
          Last edited by bac0n; 31 May 2012, 01:06 AM.


          • It's not that simple; on laptops we have to use Catalyst too, because otherwise the laptop is like a frying pan, with the fans going full speed...
            But a Mobility 3xxx card + Catalyst was a success for me compared to an NVIDIA NVS 140M. Catalyst power management rocked, hibernate and suspend were flawless, everything was good; with NVIDIA it was crap all the way through the two years I was using that computer...

            However, this week I had to upgrade my son's old computer and I went for NVIDIA, just because I heard that Catalyst will be discontinued for anything below the 5xxx series...
            In the long term I think this is a good decision; currently there is a fuss about it because the 4xxx series really does the job for playing the latest games. As I'm not much of a gamer (I usually play a game or two every handful of months), that doesn't bother me much, even though I have a Phenom + 4850 for gaming.


            • Originally posted by bac0n View Post
              Even if Linus doesn't seem to be that interested in graphics, he should point the "eye" this way, because I believe graphics is one of the last excuses for not using Linux.
              Well, graphics surely... and the lack of games, the lack of necessary commercial applications, the constantly breaking updates, shoddy system software that requires a full-time sysadmin to keep a home desktop running smoothly, the inability to install software your friends sent you http links to, GNOME 3, the lack of high-productivity development tools for corporate in-house developers, lack of management tools for large organizations, software incompatibility between distros and even between versions of the same distro, the mind-numbing FOSS politics, the lack of ever actually working printer support, laptop battery usage problems, the essentially non-existent security model (for 2012; Win95 is no longer the bar to compete against, people), the rough edges on every single major desktop component and application, the bugs and error dialogs you frequently run into even on fresh installs of the two most popular distros, the lack of automatic silent seamless updates due to update instability, legacy software support issues, library API/ABI churn, training costs, God I could just keep going.

              Linux is a neat hobbyist OS. It's great for tinkerers and nerds who want to play with the guts of an OS and write little scripts to patch over problems that properly QA'd desktop OSes don't have in the first place. It even can suffice for the more dedicated nerds as a primary desktop OS. It's a fantastic server appliance OS. When suitably gutted, hacked, modified, and mostly replaced by walled-garden pseudo-FOSS bits, it even makes a great phone/tablet OS. It's never, ever, EVER going to replace OS X or Windows as a consumer desktop OS. Ever. It took 15 years to fail to catch up to the ease of use and consumer-friendliness of Windows 95. It is lightyears behind Windows 7. Even as crappy as Windows 8 is shaping up to be, GNOME 3 is even crappier. By the time the hobbyists finally get around to (briefly) focusing on quality and polish again like they did in the early days of GNOME 2, Windows 10 will already be out, and OS X may well have gone from 0% to 10% marketshare while the old "10x10" (10% marketshare by 2010) Linux desktop goals are already long dead.


              • Originally posted by elanthis View Post
                Well, graphics surely... and the [rant snipped]
                Have you tried adminning a Windows box for someone for a long period of time? Both OS families are rock-solid for simple stuff, and break spectacularly in different ways as the complexity increases. Android and iOS are taking a new approach for users who aren't admins, and Windows 8 is going that way too, but I don't believe for a second that computers won't ever suck and frustrate us enough to go on such rants.

                The relevance to this thread is that tinkerers and enthusiasts often drive the habits of people whose specialties aren't computers, even if only indirectly. So if you had excellent-performing free drivers for a class of ARM SoCs that made a fantastic tinkerer's Android device, that would help shape the purchasing habits of millions of people who just want a smartphone. Or if you had a fantastic Linux graphics workstation setup for games (AMD and NVIDIA each have strengths and weaknesses), that would shape our recommendations for notebooks, x86 tablets, etc.


                • You can't pick bits and pieces...

                  Originally posted by elanthis View Post
                  Well, graphics surely...

                  It's quite common to pick the good and compare it against the bad. Linux will never be commercial, and it will not even be for you; Linux is for *me* and what I make of it. Linux is the ultimate ego trip: you can't tell it what to do, politics usually fails badly, you may say please, and if you want something, just do it, because no one is stopping you. I'm on my first Debian install and have been for the past 10 years, and it gets better with each passing year.

                  When it comes to updates there are usually two cases: you fail to recognize the problem and deal with it, or you don't know what you are doing. But yes, I have started to notice it too; all of a sudden I have something on my system I didn't order, probably pulled in as a dependency. The breakage can be annoying, and even if each project only breaks badly once in its lifetime, the fragmentation of Linux can lead to a lot of breaks. One of the main reasons I switched to Linux was development: I think there are some really nice tools for development, and total control of the system makes it ideal for it.

                  Heh, security, can you even say Windows and security in the same sentence? (I will not even comment on that.) You have to know that most of the limitations in Windows are not technical; they are there to protect its business model. If you know your history, you know that MS almost brought the Internet to a standstill and was quite happy with it. You can see MS starting to cut the time between releases just because people have started asking why they don't have that next cool thing on their system. Maybe I have got it totally wrong, but isn't it big business that pushes open source, just to get away from the control of others, and the bigger the business gets the more open-sourcish it gets?

                  Graphics will be (or already is) the next big battlefield for Linux, and Linus the "moronizer" needs to get involved and start moronizing those pesky trolls standing in the way so people can do some real advanced voodoo. If Linux can get graphics right there will not be any real excuse not to support it, or I should probably say that one of the remaining excuses is simply the state of graphics.


                  • Originally posted by bridgman View Post
                    It *has* been starting earlier each generation and will continue to do so. For the current generation we started development well before launch but not as early as the proprietary driver. For the next generation we are more-or-less aligned with proprietary driver development.
                    Does the fact that you are aligned with the proprietary driver development schedule, also mean that the level of robustness and performance of the open driver is going to be similarly aligned with the proprietary driver? Keep in mind here I'm just talking about raw performance -- fill rate, FPS, ability to keep the shader pipeline busy so it doesn't stall for 1/5 second every other frame, and so on... I'm not talking about "features" like video encode/decode, OpenCL, support for newer versions of OpenGL, quad buffers / 3D (e.g. 3d bluray or OpenGL 4.x 3D), and so on and so forth. So just to get those things off the table... performance... if you start work at the same time as the proprietary driver, are you going to be able to achieve comparable performance to the proprietary driver as well?

                    It's an innocent question; I honestly don't know the answer. I'm not sure what your (slightly expanded) team of programmers can do if they're given that enormous amount of time to work on a chip, and IIRC you said that the next generation (HD 8000) is going to be a much less radical departure from GCN than GCN was from pre-GCN. I guess a part of me is optimistic that with all the extra time and more programmers working from the very beginning of the cycle, you might have a chance to churn out 70 or 80% of the performance of Catalyst, assuming PCI-E 2.0 support is enabled and we set the GPU/VRAM clocks correctly? Or is that a pipe dream?
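                    (For reference: on the open radeon driver of that era, "setting the GPU/VRAM clocks correctly" generally meant forcing the KMS power-management profile through sysfs. Below is a minimal illustrative sketch of that step, assuming the pre-DPM power_method/power_profile interface and that the Radeon is card0; the helper name is mine, not something posted in the thread.)

                    #!/usr/bin/env python3
                    # Minimal sketch: switch the radeon KMS driver to manual power profiles
                    # and select "high", so GPU/VRAM clocks stay at their highest predefined
                    # values while benchmarking. Assumes the pre-DPM sysfs interface and
                    # card0; paths may differ per system. Needs to run as root.
                    from pathlib import Path

                    DEVICE = Path("/sys/class/drm/card0/device")

                    def set_high_profile(device: Path = DEVICE) -> None:
                        """Use manual power profiles instead of dynpm, then pick 'high'."""
                        (device / "power_method").write_text("profile\n")
                        (device / "power_profile").write_text("high\n")

                    if __name__ == "__main__":
                        set_high_profile()
                        # If debugfs is mounted, the resulting engine/memory clocks can be
                        # checked in /sys/kernel/debug/dri/0/radeon_pm_info.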


                    • Originally posted by allquixotic View Post
                      Does the fact that you are aligned with the proprietary driver development schedule, also mean that the level of robustness and performance of the open driver is going to be similarly aligned with the proprietary driver? Keep in mind here I'm just talking about raw performance -- fill rate, FPS, ability to keep the shader pipeline busy so it doesn't stall for 1/5 second every other frame, and so on... I'm not talking about "features" like video encode/decode, OpenCL, support for newer versions of OpenGL, quad buffers / 3D (e.g. 3d bluray or OpenGL 4.x 3D), and so on and so forth.
                      There's not really any direct connection (although I'll talk about some indirect connections below). I think the devs would tell you that their plan is to keep the level of robustness *higher* than that of the proprietary driver anyway.

                      The most obvious advantage of starting sooner is that we can finish sooner relative to HW launch.

                      What it also means is that there will be less time spent "learning the same lessons about hardware quirks from scratch", since we'll all be sharing information as the new hardware is brought up for the first time. That is traded off against the fact that developing earlier is harder, because we are not able to rely on other teams having worked through HW issues and worked around them in VBIOS and microcode, and because testing needs to be done on simulators rather than on (faster) real hardware. On balance I expect we will come out a bit ahead.

                      The big difference I expect is that we will be less likely to get "stuck" on hardware issues the way we have been on SI at a couple of points, since everyone else will be working on the same HW at the same time as the open source devs. That's one of those "the worst case won't be as bad" advantages though, sort of like a higher minimum frame rate.

                      Originally posted by allquixotic View Post
                      So just to get those things off the table... performance... if you start work at the same time as the proprietary driver, are you going to be able to achieve comparable performance to the proprietary driver as well?
                      I guess the most obvious point is that starting earlier doesn't actually give us more calendar time since we'll need to start work earlier on the *next* generation as well. We will have a bit more time since we won't be "catching up" any more, but that's only going to be maybe a 15% increase in per-generation development time.

                      Originally posted by allquixotic View Post
                      It's an innocent question; I honestly don't know the answer. I'm not sure what your (slightly expanded) team of programmers can do if they're given that enormous amount of time to work on a chip, and IIRC you said that the next generation (HD 8000) is going to be a much less radical departure from GCN than GCN was from pre-GCN.
                      I don't think we know the answer either, but I do expect that the smaller architectural changes from SI to subsequent parts should definitely make the next generation easier than SI.

                      Originally posted by allquixotic View Post
                      I guess a part of me is optimistic that with all the extra time and more programmers working from the very beginning of the cycle, you might have a chance to churn out 70 or 80% of the performance of Catalyst, assuming PCI-E 2.0 support is enabled and we set the GPU/VRAM clocks correctly? Or is that a pipe dream?
                      Definitely not a pipe dream, but don't treat it as a given either.

                      There are a number of open questions right now:

                      The first is how expensive the next round of performance improvements is going to be in the current open source driver stack once things like tiling and hyper-Z are enabled by default. My guess is that there should still be some low-hanging fruit related to the performance of slower games like Warsow.

                      The second is whether the combination of the new shader architecture in SI and the use of LLVM in the open source driver's shader compiler will raise the general level of performance on shader-intensive apps -- we think it will, but we don't have any test results to confirm or deny that at this point.

                      The third is how close we can get power & thermal management to the proprietary driver.

                      The last is whether we will be able to usefully leverage the performance work done on the proprietary driver, since the two drivers have fairly different internal architectures (the proprietary driver shares code across OSes, the open driver shares code across HW vendors). If we are able to leverage some of that performance work then things get better than what I'm saying here, but since it's a "we don't know yet" I don't want to set any expectations.

                      We should know more about the first three over the next couple of months.
                      Last edited by bridgman; 31 May 2012, 01:22 PM.