AMD To Drop Radeon HD 2000/3000/4000 Catalyst Support


  • It's not that simple: on laptops we also have to use Catalyst, because otherwise the laptop is like a frying pan with the fans running at full speed...
    But a 3xxx M card + Catalyst was, for me, a success over the NVIDIA NVS 140M. Catalyst power management rocked, hibernate and suspend were flawless, everything was good; with NVIDIA it was crap the whole two years I used that computer...

    However, this week I had to upgrade my son's old computer and I went for NVIDIA, just because I heard that Catalyst will be discontinued for everything below the 5xxx series...
    In the long term I think this is a good decision; currently there is a fuss about it because the 4xxx series really does still do the job with the latest games. As I'm not much of a gamer (I usually play a game or two every handful of months) that doesn't bother me much, even though I have a Phenom + 4850 for gaming.

    Comment


    • Originally posted by bac0n View Post
      Even if Linus doesn't seem to be that interested in gfx, he should point the "eye" this way, because I believe gfx is one of the last excuses for not using Linux.
      Well, graphics surely... and the lack of games, the lack of necessary commercial applications, the constantly breaking updates, shoddy system software that requires a full-time sysadmin to keep a home desktop running smoothly, the inability to install software your friends sent you http links to, GNOME 3, the lack of high-productivity development tools for corporate in-house developers, lack of management tools for large organizations, software incompatibility between distros and even between versions of the same distro, the mind-numbing FOSS politics, the lack of ever actually working printer support, laptop battery usage problems, the essentially non-existent security model (for 2012; Win95 is no longer the bar to compete against, people), the rough edges on every single major desktop component and application, the bugs and error dialogs you frequently run into even on fresh installs of the two most popular distros, the lack of automatic silent seamless updates due to update instability, legacy software support issues, library API/ABI churn, training costs, God I could just keep going.

      Linux is a neat hobbyist OS. It's great for tinkerers and nerds who want to play with the guts of an OS and write little scripts to patch over problems that properly QA'd desktop OSes don't have in the first place. It even can suffice for the more dedicated nerds as a primary desktop OS. It's a fantastic server appliance OS. When suitably gutted, hacked, modified, and mostly replaced by walled-garden pseudo-FOSS bits, it even makes a great phone/tablet OS. It's never, ever, EVER going to replace OS X or Windows as a consumer desktop OS. Ever. It took 15 years to fail to catch up to the ease of use and consumer-friendliness of Windows 95. It is lightyears behind Windows 7. Even as crappy as Windows 8 is shaping up to be, GNOME 3 is even crappier. By the time the hobbyists finally get around to (briefly) focusing on quality and polish again like they did in the early days of GNOME 2, Windows 10 will already be out, and OS X may well have gone from 0% to 10% marketshare while the old "10x10" (10% marketshare by 2010) Linux desktop goals are already long dead.

      Comment


      • Originally posted by elanthis View Post
        Well, graphics surely... and the [rant snipped]
        Have you tried adminning a Windows box for someone over a long period of time? Both families of OS are rock solid for simple stuff, and break spectacularly in different ways as the complexity increases. Android and iOS are taking a new approach to users who aren't admins, and Windows 8 is going that way too, but I don't believe for a second that computers won't still suck and frustrate us enough to go on such rants.

        The relevance to this thread is that tinkerers and enthusiasts often drive the habits of people whose specialties aren't computers, even if only indirectly. So if you had excellent-performing free drivers for a class of ARM SoCs that made a fantastic tinkerer's Android device, that would help shape the purchasing habits of millions of people who just want a smartphone. Or if you had a fantastic Linux graphics workstation setup for games (AMD and nVidia each have strengths and weaknesses), that will shape our recommendations for notebooks, x86 tablets, etc.

        Comment


        • You can't pick bits and pieces...

          Originally posted by elanthis View Post
          Well, graphics surely...

          It's quite common to pick the good and compare it against the bad. Linux will never be commercial, it will not even be for you; Linux is for *me* and what I make of it. Linux is the ultimate ego trip: you can't tell it what to do, politics usually fails badly, you may say please, and if you want something you just do it, no one is stopping you. I'm on my first Debian install and have been for the past 10 years, and it gets better with each passing year.

          When it comes to updates there are usually two cases: either you fail to recognize the problem and deal with it, or you don't know what you are doing. But yes, I have started to notice it too, all of a sudden I have something on my system I didn't order, probably some dependency. The breakage can be annoying; even if each project only breaks badly once in its lifetime, the fragmentation of Linux can lead to a lot of breaks.

          One of the main reasons I switched to Linux was development: I think there are some really nice tools for it, and the total control over the system makes it ideal for development. Hehe, security, can you even say Windows and security in the same sentence (I will not even comment on that). You have to know that most of the limitations in Windows are not technical, they are there to protect its business model. If you know your history, you know that MS almost brought the internet to a standstill and was quite happy with it. You can see MS starting to cut the time between releases just because people start to ask why they don't have the next cool thing on their system. Maybe I have got it totally wrong, but isn't it big business that pushes open source, just to get away from the control of others, and the bigger the business gets the more open-sourcish they get?

          GFX will be the next big battlefield for Linux, and Linus the "moronizer" needs to get involved and start moronizing those pesky trolls standing in the way so people can do some real advanced voodoo. If Linux can get gfx right there will not be any real excuse left for not supporting it, or I should probably say that one of the remaining excuses out there is just the state of gfx.

          Comment


          • Originally posted by bridgman View Post
            At the same clocks, yes I think so (maybe a bit lower because there's been no tuning for the memory controller). At default clocks, obviously not.
            The consumer tends to use the defaults, and even I don't know how to switch the clocks on a system like this.

            Maybe AMD should fix the "defaults" for the products it ships instead of delivering apologies and excuses.
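
            (For reference, the open radeon KMS driver does expose a way to switch clocks through sysfs rather than leaving them at the boot defaults. A minimal sketch, assuming a kernel that exposes the "profile" power method; paths and accepted values vary by kernel version, and it needs root:)

            ```python
            # Hedged sketch: switch the radeon driver to profile-based power
            # management and select the "high" profile, so the GPU runs at its
            # highest predefined clocks instead of the boot-time defaults.
            import glob
            import sys

            def set_profile(profile="high"):
                nodes = glob.glob("/sys/class/drm/card*/device/power_profile")
                if not nodes:
                    sys.exit("no radeon power_profile node found (other driver or kernel?)")
                for node in nodes:
                    with open(node.replace("power_profile", "power_method"), "w") as f:
                        f.write("profile")      # use the static profile method
                    with open(node, "w") as f:
                        f.write(profile)        # one of: default, auto, low, mid, high
                    print(node, "->", profile)

            if __name__ == "__main__":
                set_profile(sys.argv[1] if len(sys.argv) > 1 else "high")
            ```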

            Comment


            • Originally posted by bridgman View Post
              It *has* been starting earlier each generation and will continue to do so. For the current generation we started development well before launch but not as early as the proprietary driver. For the next generation we are more-or-less aligned with proprietary driver development.
              Does the fact that you are aligned with the proprietary driver development schedule, also mean that the level of robustness and performance of the open driver is going to be similarly aligned with the proprietary driver? Keep in mind here I'm just talking about raw performance -- fill rate, FPS, ability to keep the shader pipeline busy so it doesn't stall for 1/5 second every other frame, and so on... I'm not talking about "features" like video encode/decode, OpenCL, support for newer versions of OpenGL, quad buffers / 3D (e.g. 3d bluray or OpenGL 4.x 3D), and so on and so forth. So just to get those things off the table... performance... if you start work at the same time as the proprietary driver, are you going to be able to achieve comparable performance to the proprietary driver as well?

              It's an innocent question; I honestly don't know the answer. I'm not sure what your (slightly expanded) team of programmers can do if they're given that enormous amount of time to work on a chip, and IIRC you said that the next generation (HD8000) is going to be a much less radical departure from GCN than GCN was from pre-GCN. I guess a part of me is optimistic that with all the extra time and more programmers working from the very beginning of the cycle, you might have a chance to chunk out 70 or 80% of the performance of Catalyst, assuming PCI-E 2.0 support enabled and we set the GPU/VRAM clocks correctly? Or is that a pipe dream?
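
              As an aside on the "stalls" point: raw average FPS can hide exactly that kind of hitch, which is why per-frame times matter. A purely illustrative sketch (render_frame() is a hypothetical stand-in for a real draw call, not anything from the drivers discussed here):

              ```python
              # Illustrative only: the average FPS can look fine while the driver
              # still stalls for ~200 ms on occasional frames, so record every
              # frame time and report the worst one alongside the average.
              import time

              def profile_frames(render_frame, frames=600):
                  times = []
                  for _ in range(frames):
                      start = time.perf_counter()
                      render_frame()                       # hypothetical draw call
                      times.append(time.perf_counter() - start)
                  avg_fps = frames / sum(times)
                  worst_ms = max(times) * 1000.0
                  print(f"avg {avg_fps:.1f} fps, worst frame {worst_ms:.1f} ms")

              if __name__ == "__main__":
                  profile_frames(lambda: time.sleep(0.016))  # dummy 16 ms "frame"
              ```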

              Comment


              • Originally posted by allquixotic View Post
                Does the fact that you are aligned with the proprietary driver development schedule, also mean that the level of robustness and performance of the open driver is going to be similarly aligned with the proprietary driver? Keep in mind here I'm just talking about raw performance -- fill rate, FPS, ability to keep the shader pipeline busy so it doesn't stall for 1/5 second every other frame, and so on... I'm not talking about "features" like video encode/decode, OpenCL, support for newer versions of OpenGL, quad buffers / 3D (e.g. 3d bluray or OpenGL 4.x 3D), and so on and so forth.
                There's not really any direct connection (although I'll talk about some indirect connections below). I think the devs would tell you that their plan is to keep the level of robustness *higher* than the proprietary driver anyways

                The most obvious advantage of starting sooner is that we can finish sooner relative to HW launch.

                What it also means is that there will be less time spent "learning the same lessons about hardware quirks from scratch" (since we'll all be sharing information as the new hardware is brought up for the first time) but that is traded off against the fact that developing earlier is harder because we are not able to rely on other teams having worked through HW issues and worked around them in VBIOS and microcode, and because testing needs to be done on simulators rather than on (faster) real hardware. On balance I expect we will come out a bit ahead.

                The big difference I expect is that we will be less likely to get "stuck" on hardware issues the way we have been on SI at a couple of points, since everyone else will be working on the same HW at the same time as the open source devs. That's one of those "the worst case won't be as bad" advantages though, sort of like a higher minimum frame rate

                Originally posted by allquixotic View Post
                So just to get those things off the table... performance... if you start work at the same time as the proprietary driver, are you going to be able to achieve comparable performance to the proprietary driver as well?
                I guess the most obvious point is that starting earlier doesn't actually give us more calendar time since we'll need to start work earlier on the *next* generation as well. We will have a bit more time since we won't be "catching up" any more, but that's only going to be maybe a 15% increase in per-generation development time.

                Originally posted by allquixotic View Post
                It's an innocent question; I honestly don't know the answer. I'm not sure what your (slightly expanded) team of programmers can do if they're given that enormous amount of time to work on a chip, and IIRC you said that the next generation (HD8000) is going to be a much less radical departure from GCN than GCN was from pre-GCN.
                I don't think we know the answer either, but I do expect that the smaller architectural changes from SI to subsequent parts should definitely make the next generation easier than SI.

                Originally posted by allquixotic View Post
                I guess a part of me is optimistic that with all the extra time and more programmers working from the very beginning of the cycle, you might have a chance to chunk out 70 or 80% of the performance of Catalyst, assuming PCI-E 2.0 support enabled and we set the GPU/VRAM clocks correctly? Or is that a pipe dream?
                Definitely not a pipe dream, but don't treat it as a given either.

                There are a number of open questions right now:

                First is how expensive the next round of performance improvements are going to be in the current open source driver stack once things like tiling and hyper-Z are enabled by default. My guess is that there should still be some low hanging fruit related to performance of slower games like Warsow.

                The second is whether the combination of new shader architecture in SI and the use of LLVM in the open source driver's shader compiler will raise the general level of performance on shader-intensive apps -- we think it will but we don't have any testing to confirm or deny at this point.

                The third is how close we can get power & thermal management to the proprietary driver.

                Last is whether we will be able to usefully leverage the performance work done on the proprietary driver, since the two drivers have fairly different internal architectures (proprietary shares code across OSes, open shares code across HW vendors). If we are able to leverage some of the performance work then things get better than what I'm saying here, but since it's a "we don't know yet" I don't want to set any expectations.

                We should know more about the first three over the next couple of months.
                Last edited by bridgman; 05-31-2012, 01:22 PM.

                Comment


                • Originally posted by bridgman View Post
                  lots of information
                  Thanks!

                  I guess it is fortunate, in a way, that there should be substantial code sharing between SI and SI+1.... fortunate for me because I have SI.... so if you guys are working on SI+1 and you realize that as a prerequisite you need to get something done that will benefit both SI and SI+1, that's great for me!

                  SI is seeming like the lost generation right now, so hopefully that changes (it'll have to, or RadeonSI will be able to power neither SI nor SI+1, I think).

                  Also, how are your efforts that affect cards "across the board" (no pun intended) affecting your development schedule for SI+1 bringup? Certainly that has to take time away from working on the core hardware bringup code (which is huge, judging by RadeonSI). I know you have some folks working on OpenCL, power management, perhaps even newer versions of OpenGL support? These are certainly nice to have, if not downright desirable from my perspective, but lower priority for me than getting performance up there as well as full OpenGL 2.1 with no hardlocks / segfaults.

                  Comment


                  • I just tried the 12.6 beta on my Radeon HD 545v (Mobility Radeon HD 5xxx series according to Wikipedia). It doesn't work: when loading fglrx.ko it says "no such device", so support has been dropped for some HD 5xxx cards too (this one is HD 45xx based anyway).
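
                    (For anyone hitting the same error: "no such device" from modprobe usually means the GPU's PCI ID is simply not in the module's device table. A rough sketch of how one might confirm that, using only standard sysfs paths; the printed device ID can then be searched for in the output of `modinfo -F alias fglrx`:)

                    ```python
                    # Sketch: list the PCI vendor:device IDs of all display controllers
                    # so they can be compared against the driver module's alias table.
                    import glob

                    def gpu_pci_ids():
                        ids = []
                        for dev in glob.glob("/sys/bus/pci/devices/*"):
                            with open(dev + "/class") as f:
                                pci_class = f.read().strip()
                            if not pci_class.startswith("0x03"):   # 0x03xxxx = display controller
                                continue
                            with open(dev + "/vendor") as f:
                                vendor = f.read().strip()
                            with open(dev + "/device") as f:
                                device = f.read().strip()
                            ids.append((dev.rsplit("/", 1)[-1], vendor, device))
                        return ids

                    if __name__ == "__main__":
                        for slot, vendor, device in gpu_pci_ids():
                            # e.g. "0000:01:00.0  0x1002:0x68e0"
                            print(f"{slot}  {vendor}:{device}")
                    ```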

                    I bought this laptop 1 year and 2 months ago, brand new. Congrats AMD, you will see another $0 from me, be sure of that. This is not what a serious company should do.

                    NVIDIA: welcome back in the game (but I will buy Intel only, except for my gaming PC).

                    OK, stop the rage now.
                    I'm sad about this... really. There was a time when I was a strong fan of AMD's support for Linux. I'm not any more, not really because of fglrx, but more because AMD really doesn't put a fair effort into radeon. And when I see this crazy drop of support from the official driver (even without xorg-server 1.12 support in place! Now I understand why Ubuntu didn't update to 1.12)... well, I can understand that AMD doesn't give a damn about Linux and thinks only about Windows. If NVIDIA can keep old hardware supported in new drivers (and BTW the 3xx series still supports the GeForce 6xxx... that's old, guys!), I think AMD should put a little more effort in too. I'm not asking for new features, but at least new kernel and Xorg support is a must.

                    Farewell AMD

                    Comment


                    • Thanks bridgman for the detailed information, it is most appreciated. On the strength of the current development I recently purchased a Gigabyte Radeon HD 7870, to be used at a later date when development comes along a bit more, as by then the cards will likely no longer be available (that's from past experience).

                      Comment


                      • @enrico.tagliavini

                        That's called marketing: if a vendor needs a new card for an OEM and there is no new hardware ready, they just rebrand an old one. The same issue exists with my NVIDIA 405 card; usually you would say 4xx is Fermi based, but no, that one is a rebranded NV 210 (the 405 is a pure OEM card). It's of course very bad that support was dropped for DX10 cards this early, but maybe AMD reconsiders and only drops it after xserver 1.12, which would at least be enough for Debian wheezy.

                        Comment


                          • I just upgraded from 3.3.7 to 3.5.0-rc1, and it looks like quite a game changer; it takes the sting out of this AMD move. I have yet to test the power-saving feature though.

                          Comment


                          • Originally posted by acer View Post
                            I just upgraded from 3.3.7 to 3.5.0-rc1, and it looks like quite a game changer; it takes the sting out of this AMD move. I have yet to test the power-saving feature though.
                            It would be nice if you could report back the CPU temperatures you get in comparison to Catalyst or earlier versions of radeon.
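
                            Something like this rough sketch would make the numbers comparable across driver versions; it just walks the standard hwmon sysfs nodes (assuming the kernel exposes them, and noting that the GPU sensor only appears when the loaded driver registers one):

                            ```python
                            # Rough sketch: print every hwmon temperature sensor (CPU, GPU, ...)
                            # so the same readings can be compared under Catalyst, older radeon
                            # kernels and 3.5-rc1. Sysfs reports millidegrees Celsius.
                            import glob

                            for temp_path in sorted(glob.glob("/sys/class/hwmon/hwmon*/temp*_input")):
                                hwmon_dir = temp_path.rsplit("/", 1)[0]
                                try:
                                    with open(hwmon_dir + "/name") as f:
                                        name = f.read().strip()
                                except OSError:
                                    name = "unknown"
                                try:
                                    with open(temp_path) as f:
                                        millideg = int(f.read().strip())
                                except OSError:
                                    continue
                                print(f"{name}: {millideg / 1000.0:.1f} C ({temp_path})")
                            ```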

                            Comment


                            • Originally posted by enrico.tagliavini View Post
                              (even without xorg-server 1.12 support in place! Now I understand why Ubuntu didn't update to 1.12)
                              I think it's the other way round. The fglrx people are late with xorg 1.12 support because Ubuntu doesn't yet use that version.
                              Anyway, we're getting the same problems as with the R200-R500 series in 2009. I have never bought AMD hardware - and now I have an even better reason to stick to nvidia.

                              Comment


                              • Originally posted by Kano View Post
                                @enrico.tagliavini

                                That's called marketing: if a vendor needs a new card for an OEM and there is no new hardware ready, they just rebrand an old one. The same issue exists with my NVIDIA 405 card; usually you would say 4xx is Fermi based, but no, that one is a rebranded NV 210 (the 405 is a pure OEM card). It's of course very bad that support was dropped for DX10 cards this early, but maybe AMD reconsiders and only drops it after xserver 1.12, which would at least be enough for Debian wheezy.
                                That's correct, the name has nothing to do with the chip inside (at least not strictly). I knew it was an R700 based card, but I tried anyway. It would be very nice if AMD did just one more release with xorg-server 1.12 support, but I don't even try to hope for it; I'm sick of being frustrated by AMD.

                                Comment
