
Thread: AMD To Drop Radeon HD 2000/3000/4000 Catalyst Support

  1. #201
    Join Date
    Sep 2008
    Posts
    989

    Default

    Quote Originally Posted by bridgman View Post
    It *has* been starting earlier each generation and will continue to do so. For the current generation we started development well before launch but not as early as the proprietary driver. For the next generation we are more-or-less aligned with proprietary driver development.
    Does the fact that you are aligned with the proprietary driver development schedule also mean that the robustness and performance of the open driver are going to be similarly aligned with the proprietary driver? Keep in mind I'm just talking about raw performance -- fill rate, FPS, the ability to keep the shader pipeline busy so it doesn't stall for 1/5 of a second every other frame, and so on. I'm not talking about "features" like video encode/decode, OpenCL, support for newer versions of OpenGL, quad-buffered stereo 3D (e.g. 3D Blu-ray or OpenGL 4.x stereo), and so forth. So just to get those things off the table: performance. If you start work at the same time as the proprietary driver, are you going to be able to achieve comparable performance to the proprietary driver as well?

    It's an innocent question; I honestly don't know the answer. I'm not sure what your (slightly expanded) team of programmers can do if they're given that enormous amount of time to work on a chip, and IIRC you said that the next generation (HD 8000) is going to be a much less radical departure from GCN than GCN was from pre-GCN. I guess a part of me is optimistic that with all the extra time and more programmers working from the very beginning of the cycle, you might have a chance to get 70 or 80% of the performance of Catalyst, assuming PCIe 2.0 support is enabled and the GPU/VRAM clocks are set correctly? Or is that a pipe dream?

  2. #202
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,413

    Default

    Quote Originally Posted by allquixotic View Post
    Does the fact that you are aligned with the proprietary driver development schedule also mean that the robustness and performance of the open driver are going to be similarly aligned with the proprietary driver? Keep in mind I'm just talking about raw performance -- fill rate, FPS, the ability to keep the shader pipeline busy so it doesn't stall for 1/5 of a second every other frame, and so on. I'm not talking about "features" like video encode/decode, OpenCL, support for newer versions of OpenGL, quad-buffered stereo 3D (e.g. 3D Blu-ray or OpenGL 4.x stereo), and so forth.
    There's not really any direct connection (although I'll talk about some indirect connections below). I think the devs would tell you that their plan is to keep the level of robustness *higher* than the proprietary driver's anyway.

    The most obvious advantage of starting sooner is that we can finish sooner relative to HW launch.

    What it also means is that less time will be spent "learning the same lessons about hardware quirks from scratch", since we'll all be sharing information as the new hardware is brought up for the first time. That is traded off against the fact that developing earlier is harder: we can't rely on other teams having already worked through HW issues and worked around them in VBIOS and microcode, and testing has to be done on simulators rather than on (faster) real hardware. On balance I expect we will come out a bit ahead.

    The big difference I expect is that we will be less likely to get "stuck" on hardware issues the way we have been on SI at a couple of points, since everyone else will be working on the same HW at the same time as the open source devs. That's one of those "the worst case won't be as bad" advantages though, sort of like a higher minimum frame rate.

    Quote Originally Posted by allquixotic View Post
    So just to get those things off the table: performance. If you start work at the same time as the proprietary driver, are you going to be able to achieve comparable performance to the proprietary driver as well?
    I guess the most obvious point is that starting earlier doesn't actually give us more calendar time since we'll need to start work earlier on the *next* generation as well. We will have a bit more time since we won't be "catching up" any more, but that's only going to be maybe a 15% increase in per-generation development time.

    Quote Originally Posted by allquixotic View Post
    It's an innocent question; I honestly don't know the answer. I'm not sure what your (slightly expanded) team of programmers can do if they're given that enormous amount of time to work on a chip, and IIRC you said that the next generation (HD 8000) is going to be a much less radical departure from GCN than GCN was from pre-GCN.
    I don't think we know the answer either, but I do expect that the smaller architectural changes from SI to subsequent parts should make the next generation easier than SI was.

    Quote Originally Posted by allquixotic View Post
    I guess a part of me is optimistic that with all the extra time and more programmers working from the very beginning of the cycle, you might have a chance to get 70 or 80% of the performance of Catalyst, assuming PCIe 2.0 support is enabled and the GPU/VRAM clocks are set correctly? Or is that a pipe dream?
    Definitely not a pipe dream, but don't treat it as a given either.

    There are a number of open questions right now:

    First is how expensive the next round of performance improvements is going to be in the current open source driver stack once things like tiling and Hyper-Z are enabled by default. My guess is that there should still be some low-hanging fruit related to the performance of slower games like Warsow.
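
    (For anyone who wants to experiment before those defaults land: in the 2012-era stack, color tiling is an xorg.conf option in the radeon DDX, and Hyper-Z in the r600g Mesa driver sat behind an environment variable. A hedged sketch -- the option and variable names below should be checked against your installed versions:)

        Section "Device"
            Identifier "Radeon"
            Driver     "radeon"
            Option     "ColorTiling"   "on"   # 1D color tiling
            Option     "ColorTiling2D" "on"   # 2D tiling, needs a new enough DDX/kernel
        EndSection

        # Hyper-Z (experimental at the time), enabled per application:
        #   RADEON_HYPERZ=1 warsow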

    The second is whether the combination of the new shader architecture in SI and the use of LLVM in the open source driver's shader compiler will raise the general level of performance on shader-intensive apps -- we think it will, but we don't have any test results to confirm or deny that at this point.

    The third is how close we can get power & thermal management to the proprietary driver.

    Last is whether we will be able to usefully leverage the performance work done on the proprietary driver, since the two drivers have fairly different internal architectures (the proprietary driver shares code across OSes, the open driver shares code across HW vendors). If we are able to leverage some of that performance work then things get better than what I'm describing here, but since it's a "we don't know yet" I don't want to set any expectations.

    We should know more about the first three over the next couple of months.
    Last edited by bridgman; 05-31-2012 at 01:22 PM.

  3. #203
    Join Date
    Sep 2008
    Posts
    989

    Default

    Quote Originally Posted by bridgman View Post
    lots of information
    Thanks!

    I guess it is fortunate, in a way, that there should be substantial code sharing between SI and SI+1... fortunate for me because I have SI. So if you guys are working on SI+1 and realize that, as a prerequisite, you need to get something done that will benefit both SI and SI+1, that's great for me!

    SI seems like the lost generation right now, so hopefully that changes (it will have to, or RadeonSI will end up powering neither SI nor SI+1, I think).

    Also, how are the efforts that affect cards "across the board" (no pun intended) affecting your development schedule for SI+1 bringup? Certainly they have to take time away from working on the core hardware bringup code (which is huge, judging by RadeonSI). I know you have some folks working on OpenCL, power management, perhaps even support for newer versions of OpenGL? These are certainly nice to have, if not downright desirable from my perspective, but they're lower priority for me than getting performance up, along with full OpenGL 2.1 support and no hard locks or segfaults.

  4. #204

    Default

    I just tried the 12.6 beta on my Radeon HD 545v (Mobility Radeon HD 5xxx series according to Wikipedia). It doesn't work: when loading fglrx.ko it says "no such device", so support has also been dropped for some HD 5xxx cards (this one is 45xx-based anyway).
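
    (For anyone wanting to check their own card: "no such device" from modprobe means the module's PCI ID table no longer lists the GPU. A minimal Python sketch, assuming the standard sysfs PCI layout, that prints the IDs to compare against the driver's supported list:)

        # Hedged sketch: print PCI vendor:device IDs of ATI/AMD display adapters
        # (vendor 0x1002), e.g. to compare against fglrx's supported-device list.
        from pathlib import Path

        for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
            vendor = (dev / "vendor").read_text().strip()
            pclass = (dev / "class").read_text().strip()
            if vendor == "0x1002" and pclass.startswith("0x03"):  # display controller
                device = (dev / "device").read_text().strip()
                print(f"{dev.name}: {vendor}:{device}")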

    I bought this laptop 1 year and 2 months ago, brand new. Congrats AMD, you will see another $0 from me, be sure of that. This is not what a serious company should do.

    NVIDIA: welcome back in the game (but I will buy Intel only, except for my gaming PC).

    OK, enough raging now.
    I'm sad about this... really. There was a time when I was a strong fan of AMD's Linux support. I'm not anymore -- not really because of fglrx, but more because AMD doesn't put a fair effort into radeon. And when I see this crazy drop of support from the official driver (without even xorg-server 1.12 support in place! Now I understand why Ubuntu didn't update to 1.12)... well, I can only conclude that AMD doesn't give a damn about Linux and thinks only about Windows. If NVIDIA can keep supporting old cards in its drivers (and BTW the 3xx series still supports the GeForce 6xxx -- that's old, guys!), I think AMD should put a little more effort in too. I'm not asking for new features, but support for new kernels and Xorg releases is a must.

    Farewell AMD

  5. #205
    Join Date
    Nov 2008
    Posts
    89

    Default

    Thanks bridgman for the detailed information, it is most appreciated. On the strength of the current development I recently purchased a Gigabyte Radeon HD 7870, to be used at a later date when development has come along a bit more; by then the cards will likely no longer be available (that's from past experience).

  6. #206
    Join Date
    Aug 2007
    Posts
    6,610

    Default

    @enrico.tagliavini

    That's called marketing: if a vendor needs a new card for an OEM and there is no new hardware ready, they just rebrand an old one. It's the same with my NVIDIA 405 card; usually you would say the 4xx series is Fermi-based, but no, that one is a rebranded GeForce 210 (the 405 is a pure OEM card). It's of course very bad that support was dropped for DX10 cards this early, but maybe AMD will reconsider and drop it only after xserver 1.12, which would at least be enough for Debian wheezy.

  7. #207
    Join Date
    May 2011
    Posts
    14

    Default

    I just upgraded from kernel 3.3.7 to 3.5.0-rc1, and it looks like quite a game changer; it takes the sting out of this AMD move. I have yet to test the powersave feature, though.

  8. #208
    Join Date
    Apr 2012
    Location
    Germany
    Posts
    205

    Default

    Quote Originally Posted by acer View Post
    I just upgraded from kernel 3.3.7 to 3.5.0-rc1, and it looks like quite a game changer; it takes the sting out of this AMD move. I have yet to test the powersave feature, though.
    It would be nice if you could report back the temperatures you get in comparison to Catalyst or earlier versions of radeon.
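
    (For the comparison, a minimal sketch of the radeon KMS power interface as it existed around kernel 3.x: the profile-based power method plus the hwmon temperature sensor. Paths and accepted values are from that era's sysfs layout, so treat this as an assumption to verify against your kernel's documentation; writing the files needs root.)

        # Hedged sketch: select the "profile" radeon power method, drop to the
        # "low" profile, then print the GPU temperature reported via hwmon.
        from pathlib import Path

        card = Path("/sys/class/drm/card0/device")
        (card / "power_method").write_text("profile\n")   # alternatives: dynpm, default
        (card / "power_profile").write_text("low\n")      # default/auto/low/mid/high

        for temp in card.glob("hwmon/hwmon*/temp1_input"):
            print(f"GPU temperature: {int(temp.read_text()) / 1000:.1f} C")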

  9. #209
    Join Date
    May 2011
    Posts
    353

    Default

    Quote Originally Posted by enrico.tagliavini View Post
    (without even xorg-server 1.12 support in place! Now I understand why Ubuntu didn't update to 1.12)
    I think it's the other way round: the fglrx people are late with xorg 1.12 support because Ubuntu doesn't use that version yet.
    Anyway, we're getting the same problems as with the R200-R500 series in 2009. I have never bought AMD hardware -- and now I have an even better reason to stick with NVIDIA.

  10. #210

    Default

    Quote Originally Posted by Kano View Post
    @enrico.tagliavini

    That's called marketing: if a vendor needs a new card for an OEM and there is no new hardware ready, they just rebrand an old one. It's the same with my NVIDIA 405 card; usually you would say the 4xx series is Fermi-based, but no, that one is a rebranded GeForce 210 (the 405 is a pure OEM card). It's of course very bad that support was dropped for DX10 cards this early, but maybe AMD will reconsider and drop it only after xserver 1.12, which would at least be enough for Debian wheezy.
    That's correct, the name has nothing to do with the chip inside (at least not strictly). I knew it was an R700-based card, but I tried anyway. It would be very nice for AMD to do just one more release with xorg-server 1.12 support, but I don't even dare to hope for it; I'm sick of being frustrated by AMD.
