
Thread: Where The Open-Source AMD Driver Is At For Modern GPUs

  1. #11
    Join Date
    Jun 2009
    Posts
    2,927

    Default

    Quote Originally Posted by birdie View Post
    The problem is that it will be like that forever. AMD/NVIDIA deploy over five hundred programmers to create fast, optimized, bug-free drivers for their GPUs, while there is at most a handful of programmers (I'd say 10 tops) working on the open-source drivers.
    We don't need to "defeat" the blobs, since they have a number of inherent faults that make them undesirable and that cannot be fixed.

    What we need is to bring performance to the point where it no longer matters, so that all the inherent advantages of OSS drivers (KMS, out-of-the-box support, integration, support for X technologies, no spyware or malware, longer support, and so on) take over.

    With r300g, we're already there. With r600g, it will take a while longer, but we'll get to 65-75% of the performance, and that's fast enough for most people. There will always be people who fuck around with blobs, shoehorning them into a kernel that was not designed for them, to obtain 30% more FPS, but I imagine that for most users this exercise will become unnecessary, just like nobody installs the nforce ethernet binary driver nowadays.

    It's like GCC vs. ICC. ICC produces faster code, but everyone uses GCC anyway, because speed is not everything. We need to cover the needs of computer users; free software itself is THE killer argument.

  2. #12
    Join Date
    Mar 2010
    Posts
    27

    Default

    Quote Originally Posted by Kano View Post
    For some use cases it seems that the OSS drivers are not that bad. If you only play browser games up to Quake Live, you can already be happy; it depends a bit on the card, however.
    Yup, I would definitely buy an HD5830 to play Quake Live. No wait, Quake 3 worked fine on a GeForce2... Well, after thinking about it, I think I'll stick with Catalyst.

  3. #13
    Join Date
    Jun 2009
    Posts
    2,927

    Default

    Quote Originally Posted by yaji View Post
    Yup, I would definitely buy an HD5830 to play Quake Live. No wait, Quake 3 worked fine on a GeForce2... Well, after thinking about it, I think I'll stick with Catalyst.
    You don't need an HD5830 to play Quake Live; the cheapest card on the market will work just fine. Fanless, even:

    http://www.google.com/products/catal...wBQ#ps-sellers

  4. #14
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,583

    Default

    Quote Originally Posted by pingufunkybeat View Post
    It's like GCC vs. ICC. ICC produces faster code, but everyone uses GCC anyway, because speed is not everything. We need to cover the needs of computer users; free software itself is THE killer argument.
    The difference is that the gap in performance for the vast majority of compiled items is usually about 5-10%, if there is a difference at all, when it comes to icc vs. gcc. Gcc can also be faster on some items, again not by much. Also, the resulting binary ends up with the same features no matter what compiled it. Comparing compilers to drivers isn't a good comparison.

  5. #15
    Join Date
    Jun 2009
    Posts
    2,927

    Default

    Quote Originally Posted by deanjo View Post
    The difference is that the gap in performance for the vast majority of compiled items is usually about 5-10%, if there is a difference at all, when it comes to icc vs. gcc. Gcc can also be faster on some items, again not by much. Also, the resulting binary ends up with the same features no matter what compiled it. Comparing compilers to drivers isn't a good comparison.
    Well, if the performance difference is 20% (if there is one at all), as is the case with r300g, then it is indeed a good comparison.

    Sure, the r600+ drivers have some catching up to do, but they started far too late, and they have already caught up a lot. If the drivers reach the 75% mark (as they are expected to, and as r300g did), then it will indeed be a good comparison.

    I also don't see why comparing compilers to drivers is not a good comparison, when a large part of what a modern GPU driver does is actually compiling: it translates OpenGL state and shaders into a form that the GPU (a processor) can execute.
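
    To make that concrete, here is a minimal sketch of what that looks like from the application side (just an illustration I'm making up, assuming an OpenGL 2.0+ context is already current; none of this is taken from Mesa itself). Everything that happens inside the glCompileShader call is the driver's compiler at work:

    Code:
    /* Minimal sketch: hand a GLSL string to the driver, whose built-in
     * compiler translates it into GPU machine code.  Assumes an OpenGL 2.0+
     * context is already current (via SDL, GLUT, or similar). */
    #define GL_GLEXT_PROTOTYPES 1
    #include <GL/gl.h>
    #include <stdio.h>

    static const char *fragment_src =
        "varying vec4 color;\n"
        "void main(void) {\n"
        "    gl_FragColor = color;\n"
        "}\n";

    GLuint compile_fragment_shader(void)
    {
        GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
        GLint ok = GL_FALSE;

        glShaderSource(shader, 1, &fragment_src, NULL);
        glCompileShader(shader);   /* the driver's GLSL compiler runs here */

        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            fprintf(stderr, "shader compile failed: %s\n", log);
        }
        return shader;
    }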

  6. #16
    Join Date
    Feb 2011
    Location
    France
    Posts
    196

    Default

    Quote Originally Posted by Jimbo View Post
    it seems to be some type of "architectural" problem.
    Yes, it is. As Jérôme Glisse said, there are several points:
    - the AMD GPU design (contrary to NVIDIA's) requires the kernel to spend time analyzing the command buffer, for security reasons; fglrx does not do that (a rough sketch of that kind of check is below)
    - there are some limitations in the API, and they are not easy to fix; moreover, the kernel API is frozen, contrary to nouveau's
    - r600g has a design that is not the best for what it does

    To conclude, the main "problem" with r600g is on the kernel side, not really on the Gallium side.
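
    Purely as an illustration of the first point, this is roughly the kind of bounds check the kernel has to run over every command stream userspace submits, before the GPU ever sees it. All the names here (cs_packet, validate_cs, allowed_base/allowed_size) are invented for the example; the real radeon CS checker in the kernel is far more involved:

    Code:
    /* Illustrative only: walk the packets in a submitted command stream and
     * reject any that point the GPU at memory outside what this process is
     * allowed to touch.  A blob that programs the GPU directly from userspace
     * can skip this pass, which is part of where the performance gap comes from. */
    #include <stdint.h>
    #include <stddef.h>
    #include <errno.h>

    struct cs_packet {
        uint32_t reg;       /* GPU register the packet writes          */
        uint64_t gpu_addr;  /* GPU address the packet points the HW at */
        uint32_t size;      /* size of the referenced buffer, in bytes */
    };

    int validate_cs(const struct cs_packet *pkts, size_t count,
                    uint64_t allowed_base, uint64_t allowed_size)
    {
        for (size_t i = 0; i < count; i++) {
            uint64_t start = pkts[i].gpu_addr;
            uint64_t end   = start + pkts[i].size;

            if (start < allowed_base || end > allowed_base + allowed_size)
                return -EINVAL;   /* out-of-bounds reference: drop the submission */
        }
        return 0;                 /* safe to hand the command stream to the GPU */
    }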

  7. #17
    Join Date
    May 2007
    Location
    Third Rock from the Sun
    Posts
    6,583

    Default

    Quote Originally Posted by pingufunkybeat View Post
    Well, if the performance difference is 20% (if there is one at all), as is the case with r300g, then it is indeed a good comparison.
    Only if the binary that is produced is time-critical. Let's face it: an end user is rarely going to care whether it takes 54 seconds or 60 seconds to encode an MP3 when the output is the same, but when you are dealing with real-time input/output, that is an entirely different matter.

  8. #18
    Join Date
    Jun 2009
    Posts
    2,927

    Default

    If 20% more performance costs $10, then it is as good as unimportant, especially when dealing with 10-year-old games that run faster than the refresh rate anyway, which is essentially the Linux situation.

  9. #19
    Join Date
    May 2007
    Posts
    231

    Default

    Quote Originally Posted by whitecat View Post
    Yes, it is. As Jérôme Glisse said, there are several points:
    - the AMD GPU design (contrary to NVIDIA's) requires the kernel to spend time analyzing the command buffer, for security reasons; fglrx does not do that
    - there are some limitations in the API, and they are not easy to fix; moreover, the kernel API is frozen, contrary to nouveau's
    - r600g has a design that is not the best for what it does

    To conclude, the main "problem" with r600g is on the kernel side, not really on the Gallium side.
    The kernel side is one part of the issue if you want to compete with Catalyst. Right now the biggest issue is in r600g itself. Though, given the number of things r600g needs from the kernel, I fear that the kernel side might also impact it a bit more than it does r300g.

  10. #20
    Join Date
    Jun 2009
    Posts
    1,124

    Default

    Hmmm, Michael, maybe it's because my card is a 4850, but Nexuiz on the most extreme preset is always around 50 fps at 1440x900 here, using mesa git, Natty, drm git, ddx git, 2.6.38 drm-next (2.6.39 broke btrfs so I can't boot it), color tiling, swap wait off, S3TC support, etc.

    Well, I'm using custom CFLAGS, but still, those results are creepily slow; maybe PTS is missing something?

    I know you do out-of-the-box benchmarks, but it wouldn't hurt to create some sort of tweaked profile which is more likely to be closer to the real state of the driver. Remember, r600 is under heavy development.
