R500 Mesa Is Still No Match To An Old Catalyst Driver

  • Kano
    replied
    @BlackStar

    Did you send a patch to the xbmc devs? Then I could test it.



  • monraaf
    replied
    Originally posted by Kano View Post
    Well, I think ATI cards are not really common among xbmc devs; even VAAPI support is developed with NVIDIA...
    You're probably right. The same goes for the wine devs, the Adobe Flash player dev, and another Linux program whose name escapes me, where the devs only tested their code on NVidia cards. There are probably historic reasons for that.



  • nanonyme
    replied
    Originally posted by BlackStar View Post
    Even on nvidia, you can enable strict mode using a #version pragma (which will turn most portability warnings into errors and stop the code from running). Most developers don't bother to add version pragmas either.
    An evil move on the driver developers' side (if the nVidia guys could be bothered) would be to treat the lack of a version pragma as an error.
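    For reference, a minimal sketch of what an explicit version declaration looks like, embedded in a C string the way OpenGL programs usually carry their shaders (the variable name is illustrative); the directive has to be the very first line of the shader source:
    Code:
    /* Illustrative sketch: a fragment shader with an explicit #version
       directive as its first line, so the compiler validates against
       that exact language version instead of a permissive default. */
    static const char *frag_src =
        "#version 110\n"
        "varying vec3 color;\n"
        "void main()\n"
        "{\n"
        "    gl_FragColor = vec4(color, 1.0);\n"
        "}\n";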



  • whizse
    replied
    Mesa does have a stand-alone GLSL compiler.

    (And of course you can always run a software implementation of Mesa regardless of the 3D driver used. It's painfully slow, but can be good enough to check for problems.)
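    (A minimal sketch of checking which renderer you actually got, assuming Mesa's LIBGL_ALWAYS_SOFTWARE=1 environment variable is used to force the software path and a GL context is already current; the helper name is illustrative.)
    Code:
    /* Sketch: with LIBGL_ALWAYS_SOFTWARE=1 exported and a GL context
       current (context creation not shown), the renderer string tells
       you whether the software rasterizer is actually in use. */
    #include <stdio.h>
    #include <GL/gl.h>

    static void report_renderer(void)
    {
        printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
        printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
    }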



  • BlackStar
    replied
    Even on nvidia, you can enable strict mode using a #version pragma (which will turn most portability warnings into errors and stop the code from running). Most developers don't bother to add version pragmas either.

    An offline validator wouldn't really help. No two GLSL implementations are 100% the same, so even if your program passes the validator it might not run on real hardware (Intel being the biggest offender, but fglrx and nvidia have their share of bugs too; just try playing with arrays of structures/uniforms/varyings to see what I mean).

    So far, I've found the least painful approach is to develop on Ati, port to Nvidia (about a 95% chance of working out of the box) and port to Intel only if absolutely necessary (about a 0% chance of running without modifications). If you go with Nvidia first and then port to Ati, you'll have about an 80% chance of running without issues, so it's not as efficient. If you go with Intel first, you'll simply waste your time: their OpenGL drivers don't follow the specs to any reasonable extent (admittedly, it's better on the Linux side, but their Windows drivers are plain awful).

    In any case, you will need at least two GPUs to test on, at least if you value portability. Yes, we have it easy nowadays. Back in 2004, only nvidia produced working OpenGL drivers; everything else was utter garbage!
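    To make the arrays-of-structures point concrete, here is a sketch of the kind of construct that tends to expose implementation differences (all names are illustrative, GLSL 1.10 assumed):
    Code:
    /* Sketch of the construct class the post warns about: a uniform
       array of structures indexed from a loop, historically one of
       the least portable corners of GLSL. */
    static const char *stress_src =
        "#version 110\n"
        "struct Light { vec3 dir; vec3 color; };\n"
        "uniform Light lights[4];\n"
        "varying vec3 normal;\n"
        "void main()\n"
        "{\n"
        "    vec3 acc = vec3(0.0);\n"
        "    for (int i = 0; i < 4; ++i)\n"
        "        acc += lights[i].color * max(dot(normal, lights[i].dir), 0.0);\n"
        "    gl_FragColor = vec4(acc, 1.0);\n"
        "}\n";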



  • nanonyme
    replied
    (Namely a parser, not a full compiler; a parser should be able to spot violations of the standard, right?)



  • nanonyme
    replied
    Oh, ick. Couldn't you write a GLSL validator for use with IDEs, though?



  • rohcQaH
    replied
    Originally posted by nanonyme View Post
    It kinda puzzles me though that you can't have the compiler cry out for stuff like that.
    Which compiler? GLSL is compiled by the gfx driver when your program is running.
    The driver will complain, but your IDE doesn't (cannot).
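    In practice that means the only compiler output you ever get is the info log fetched at run time; a minimal sketch, assuming a current GL 2.0 context and Mesa-style headers that expose the prototypes via GL_GLEXT_PROTOTYPES (the function name is illustrative):
    Code:
    /* GLSL errors only surface when the driver compiles the shader at
       run time, so the info log is where the "compiler output" lives. */
    #define GL_GLEXT_PROTOTYPES /* expose GL 2.0 entry points (Mesa-style headers) */
    #include <stdio.h>
    #include <GL/gl.h>

    static GLuint compile_or_complain(GLenum type, const char *src)
    {
        GLuint shader = glCreateShader(type);
        GLint ok = 0;

        glShaderSource(shader, 1, &src, NULL);
        glCompileShader(shader);
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (!ok) {
            char log[4096];
            glGetShaderInfoLog(shader, sizeof(log), NULL, log);
            fprintf(stderr, "GLSL compile error:\n%s", log);
        }
        return shader;
    }
    Calling it as compile_or_complain(GL_FRAGMENT_SHADER, src) prints exactly the driver diagnostics that an IDE never sees.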



  • nanonyme
    replied
    Hmm, I'll make a note of that if I end up writing OpenGL. It kinda puzzles me, though, that you can't have the compiler cry out about stuff like that. I should probably read up on vec4 implementations.



  • BlackStar
    replied
    Originally posted by nanonyme View Post
    Although if what BlackStar says is true, the biggest problem is not that the developers don't have ATi cards, but that they are completely oblivious to language standards and implement things wrong.
    In their defence, some mistakes are nigh impossible to catch without rigorous testing:
    Code:
    gl_FragColor = vec4(color, 1);
    That's incorrect by GLSL standards (no implicit conversions from int to float), yet nvidia will accept it. Fglrx/mesa, on the other hand, will raise an error. (The correct code is, of course, "vec4(color, 1.0)").

    Head over to the blender forums or gamedev.net and you'll see that bugs like these are very, very common. Most developers won't think twice before signing off on a shader (hey, it works fine here!), but it's the users who pay the price in the end.

    In my experience, at least 50% of all GLSL shaders contain such defects. Maybe XBMC has actually hit a fglrx bug (does it work on mesa?) but I think a genuine programmer error is at least as likely.

