Khronos Releases OpenGL 3.3 & OpenGL 4.0


  • #41
    Originally posted by deanjo:
    Can you please be specific about what OpenCL bug you have found? I've been using them for quite some time and have yet to come across an issue with them in OpenCL development.
    "Real" bugs I have found is memleaking of the AMD driver, did not try that with the NVida though.

    Another problem I have found is that AMD and Nvidia handle implicit conversions differently. AMD converts implicitly (which is allowed in the spec), e.g. cos(2) works, while Nvidia does not; there you have to convert the argument yourself.
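
    As a minimal sketch of the difference (the kernel and argument names here are made up for illustration), the commented-out call is the form that AMD accepts but Nvidia rejects according to the report above:

    // Hypothetical kernel illustrating the implicit-conversion difference.
    __kernel void demo_implicit(__global float *out)
    {
        // out[0] = cos(2);       // int argument: accepted by AMD, rejected by Nvidia, per the report
        out[0] = cos(2.0f);       // portable: pass a float literal
        out[1] = cos((float)2);   // or convert the argument explicitly
    }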

    Furthermore, AMD allows explicit casts between vector types, e.g.
    uint4 u; ...
    float4 f; ...
    f = (float4)(u);
    I guess this is intended behaviour, though I would consider it a bug since such casts are specifically forbidden in the spec.
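
    For completeness, a sketch of the spec-conforming alternative (the kernel and variable names are again made up): the built-in convert_float4() does the element-wise conversion without relying on the vector cast:

    // Hypothetical kernel showing the element-wise conversion built-in.
    __kernel void demo_vector_convert(__global const uint4 *in, __global float4 *out)
    {
        uint4  u = in[0];
        float4 f = convert_float4(u);   // element-wise uint4 -> float4, allowed by the spec
        out[0] = f;
    }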

    Just these two examples result in code that works on AMD devices or Intel CPUs but not on Nvidia GPUs, and that is bad.

    In fact, once you know that Nvidia does no implicit conversion and that AMD ignores the spec in the example above, you can simply avoid these features. Still, it leaves a bad taste in the mouth and makes me wonder what else of the spec is not supported, or silently ignored, by one implementation or the other.



    • #42
      We have been having problems getting async readbacks to work on Nvidia, too. They seem to work fine on AMD's implementation, but Nvidia blocks as if it were a blocking (synchronous) readback.

      Not good.
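
      For reference, a rough host-side sketch of the non-blocking readback pattern being described (the helper name is made up, and the queue, buffer and size are assumed to be set up elsewhere); with CL_FALSE the enqueue should return immediately and synchronization happens on the event, but the report above is that Nvidia blocks inside the enqueue anyway:

      #include <CL/cl.h>

      /* Hypothetical helper showing the non-blocking readback call pattern. */
      void read_async(cl_command_queue queue, cl_mem buffer, size_t size, void *host_ptr)
      {
          cl_event done;
          clEnqueueReadBuffer(queue, buffer,
                              CL_FALSE,          /* non-blocking: should return immediately */
                              0, size, host_ptr,
                              0, NULL, &done);
          /* ... other CPU work could overlap with the transfer here ... */
          clWaitForEvents(1, &done);             /* block only when the data is actually needed */
          clReleaseEvent(done);
      }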



      • #43
        Another minor thing is that, at least here, Nvidia often does not return the correct error codes as defined in the header file.
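
        A small sketch of where that bites (the helper and the specific call are only examples): error handling that compares against the named codes from cl.h, such as CL_INVALID_BUFFER_SIZE, misbehaves if the implementation returns a different code than the header and spec suggest:

        #include <CL/cl.h>
        #include <stdio.h>

        /* Hypothetical helper comparing against a named error code from cl.h. */
        void create_checked(cl_context ctx, size_t size)
        {
            cl_int err = CL_SUCCESS;
            cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, size, NULL, &err);
            if (err == CL_INVALID_BUFFER_SIZE) {
                fprintf(stderr, "buffer size out of range\n");   /* this branch relies on the documented code */
            } else if (err != CL_SUCCESS) {
                fprintf(stderr, "clCreateBuffer failed with %d\n", err);
            } else {
                clReleaseMemObject(buf);
            }
        }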



        • #44
          Why am I getting the vibe that Nvidia is trying to shoehorn their OpenCL implementation on top of CUDA, similar to what they did with GLSL on top of Cg? Their GLSL compiler used to be god-awful and caused lots of pain for developers and especially users. It used to accept D3D/HLSL code without an error or warning, for god's sake!

          "(user) Hey, your game doesn't work on my Ati card."
          "(dimwitted dev) I don't test on Ati cards, Ati sucks. They don't support GLSL."
          "(user) Well, games X, Y and Z work and they do use GLSL."
          "(dimwitted dev) So what, the code runs fine here."
          "(smart dev) Let me take a look. Hey, you are using saturate(), implicit casts and other HLSL stuff in GLSL. What the hell is wrong with you?"
          "(dimwitted dev) Err... Ati sucks?"
          "(user) Well, uh, ok I'll go play some other game. Thanks anyway!"

          I just hope they know better this time and this is simply caused by immature drivers.



          • #45
            Also

            float4 array[2] = {(float4)(0.0f), (float4)(0.0f)};

            works for AMD but does not for Nvidia.
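
            A possible workaround sketch (the kernel name is made up): declare the array and assign the vector literals element by element, avoiding the aggregate initializer entirely:

            // Hypothetical kernel showing the element-by-element workaround.
            __kernel void demo_array_init(__global float4 *out)
            {
                // float4 array[2] = {(float4)(0.0f), (float4)(0.0f)};  // AMD only, per the report above
                float4 array[2];
                array[0] = (float4)(0.0f);
                array[1] = (float4)(0.0f);
                out[0] = array[0] + array[1];
            }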



            • #46
              @BlackStar

              Did you ever try the GLSL renderer with xbmc?



              • #47
                And another thing that works on AMD but not on Nvidia:
                float2 a = (float2)(0.0f);
                a = pown(a, 3);

                Is this all enough to call it _not mature_?
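
                For what it is worth, a sketch of the explicit spelling (the kernel name is made up): pown() with a float2 argument takes an int2 exponent, so widening the scalar 3 by hand should satisfy both compilers:

                // Hypothetical kernel showing the explicit int2 exponent.
                __kernel void demo_pown(__global float2 *out)
                {
                    float2 a = (float2)(0.0f);
                    // a = pown(a, 3);          // scalar exponent: accepted by AMD, rejected by Nvidia, per the report
                    a = pown(a, (int2)(3));     // explicit vector exponent
                    out[0] = a;
                }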

