OpenCL 3.0 Bringing Greater Flexibility, Async DMA Extensions


  • #21
    Originally posted by MadeUpName View Post
    So they are releasing a new standard that stops standardizing anything. How do you program for that in reality? This is the day OpenCL was declared dead. RIP.
    Exactly. After many years, instead of advancing the API, they just officially scrapped all advancements after November 2011 (the date 1.2 was published). For anyone wondering, it is almost May 2020 now. And we have a "3.0" version that is pretty much back to 2011 in terms of official functionality. And somehow that is a "good thing"? While CUDA is far more advanced than it is today? Why would anyone develop on 3.0 when they could just use CL 1.2 (which is the same thing) or CUDA?

    Incompetent people like this were the reason OpenGL was destroyed in favor of D3D. I am not sure they aren't doing it on purpose in order to promote CUDA. This is disgusting. I am surprised they released a raytracing extension for Vulkan recently; they should have taken their time again and let D3D Ultimate dominate. Why risk making Vulkan relevant?
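
    And for anyone seriously asking how you "program for that in reality": under 3.0 the host is expected to probe each formerly-core 2.x capability at runtime before relying on it. A minimal sketch of what that looks like (assuming an OpenCL 3.0 ICD and at least one device; error checks omitted):

        #define CL_TARGET_OPENCL_VERSION 300
        #include <CL/cl.h>
        #include <stdio.h>

        int main(void) {
            cl_platform_id plat;
            cl_device_id dev;
            clGetPlatformIDs(1, &plat, NULL);
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

            char ver[64];
            clGetDeviceInfo(dev, CL_DEVICE_VERSION, sizeof ver, ver, NULL);

            /* SVM was core in 2.0; on a 3.0 device this may legitimately be 0 */
            cl_device_svm_capabilities svm = 0;
            clGetDeviceInfo(dev, CL_DEVICE_SVM_CAPABILITIES, sizeof svm, &svm, NULL);

            printf("%s, coarse-grain SVM: %s\n", ver,
                   (svm & CL_DEVICE_SVM_COARSE_GRAIN_BUFFER) ? "yes" : "no");
            return 0;
        }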



  • #22
    Originally posted by MadeUpName View Post
    So they are releasing a new standard that stops standardizing anything. How do you program for that in reality? This is the day OpenCL was declared dead. RIP.
    I mean... it's been dead for about a decade or so, and it's arguable whether it was ever really alive. They really just need to focus on Vulkan Compute now instead.



  • #23
    Originally posted by TemplarGR View Post
    Exactly. After many years, instead of advancing the API, they just officially scrapped all advancements after November 2011 (the date 1.2 was published).
    Sorry, but are you really a programmer? If you knew what you were talking about, you would know that even the geometry shader is an optional feature in Vulkan. And do you know how long geometry shaders have existed in computer graphics? Since OpenGL 3.2? That is well before your 2011, and they are still optional in an API as modern as Vulkan (see the sketch below). They probably know what they are doing. Guess why. And I fully support their idea.
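
    To make the point concrete, this is how a Vulkan application finds out whether geometry shaders even exist on the device; a minimal sketch, assuming a working Vulkan loader and at least one physical device (error handling mostly omitted):

        #include <stdio.h>
        #include <vulkan/vulkan.h>

        int main(void) {
            VkApplicationInfo app = { .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
                                      .apiVersion = VK_API_VERSION_1_0 };
            VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
                                         .pApplicationInfo = &app };
            VkInstance inst;
            if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) return 1;

            uint32_t count = 1;
            VkPhysicalDevice dev;
            vkEnumeratePhysicalDevices(inst, &count, &dev);  /* take the first device */
            if (count == 0) return 1;

            /* geometryShader is an optional VkPhysicalDeviceFeatures member,
               just like SVM and friends become optional device queries in CL 3.0 */
            VkPhysicalDeviceFeatures feats;
            vkGetPhysicalDeviceFeatures(dev, &feats);
            printf("geometry shader: %s\n", feats.geometryShader ? "yes" : "no");

            vkDestroyInstance(inst, NULL);
            return 0;
        }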



  • #24
    Originally posted by TemplarGR View Post

    Exactly. After many years, instead of advancing the API, they just officially scrapped all advancements after November 2011 (the date 1.2 was published). For anyone wondering, it is almost May 2020 now. And we have a "3.0" version that is pretty much back to 2011 in terms of official functionality. And somehow that is a "good thing"? While CUDA is far more advanced than it is today? Why would anyone develop on 3.0 when they could just use CL 1.2 (which is the same thing) or CUDA?

    Incompetent people like this were the reason OpenGL was destroyed in favor of D3D. I am not sure they aren't doing it on purpose in order to promote CUDA. This is disgusting. I am surprised they released a raytracing extension for Vulkan recently; they should have taken their time again and let D3D Ultimate dominate. Why risk making Vulkan relevant?
    Maybe their reasoning is to take one step backwards in order to take two steps forward afterwards. While I agree that this looks confusing now, and I would very much like them to push things further ahead, it might give some stakeholders an incentive to advance their implementations without having to implement every feature that made it into the former spec (e.g. the FPGA vendors). I am dubious myself whether that is a winning strategy with developers in the end (at least for the desktop PC market), but more conforming, advanced implementations (with the help of profiles) should widen the base of potential users and might therefore give developers an incentive to target it in the future.

    I find it a bit strange that we don't hear much about their convergence strategy with regard to Vulkan/Vulkan Compute anymore. Were there any updates on that front?
    Last edited by ms178; 27 April 2020, 02:43 PM.



  • #25
    Originally posted by TemplarGR View Post
    OpenCL 3.0 is the same. They literally threw OpenCL 2.x in the garbage bin (optional features mean almost no one will use them) just because Nvidia refused to support it in order to push CUDA. Good job Khronos.
    I took it as an outreach to help Mesa and open source drivers. Since OCL 1.2 sorta exists with an open driver, it's a positive change to already have a base for OCL 3.0. Maybe it'll make it worth AMD's time to implement now, since they won't have to do the huge overhaul that 1.2 -> 2.0 was.



  • #26
    It's a good thing the first poster jumped in and made this all about Nvidia; we got that out of the way now.

    But what about OpenCL's actual problem, which is adoption? Does this 3.0 revision alleviate any of the problems programmers complain about? Every time I ask about the API of choice, I hear "screw OpenCL, it's too hard to work with; with CUDA it's way easier."
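
    For context, this is roughly the host-side ceremony OpenCL needs just to double four floats; a hedged sketch, with error checks and the matching clRelease* cleanup omitted. The CUDA version of the same program is essentially a cudaMallocManaged, a one-line kernel<<<1,4>>> launch, and a sync:

        #define CL_TARGET_OPENCL_VERSION 120
        #include <CL/cl.h>
        #include <stdio.h>

        static const char *src =
            "__kernel void scale(__global float *x) {"
            "    x[get_global_id(0)] *= 2.0f;"
            "}";

        int main(void) {
            float data[4] = {1, 2, 3, 4};
            size_t n = 4;

            /* platform -> device -> context -> queue, before any real work */
            cl_platform_id plat;
            clGetPlatformIDs(1, &plat, NULL);
            cl_device_id dev;
            clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
            cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
            cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

            /* compile the kernel from a source string at runtime */
            cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
            clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
            cl_kernel k = clCreateKernel(prog, "scale", NULL);

            /* explicit buffer management and argument binding */
            cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                        sizeof data, data, NULL);
            clSetKernelArg(k, 0, sizeof buf, &buf);
            clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
            clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

            printf("%f\n", data[0]);  /* 2.000000 */
            return 0;
        }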



  • #27
    Originally posted by TemplarGR View Post
    Nice, NVIDIA can dictate whatever they want to Khronos and Khronos has to comply like a good lapdog.
    The president of Khronos is Neil Trevett, an Nvidia vice-president.
    It's not normal.



  • #28
    Originally posted by Luke_Wolf View Post

    I mean... it's been dead for about a decade or so, and it's arguable whether it was ever really alive.
    In the last 3 years the number of OpenCL GitHub repos has doubled (slide 5 of the OpenCL 3.0 presentation).



  • #29
    Originally posted by TemplarGR View Post

    There is nothing complicated about it in my opinion. I watched that part, and all I see is that Nvidia and others didn't support SVM at the time, didn't want to support SVM, at least not at that time, and thus they ignored it. David seems to be of the opinion that Nvidia was "right" for taking that stance, and I find that irritating to say the least. All other companies are FORCED to implement certain features the way an API wants them to, or else. If they don't support it, they play catch-up. See for example raytracing: Nvidia pushed it, and everyone else must become compatible with their shit. But AMD pushes for SVM? Nope, no need for it (even though CUDA supports a similar feature to unify memory). AMD pushes for async compute? Nope, no need for it. AMD pushes for tessellation back in 2001 with TruForm? Nope, no need for it.

    You even see this mentality in Wayland. Nvidia does not want GBM? Oh, let's just rewrite code to support EGLStreams; let's accommodate this garbage company some more. No one pushed Nvidia to support the standard. Oh no, Nvidia pushes the standard to accommodate Nvidia. I am very surprised they didn't manage to switch everyone to EGLStreams, given their past track record of forcing everyone to adopt their shit.
    Well... unless you are a programmer, AND you are a programmer who bothered to watch more than just "that part" of Dave Airlie's talk, then your opinion is... just your opinion.

    And welcome to it. As a programmer, I'm quite satisfied with the diversity of Michael's site and of those who share their thoughts, yours as well. But as a programmer who watched Dave's entire talk, I can see why he (currently) supports Nvidia's (current) decision not to support SVM as it is (currently) spec'ed. Because the way Airlie presents it, OpenCL 2.x really is a pile of half-baked garbage that even Intel is tip-toeing around with Level Zero of their oneAPI, about which, truth be told, I'm actually rather excited.

    And the reason OpenCL 2.x is half-baked garbage is that, unlike Vulkan, C++, Fortran, and all other usable computer languages, the OpenCL standards committee does not insist upon two -- or sometimes even any -- reference implementations of features submitted and accepted into the spec. That includes SVM, for which -- according to Airlie's talk -- there is but one implementation that sorta kinda maybe conforms to a not particularly well specified and ambiguous specification.

    No, Nvidia has no strong corporate interest in supporting -- let alone helping develop -- a oneAPI/SYCL/OpenCL 2.x ecosystem to compete with their CUDA. Why should they? Quadro/Tesla/CUDA are Nvidia's bread and butter. As soon as two or more GPU vendors support something like oneAPI on two or more independent platforms, look like they mean it, and bring a host of open-source developers onboard to help push it forward, Nvidia loses much of its hard-won advantage, and much of the revenue that funds that ongoing advantage. oneAPI/OpenCL 2.x are for Intel and AMD to figure out. Nvidia might throw in its 2 cents if they begin to look serious. But don't expect Nvidia to carry Intel and AMD's water for them. Not Nvidia's job.
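
    For readers who never touched it, this is the kind of thing the contested SVM spec promises: one pointer shared by host and device, with no cl_mem handles and no explicit copies. A rough sketch, assuming ctx, q and k were created the usual way and the device actually reports CL_DEVICE_SVM_COARSE_GRAIN_BUFFER:

        #define CL_TARGET_OPENCL_VERSION 200
        #include <CL/cl.h>

        void run_with_svm(cl_context ctx, cl_command_queue q, cl_kernel k) {
            size_t n = 4;
            float *p = clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

            /* coarse-grain SVM: map before the host touches the allocation */
            clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, p, n * sizeof(float), 0, NULL, NULL);
            for (size_t i = 0; i < n; ++i) p[i] = (float)i;   /* host writes directly */
            clEnqueueSVMUnmap(q, p, 0, NULL, NULL);

            /* the raw pointer goes straight to the kernel: no clCreateBuffer, no copies */
            clSetKernelArgSVMPointer(k, 0, p);
            clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
            clFinish(q);
            clSVMFree(ctx, p);
        }

    Whether any two vendors implement those map/unmap semantics identically is, per Airlie's talk, exactly the problem.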



  • #30
    OpenCL 3.0 is great... if today were 27 April 2015!
    I mean, OpenCL has big potential (see, for example, the Folding@Home project in these Covid days), but it was already clear 3 years ago that OpenCL 2.x was a lost battle.
    So, why continue to fight?
    Last edited by boboviz; 27 April 2020, 06:19 PM.

