Intel's Open-Source Compute Runtime Appears To Be Ready For DG2/Alchemist dGPUs


  • Intel's Open-Source Compute Runtime Appears To Be Ready For DG2/Alchemist dGPUs

    Phoronix: Intel's Open-Source Compute Runtime Appears To Be Ready For DG2/Alchemist dGPUs

    Intel's open-source Compute Runtime for providing OpenCL and oneAPI Level Zero support on their graphics hardware appears to be in roughly good shape now for DG2/Alchemist based on external/independent monitoring of the effort...
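
    For a concrete idea of what that Level Zero support looks like from an application's side, here is a minimal sketch (not from the article, only a rough illustration) that enumerates Level Zero drivers and devices in C++; it assumes the Level Zero loader and headers are installed and that you link against ze_loader:

      // Sketch only: list the Level Zero devices the compute runtime exposes.
      #include <level_zero/ze_api.h>
      #include <cstdio>
      #include <vector>

      int main() {
          if (zeInit(ZE_INIT_FLAG_GPU_ONLY) != ZE_RESULT_SUCCESS) {
              std::puts("Level Zero init failed (is the compute runtime installed?)");
              return 1;
          }
          uint32_t driverCount = 0;
          zeDriverGet(&driverCount, nullptr);
          std::vector<ze_driver_handle_t> drivers(driverCount);
          zeDriverGet(&driverCount, drivers.data());

          for (ze_driver_handle_t drv : drivers) {
              uint32_t devCount = 0;
              zeDeviceGet(drv, &devCount, nullptr);
              std::vector<ze_device_handle_t> devices(devCount);
              zeDeviceGet(drv, &devCount, devices.data());
              for (ze_device_handle_t dev : devices) {
                  ze_device_properties_t props{};
                  props.stype = ZE_STRUCTURE_TYPE_DEVICE_PROPERTIES;
                  zeDeviceGetProperties(dev, &props);
                  std::printf("Level Zero device: %s\n", props.name);
              }
          }
          return 0;
      }

    On a working DG2/Alchemist stack the card should show up here, much as it would in clinfo on the OpenCL side.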


  • #2
    "Once the Intel Arc Graphics discrete graphics cards formally launch" - That is hopefully not too long away, and it doesn't reflect too well on them that Nvidia and AMD launch their new generations soon thereafter, which could make Arc obsolete sooner than expected. Intel missed a great opportunity for a better market entry by not launching last year, or even by Q1 this year.



    • #3
      Even if Intel dGPUs are half the speed of AMD, as long as the price is right we can finally have the competition to CUDA that ROCm promised forever and delivered never.

      The compute software angle is a big positive for this product line, not least because it'll be relevant on a lot of devices, even iGPUs, which makes the coding effort well worth it.

      Intel hasn't put a foot wrong so far in this area, and it's refreshing after suffering so long through the slow-motion train wreck that is AMD's cynical and shambolic effort.
      Last edited by vegabook; 10 July 2022, 12:07 PM.



      • #4
        The prices being quoted by YT talking heads make these look like they will be crazy expensive. We will see.



        • #5
          Originally posted by vegabook View Post
          The compute software angle is a big positive for this product line, not least because it'll be relevant on a lot of devices even iGPUs, which makes the coding effort well worth it.
          I've heard that they are going to simplify the programming model with one of Arc's successors, maybe as soon as Battlemage (as they use that iteration on Meteor Lake). This reminds me a lot of HSA, but the Intel way. Would that make waiting for that generation more appealing from a compute point of view? My thinking is that programmers are more likely to deal with that easier model, and I suppose that would provide better long-term value for us end users with that generation. I still remember how the Sandy Bridge iGPU was abandoned really fast because it didn't support some crucial hardware features and its feature set was already obsolete at launch. I hope Arc won't see the same fate.



          • #6
            We do not need one GPU compute standard per company:

            OpenCL/oneAPI for Intel
            CUDA for Nvidia
            OpenCL/HIP/ROCm for AMD...

            We need one compute standard for all... like WebAssembly/WebGPU.
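
            For what it's worth, the closest thing today to a single source-level standard across all three vendors is probably SYCL, the C++ model behind oneAPI, which has backends targeting Intel, Nvidia and AMD GPUs. A minimal, hedged sketch of that single-source style, assuming a SYCL 2020 compiler such as DPC++ or AdaptiveCpp:

              // Sketch only: single-source vector add that runs on whatever SYCL device is found.
              #include <sycl/sycl.hpp>
              #include <iostream>
              #include <vector>

              int main() {
                  constexpr size_t n = 1024;
                  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

                  sycl::queue q;  // default selector: whichever backend/device the runtime picks
                  std::cout << "Running on: "
                            << q.get_device().get_info<sycl::info::device::name>() << "\n";
                  {
                      sycl::buffer bufA{a}, bufB{b}, bufC{c};
                      q.submit([&](sycl::handler& h) {
                          sycl::accessor A{bufA, h, sycl::read_only};
                          sycl::accessor B{bufB, h, sycl::read_only};
                          sycl::accessor C{bufC, h, sycl::write_only};
                          h.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
                              C[i] = A[i] + B[i];
                          });
                      });
                  }  // buffers go out of scope here, so results are copied back into the vectors
                  std::cout << "c[0] = " << c[0] << "\n";  // expect 3
                  return 0;
              }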
            Phantom circuit Sequence Reducer Dyslexia



            • #7
              Originally posted by ms178 View Post

              I've heard that they are going to simplify the programming model with one of Arc's successors, maybe as soon as Battlemage... Would that make waiting for that generation more appealing from a compute point of view?
              (This reply got a bit long, sorry. But I think it's worth it ☺️☺️. tl;dr skip to the last paragraph)

              Speaking as a programmer who has done some GPGPU work, contract or ISV style, over the past 5 years...

              Important things to me are not having to learn a new API for the same class of hardware from the same company every few years, and that code, once written, should keep working going forward as much as possible (binary compatibility is important, recompiling to exploit new hw features is acceptable, rewriting to a new programming model or language not so much). Because nobody wants to spend time and money reinventing the wheel.

              So back in 2006, when GPGPU became a thing and nV introduced CUDA, AMD actually had a competing product called CTM (Close to the Metal). CTM was akin to the IR code generated by nvcc; it was a low-level, shader-style language.

              In a nutshell, CUDA docs were like "CUDA syntax is slightly extended C, CUDA will be forward compatible at the source level and in large part the binary level, and now let's talk about staging DMA transfers between VRAM and RAM efficiently". That was the essence of the proposal from nV in 2006, and it is still the essence of the proposal from nV.
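
              To make the "staging DMA transfers between VRAM and RAM" point concrete, here is a rough, hypothetical host-side sketch of that model using the CUDA runtime API (pinned host memory plus asynchronous copies on a stream; the kernel launch that would sit between the copies is left out):

                // Sketch only: the CUDA staging pattern with pinned memory and async copies.
                #include <cuda_runtime.h>
                #include <cstdio>

                int main() {
                    const size_t n = 1 << 20;
                    const size_t bytes = n * sizeof(float);

                    float* hostBuf = nullptr;
                    cudaMallocHost(reinterpret_cast<void**>(&hostBuf), bytes);  // pinned RAM, DMA-friendly
                    for (size_t i = 0; i < n; ++i) hostBuf[i] = 1.0f;

                    float* devBuf = nullptr;
                    cudaMalloc(reinterpret_cast<void**>(&devBuf), bytes);       // VRAM allocation

                    cudaStream_t stream;
                    cudaStreamCreate(&stream);

                    // Stage data into VRAM, (a kernel launch would go here), then copy results back.
                    cudaMemcpyAsync(devBuf, hostBuf, bytes, cudaMemcpyHostToDevice, stream);
                    cudaMemcpyAsync(hostBuf, devBuf, bytes, cudaMemcpyDeviceToHost, stream);
                    cudaStreamSynchronize(stream);

                    std::printf("round trip value: %f\n", hostBuf[0]);

                    cudaStreamDestroy(stream);
                    cudaFree(devBuf);
                    cudaFreeHost(hostBuf);
                    return 0;
                }

              That host-side shape has barely changed since the early releases, which is a big part of why code written against it keeps working.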

              CTM docs were hundreds of pages of opcode definitions and fine print for every single one, every extant AMD GPU supported a somewhat different set of opcodes, and between major generations of AMD GPUs the arch and instruction set would change completely. There was every reason to believe the next-gen GPU from AMD would use yet another set of low-level opcodes. So, another 500 pages of fine print to use CTM. AMD did not offer a high-level language binding for CTM. AMD GPGPU in 2006/7 didn't even get mocked, it got ignored. As did the next effort from AMD, which was at least not assembler for GPUs (iirc it was built around OpenCL, a language designed by a committee of corporations; it succeeded about as well as might be expected). But AMD was out of the game very soon after.

              So four or five years ago AMD GPGPU hardware became relevant again, and along came ROCm from AMD. Initially it supported a CUDA shim called HIP, and a full-featured C++ binding based on C++AMP. Lotta people happy: C++AMP is C++ with slightly tweaked lambdas, and it's freaking great to code for.
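
              For anyone who hasn't touched HIP: the shim works largely by mirroring the CUDA runtime API name for name, so host code ports are often close to a mechanical rename. A rough, hypothetical sketch, built with hipcc:

                // Sketch only: HIP host calls mirroring their CUDA counterparts.
                #include <hip/hip_runtime.h>
                #include <cstdio>
                #include <vector>

                int main() {
                    int count = 0;
                    hipGetDeviceCount(&count);                                   // cf. cudaGetDeviceCount
                    std::printf("HIP devices: %d\n", count);

                    const size_t n = 1024, bytes = n * sizeof(float);
                    std::vector<float> host(n, 3.0f);

                    float* dev = nullptr;
                    hipMalloc(reinterpret_cast<void**>(&dev), bytes);            // cf. cudaMalloc
                    hipMemcpy(dev, host.data(), bytes, hipMemcpyHostToDevice);   // cf. cudaMemcpy
                    hipMemcpy(host.data(), dev, bytes, hipMemcpyDeviceToHost);
                    hipFree(dev);                                                // cf. cudaFree
                    return 0;
                }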

              But around v3 of ROCm AMD drops C++AMP and is like "Hahahaha, sorry about all you suckers that invested time & money on our shiny C++AMP binding, sure hope you didn't buy the book we recommended".

              And then AMD split its hw streams, and ROCm does not support GPUs that normal people can even attempt to purchase, much less afford. I bought Radeon VIIs in 2019 for CA$1000 each for ROCm, and ROCm turns out to be neither feature nor detail stable between even minor releases. Did Lisa think MI50/100/200 would look like a safe bet to people like me - even if I could find a channel sales rep who would not look at an ISV as if it were a strange new insect? I know other people who got burned this way.

              Price of admission to investigate using HIP now? An MI100 card, something like $5000, if you can find someone who could be bothered to sell you just one. Price of admission to investigate using CUDA? A ten-year-old used 280 or something like that on eBay.
              nV only got around to dropping nvcc support for the 8800GT/X cards a year or two ago! So guess why nV won and still wins at GPGPU.

              With that admittedly extensive background covered:

              Once bitten twice shy.

              If I have reason to believe that Intel will substantially change its GPGPU software stack scheme in a year or two, then I have another reason to not migrate my GPGPU code away from CUDA. Just the fact that Intel has Level 0, 1, ... of its promised stack makes me deeply loath to invest much money or effort in it. To boot: Intel has a history of ditching multi-billion-dollar projects apropos of nothing; it's done it already once with GPGPU (Larrabee). Can I even trust that Intel's new GPU thing will still exist in a few years?
              Last edited by hoohoo; 11 July 2022, 02:10 AM.



              • #8
                Originally posted by vegabook View Post
                ...
                Intel hasn't put a foot wrong so far in this area, and it's refreshing after suffering so long through the slow-motion train wreck that is AMD's cynical and shambolic effort.
                Intel does not currently have a foot in the game to put wrong.

                And... cough cough... Larrabee.



                • #9
                  Originally posted by hoohoo View Post

                  (This reply got a bit long, sorry. But I think it's worth it ☺️☺️. tl;dr skip to the last paragraph)
                  Thanks a lot for your insightful comment; I appreciate all the details. Let's hope that with their recent acquisitions AMD can come up with something sane for all of their chips. I was also a vocal critic of their GPU architecture split; we got to see where that leads for end users (who are the lowest priority, with RDNA1/2 compute support absent for years).



                  • #10
                    Originally posted by hoohoo View Post
                    ...
                    Price of admission to investigate using HIP now? An MI100 card, something like $5000, if you can find someone who could be bothered to sell you just one.
                    As far as I know, ROCm supports the Vega 64/Radeon VII and the AMD W6800 (which is basically a Radeon 6800 XT), so your claim that you need a $5000 MI100 is wrong.
                    But sure, the Vega 64 and Radeon VII can only be bought used on eBay, and they are very expensive in comparison to other new cards.
                    The jump to the AMD 6000 series is also not ideal, because it means AMD does not really support the 5000-series cards in ROCm, though they claim they are working on that.

                    And I have to say this to you: do not blame AMD, blame the LAW instead.

                    Nvidia has a monopoly with CUDA, and the law makes it illegal for AMD to support a binary-compatible CUDA API on their hardware.

                    ROCm/HIP works at the source-code level to provide compatibility with CUDA, because supporting the CUDA API directly on AMD hardware would be against the law. This is what the US Oracle/Java vs Google/Android case showed: Google only won because they did not sell a product that was 100% Java compatible; they sold an incompatible version that merely had similar functions at the source-code level.

                    AMD cannot solve any of these problems; only the lawmakers could solve them.

                    The government should make monopolies like this against the law, or else force the CUDA binary API to be open so that any company can use it, or else spend thousands of billions of dollars to make WebAssembly/WebGPU the new GPU compute standard.
                    Phantom circuit Sequence Reducer Dyslexia

