Intel Preps More Xe2 Lunar Lake & Battlemage Driver Code For Linux 6.12


  • Intel Preps More Xe2 Lunar Lake & Battlemage Driver Code For Linux 6.12

    Phoronix: Intel Preps More Xe2 Lunar Lake & Battlemage Driver Code For Linux 6.12

    Intel's open-source Linux graphics driver engineers continue feverishly working on the Xe2 graphics support both for imminently-launching Lunar Lake laptops and then the Battlemage discrete graphics cards. This week more "missing bits" were addressed in new Intel Linux graphics driver code on its way to DRM-Next ahead of the upcoming Linux 6.12 merge window...


  • #2
    Really looking forward to this new hardware from Intel.

    Preliminary benchmarks have Intel's Arrow Lake dominating AMD Zen 5 and Lunar Lake is supposed to be something special as well.

    Load up on Intel stock, people; at under $21 a share it's a bargain.

    When these products hit the market and the benchmarks land, you will see Intel jump at least $10 a share, and I would not be surprised if it's close to $100 this time next year.



    • #3
      I hope they deliver in the areas they missed with the Alchemist GPUs, because I really like the A770, but there's some stuff about it that leaves a lot to be desired.

      The most heartbreaking part is that it has 16 GB of VRAM, but due to driver limitations it can only allocate 4 GB at a time, requiring workarounds in things like PyTorch to make Stable Diffusion work:

      The CL_DEVICE_MAX_MEM_ALLOC_SIZE on Intel Arc GPUs is currently set to 4GB (A770 16GB) and 3.86GB (A750). Trying to allocate larger buffers makes the cl::Buffer constructor return error -61. Disabl...


      A lot of the workarounds result in other fun bugs, like not freeing up memory and causing frequent crashes/restarts.
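
      A quick way to see that gap for yourself is to print CL_DEVICE_GLOBAL_MEM_SIZE next to CL_DEVICE_MAX_MEM_ALLOC_SIZE. A minimal sketch with the standard OpenCL C++ bindings (the header ships as CL/opencl.hpp on newer systems, CL/cl2.hpp on older ones):

      Code:
      // Print total device memory vs. the largest single buffer the driver allows.
      // Typical build: g++ maxalloc.cpp -o maxalloc -lOpenCL
      #define CL_HPP_TARGET_OPENCL_VERSION 300
      #include <CL/opencl.hpp>   // older packages ship this header as <CL/cl2.hpp>
      #include <iostream>
      #include <vector>

      int main() {
          std::vector<cl::Platform> platforms;
          cl::Platform::get(&platforms);
          for (auto &p : platforms) {
              std::vector<cl::Device> devices;
              p.getDevices(CL_DEVICE_TYPE_GPU, &devices);   // CPU-only platforms just come back empty
              for (auto &d : devices) {
                  cl_ulong total    = d.getInfo<CL_DEVICE_GLOBAL_MEM_SIZE>();
                  cl_ulong maxAlloc = d.getInfo<CL_DEVICE_MAX_MEM_ALLOC_SIZE>();
                  std::cout << d.getInfo<CL_DEVICE_NAME>()
                            << ": global mem " << (total >> 20) << " MiB"
                            << ", max single alloc " << (maxAlloc >> 20) << " MiB\n";
              }
          }
          return 0;
      }

      On an A770 16GB the second number is the ~4 GB figure that linked thread is complaining about.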



      • #4
        Originally posted by lyamc View Post
        I hope they deliver in the areas they missed with the Alchemist GPUs, because I really like the A770, but there's some stuff about it that leaves a lot to be desired.

        The most heartbreaking part is that it has 16 GB of VRAM, but due to driver limitations it can only allocate 4 GB at a time, requiring workarounds in things like PyTorch to make Stable Diffusion work:

        The CL_DEVICE_MAX_MEM_ALLOC_SIZE on Intel Arc GPUs is currently set to 4GB (A770 16GB) and 3.86GB (A750). Trying to allocate larger buffers makes the cl::Buffer constructor return error -61. Disabl...


        A lot of the workarounds result in other fun bugs, like not freeing up memory and causing frequent crashes/restarts.
        If you read through the comments in that link you will note that the so-called 4 GB issue comes up because support for allocations larger than 4 GB is not enabled by default; if you pass the correct flag to the compiler and also modify the code slightly, you can allocate more than 4 GB with no problem.

        This is not a hardware problem with Alchemist; it's just the way they designed the driver.
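
        For reference, this is roughly what that opt-in looks like end to end. Sketch only: the Intel-specific spellings below (the -cl-intel-greater-than-4GB-buffer-required build option and the CL_MEM_ALLOW_UNRESTRICTED_SIZE_INTEL buffer flag) are how I remember them from that thread and from intel/compute-runtime, they are not core OpenCL, so check your headers before copying anything:

        Code:
        // Sketch of the >4 GB opt-in on Intel's OpenCL stack. The Intel-specific
        // flag name/value and build option are assumptions taken from the linked thread.
        #define CL_HPP_TARGET_OPENCL_VERSION 300
        #include <CL/opencl.hpp>
        #include <iostream>
        #include <vector>

        #ifndef CL_MEM_ALLOW_UNRESTRICTED_SIZE_INTEL
        #define CL_MEM_ALLOW_UNRESTRICTED_SIZE_INTEL (1 << 23)   // assumed extension bit
        #endif

        int main() {
            std::vector<cl::Platform> platforms;
            cl::Platform::get(&platforms);
            std::vector<cl::Device> devices;
            platforms.front().getDevices(CL_DEVICE_TYPE_GPU, &devices);
            cl::Context ctx(devices.front());

            const size_t eightGiB = 8ULL << 30;
            cl_int err = CL_SUCCESS;

            // Default path: on Arc this is expected to fail with -61
            // (CL_INVALID_BUFFER_SIZE) since it exceeds CL_DEVICE_MAX_MEM_ALLOC_SIZE.
            cl::Buffer plain(ctx, CL_MEM_READ_WRITE, eightGiB, nullptr, &err);
            std::cout << "plain 8 GiB buffer: " << err << "\n";

            // The reported workaround: request an unrestricted-size allocation...
            cl::Buffer big(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOW_UNRESTRICTED_SIZE_INTEL,
                           eightGiB, nullptr, &err);
            std::cout << "unrestricted 8 GiB buffer: " << err << "\n";

            // ...and build any kernel that touches it with the matching option.
            cl::Program prog(ctx,
                "kernel void touch(global char *p) { p[get_global_id(0)] = 0; }");
            prog.build("-cl-intel-greater-than-4GB-buffer-required");
            return 0;
        }

        Once both pieces are in place the allocation itself goes through; the default is just conservative.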



        • #5
          Originally posted by sophisticles View Post

          If you read through the comments in that link you will note that the so-called 4 GB issue comes up because support for allocations larger than 4 GB is not enabled by default; if you pass the correct flag to the compiler and also modify the code slightly, you can allocate more than 4 GB with no problem.

          This is not a hardware problem with Alchemist; it's just the way they designed the driver.
          Their dGPUs were released two years ago, and they've got issues with this across their driver and higher-level (OpenCL, IPEX, etc.) stack -- there's also this:

          Describe the bug Intel compute runtime doesn't allow allocating a buffer bigger than 4 GB. intel/compute-runtime#627 When you allocate an array in intel-extension-for-pytorch bigger than 4 GB in A7...


          They come out with a "flagship" dGPU with 16 GB of VRAM and you can't trivially MAKE USE OF IT by allocating a 16 GB data structure.

          And yet, based on the same and related architectures, they sell enterprise / data center GPUs. I'm guessing they either don't have these problems in the software for those, OR they really just blew it in the software / architecture / specifications here. Either way it was unacceptable at launch, and even more so two years on without a software fix for something that should have been "critical" on day one.

          In comparison NVIDIA seemingly has much more flexible memory handling:

          With CUDA 6, NVIDIA introduced one of the most dramatic programming model improvements in the history of the CUDA platform, Unified Memory. In a typical PC or cluster node today, the memories of the…


          This post introduces CUDA programming with Unified Memory, a single memory address space that is accessible from any GPU or CPU in a system.


          Heterogeneous Memory Management (HMM) is a CUDA memory management feature that improves programmer productivity for all programming models built on top of CUDA.


          And here, two years after launch, you can't even monitor voltages / temperatures, or get usable controls for fans / clocks / power / video interface configuration / LEDs / power management on their supported Linux platform, while you can do all of that on Windows; there are no usable APIs or hardware-level documentation to even DIY a CLI / GUI application for these.

          It's pathetic and inexcusable.

          And if you can't even seamlessly allocate / use data structures larger than 4 GB on Arc, what exactly is the plan when they release even MORE "capable" GPUs with 24 GB, 32 GB, etc.?

          Is this 1990 with segmented addressing and having to combine small chunks at the application level?! It should all be unified virtual GPU + host + cross-GPU memory!




          • #6
            And here we were worried about the GPUs with all the layoffs, but it looks like the open-source work for them is sticking around!
