
Intel Posts New Patches For GPU Shared Virtual Memory With Xe Driver


  • Intel Posts New Patches For GPU Shared Virtual Memory With Xe Driver

    Phoronix: Intel Posts New Patches For GPU Shared Virtual Memory With Xe Driver

    Intel Linux graphics driver engineers continue to be very busy enabling the Xe Direct Rendering Manager (DRM) driver that is becoming the default kernel graphics driver beginning with Xe2 Lunar Lake and Battlemage hardware (it currently works as an experimental option on existing Intel graphics hardware going back to Tiger Lake). The latest work coming out of Intel is a renewed push to enable GPU Shared Virtual Memory (SVM) support...


  • #2
    What if my GPU doesn't have physical memory of its own? Is it possible to use memory allocated to the CPU? Not a hard split of the memory, but somehow telling the GPU "use this block" to do some compute, then handing it back to the CPU for other compute work. A unified memory architecture.
    And it should all be cache aware, and all the other stuff.
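
    As a rough sketch of what that looks like from the programming side, the CUDA snippet below (not from the thread; names and sizes are arbitrary) uses cudaMallocManaged so that one allocation is filled by the CPU, processed by a GPU kernel, and then read back by the CPU with no explicit copy. On an integrated/UMA GPU the pages simply live in system RAM and the driver keeps both views coherent.

    // Minimal sketch: one allocation shared by CPU and GPU, no explicit cudaMemcpy.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;                    // GPU compute phase on the shared block
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;

        cudaMallocManaged(&data, n * sizeof(float));     // one allocation, visible to CPU and GPU

        for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU uses the block first

        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // then the GPU computes on the same block
        cudaDeviceSynchronize();                         // hand it back to the CPU

        printf("data[0] = %f\n", data[0]);               // CPU reads the result (2.0)
        cudaFree(data);
        return 0;
    }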



    • #3
      Hey Nvidia, learn this.



      • #4
        Originally posted by bkdwt:
        Hey Nvidia, learn this.
        Learn what? To publish more open source, ok, agreed!

        But unified virtual memory handling is apparently something Nvidia has had for years. That said, there have been some tangential comments,
        whose full import and context I do not understand, about some kind(s) of memory sharing / access / virtualization / unification NOT working on
        Nvidia "consumer" GPUs, implying that capabilities which are otherwise normally possible have actually been BLOCKED artificially, presumably
        to differentiate them from the NV enterprise-model GPUs. There's the whole NVLink stuff (a hardware interface and the backend FW/driver support for it) -- they removed it physically from consumer GPUs, and IIRC it "made use of" a dedicated electrical PCIe-like bridge so GPUs could access each other's memory spaces over a direct physical link.
        Then there's the direct DMA I/O stuff, e.g. where GPUs can DMA data back and forth with things like SSDs (loading data) or network cards (generating / receiving high-bandwidth packet traffic); I ASSUME that would fall into the UVM / SVM use case if it's just pages of VM being accessed over PCIe from other memory buffers / devices.
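
        For the SSD case specifically, NVIDIA exposes this through the cuFile (GPUDirect Storage) API. The sketch below follows the pattern from the public cuFile samples, but treat it as an assumption-laden illustration: the file path is made up, error checking is omitted, and whether the transfer is a true peer-to-peer DMA or a bounce through host memory depends on the filesystem, driver, and GPU in question.

        // Hedged sketch: read a file straight into GPU memory via cuFile.
        // Build with: nvcc gds_read.cu -lcufile
        #include <fcntl.h>
        #include <unistd.h>
        #include <cstring>
        #include <cstdio>
        #include <cuda_runtime.h>
        #include <cufile.h>

        int main() {
            const size_t size = 1 << 20;
            const char *path = "/tmp/payload.bin";        // hypothetical input file

            cuFileDriverOpen();                           // bring up the GDS driver context

            int fd = open(path, O_RDONLY | O_DIRECT);     // O_DIRECT so DMA can bypass the page cache
            CUfileDescr_t descr;
            memset(&descr, 0, sizeof(descr));
            descr.handle.fd = fd;
            descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;

            CUfileHandle_t handle;
            cuFileHandleRegister(&handle, &descr);

            void *devPtr = nullptr;
            cudaMalloc(&devPtr, size);
            cuFileBufRegister(devPtr, size, 0);           // register the GPU buffer for DMA

            // Read from the file into GPU memory (offsets: file offset, then devPtr offset).
            ssize_t got = cuFileRead(handle, devPtr, size, 0, 0);
            printf("read %zd bytes into GPU memory\n", got);

            cuFileBufDeregister(devPtr);
            cudaFree(devPtr);
            cuFileHandleDeregister(handle);
            close(fd);
            cuFileDriverClose();
            return 0;
        }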
        Then there's just the general VM sharing between allocations made by CPU-based user software and allocations on the GPU mapped into the same virtual address space, with either side able to access whatever it likes just like a pointer to any other shared VM-addressed memory, from the GPU or from the CPU(s).
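
        The "just like a pointer" part is easiest to see with a struct that embeds a pointer. In the hedged CUDA sketch below (illustrative only, nothing here comes from the thread), both the struct and the array it points to are managed allocations, so the GPU kernel can follow the embedded pointer directly and the CPU can dereference the same addresses afterwards.

        // Sketch: a pointer stored inside a managed struct is valid on both CPU and GPU.
        #include <cstdio>
        #include <cuda_runtime.h>

        struct Buffer {
            float *data;
            int    len;
        };

        __global__ void fill(Buffer *buf, float value) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < buf->len) buf->data[i] = value;       // kernel follows the embedded pointer as-is
        }

        int main() {
            Buffer *buf = nullptr;
            cudaMallocManaged(&buf, sizeof(Buffer));              // the struct itself is shared
            cudaMallocManaged(&buf->data, 1024 * sizeof(float));  // so is the array it points to
            buf->len = 1024;

            fill<<<4, 256>>>(buf, 3.0f);
            cudaDeviceSynchronize();

            printf("buf->data[42] = %f\n", buf->data[42]);        // CPU dereferences the same pointers
            cudaFree(buf->data);
            cudaFree(buf);
            return 0;
        }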

        So if someone knows of edge cases where NV consumer GPUs don't actually support virtual unified heterogeneous memory per the examples below, I'd love to know the details.


        With CUDA 6, NVIDIA introduced one of the most dramatic programming model improvements in the history of the CUDA platform, Unified Memory. In a typical PC or cluster node today, the memories of the…


        This post introduces CUDA programming with Unified Memory, a single memory address space that is accessible from any GPU or CPU in a system.


        Heterogeneous Memory Management (HMM) is a CUDA memory management feature that improves programmer productivity for all programming models built on top of CUDA.
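
        What the HMM blurb above describes looks roughly like the following sketch: memory from plain malloc, with no CUDA allocation call at all, handed straight to a kernel. This is hedged and assumption-laden: it only works on a kernel/driver/hardware combination that actually provides HMM or equivalent address-translation support, and would fault elsewhere.

        // Hedged sketch: with HMM, ordinary system-allocated memory is GPU-accessible.
        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        __global__ void increment(int *data, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] += 1;                       // touches pages the OS allocated for the CPU
        }

        int main() {
            const int n = 1 << 16;
            int *data = (int *)malloc(n * sizeof(int));    // plain system allocator, no CUDA call
            for (int i = 0; i < n; ++i) data[i] = i;

            increment<<<(n + 255) / 256, 256>>>(data, n);  // GPU faults the pages in via HMM
            cudaDeviceSynchronize();

            printf("data[10] = %d (expected 11)\n", data[10]);
            free(data);
            return 0;
        }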



