Linux 3.15 To Support DRM Render-Nodes By Default


  • Linux 3.15 To Support DRM Render-Nodes By Default

    Phoronix: Linux 3.15 To Support DRM Render-Nodes By Default

    David Herrmann sent in a patch on early Sunday (along with some other patches to be covered in another article) for enabling support for DRM render-nodes by default with the next Linux kernel cycle...


  • #2
    This is great news. I hope David's virtual bus for DRM devices will be ready soon, too.

    Now, if only AMD provided nice open drivers for their GPUs, it would be possible to use them as coprocessors in servers, without X, through OpenCL (possibly a good alternative to Xeon Phi), especially now that they have launched their Opteron APUs. It's easy to imagine the improvement for matrix processing, cryptographic workloads and so on. If only AMD could provide better open driver and OpenCL support ...

    Anyway, it's good to see the decoupling work is almost ready to use; the next few years for the Linux graphics subsystem are looking very nice so far
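
    For anyone curious what the render-node part looks like in practice, here is a minimal C sketch (mine, not from David's patches) that opens a render node directly through libdrm, with no X server and no DRM master involved; the device path is an assumption and the node number varies per system.

        /* Minimal sketch: talk to a GPU through its DRM render node, without X.
           Build with: gcc render_node.c -o render_node $(pkg-config --cflags --libs libdrm) */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>
        #include <xf86drm.h>

        int main(void)
        {
            /* Render nodes show up as /dev/dri/renderD128, renderD129, ... (assumed path). */
            int fd = open("/dev/dri/renderD128", O_RDWR);
            if (fd < 0) {
                perror("open render node");
                return 1;
            }

            drmVersionPtr v = drmGetVersion(fd);   /* which kernel driver backs this node? */
            if (v) {
                printf("driver: %s (%d.%d.%d)\n", v->name,
                       v->version_major, v->version_minor, v->version_patchlevel);
                drmFreeVersion(v);
            }

            close(fd);
            return 0;
        }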



    • #3
      Originally posted by Fresh_meat
      through OpenCL (possibly a good alternative to Xeon Phi)
      OpenCL targets completely different workloads than the Xeon Phi. A Xeon Phi runs a full-fledged OS (Linux) capable of executing arbitrary compiled code with the usual memory architecture (at least that's one of the uses of the co-processor), unlike OpenCL, which, though it also supports CPUs, is designed around the GPU memory model (local/shared/global), and is thus very efficient for specific tasks (yes, matrices) and impractical/unusable for other ones.
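
      To make that memory-model point concrete, here is a small illustrative OpenCL C kernel (mine, not from the discussion; the names and the power-of-two work-group assumption are mine) that stages data in fast on-chip __local memory shared by a work-group and writes results back to off-chip __global memory, which is the pattern that makes matrix-style workloads so efficient:

          /* Illustrative kernel: each work-group copies a tile of the input into
             __local memory, reduces it there, and writes one partial sum per
             group back to __global memory. Assumes the work-group size is a
             power of two. */
          __kernel void partial_sum(__global const float *in,
                                    __global float *group_sums,
                                    __local float *tile)
          {
              size_t gid = get_global_id(0);
              size_t lid = get_local_id(0);
              size_t lsz = get_local_size(0);

              tile[lid] = in[gid];                    /* global -> local copy */
              barrier(CLK_LOCAL_MEM_FENCE);           /* whole tile is loaded */

              /* tree reduction entirely in local memory */
              for (size_t stride = lsz / 2; stride > 0; stride /= 2) {
                  if (lid < stride)
                      tile[lid] += tile[lid + stride];
                  barrier(CLK_LOCAL_MEM_FENCE);
              }

              if (lid == 0)
                  group_sums[get_group_id(0)] = tile[0];  /* one result per group */
          }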



      • #4
        Originally posted by eudoxos
        and impractical/unusable for other ones.
        It's especially the case when you have to use branching, because GPUs aren't designed for it, so you may need to go all the way back to the CPU, handle the branching there, and then send the data and instructions back to the GPU. And of course there's the overhead of going over the bus. I wonder whether AMD will be able to achieve really efficient data sharing between CPU and GPU with its HSA technology, and how well it will be supported on Linux ...



        • #5
          Originally posted by Fresh_meat
          It's especially the case when you have to use branching, because GPUs aren't designed for it, so you may need to go all the way back to the CPU, handle the branching there, and then send the data and instructions back to the GPU. And of course there's the overhead of going over the bus. I wonder whether AMD will be able to achieve really efficient data sharing between CPU and GPU with its HSA technology, and how well it will be supported on Linux ...
          Minor clarification -- it's SIMD units, not GPUs, that are impacted by branching, whether those SIMD units live in a CPU or a GPU. Until recently only GPUs had really wide SIMDs, though.

          Code running on GPU SIMDs can do per-element branching with an associated performance hit: the SIMD has to execute the instructions for every "actually used" code path while masking off the data elements that didn't take the path currently being executed, and it does so almost invisibly apart from the cost of running multiple code paths.

          AFAIK CPUs do not (yet) support branching in their SIMD instructions, so code running on CPU SIMDs is limited to "branch for all elements or branch for none" logic (which would run at full speed on a GPU anyway). IIRC CPU SIMDs do have some basic predication logic, but nothing like what you get in a GPU.
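
          To illustrate that masking with a made-up example, the per-element branch in this OpenCL C kernel is perfectly legal and diverges per work-item; lanes of the same SIMD group that disagree on the condition simply execute both paths, with the inactive lanes masked off:

              /* Illustrative kernel with a per-element branch. Work-items in the same
                 SIMD group that take different paths cause the hardware to execute
                 BOTH paths, masking off the lanes that didn't take the current one,
                 so the only visible effect is the extra instructions executed. */
              __kernel void clamp_or_sqrt(__global float *x)
              {
                  size_t i = get_global_id(0);
                  if (x[i] < 0.0f)
                      x[i] = 0.0f;          /* path A */
                  else
                      x[i] = sqrt(x[i]);    /* path B */
              }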

