Speeding Up The Linux Kernel With Your GPU


  • Speeding Up The Linux Kernel With Your GPU

    Phoronix: Speeding Up The Linux Kernel With Your GPU

    Sponsored in part by NVIDIA, researchers at the University of Utah are exploring speeding up the Linux kernel with GPU acceleration. Rather than just allowing user-space applications to utilize the immense power offered by modern graphics processors, they are looking to speed up parts of the Linux kernel by running them directly on the GPU...


  • #2
    Too bad there are no plans for a CUDA state tracker. Over the last two months I started doing some CUDA programming, and I must say it's much nicer to work with than OpenCL. From my point of view, OpenCL is the inferior choice.
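
    For what it's worth, here is roughly how small the host-plus-device side of a trivial CUDA program can be; the vec_add kernel, sizes and launch geometry below are just an illustrative sketch, not anything from the article. The equivalent OpenCL program needs explicit platform/device/context/queue/program-build setup on top of this.

    Code:
    // Minimal CUDA runtime API sketch: a trivial element-wise add.
    #include <cuda_runtime.h>
    #include <cstdio>

    // Each thread handles one element.
    __global__ void vec_add(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                  // arbitrary example size
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;

        // Managed (unified) memory keeps the host code short.
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        vec_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);            // expect 3.000000
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }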



    • #3
      About time someone started to look into using GPUs as general co-processors/vector units.



      • #4
        One bug in either Nvidia's drivers or in the CUDA code, and what happens to the kernel?

        Hang? Oops?



        • #5
          Originally posted by curaga View Post
          One bug in either Nvidia's drivers or in the CUDA code, and what happens to the kernel?

          Hang? Oops?
          Same thing that happens with a bug in the kernel itself.



          • #6
            There aren't that many uses for GPGPU processing inside the kernel besides cryptography. The cards use a separate memory range, and the time required to set up a task on the GPU is fairly high. Most kernel calls do not operate on large chunks of data; they just pass data around between user-space programs and peripheral devices, so the processing power of a GPU cannot benefit the task. In most cases a task will probably even take longer, because copying the data to the GPU, starting the GPGPU task and copying the data back adds significant latency.

            This is the exact same reason why it currently doesn't make sense to use GPGPU computing in most standard applications, such as Microsoft Office or a web browser: the workloads are so small that a standard CPU can deliver the result faster than a GPU round trip would take. And most CPUs nowadays have multiple cores anyway. Maybe the situation improves once CPU and GPU are combined into a single device with a common, flat memory layout, but the GPU is still no good for small workloads.

            That's probably why they picked file-system cryptography, but newer CPUs come with AES accelerators, and currently available AES-NI units already reach up to two gigabytes per second. That's enough to saturate multiple S-ATA links, and AES-NI involves no additional memory copies or setup time while leaving the CPU largely free for other tasks.
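
            To put a rough shape on that round trip, here is a minimal timing sketch; the 4 KB buffer, the throwaway XOR kernel and the launch geometry are made up for illustration, but the copy-in / launch / copy-back sequence is exactly the overhead described above, and on typical hardware its fixed cost alone is on the order of tens of microseconds no matter how little work the kernel does.

            Code:
            // Sketch: time one host->GPU->host round trip for a page-sized buffer.
            #include <cuda_runtime.h>
            #include <cstdio>
            #include <cstring>

            // Stand-in for real per-byte work (e.g. one cipher step).
            __global__ void xor_buf(unsigned char *buf, unsigned char key, int n)
            {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n)
                    buf[i] ^= key;
            }

            int main()
            {
                const int n = 4096;                     // one page-sized request
                unsigned char host[4096], *dev;
                memset(host, 0, n);
                cudaMalloc(&dev, n);

                cudaEvent_t start, stop;
                cudaEventCreate(&start);
                cudaEventCreate(&stop);

                cudaEventRecord(start);
                cudaMemcpy(dev, host, n, cudaMemcpyHostToDevice);  // copy in
                xor_buf<<<(n + 255) / 256, 256>>>(dev, 0xAA, n);   // launch
                cudaMemcpy(host, dev, n, cudaMemcpyDeviceToHost);  // copy back
                cudaEventRecord(stop);
                cudaEventSynchronize(stop);

                float ms = 0.0f;
                cudaEventElapsedTime(&ms, start, stop);
                printf("round trip for %d bytes: %.3f ms\n", n, ms);

                cudaEventDestroy(start);
                cudaEventDestroy(stop);
                cudaFree(dev);
                return 0;
            }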



            • #7
              A better choice would have been OpenCL, which can run on both AMD and NVIDIA GPUs and is an open industry standard.
              The better choice is the one you have the means for and can get done in a reasonable amount of time. Why climb to the top of the tree when you can pick the low-hanging fruit without much effort?



              • #8
                Presumably even if the CUDA option ends up being a bit of a bust, the work on parallelising the kernel could have good payoffs for the non-GPU kernel, given the ever-increasing core counts of systems.



                • #9
                  Give me something that works like a microkernel and drop me 20.



                  • #10
                    To utilize GPU power for filesystem decryption, the better choice would be to move the file system to userspace (FUSE) rather than moving GPU code into the kernel.

                    In either case, much care needs to be taken to avoid compromising the key. GPU memory isn't protected much, and leftover memory usually gets handed to the next task without being cleared first. Does either CUDA or OpenCL make any guarantees there?
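
                    As far as I know, neither API promises that freed device memory is scrubbed before the next allocation reuses it, so key material would have to be wiped explicitly. A rough sketch of that defensive pattern (the GpuKey struct and both helpers are hypothetical names, not from any real crypto code):

                    Code:
                    #include <cuda_runtime.h>

                    struct GpuKey {
                        unsigned char *dev;
                        size_t len;
                    };

                    // Hypothetical helper: upload key material to the device.
                    static void key_upload(GpuKey *k, const unsigned char *key, size_t len)
                    {
                        cudaMalloc(&k->dev, len);
                        cudaMemcpy(k->dev, key, len, cudaMemcpyHostToDevice);
                        k->len = len;
                    }

                    // Scrub the device copy before releasing it, so whoever gets
                    // this memory next cannot read leftover key bytes.
                    static void key_destroy(GpuKey *k)
                    {
                        if (k->dev) {
                            cudaMemset(k->dev, 0, k->len);  // the driver won't zero it for us
                            cudaDeviceSynchronize();        // make sure the wipe lands before the free
                            cudaFree(k->dev);
                            k->dev = 0;
                            k->len = 0;
                        }
                    }

                    int main()
                    {
                        unsigned char secret[32] = {0};     // placeholder key bytes
                        GpuKey k = {0, 0};
                        key_upload(&k, secret, sizeof(secret));
                        // ... run whatever GPU crypto kernel uses k.dev ...
                        key_destroy(&k);
                        return 0;
                    }

                    Of course this only helps against later allocations; while the key sits in device memory it is still readable by anything else with access to the GPU.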

