
Thread: Speeding Up The Linux Kernel With Your GPU

  1. #11
    Join Date
    Dec 2008
    Posts
    315

    Default

    Yeah, let's do that. Nothing like a Windows Display Driver Model-style scheduler with a freaking massively parallel device putting your system to sleep, locking out memory and taking scheduling dirt naps.

    Is it cool if a program wants to use it? Yeah, because it will speed things up, but you can't multitask it very well. If you put this into kernel stuff it's going to be a nightmare.
    If they try to use it on very much stuff it's going to make everything unbearably slow.

    I just don't understand the whole concept of promising speed-ups so you can slow things down and sell more ridiculously overpowered hardware.

    They are in your Linux, infesting it with Windows 7 taint.

    http://forums.nvidia.com/index.php?showtopic=190039

  2. #12
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    965

    Default

    Quote Originally Posted by NSLW View Post
    The better choice is the one who has the means and can do it in an affordable amount of time. Why climb to the top of the tree if you can get the low-hanging fruit without much effort?
    Because it is nvidia-only.

  3. #13
    Join Date
    Jul 2009
    Posts
    72

    Default SIMD instruction set

    Quote Originally Posted by not.sure View Post
    About time someone started to look into using GPUs as general co-processors/vector units.
    But we already have vector units on our CPUs. Does eCryptfs use SSE or AltiVec or VIS (the lesser-known SPARC SIMD instruction set) to accelerate encryption?
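    One way to answer that on a given box: the kernel crypto API registers every implementation of an algorithm with a priority and hands out the highest-priority one, and /proc/crypto lists what is registered (e.g. aes-generic versus an SSE/AES-NI driver). A minimal user-space sketch, assuming a kernel that exposes /proc/crypto; the filtering below is just for illustration:

    /* Sketch: list the AES implementations the kernel crypto API has registered.
     * Assumes a Linux kernel exposing /proc/crypto; the "name", "driver" and
     * "priority" fields follow the usual /proc/crypto layout. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/crypto", "r");
        char line[256];
        int in_aes = 0;

        if (!f) {
            perror("fopen /proc/crypto");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, "name", 4) == 0)
                in_aes = (strstr(line, "aes") != NULL);   /* start of a new entry */
            if (in_aes && (strncmp(line, "name", 4) == 0 ||
                           strncmp(line, "driver", 6) == 0 ||
                           strncmp(line, "priority", 8) == 0))
                fputs(line, stdout);                      /* e.g. driver: aesni_intel */
        }
        fclose(f);
        return 0;
    }

    (On hardware with AES-NI you would typically see a driver such as aesni_intel outranking aes-generic; whether eCryptfs benefits depends on which implementation the crypto API hands it.)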

  4. #14
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    The intent is obvious:
    -Intel: good CPU, bad GPU
    -AMD: arguably better but slower CPU, great GPU
    -nVidia: crappy CPU, good GPU

    With the Fusion stuff coming up, nVidia can only compete with good shaders, whilst keeping the necessary CPU stuff there. So in order for nVidia to get a piece of the cake, they must do this.

    Quite frankly, with all my AMD fanboyism aside, I really like what they're doing. They may:
    -Improve the Linux kernel (and break some of it for a short while) in a way that's accepted
    -Give Linux a serious technological edge
    -Show that shader cores are much better suited to kernels, I think, because a kernel is all about management, and what better way to handle that than with furious multicore parallelism? In fact I like anything that's not time-sliced.
    -Maybe give nVidia a good reason to open source or improve Gallium... Maybe...

  5. #15
    Join Date
    Sep 2008
    Posts
    989

    Default

    Quote Originally Posted by sturmflut View Post
    There aren't that many uses for GPGPU processing inside the kernel besides cryptography. The cards use a separate memory range and the time required to set up a task on the GPU is pretty high. Most kernel calls do not operate on large portions of data; they just pass them around between user-space programs and peripheral devices, so the processing power of a GPU cannot benefit the task. In most cases a task will probably even take longer, because copying the data to the GPU, starting the GPGPU task and copying the data back heavily increases the latency.

    This is the exact same reason why it doesn't currently make sense to use GPGPU computing in most standard applications, like Microsoft Office or a web browser: the workloads are so small that a standard CPU can deliver the result faster than a GPU round-trip would take. And most CPUs nowadays have multiple cores anyway. Maybe the situation improves once CPU and GPU are combined into a single device with a common, flat memory layout, but the GPU is still no good for small workloads.

    That's probably why they picked file system cryptography, but newer CPUs come with AES accelerators, and currently available AES-NI units already peak at up to two gigabytes per second. That's enough to saturate multiple S-ATA links, and AES-NI comes with no additional memory copies, setup times etc., while completely freeing the CPU for other tasks.
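    (For scale on the quoted "multiple S-ATA links" claim, assuming SATA II at 3 Gbit/s, which is roughly 300 MB/s of payload after 8b/10b encoding:)

    \[
      \frac{2\,\text{GB/s}}{0.3\,\text{GB/s per link}} \approx 6\text{--}7 \ \text{links}
    \]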
    Great post - agreed 100%. In order for in-kernel GPGPU to make any sense, it needs to be demonstrated that there are existing kernel tasks (or new kernel tasks yet to come) that (a) really belong in the kernel rather than userspace, and (b) would truly benefit from being accelerated despite the big setup times.

    Way I see it, most applications of GPGPU fail either (a) or (b). I can't think of an application besides very large-scale crypto that might pass both (a) and (b) legitimately.

    Maybe software RAID could somehow be accelerated by the GPU, although you'd need a very large stripe size for it to be worth it. With RAID-5, say, you might want to calculate parity faster. If you factor in the GPU setup latency and the GPU can still do that faster than the CPU, that's great -- go for it. But what about the vast majority of people who either don't use RAID, or use hardware RAID that offloads those calculations to dedicated hardware anyway?
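    To make that RAID-5 idea concrete, here is a rough user-space sketch (CUDA; names like parity_xor and the stripe sizes are invented for illustration and have nothing to do with how md actually computes parity). The two host/device copies around the kernel launch are exactly the setup cost the quoted post is talking about:

    // Hypothetical sketch: XOR parity for one RAID-5 stripe computed on the GPU.
    // NDATA, STRIPE_BYTES and parity_xor are made up for illustration only.
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NDATA        4                    // data disks in the stripe
    #define STRIPE_BYTES (64 * 1024)          // chunk size per disk

    // Each thread XORs one 32-bit word across all data chunks.
    __global__ void parity_xor(const unsigned int *data, unsigned int *parity, int nwords)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= nwords)
            return;
        unsigned int p = 0;
        for (int d = 0; d < NDATA; d++)
            p ^= data[d * nwords + i];
        parity[i] = p;
    }

    int main(void)
    {
        const int nwords = STRIPE_BYTES / sizeof(unsigned int);
        const size_t data_bytes = (size_t)NDATA * STRIPE_BYTES;

        unsigned int *h_data = (unsigned int *)malloc(data_bytes);
        unsigned int *h_parity = (unsigned int *)malloc(STRIPE_BYTES);
        for (size_t i = 0; i < data_bytes / sizeof(unsigned int); i++)
            h_data[i] = (unsigned int)rand();

        unsigned int *d_data, *d_parity;
        cudaMalloc((void **)&d_data, data_bytes);
        cudaMalloc((void **)&d_parity, STRIPE_BYTES);

        // Copy in, launch, copy out: for a small stripe these three steps
        // dominate the runtime, which is the latency argument quoted above.
        cudaMemcpy(d_data, h_data, data_bytes, cudaMemcpyHostToDevice);
        parity_xor<<<(nwords + 255) / 256, 256>>>(d_data, d_parity, nwords);
        cudaMemcpy(h_parity, d_parity, STRIPE_BYTES, cudaMemcpyDeviceToHost);

        printf("first parity word: 0x%08x\n", h_parity[0]);

        cudaFree(d_data);
        cudaFree(d_parity);
        free(h_data);
        free(h_parity);
        return 0;
    }

    A CPU XORs a 64 KiB chunk in a handful of microseconds, so the copies and the launch overhead have to be amortised over much larger stripes before the offload can win.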

    Anyway, I'm out of ideas. I can't think of another practical application of GPGPU in the kernel. I can think of many, many useful applications of GPGPU, but they all belong squarely in userspace, implemented in applications.

  6. #16
    Join Date
    Jan 2008
    Posts
    295

    Default

    Quote Originally Posted by ChrisXY View Post
    Because it is nvidia-only.
    I know it's hard to understand, but it's a research project.

    It's not for you to use. It's for them to learn.

  7. #17
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    965

    Default

    Quote Originally Posted by mattst88 View Post
    I know it's hard to understand, but it's a research project.

    It's not for you to use. It's for them to learn.
    But with OpenCL, Intel and AMD might "help" with the research.

  8. #18
    Join Date
    Dec 2008
    Location
    Poland
    Posts
    116

    Default

    Quote Originally Posted by ChrisXY View Post
    Because it is nvidia-only.
    It's nvidia's money, so nvidia has the right to decide what they'll spend it on. Isn't that simple? Did you expect nvidia to do all the programming while all ATI has to do is provide hardware compatible with it? ATI's aloofness won't get them very far if they don't start seriously investing in software; otherwise they'll end up like they did with the delayed XvBA.

  9. #19
    Join Date
    Aug 2009
    Posts
    2,264

    Default

    Kernel -> program x -> program y -> kernel -> program a -> program x.... etc...

    How about doing a larger round trip whilst OpenOffice gets some CPU time? That way the kernel needs fewer instructions, but you do speed up the system...

  10. #20
    Join Date
    Jan 2008
    Posts
    295

    Default

    Quote Originally Posted by ChrisXY View Post
    But with OpenCL, Intel and AMD might "help" with the research.
    That's not how university research projects work.

    I can see you don't really know, so trust me.
