Speeding Up The Linux Kernel With Your GPU

  • matthewbpt
    replied
    Originally posted by V!NCENT
    Well it's simple:
    1. nVidia hands out no documentation, so you need the blob;
    2. Open source GPL parts refer to the blob;
    3. Linux ends up depending on proprietary code.
    Yes, except no part of the kernel needs, or is likely to ever need, CUDA to run. This just gives you the option of running kernel code on the GPU; there's no chance the kernel will ever require a GPU for kernel-level operations.
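    For anyone curious what "running kernel code on the GPU" looks like in practice, here is a minimal user-space CUDA sketch (the kernel name, key and buffer size are made up for illustration): the host copies a buffer to the GPU, a data-parallel kernel XORs every byte, and the result is copied back. Projects like the one in the article plumb this sort of offload into the Linux kernel itself; this only shows the shape of the work, not their actual code.
    Code:
    // Minimal CUDA sketch of a bulk, data-parallel job a kernel-offload
    // project might push to the GPU (think RAID parity or simple crypto).
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void xor_buffer(unsigned char *data, unsigned char key, size_t n)
    {
        size_t i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] ^= key;                    // each GPU thread handles one byte
    }

    int main(void)
    {
        const size_t n = 1 << 20;              // 1 MiB test buffer
        unsigned char *host = (unsigned char *)malloc(n);
        for (size_t i = 0; i < n; ++i)
            host[i] = (unsigned char)i;

        unsigned char *dev;
        cudaMalloc(&dev, n);
        cudaMemcpy(dev, host, n, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover the whole buffer.
        xor_buffer<<<(unsigned)((n + 255) / 256), 256>>>(dev, 0xAB, n);

        cudaMemcpy(host, dev, n, cudaMemcpyDeviceToHost);
        printf("first byte after XOR: 0x%02x\n", host[0]);

        cudaFree(dev);
        free(host);
        return 0;
    }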

  • V!NCENT
    replied
    Originally posted by matthewbpt
    Seeing as the Linux kernel is GPL'd, I don't see how this could be possible ... no conspiracy theories please ...
    Well it's simple:
    1. nVidia hands out no documentation, so you need the blob;
    2. Open source GPL parts refer to the blob;
    3. Linux ends up depending on proprietary code.

  • matthewbpt
    replied
    Originally posted by droidhacker
    Just to put this into perspective, this project is all about NVIDIA trying to turn the Linux kernel into their own personal proprietary blob. Drop dead, NVIDIA!
    Seeing as the Linux kernel is GPL'd, I don't see how this could be possible ... no conspiracy theories please ...

  • V!NCENT
    replied
    Originally posted by droidhacker
    Just to put this into perspective, this project is all about NVIDIA trying to turn the Linux kernel into their own personal proprietary blob. Drop dead, NVIDIA!
    No. nVidia needs Linux to manage their hardware. Don't forget that they are working on a CPU that can never match AMD or Intel. Linus would never accept it, and nVidia knows this.

    Linux is key to their hardware adoption in this regard and therefore they can't and won't do that.

  • droidhacker
    replied
    Just to put this into perspective, this project is all about NVIDIA trying to turn the Linux kernel into their own personal proprietary blob. Drop dead, NVIDIA!

  • V!NCENT
    replied
    Originally posted by deanjo
    You're arguing, but we're saying the same thing. SSE and the like brought vector-specific registers to x86, much like AltiVec did for the PPC. In fact, AVX is an effort to further improve on those capabilities.
    Wasn't the thing we were arguing about whether the CPU offloads these calculations to the GPU via a standardised instruction set, rather than replacing them with CPU instruction set extensions?

    I thought you were saying that Larrabee offloaded them; I meant to say that it does them itself.

  • deanjo
    replied
    Originally posted by V!NCENT
    Lol, no... Larrabee was some sort of CPU design that let you do vector calculations on the CPU. These vector registers could aid in 3D rendering, but that's about it, basically.

    I still have that Intel paper somewhere in my Gmail account in case you don't believe me...
    You're arguing, but we're saying the same thing. SSE and the like brought vector-specific registers to x86, much like AltiVec did for the PPC. In fact, AVX is an effort to further improve on those capabilities.
    Last edited by deanjo; 08 May 2011, 05:22 PM.
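    To make the point above concrete, here is a small CPU-only sketch of the vector registers being discussed (the arrays are made up for illustration): SSE's 128-bit XMM registers add four floats per instruction, and AVX widens the same idea to 256 bits (eight floats). No GPU is involved at all.
    Code:
    // CPU SIMD: one SSE instruction operates on four packed floats at once.
    #include <cstdio>
    #include <xmmintrin.h>   // SSE intrinsics

    int main(void)
    {
        float a[8]   = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8]   = {10, 20, 30, 40, 50, 60, 70, 80};
        float out[8];

        // Two iterations of four floats each: one vector add per iteration
        // instead of four scalar adds.
        for (int i = 0; i < 8; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);   // load 4 floats into an XMM register
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));
        }

        for (int i = 0; i < 8; ++i)
            printf("%g ", out[i]);             // 11 22 33 ... 88
        printf("\n");
        return 0;
    }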

  • V!NCENT
    replied
    Originally posted by deanjo
    Yup, it could in theory, but it would be much slower and less efficient. That is basically what Intel's Larrabee was trying to do.
    Lol, no... Larrabee was some sort of CPU design that let you do vector calculations on the CPU. These vector registers could aid in 3D rendering, but that's about it, basically.

    I still have that Intel paper somewhere in my Gmail account in case you don't believe me...

  • deanjo
    replied
    Originally posted by not.sure
    Silly question:
    Can't GPUs be made to assist the CPU in software rendering (3D, video) via a standard instruction set extension (say 'SSE42' or 'x88')? Wouldn't that allow us to get rid of all the 'graphics driver' mess and have such things hardware-accelerated independently of the specific hardware?
    Yup, it could in theory, but it would be much slower and less efficient. That is basically what Intel's Larrabee was trying to do.

  • not.sure
    replied
    Silly question:
    Can't GPUs be made to assist the CPU in software rendering (3D, video) via a standard instruction set extension (say 'SSE42' or 'x88')? Wouldn't that allow us to get rid of all the 'graphics driver' mess and have such things hardware-accelerated independently of the specific hardware?
