Speeding Up The Linux Kernel With Your GPU


• #16
Originally posted by ChrisXY View Post
Because it is nvidia-only.
I know it's hard to understand, but it's a research project.

It's not for you to use. It's for them to learn.

• #17
Originally posted by mattst88 View Post
I know it's hard to understand, but it's a research project.

It's not for you to use. It's for them to learn.
But with OpenCL, Intel and AMD might "help" with the research.

• #18
Originally posted by ChrisXY View Post
Because it is nvidia-only.
It's nvidia's money, so nvidia has the right to decide what to spend it on. Isn't that simple? Did you expect nvidia to do all the programming work while all ATI has to do is provide compatible hardware? ATI's aloofness won't get them very far if they don't start seriously investing in software; otherwise they'll end up like they did with the delayed XvBA.

• #19
Kernel -> program x -> program y -> kernel -> program a -> program x... etc.

How about doing a larger round trip while OpenOffice gets some CPU time? That way the kernel needs fewer instructions, but you do speed up the system...

• #20
Originally posted by ChrisXY View Post
But with OpenCL, Intel and AMD might "help" with the research.
That's not how university research projects work.

I can see you don't really know, so trust me.

• #21
Um, wait. Can anybody explain to me slowly what I am missing here? Okay, I have lots of work, so I was too "lazy" to read the links. But doesn't such a thing need drivers to access the hardware? And if this is all done in the kernel... oh, wait. Where are Nvidia's free-as-in-freedom (L)GPL/BSD/MIT drivers? (nouveau doesn't count)

• #22
Originally posted by allquixotic View Post
Maybe Software RAID could somehow be accelerated by the GPU, although you'd need a very large stripe size for it to be worth it. With say RAID-5, you might want to be able to calculate parity bits faster. If you factor in GPU setup latency and the GPU can still do that faster than the CPU, that's great -- go for it. But what about the vast majority of the people who either don't use RAID, or use hardware RAID that offloads those calculations to dedicated hardware anyway?
This has already been researched and implemented:
http://www.google.ca/url?sa=t&source...e-UC3g&cad=rja
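
For anyone wondering what the computation actually is: RAID-5 parity is just a word-wise XOR across the data disks of a stripe. Here's a minimal C sketch (the disk count and block size are made-up illustrative values, not taken from the linked paper). Every word position is independent of every other, which is exactly why this maps well onto thousands of GPU threads:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define N_DISKS 4          /* data disks in the stripe (illustrative) */
#define BLOCK   (1 << 16)  /* 32-bit words per block (illustrative) */

/* RAID-5 parity: XOR the corresponding word of every data disk.
 * Each loop iteration over i is independent, so a GPU port can
 * simply assign one word position per thread. */
static void xor_parity(uint32_t data[N_DISKS][BLOCK], uint32_t parity[BLOCK])
{
    for (size_t i = 0; i < BLOCK; i++) {
        uint32_t p = 0;
        for (int d = 0; d < N_DISKS; d++)
            p ^= data[d][i];
        parity[i] = p;
    }
}

int main(void)
{
    static uint32_t data[N_DISKS][BLOCK];
    static uint32_t parity[BLOCK];

    memset(data, 0xAB, sizeof(data)); /* dummy stripe contents */
    xor_parity(data, parity);

    /* an even number of identical disks XORs to zero */
    printf("parity[0] = 0x%08x\n", parity[0]);
    return 0;
}
```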

• #23
Silly question:
Can't GPUs be made to assist the CPU in software rendering (3D, video) via a standard instruction set extension (say 'SSE42' or 'x88')? Wouldn't that allow us to get rid of all the 'graphics driver' mess and have such things HW accelerated independently of the specific hardware?

• #24
Originally posted by not.sure View Post
Silly question:
Can't GPUs be made to assist the CPU in software rendering (3D, video) via a standard instruction set extension (say 'SSE42' or 'x88')? Wouldn't that allow us to get rid of all the 'graphics driver' mess and have such things HW accelerated independently of the specific hardware?
Yup, it could in theory, but it would be much slower and less efficient. That is basically what Intel's Larrabee was trying to do.

• #25
Originally posted by deanjo View Post
Yup, it could in theory, but it would be much slower and less efficient. That is basically what Intel's Larrabee was trying to do.
Lol, no... Larrabee was some sort of CPU design that let you do vector calculations on the CPU. Those vector registers could aid in 3D rendering, but that's about it, basically.

I still have that Intel paper somewhere in my Gmail account in case you don't believe me...

• #26
Originally posted by V!NCENT View Post
Lol, no... Larrabee was some sort of CPU design that let you do vector calculations on the CPU. Those vector registers could aid in 3D rendering, but that's about it, basically.

I still have that Intel paper somewhere in my Gmail account in case you don't believe me...
You are arguing but saying the same thing. SSE and the like brought vector-specific registers to x86, much like AltiVec did for the PPC. In fact, AVX is an effort to further improve on those capabilities.
Last edited by deanjo; 05-08-2011, 05:22 PM.
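
To make the vector-register point concrete, here is a minimal C sketch using SSE intrinsics (my own illustration, not from either poster): a single _mm_add_ps instruction adds four packed floats held in 128-bit XMM registers, the same data-parallel idea that AVX widens to 256 bits and that a GPU scales to thousands of lanes:

```c
#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics: 128-bit XMM vector registers */

int main(void)
{
    /* load four floats into one 128-bit register each
     * (_mm_set_ps lists elements from lane 3 down to lane 0) */
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);

    /* one instruction adds all four lanes in parallel */
    __m128 sum = _mm_add_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
    return 0;
}
```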

• #27
Originally posted by deanjo View Post
You are arguing but saying the same thing. SSE and the like brought vector-specific registers to x86, much like AltiVec did for the PPC. In fact, AVX is an effort to further improve on those capabilities.
Weren't we arguing about whether the CPU offloads these calculations to the GPU through a standardised instruction set, rather than replacing them with CPU instruction sets?

I thought you were saying that Larrabee offloaded them. I meant to say that it does them itself.

• #28
Just to put this into perspective, this project is all about nvidia trying to turn the linux kernel into their own personal proprietary blob. Drop dead nvidia!

• #29
Originally posted by droidhacker View Post
Just to put this into perspective, this project is all about nvidia trying to turn the linux kernel into their own personal proprietary blob. Drop dead nvidia!
No. nVidia needs Linux to manage their hardware. Don't forget that they are working on a CPU that can never match AMD's or Intel's. Linus would never accept it, and nVidia knows this.

Linux is key to their hardware adoption in this regard, and therefore they can't and won't do that.

• #30
Originally posted by droidhacker View Post
Just to put this into perspective, this project is all about nvidia trying to turn the linux kernel into their own personal proprietary blob. Drop dead nvidia!
Seeing as the linux kernel is GPL'ed, I don't see how this could be possible... no conspiracy theories, please...
