NVIDIA's Optimus: Will It Come To Linux?

  • NVIDIA's Optimus: Will It Come To Linux?

    Phoronix: NVIDIA's Optimus: Will It Come To Linux?

    Last week we reported on GPU switching, and then on delayed GPU switching, coming to Linux via some Linux kernel hacks. Today, however, NVIDIA has launched a new technology for dual-GPU notebooks: "Optimus Technology." NVIDIA's Optimus is similar to the hybrid-switching technologies that have been available on notebooks up to this point for switching between ATI/AMD, Intel, and NVIDIA GPUs depending upon the graphics workload, but with Optimus the experience is supposed to be seamless: no manual intervention is needed, and the notebook automatically switches between its GPUs depending upon the graphics rendering workload. The technology was launched today via a press release and can be found on a few select notebooks...

  • #2
    This sounds like new marketing on an existing concept. You can do this already if your system has two GPUs of the same capability level; you just need some extra smarts in the driver. It's basically power-selective multi-GPU: wire one GPU to the displays and then render using the faster or slower one depending on what power mode you are in. When the other one is not in use, ramp its clocks down.
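
    A minimal sketch of that policy idea, assuming a driver hook exists for picking the render GPU and scaling clocks (every name here is invented for illustration, not a real driver API):

    ```c
    /* Hypothetical "power-selective multi-GPU" policy. GPU_DISPLAY is the
     * GPU wired to the panel; GPU_RENDER is the faster one. */
    #include <stdbool.h>

    enum gpu_id { GPU_DISPLAY, GPU_RENDER };

    extern bool on_ac_power(void);                 /* assumed helper */
    extern void set_render_gpu(enum gpu_id id);    /* assumed helper */
    extern void set_low_power_clocks(enum gpu_id id, bool low);

    void update_gpu_policy(void)
    {
        if (on_ac_power()) {
            /* Plugged in: render on the fast GPU and keep it clocked up. */
            set_render_gpu(GPU_RENDER);
            set_low_power_clocks(GPU_RENDER, false);
        } else {
            /* On battery: render on the display-wired GPU and ramp the
             * idle GPU's clocks down. */
            set_render_gpu(GPU_DISPLAY);
            set_low_power_clocks(GPU_RENDER, true);
        }
    }
    ```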

    • #3
      As far as I can tell, "Optimus" doesn't switch GPUs based on the power mode, but on whether the stronger GPU is required. In other words, it can switch to the integrated GPU even while the laptop is plugged in, and to the discrete GPU even while the laptop is on battery.

      • #4
        Looks like they took a look at the power consumption of Fermi and decided they needed to launch something that diverts attention from it and makes it 'OK'.
        ("Yeah, OK, the GTX 499 needs 499 W, but in return we have Optimus(tm).")

        • #5
          I think this is more likely to come to Linux through the binary drivers than hybrid graphics was.

          This sounds like a hardware technology that just needs to be enabled through the driver.

          • #6
            I really doubt the Linux graphics stack can support this in its current incarnation (either open *or* closed source). Just a feeling I get from reading the technical details over at techreport.com.

            • #7
              Originally posted by BlackStar View Post
              I really doubt the Linux graphics stack can support this in its current incarnation (either open *or* closed source). Just a feeling I get from reading the technical details over at techreport.com.
              It's not very complicated, really... What this does is similar in technology to what 3Dfx used back in the day, when the "accelerator" did the rendering and then combined the frame with the output of the "2D" card. Here it is basically in reverse: the powerful GPU renders the frame, then copies it over PCIe to the framebuffer of the slower IGP for display. The IGP is always active and in control, and the GPU kicks in only when needed (by recognizing running applications). The important difference is that now the GPU can switch off completely, to the point that it could be physically removed from the system (as in the nVidia demo), and thus it consumes zero power when not in use.

              There is only one hardware part, and that is a "copy engine" in the GPU that takes care of shuffling frames to the IGP asynchronously, so that the 3D engine can work on the next frame in the meantime -- but this is just a performance enhancement, and things would work (albeit slowly) even without it.
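
              A rough sketch of that render-and-copy flow, with invented helper names (this is not any real nVidia or Intel API, just an illustration of the idea):

              ```c
              /* Hypothetical present loop: the discrete GPU renders, the copy
               * engine DMAs the frame over PCIe into the IGP's framebuffer,
               * and the IGP (which always owns the display) scans it out. */
              struct frame;

              extern struct frame *dgpu_render(void);           /* 3D engine, assumed */
              extern void copy_engine_submit(struct frame *f);  /* async DMA, assumed */
              extern void copy_engine_wait(struct frame *f);
              extern void igp_scanout(struct frame *f);

              void present_loop(void)
              {
                  struct frame *in_flight = 0;
                  for (;;) {
                      struct frame *f = dgpu_render();  /* render frame N */
                      copy_engine_submit(f);            /* copy frame N asynchronously */
                      if (in_flight) {
                          /* While frame N is copying, frame N-1 has finished its
                           * transfer and can be displayed -- the 3D engine never
                           * waits on the bus. */
                          copy_engine_wait(in_flight);
                          igp_scanout(in_flight);
                      }
                      in_flight = f;
                  }
              }
              ```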

              All the magic is done completely in software, to the point that nVidia boasts about the thousands of driver programmers it currently employs, so there really is no reason for this technology to be missing on Linux, except of course laziness or lack of interest -- or some lame excuse à la Adobe that the underlying technology does not allow it. It will require cooperation between the Intel driver team and the closed-source team at nVidia, though, so politics may become a larger problem than the technical issues... And I don't see this ever happening on AMD IGPs, for obvious reasons.

              It already works on Win7 and Mac OS X, so it's not tied to any Windows-specific architecture either... I hope we see this on Linux sometime, preferably soon, since it's a great laptop/netbook technology!

              • #8
                Originally posted by agd5f View Post
                This sounds like new marketing on an existing concept. You can do this already if your system has two GPUs of the same capability level; you just need some extra smarts in the driver. It's basically power-selective multi-GPU: wire one GPU to the displays and then render using the faster or slower one depending on what power mode you are in. When the other one is not in use, ramp its clocks down.
                Sounds easy. Why is it not in the ATI drivers? Aside from the never-seen PowerXpress ("Available on notebook PCs using the Windows Vista® OS")...

                Can anyone please tell the developers at ATI what CUSTOMERS want? And make sure the marketing guys are stuck in the snow somewhere, so they don't disturb the devs with useless things.

                I vote for PowerXpress on the DESKTOP, with a high-end graphics card and an IGP. But I repeat myself (from 9 months ago):
                [link to a thread on the open-source AMD Radeon graphics drivers forum]

                And bridgman did not listen to me.

                • #9
                  I have no doubt that this, or something functionally equivalent, will be coming to Linux at some point. Will nvidia implement it in their blob? I don't know and, quite frankly, I don't give a rat's a$$.

                  To be honest, I really don't want any kind of magical hidden automatic switching... a manual mechanism, plus an optional daemon that monitors for switch conditions and flips it automatically, will be enough. Let me be in control of it if I happen to want to be in control of it.
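
                  Something like this tiny polling daemon would do, assuming the manual switch mechanism already exists (every name below is made up for illustration):

                  ```c
                  /* Hypothetical daemon: watch for a switch condition and flip
                   * GPUs through the same manual mechanism a user could invoke
                   * by hand. */
                  #include <stdbool.h>
                  #include <unistd.h>

                  extern bool switch_condition_met(void);  /* e.g. a 3D app started; assumed */
                  extern void switch_to_discrete(void);    /* the manual mechanism; assumed */
                  extern void switch_to_integrated(void);

                  int main(void)
                  {
                      bool on_discrete = false;
                      for (;;) {
                          bool want_discrete = switch_condition_met();
                          if (want_discrete != on_discrete) {
                              /* Only act on a state change. */
                              if (want_discrete)
                                  switch_to_discrete();
                              else
                                  switch_to_integrated();
                              on_discrete = want_discrete;
                          }
                          sleep(5);  /* poll the switch condition every few seconds */
                      }
                  }
                  ```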

                  The other thing I expect in this nvidia implementation is that it will only work with nvidia/nvidia pairs. Likely as a result of driver constraints. Gallium3D with its softpipe should introduce the possibility of switching *live* between two DIFFERENT GPUs (since all GPUs, in the end, will have equivalent capabilities, albeit at vastly different levels of performance). And this, I think, will be really slick.

                  • #10
                    Originally posted by droidhacker View Post
                    The other thing I expect in this nvidia implementation is that it will only work with nvidia/nvidia pairs. Likely as a result of driver constraints.
                    Actually, this technology was demoed with an Nvidia/Intel pair. Also, it gave you the option to right-click an app and run it with either the Nvidia or the Intel GPU. You could also download application profiles automatically, or even build your own (i.e. always run application foo with the Nvidia GPU) - pretty slick if I may say so.

                    If dual-GPU solutions become widespread in the future, I can imagine this coming to the open-source stack at some point. From a previous discussion it seems that X isn't really flexible enough for this right now, so it's mostly a matter of developer interest and manpower.

                    I also doubt this will come to open- and closed-source driver combinations (e.g. Intel + Nvidia) without very wide cooperation between all competitors (fat chance?). Time will tell.
