NVIDIA's Optimus: Will It Come To Linux?


  • NVIDIA's Optimus: Will It Come To Linux?

    Phoronix: NVIDIA's Optimus: Will It Come To Linux?

    Last week we reported on GPU switching and then delayed GPU switching coming to Linux via some Linux kernel hacks, but today NVIDIA has launched a new technology for dual-GPU notebooks and that is "Optimus Technology." NVIDIA's Optimus is similar to the hybrid-switching technologies that have been available on notebooks up to this point for switching between ATI/AMD, Intel, and NVIDIA GPUs on notebooks depending upon the graphics workload, but with Optimus the experience is supposed to be seamless. With NVIDIA's Optimus, no manual intervention is supposed to be needed but the notebook will automatically switch between onboard GPUs depending upon the graphics rendering workload. This technology was just launched today via a press release and can be found on a few select notebooks...

    http://www.phoronix.com/vr.php?view=Nzk3Mg

  • #2
    This sounds like new marketing on an existing concept. You can do this already if your system has two GPUs of the same capability level; you just need some extra smarts in the driver. It's basically power-selective multi-GPU: wire one to the displays and then render using the faster or slower one depending on what power mode you are in. When the other one is not in use, ramp its clocks down.
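    A toy sketch of the driver-side selection described above. All names here are invented for illustration, not a real driver API: pick the faster GPU on AC power, the slower one on battery, and ramp the idle one's clocks down.

```python
# Hypothetical model of power-selective multi-GPU: choose a GPU by power
# mode and downclock whichever one is not in use. Illustrative only.

def select_gpu(on_ac_power, gpus):
    """Choose the faster GPU on AC power, the slower one on battery."""
    active = max(gpus, key=lambda g: g["perf"]) if on_ac_power \
        else min(gpus, key=lambda g: g["perf"])
    for gpu in gpus:
        # Ramp the unused GPU's clocks down instead of powering it off.
        gpu["clock_mhz"] = gpu["max_clock_mhz"] if gpu is active \
            else gpu["idle_clock_mhz"]
    return active

gpus = [
    {"name": "discrete", "perf": 10,
     "max_clock_mhz": 600, "idle_clock_mhz": 100, "clock_mhz": 600},
    {"name": "integrated", "perf": 2,
     "max_clock_mhz": 400, "idle_clock_mhz": 100, "clock_mhz": 400},
]
assert select_gpu(True, gpus)["name"] == "discrete"
assert select_gpu(False, gpus)["name"] == "integrated"
```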

    • #3
      As far as I can tell, "Optimus" doesn't switch GPUs only when the power mode changes, but whenever the stronger GPU is required. In other words, it can switch to the integrated GPU even while the laptop is plugged in, and it can switch to the discrete GPU even when the laptop is on battery.

      • #4
        Looks like they took a look at the power consumption of Fermi and decided they needed to launch something that diverts attention from it and makes it 'OK'.
        ("Yeah, OK, the GTX 499 needs 499 W, but to make up for it we have Optimus(tm)")

        • #5
          I think this is more likely to come to Linux through the binary drivers than hybrid graphics was.

          This sounds like a hardware technology that just needs to be enabled through the driver.

          • #6
            I really doubt the Linux graphics stack can support this in its current incarnation (either open *or* closed source). Just a feeling I get from reading the technical details over at techreport.com.

            • #7
              Originally posted by BlackStar View Post
              I really doubt the Linux graphics stack can support this in its current incarnation (either open *or* closed source). Just a feeling I get from reading the technical details over at techreport.com.
              It's not very complicated, really... What this does is similar in technology to what 3Dfx used back in the day, when the "accelerator" did the rendering and then combined the frame with the output of the "2D" card. Here it is basically in reverse: the powerful GPU renders the frame, then copies it to the framebuffer of the slower IGP for display via PCIe. The IGP is always active and in control, and the GPU kicks in only when needed (by recognizing running applications). The important difference is that now the GPU can switch off completely, to the point that it could be removed physically from the system (nVidia demo), and thus it consumes zero power when not in use.

              There is only one hardware part, and that is a "copy engine" in the GPU that takes care of shuffling frames to the IGP asynchronously, so that the 3D engine can work on the next frame in the meantime -- but this is just a performance enhancement, and things would work (albeit slowly) even without it.
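              The benefit of that asynchronous copy engine can be illustrated with a simple timing model (the numbers and names below are invented for illustration): with the copy engine, the copy of frame N overlaps the render of frame N+1; without it, render and copy serialize.

```python
# Toy timing model of the render-then-copy flow described above.
# Illustrative only; not based on any real driver internals.

def run_frames(n_frames, render_ms, copy_ms, async_copy=True):
    """Total time to deliver n_frames when each frame must be rendered
    on the discrete GPU and then copied to the IGP's framebuffer."""
    if async_copy:
        # First render, then each step takes the longer of (render, copy),
        # plus the tail copy of the final frame.
        return render_ms + (n_frames - 1) * max(render_ms, copy_ms) + copy_ms
    # Without the copy engine, render and copy serialize per frame.
    return n_frames * (render_ms + copy_ms)

# e.g. 10 frames, 16 ms render, 4 ms copy:
assert run_frames(10, 16, 4, async_copy=True) == 164   # copies hidden
assert run_frames(10, 16, 4, async_copy=False) == 200  # copies serialized
```

As the model shows, the scheme still works without the copy engine; it is purely a performance enhancement, as the post says.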

              All the magic is done completely in software, to the point that nVidia boasts about employing thousands of driver programmers at the moment, so there really is no reason for this technology to be missing on Linux, except of course laziness or lack of interest -- or some lame excuse à la Adobe that the underlying technology does not allow it. It will require cooperation between the Intel driver team and the closed-source team at nVidia, though, so politics may become a larger problem than the technical issues... And I don't see this ever happening on AMD IGPs, for obvious reasons.

              It already works in Win7 & MacOSX, so it's not tied to any windows-specific architecture either... so I hope we see this in Linux sometime, preferably soon, since it's a great laptop/netbook technology!

              • #8
                Originally posted by agd5f View Post
                This sounds like new marketing on an existing concept. You can do this already if your system has two GPUs of the same capability level; you just need some extra smarts in the driver. It's basically power-selective multi-GPU: wire one to the displays and then render using the faster or slower one depending on what power mode you are in. When the other one is not in use, ramp its clocks down.
                Sounds easy. Why is it not in the ATi drivers? Besides the never-seen PowerXpress ("Available on notebook PCs using Windows Vista® OS").

                Can anyone please tell the developers at ATi what CUSTOMERS want? And make sure the marketing guys are stuck in snow somewhere, so they don't disturb the devs with useless things.

                I vote for PowerXpress on the DESKTOP with a high-end graphics card and an IGP. But I repeat myself (nine months ago):
                http://www.phoronix.com/forums/showp...6&postcount=17

                And bridgman did not listen to me

                • #9
                  I have no doubt that this, or something functionally equivalent, will be coming to linux at some point. Will nvidia implement it in their blob? I don't know and quite frankly, I don't give a rat's a$$.

                  To be honest, I really don't want any kind of magical hidden automatic switching... a manual mechanism plus an optional daemon (to monitor for switch conditions and switch automatically) will be enough. Let me be in control of it if I happen to want to be in control of it.
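                  That "manual switch plus optional daemon" idea could look something like the sketch below. Everything here is hypothetical: the switch call and the load metric are placeholders, not a real switching tool.

```python
# Toy sketch of an optional monitoring daemon: poll a load metric and
# switch GPUs when it crosses a threshold. All names are invented.
import time

def switch_to(gpu):
    # Stand-in for whatever manual switching mechanism the driver exposes.
    print(f"switching to {gpu}")

def monitor(get_load, threshold=0.75, interval=1.0, iterations=3):
    """Poll get_load() and switch between GPUs around the threshold."""
    current = "integrated"
    for _ in range(iterations):
        wanted = "discrete" if get_load() > threshold else "integrated"
        if wanted != current:
            switch_to(wanted)
            current = wanted
        time.sleep(interval)
    return current
```

The daemon stays optional by design: without it running, the user simply calls the manual mechanism (here, `switch_to`) directly.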

                  The other thing I expect in this nvidia implementation is that it will only work with nvidia/nvidia pairs. Likely as a result of driver constraints. Gallium3D with its softpipe should introduce the possibility of switching *live* between two DIFFERENT GPUs (since all GPUs, in the end, will have equivalent capabilities, albeit at vastly different levels of performance). And this, I think, will be really slick.

                  • #10
                    Originally posted by droidhacker View Post
                    The other thing I expect in this nvidia implementation is that it will only work with nvidia/nvidia pairs. Likely as a result of driver constraints.
                    Actually, this technology was demoed with an Nvidia/Intel pair. Also, it gave you the option to right-click an app and run it with either the Nvidia or the Intel GPU. You could also download application profiles automatically, or even build your own (i.e. always run application foo with the Nvidia GPU) - pretty slick, if I may say so.

                    If dual-GPU solutions become widespread in the future, I can imagine this coming to the open-source stack at some point. From a previous discussion it seems that X isn't really flexible enough for this right now, so it's mostly a matter of developer interest and manpower.

                    I also doubt this will come to open- and closed-source driver combinations (e.g. Intel + Nvidia) without very wide cooperation between all the competitors (fat chance?). Time will tell.

                    • #11
                      Nvidia: We have no plans to support Optimus on Linux at this time.

                      http://www.nvnews.net/vbulletin/show....php?p=2183477

                      • #12
                        Originally posted by macemoneta View Post
                        Nvidia: We have no plans to support Optimus on Linux at this time.
                        Quite sad, but oh well. It's not meant for workstation users anyway.

                        I'm always wondering why this stuff can't be integrated into one GPU anyway: a simple GPU for the composited desktop and a real 3D unit that can be switched off completely. Or something more integrated: why can't, say, 98% of those processing "cores" be disabled under low workload? A lower idle power consumption than AMD's current 5xx0 series (>15 W) should be possible.

                        • #13
                          Originally posted by leidola View Post
                          Quite sad, but oh well. It's not meant for workstation users anyway.

                          I'm always wondering why this stuff can't be integrated into one GPU anyway: a simple GPU for the composited desktop and a real 3D unit that can be switched off completely. Or something more integrated: why can't, say, 98% of those processing "cores" be disabled under low workload? A lower idle power consumption than AMD's current 5xx0 series (>15 W) should be possible.
                          The issue is that you need to run the memory at some relatively high clock in order to feed the framebuffer with data (1920x1200 at 60 fps ≈ 527 MiB/sec), and that takes power.
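                          The figure quoted above checks out with a quick calculation, assuming 32-bit (4-byte) color, which the post does not state explicitly:

```python
# Scanout bandwidth for a 1920x1200 framebuffer at 32-bit color, 60 Hz.
width, height, bytes_per_pixel, fps = 1920, 1200, 4, 60
bandwidth = width * height * bytes_per_pixel * fps  # bytes per second
print(round(bandwidth / 2**20))  # → 527 (MiB/sec), matching the post
```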

                          However, with IGPs being integrated into CPUs right as we speak, the future might look quite different. The output signal could always be routed through the IGP (so no display hardware on the dedicated GPU) and the extra GPU could only be fired up when the load becomes too heavy for the IGP. Something similar to the Voodoo2-era 3d accelerator add-on cards, in fact.

                          • #14
                            Originally posted by Hasenpfote View Post
                            Sounds easy. Why is it not in the ATi drivers? Besides the never-seen PowerXpress ("Available on notebook PCs using Windows Vista® OS").

                            Can anyone please tell the developers at ATi what CUSTOMERS want? And make sure the marketing guys are stuck in snow somewhere, so they don't disturb the devs with useless things.
                            Who do you think actually drives the requirements? How much does the OEM implementation affect the driver architecture?

                            Marketing is either forward-looking (how can I sell more widgets?) or complementary to a customer (I need to meet this requirement to help a customer sell more widgets). Note that as a supplier, the customer is really the OEM.

                            • #15
                              Originally posted by mtippett View Post
                              Who do you think actually drives the requirements? How much does the OEM implementation affect the driver architecture?

                              Marketing is either forward-looking (how can I sell more widgets?) or complementary to a customer (I need to meet this requirement to help a customer sell more widgets). Note that as a supplier, the customer is really the OEM.
                              In the end, this is why we don't have as much support as we ought to. The OEMs are still dancing to the tune of Microsoft's whims, and thereby companies like AMD have to dance to it too, even if they don't fully want to. If they don't, they don't sell much; we're not their customers unless we're buying bulk quantities of GPUs directly (HP, Dell, Lenovo, etc. are their customers).

                              So you're going to have to convince the OEMs to work at getting things more straightened out.

                              As for not supporting Optimus, that was something of a poor decision on NVidia's part, even if I understand the motivations behind it. They're going with what the OEMs are asking for: the main reason we have drivers at all is the workstation market, and Optimus isn't likely to be used in that space. So, no driver support for it for us.

                              All this means is more room for AMD to get its act together and take that market segment from them (and there IS one; it's just not on your OEM's radar...). I WAS going to get a new laptop with NVidia because their stuff just largely works on Linux (AMD's proprietary drivers have in the past been iffy in various areas... and rumors of issues with suspend on the mobile parts have me thinking there's still a bit to be done...), but if they're going to play that game, I might have to reconsider that decision.
