Early Mesa 9.2 Benchmarks With Nouveau


    Phoronix: Early Mesa 9.2 Benchmarks With Nouveau

    While the release of Mesa 9.2/10.0 is still a ways away, for those users of the Nouveau reverse-engineered open-source NVIDIA graphics driver, here are some early benchmarks for reference compared to the stable Mesa 9.0 and 9.1 series...

    http://www.phoronix.com/vr.php?view=MTM0NzY

  • #2
    This will get interesting once they get reclocking (in the kernel) going. Since the big rewrite, nothing much has changed except for a few patches which mostly seemed to fix bugs.


    • #3
      Originally posted by Rexilion View Post
      This will get interesting once they get reclocking (in the kernel) going. Since the big rewrite, nothing much has changed except for a few patches which mostly seemed to fix bugs.
      Reclocking is painfully hard to do; you would also have to do it for almost every card (or at least every GPU: there are 4 Kepler GPUs I know of, for example: GK107, GK106, GK104, GK110), so we won't see it for a while, sadly. We can always hope, though.


      • #4
        Originally posted by Rexilion View Post
        This will get interesting once they get reclocking (in the kernel) going. Since the big rewrite, nothing much has changed except for a few patches which mostly seemed to fix bugs.
        I think it was David Airlie who said they had spoken to the NVIDIA devs, and what they basically said was: we don't even know how it works. Supposedly, they get a sheet from the hardware devs that says "put these registers in this state" and it automagically reclocks to X frequency. (Sounds like a joke, I know, but he said it with a pretty straight face in the talk lol)
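The "sheet of register writes" approach described above can be sketched as a table-driven loop. All offsets, values, and level names below are hypothetical, purely to illustrate the idea; real values are chip-specific and not public:

```python
# Hypothetical "sheet" of register writes per performance level.
# Real offsets/values differ per chip (GK104 vs GK106, ...) and are undocumented.
PERF_LEVELS = {
    "boot":    [(0x137250, 0x0000011F), (0x10F300, 0x00000001)],
    "perf_hi": [(0x137250, 0x000003E8), (0x10F300, 0x00000003)],
}

def reclock(level, write_reg):
    """Apply the canned register writes for the requested level, in order."""
    for offset, value in PERF_LEVELS[level]:
        write_reg(offset, value)

# Record the writes instead of touching real MMIO:
writes = []
reclock("perf_hi", lambda off, val: writes.append((off, val)))
```

The hard part, as the thread notes, is not the loop but knowing which registers and values apply to which chip.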


        • #5
          Originally posted by Calinou View Post
          Reclocking is painfully hard to do; you would also have to do it for almost every card (or at least every GPU: there are 4 Kepler GPUs I know of, for example: GK107, GK106, GK104, GK110), so we won't see it for a while, sadly. We can always hope, though.
          Yeah, it's been a while since there has been some progress. That said, it has been pretty good so far; the quality of the 3D driver is pretty good. I got a really old (but nice) game to work under Wine with this driver, which is awesome. The game was Drakan: Order of the Flame.

          Originally posted by Ericg View Post
          I think it was David Airlie who said they had spoken to the NVIDIA devs, and what they basically said was: we don't even know how it works. Supposedly, they get a sheet from the hardware devs that says "put these registers in this state" and it automagically reclocks to X frequency. (Sounds like a joke, I know, but he said it with a pretty straight face in the talk lol)
          But wouldn't they 'see' these registers in 'this state' and then just copy-paste that to nouveau? Case closed? From what I can see in the IRC channels, they are trying to understand the entire mechanics 'behind' these registers, and only then implement it properly.

          Wish I could help though. All I'm doing now is prodding them with bug reports :P .


          • #6
            Originally posted by Rexilion View Post
            But wouldn't they 'see' these registers in 'this state' and then just copy-paste that to nouveau? Case closed? From what I can see in the IRC channels, they are trying to understand the entire mechanics 'behind' these registers, and only then implement it properly.

            Wish I could help though. All I'm doing now is prodding them with bug reports :P .
            You're right in that ideally it would just be a copy-paste from the blob. It would also be better if they DID understand the entire reclocking process, but that takes a lot longer. Better, but it takes more time. Honestly, it'll be interesting when the FOSS driver gets proper memory/fan frequency support... they've got a pretty good 3D pipeline going from Michael's tests over the years, it just needs to be running at higher-than-minimum frequencies.


            • #7
              Originally posted by Ericg View Post
              You're right in that ideally it would just be a copy-paste from the blob. It would also be better if they DID understand the entire reclocking process, but that takes a lot longer. Better, but it takes more time. Honestly, it'll be interesting when the FOSS driver gets proper memory/fan frequency support... they've got a pretty good 3D pipeline going from Michael's tests over the years, it just needs to be running at higher-than-minimum frequencies.
              "good 3D pipeline"

              Courtesy of NVIDIA GPU engineers. They design the GPU interfaces that drivers interact with to map pretty closely to OpenGL. That is why OpenGL support on NVIDIA is quite good.


              • #8
                Speaking of Nouveau, what is required to get a GTX 660 to run on it? Do I need a special Mesa version? Do I need some files from the blob? I'm currently trying to test as many configurations as possible, and the only driver I haven't tested so far is nouveau.


                • #9
                  Originally posted by GreatEmerald View Post
                  Speaking of Nouveau, what is required to get a GTX 660 to run on it? Do I need a special Mesa version? Do I need some files from the blob? I'm currently trying to test as many configurations as possible, and the only driver I haven't tested so far is nouveau.
                  From the codenames I think that's an NVE0. The driver provides firmware for that card. However, on the hardware status page, someone reported a shoddy experience with an EVGA chipset with exactly your chip.

                  This information is not exactly classified btw...


                  • #10
                    Originally posted by Rexilion View Post
                    From the codenames I think that's an NVE0. The driver provides firmware for that card. However, on the hardware status page, someone reported a shoddy experience with an EVGA chipset with exactly your chip.

                    This information is not exactly classified btw...
                    I know, but it's good to know - I've never used nouveau before, so I really don't know what the installation process is, and it doesn't just work out of the box in openSUSE 12.3. But apparently that's because it requires kernel 3.8, and the one that ships is 3.7. So that's good to know too. As for it being unstable in 3D, eh, I don't really mind, since I just need it for a quick look at any bugs a game may have when using nouveau, and to see whether those bugs are the same as with the other drivers (intel, radeon, fglrx, nvidia) or different in some way.
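The kernel-version requirement mentioned above is easy to check before trying nouveau. A minimal sketch, assuming the release string comes from `uname -r` (the helper name here is made up for illustration):

```python
import platform

def kernel_at_least(release, minimum):
    """True if a kernel release like '3.7.10-1.1-desktop' meets a (major, minor) floor."""
    major, minor = release.split("-")[0].split(".")[:2]
    return (int(major), int(minor)) >= tuple(minimum)

# openSUSE 12.3's stock kernel is too old for the GTX 660 on nouveau (needs >= 3.8):
print(kernel_at_least("3.7.10-1.1-desktop", (3, 8)))  # False
print(kernel_at_least(platform.release(), (3, 8)))    # whatever kernel you are running
```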


                    • #11
                      Has anyone heard of any movement towards glamor with nouveau?
                      I can't think of any reason why this wouldn't have at least been attempted, but I've been unable to find any discussion about it (thnx goog).


                      • #12
                        Originally posted by liam View Post
                        Has anyone heard of any movement towards glamor with nouveau?
                        I can't think of any reason why this wouldn't have at least been attempted, but I've been unable to find any discussion about it (thnx goog).
                        Maybe it's viable only after reclocking has been done?


                        • #13
                          Glamor

                          Originally posted by liam View Post
                          Has anyone heard of any movement towards glamor with nouveau?
                          I can't think of any reason why this wouldn't have at least been attempted, but I've been unable to find any discussion about it (thnx goog).
                          And why exactly are we supposed to cripple our perfectly good (ok, maybe not perfectly) 2D driver? Going via OpenGL would add considerable overhead and deprive us of the opportunity to use the 2D engine where it's appropriate / helpful.
                          Plus, it's extra work with no significant (if any) gain, and we don't exactly have a lot of extra time at our disposal.

                          And we wouldn't want to have to finish GL support for a new chipset before anyone can use X. The 2D driver is much much simpler and thus faster to write.


                          • #14
                            Chipsets

                            Originally posted by Calinou View Post
                             Reclocking is painfully hard to do; you would also have to do it for almost every card (or at least every GPU: there are 4 Kepler GPUs I know of, for example: GK107, GK106, GK104, GK110), so we won't see it for a while, sadly. We can always hope, though.
                            Luckily the memory type (xDDRy) doesn't change that often, and the interfaces to it tend to only change with each new card generation (Fermi, Kepler, ...).
                            The trouble is, the register values that the blob does write *depend* on the specific card you have (which registers set which frequencies to which values, how to extract memory timing information from the VBIOS, where to put it, how to even determine which memory type you have, etc.). I haven't worked on it myself but it looks like memory reclocking is the most difficult to get right. You can't just copy and paste from the binary driver. That will, at best, work on the very card you extracted the values from.

                             We also want the performance level to be selected dynamically based on load/temperature/power consumption; all of that is being worked on. And we can't turn it on for users before it really works, because there's always the danger of exposing your card to unhealthy levels of heat (or worse). But don't worry, I haven't heard of any dev's cards getting fried yet, even when experimenting with reclocking.
                            Last edited by calim; 04-11-2013, 07:46 AM.


                            • #15
                              Originally posted by calim View Post
                               And why exactly are we supposed to cripple our perfectly good (ok, maybe not perfectly) 2D driver? Going via OpenGL would add considerable overhead and deprive us of the opportunity to use the 2D engine where it's appropriate / helpful.
                               Plus, it's extra work with no significant (if any) gain, and we don't exactly have a lot of extra time at our disposal.

                              And we wouldn't want to have to finish GL support for a new chipset before anyone can use X. The 2D driver is much much simpler and thus faster to write.
                               I think the distinction here is the presence of a 2D engine. If the GPU has a 2D engine that can handle EXA-style drawing functions, then writing a traditional 2D driver first makes sense.

                              If the GPU uses the 3D engine for 2D, then you need to write "most of a 3D HW driver" in order to run even basic 2D operations, and using something like Glamor or XA makes more sense.
                              Last edited by bridgman; 04-11-2013, 09:30 AM.
