Early Mesa 9.2 Benchmarks With Nouveau


  • #11
    Has anyone heard of any movement towards Glamor with Nouveau?
    I can't think of any reason why this wouldn't have at least been attempted, but I've been unable to find any discussion about it (thanks, Google).



    • #12
      Originally posted by liam View Post
      Has anyone heard of any movement towards Glamor with Nouveau?
      I can't think of any reason why this wouldn't have at least been attempted, but I've been unable to find any discussion about it (thanks, Google).
      Maybe it's viable only after reclocking has been done?



      • #13
        Glamor

        Originally posted by liam View Post
        Has anyone heard of any movement towards Glamor with Nouveau?
        I can't think of any reason why this wouldn't have at least been attempted, but I've been unable to find any discussion about it (thanks, Google).
        And why exactly are we supposed to cripple our perfectly good (ok, maybe not perfectly) 2D driver? Going via OpenGL would add considerable overhead and deprive us of the opportunity to use the 2D engine where it's appropriate/helpful.
        Plus, it's extra work with no significant (if any) gain, and we don't exactly have a lot of extra time at our disposal.

        And we wouldn't want to have to finish GL support for a new chipset before anyone can use X. The 2D driver is much, much simpler and thus faster to write.



        • #14
          Chipsets

          Originally posted by Calinou View Post
          Reclocking is painfully hard to do; you would also have to do it for almost every card (or at least every GPU: there are four Kepler GPUs that I know of, for example: GK107, GK106, GK104, GK110), so we won't see it for a while, sadly. We can always hope, though.
          Luckily the memory type (xDDRy) doesn't change that often, and the interfaces to it tend to only change with each new card generation (Fermi, Kepler, ...).
          The trouble is, the register values that the blob does write *depend* on the specific card you have (which registers set which frequencies to which values, how to extract memory timing information from the VBIOS, where to put it, how to even determine which memory type you have, etc.). I haven't worked on it myself, but it looks like memory reclocking is the most difficult part to get right. You can't just copy and paste from the binary driver; that will, at best, work on the very card you extracted the values from.

          We also want the performance level to be selected dynamically based on load/temperature/power consumption; all of that is being worked on. And we can't turn it on for users before it really works, because there's always the danger of exposing your card to unhealthy levels of heat (or worse). But don't worry, I haven't heard of any dev's cards getting fried yet, even when experimenting with reclocking.
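
          A toy governor along these lines shows the shape of the idea; every name and threshold in it is invented for illustration, and it deliberately leaves out the hard part, which is programming the card-specific clock and memory-timing registers once a level has been picked.

          Code:
          /* Hypothetical dynamic-reclocking governor, for illustration only.
           * None of these names come from nouveau; a real implementation has
           * to program card-specific registers that this sketch glosses over. */
          #include <stdio.h>

          enum perf_level { PERF_LOW, PERF_MID, PERF_HIGH };

          struct gpu_status {
              int load_pct;   /* engine utilisation, 0-100 */
              int temp_c;     /* core temperature, degrees C */
          };

          /* Pick a level from load, but force the lowest level when the card
           * runs too hot, so it is never exposed to unhealthy temperatures. */
          static enum perf_level pick_perf_level(struct gpu_status st)
          {
              if (st.temp_c > 90)          /* assumed safety threshold */
                  return PERF_LOW;
              if (st.load_pct > 80)
                  return PERF_HIGH;
              if (st.load_pct > 30)
                  return PERF_MID;
              return PERF_LOW;
          }

          int main(void)
          {
              struct gpu_status busy = { .load_pct = 95, .temp_c = 70 };
              struct gpu_status hot  = { .load_pct = 95, .temp_c = 95 };

              printf("busy -> level %d\n", pick_perf_level(busy));  /* 2 = PERF_HIGH */
              printf("hot  -> level %d\n", pick_perf_level(hot));   /* 0 = PERF_LOW  */
              return 0;
          }

          The only point of the sketch is that clocking up has to be gated on temperature (and, in a real driver, power draw), not just on load.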
          Last edited by calim; 11 April 2013, 07:46 AM.



          • #15
            Originally posted by calim View Post
            And why exactly are we supposed to cripple our perfectly good (ok, maybe not perfectly) 2D driver? Going via OpenGL would add considerable overhead and deprive us of the opportunity to use the 2D engine where it's appropriate/helpful.
            Plus, it's extra work with no significant (if any) gain, and we don't exactly have a lot of extra time at our disposal.

            And we wouldn't want to have to finish GL support for a new chipset before anyone can use X. The 2D driver is much, much simpler and thus faster to write.
            I think the distinction here is the presence of a 2D engine. If the GPU has a 2D engine that can handle EXA-style drawing functions, then writing a traditional 2D driver first makes sense.

            If the GPU uses the 3D engine for 2D, then you need to write "most of a 3D HW driver" in order to run even basic 2D operations, and using something like Glamor or XA makes more sense.
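
            To make "EXA-style drawing functions" concrete, here is roughly what the solid-fill hooks look like when a dedicated 2D engine is available. The PrepareSolid/Solid/DoneSolid signatures are the EXA driver hooks from the X server's exa.h; the hw_2d_* helpers are hypothetical stand-ins for the card-specific bits, not real nouveau or radeon code.

            Code:
            /* Rough sketch of EXA solid-fill hooks backed by a dedicated 2D engine. */
            #include "exa.h"   /* EXA driver interface from the xorg-server SDK */

            /* Hypothetical hardware helpers a real driver would implement. */
            extern void hw_2d_set_target(PixmapPtr pix);
            extern void hw_2d_set_rop(int alu);
            extern void hw_2d_set_color(Pixel fg);
            extern void hw_2d_fill_rect(int x, int y, int w, int h);
            extern void hw_2d_flush(void);

            static Bool
            HWPrepareSolid(PixmapPtr pPixmap, int alu, Pixel planemask, Pixel fg)
            {
                /* A real driver would reject ALUs/planemasks the engine can't
                 * handle and fall back to software; keep the happy path here. */
                hw_2d_set_target(pPixmap);
                hw_2d_set_rop(alu);
                hw_2d_set_color(fg);
                return TRUE;
            }

            static void
            HWSolid(PixmapPtr pPixmap, int x1, int y1, int x2, int y2)
            {
                /* EXA hands us the rectangle corners; the engine wants x/y/w/h. */
                hw_2d_fill_rect(x1, y1, x2 - x1, y2 - y1);
            }

            static void
            HWDoneSolid(PixmapPtr pPixmap)
            {
                hw_2d_flush();   /* kick the queued commands to the card */
            }

            With Glamor the same fill would instead go through the GL stack, which is why it only pays off when there is no 2D engine to talk to in the first place.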
            Last edited by bridgman; 11 April 2013, 09:30 AM.



            • #16
              Originally posted by bridgman View Post
              I think the distinction here is the presence of a 2D engine. If the GPU has a 2D engine that can handle EXA-style drawing functions, then writing a traditional 2D driver first makes sense.

              If the GPU uses the 3D engine for 2D, then you need to write "most of a 3D HW driver" in order to run even basic 2D operations, and using something like Glamor or XA makes more sense.
              Does that imply that the AMD cards do not have a 2D engine suitable for running EXA-style drawing? Or do they not have a 2D engine at all?



              • #17
                Originally posted by Rexilion View Post
                Does that imply that the AMD cards do not have a 2D engine suitable for running EXA-style drawing? Or do they not have a 2D engine at all?
                No 2D engine at all. We had a 2D engine in 5xx and earlier, but it didn't do blends etc., so we used the 3D engine for EXA anyway.



                • #18
                  Originally posted by bridgman View Post
                  No 2D engine at all. We had a 2D engine in 5xx and earlier, but it didn't do blends etc., so we used the 3D engine for EXA anyway.
                  Neither does NV's 2D engine. It can do solids (with ROP) and blits. But still, setting up the 3D engine for a single, known operation is much easier than dealing with all of OpenGL. The most significant advantage is that you don't need a shader compiler. And there are little things too, like not needing vertex buffers, because the 3D engine has immediate mode (which is quite sufficient, or even preferable, for drawing a single quad).
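
                  As a rough illustration of why one known operation is so much simpler than all of OpenGL, the single quad can be a handful of words pushed at the 3D engine in immediate mode; the method numbers and push-buffer layout below are invented for the sketch and do not match nouveau's real class interfaces.

                  Code:
                  /* Hypothetical immediate-mode quad submission: no vertex buffer
                   * and no shader compiler, because the shaders and state were set
                   * up once, ahead of time, for this one fixed operation. */
                  #include <stdint.h>

                  struct push_buffer {
                      uint32_t buf[64];
                      int      cur;
                  };

                  static void push(struct push_buffer *p, uint32_t word)
                  {
                      p->buf[p->cur++] = word;   /* a real driver checks for space */
                  }

                  /* Emit one solid quad covering (x, y) to (x + w, y + h). */
                  static void draw_solid_quad(struct push_buffer *p, int x, int y, int w, int h)
                  {
                      push(p, 0x1001);   /* hypothetical VERTEX_BEGIN method */
                      push(p, 0x0007);   /* hypothetical QUADS primitive    */

                      /* Four vertices sent inline as packed 16-bit x/y pairs. */
                      push(p, ((uint32_t)(y)     << 16) | ((uint32_t)(x)     & 0xffff));
                      push(p, ((uint32_t)(y)     << 16) | ((uint32_t)(x + w) & 0xffff));
                      push(p, ((uint32_t)(y + h) << 16) | ((uint32_t)(x + w) & 0xffff));
                      push(p, ((uint32_t)(y + h) << 16) | ((uint32_t)(x)     & 0xffff));

                      push(p, 0x1002);   /* hypothetical VERTEX_END method */
                  }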



                  • #19
                    NV still has a 2D engine in Kepler?

                    Then why did Nvidia's (blob) 2D performance take a nosedive after 7xxx? I recall that at the time the official reason was that they no longer had a 2D engine and had to do 2D work on the 3D engine from 8xxx onwards, and that it took years to optimize it to the level of the 7xxx 2D engine.

                    Google also finds a lot of confirmation that Nvidia dropped the 2D engine starting with 8xxx?



                    • #20
                      2D Engine

                      Originally posted by curaga View Post
                      NV still has a 2D engine in Kepler?

                      Then why did Nvidia's (blob) 2D performance take a nosedive after 7xxx? I recall that at the time the official reason was that they no longer had a 2D engine and had to do 2D work on the 3D engine from 8xxx onwards, and that it took years to optimize it to the level of the 7xxx 2D engine.

                      Google also finds a lot of confirmation that Nvidia dropped the 2D engine starting with 8xxx?
                      Do you think we're making this up? - https://github.com/pathscale/envytoo...db/nv50_2d.xml (NV50 = G80; the naming always uses the chipset ID where the class interface first appeared)

                      It doesn't do all that much, and it likely uses mostly the same circuits as the 3D engine (different interface, separate state, and who knows what the internal details are like), but it's there.

