Some Good & Bad News For The Nouveau Driver


  • #21
    Well, even if the performance is still not great, it's nice to see Nouveau improving.

    The last time I tried it, though, purely for desktop computing rather than graphics, it was X server crashes (when enabling desktop effects or switching displays) that made me switch back to Nvidia. I have no idea how much this depends on the chip used, but in a laptop replacing the graphics chip is not an option, so stability across all chips still seems like the most important thing.



    • #22
      Originally posted by cb88 View Post
      Does it look like it to anyone else that the driver might not be scaling well with faster cards possibly still being CPU limited like the radeon driver?
      Yeah, the driver hits a CPU bottleneck quite quickly. There was a recent discussion on mesa-dev on ways to mitigate this issue. Also note that Nvidia's driver is multithreaded, while Nouveau isn't.

      That said, Nouveau surpassed my expectations in my own testing. Not only did it consume my GL 2.1 code correctly (once I wrote a CPU S3TC decoder), it also ran much better than expected: full framerate on a scene with several 2048x2048 textures and fairly complex shaders (9500GT) - not bad!
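
For anyone wondering what a CPU S3TC decoder involves: DXT1 packs each 4x4 texel block into 8 bytes (two RGB565 endpoint colors plus sixteen 2-bit indices), and decoding it on the CPU is straightforward. A minimal sketch, purely illustrative and not the poster's actual code:

```c
/* Decode one 4x4 DXT1 (S3TC) block into RGB888 on the CPU.
 * Each 8-byte block holds two RGB565 endpoints and 16 two-bit indices. */
#include <stdint.h>

static void rgb565_to_rgb888(uint16_t c, uint8_t out[3])
{
    out[0] = (uint8_t)(((c >> 11) & 0x1F) * 255 / 31); /* R, 5 bits */
    out[1] = (uint8_t)(((c >>  5) & 0x3F) * 255 / 63); /* G, 6 bits */
    out[2] = (uint8_t)(( c        & 0x1F) * 255 / 31); /* B, 5 bits */
}

void dxt1_decode_block(const uint8_t block[8], uint8_t rgb[16][3])
{
    uint16_t c0 = (uint16_t)(block[0] | (block[1] << 8));
    uint16_t c1 = (uint16_t)(block[2] | (block[3] << 8));
    uint32_t indices = (uint32_t)block[4] | ((uint32_t)block[5] << 8) |
                       ((uint32_t)block[6] << 16) | ((uint32_t)block[7] << 24);

    uint8_t palette[4][3];
    rgb565_to_rgb888(c0, palette[0]);
    rgb565_to_rgb888(c1, palette[1]);

    if (c0 > c1) {
        /* Four-color mode: 2/3 and 1/3 interpolants between the endpoints. */
        for (int i = 0; i < 3; i++) {
            palette[2][i] = (uint8_t)((2 * palette[0][i] + palette[1][i]) / 3);
            palette[3][i] = (uint8_t)((palette[0][i] + 2 * palette[1][i]) / 3);
        }
    } else {
        /* Three-color mode: midpoint, fourth entry is black (transparent in DXT1). */
        for (int i = 0; i < 3; i++) {
            palette[2][i] = (uint8_t)((palette[0][i] + palette[1][i]) / 2);
            palette[3][i] = 0;
        }
    }

    /* 2 bits per texel, row-major order across the 4x4 block. */
    for (int px = 0; px < 16; px++) {
        int sel = (indices >> (2 * px)) & 0x3;
        rgb[px][0] = palette[sel][0];
        rgb[px][1] = palette[sel][1];
        rgb[px][2] = palette[sel][2];
    }
}
```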



      • #23
        VDPAU support

        My biggest problem with Nouveau is the lack of hardware-accelerated video playback. While I greatly admire the Nouveau effort, and I really don't care about 3D acceleration, at the end of the day it seems that if you use either the ATI or the nVidia open source driver, you end up buying an expensive video card and not using any of its features (or getting dumbed-down/slowed-down versions of them). What's the point then?

        I have a 9400GT in my desktop and a first generation ION in my HTPC. I chose these because of the binary nVidia driver with the excellent VDPAU interface. Using open source drivers is unfortunately a no-go for me.
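
For reference, this is roughly what the VDPAU interface looks like from an application's side: the program creates a device on the X display and then asks the driver for a hardware decoder. A minimal sketch with error handling omitted; the chosen profile and sizes are just illustrative:

```c
/* Minimal VDPAU bring-up: create a device on the X display and request an
 * H.264 hardware decoder. Error handling omitted; in real code every
 * VdpStatus must be checked.  Build: cc vdpau_probe.c -lvdpau -lX11 */
#include <stdio.h>
#include <vdpau/vdpau.h>
#include <vdpau/vdpau_x11.h>
#include <X11/Xlib.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    VdpDevice dev;
    VdpGetProcAddress *get_proc;

    /* Bind a VDPAU device to the X display/screen. */
    vdp_device_create_x11(dpy, DefaultScreen(dpy), &dev, &get_proc);

    /* All other entry points are fetched through get_proc(). */
    VdpDecoderCreate *decoder_create;
    get_proc(dev, VDP_FUNC_ID_DECODER_CREATE, (void **)&decoder_create);

    /* Ask the driver for a hardware H.264 decoder for 1080p content. */
    VdpDecoder decoder;
    VdpStatus st = decoder_create(dev, VDP_DECODER_PROFILE_H264_HIGH,
                                  1920, 1080, 16 /* max reference frames */,
                                  &decoder);
    printf("decoder create: %s\n", st == VDP_STATUS_OK ? "ok" : "failed");
    return 0;
}
```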



        • #24
          Unfortunately, this will likely never be solved, because the people who pass laws relating to IP want to ensure that open source users cannot watch anything. It's a form of punishment. For Nouveau to support the decoding hardware, you would first need to topple the Hollywood studios and the MPAA lobby and overturn much of US patent and IP law.

          Some sort of shader-based acceleration will eventually arrive, which will likely land somewhere halfway: much better than CPU-only decode, but still using more energy than the dedicated decode block on the graphics card.
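
One piece such a shader path typically offloads is the YCbCr-to-RGB conversion done for every displayed pixel. A sketch of the standard BT.601 conversion, written in plain C here only to show the arithmetic that would run in a fragment shader (real shader-assisted decode would also cover steps like motion compensation):

```c
/* BT.601 YCbCr -> RGB for one pixel, the classic per-pixel step that
 * textured-video/shader paths move off the CPU. 8-bit video-range input
 * (Y: 16..235, Cb/Cr: 16..240), fixed-point integer coefficients. */
#include <stdint.h>

static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr, uint8_t rgb[3])
{
    int c = y  - 16;
    int d = cb - 128;
    int e = cr - 128;

    rgb[0] = clamp8((298 * c + 409 * e + 128) >> 8);            /* R */
    rgb[1] = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);  /* G */
    rgb[2] = clamp8((298 * c + 516 * d + 128) >> 8);            /* B */
}
```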



          • #25
            Those results exceed the expectations of all of us who know anything about the graphics driver situation in Linux. They also raise even greater expectations for the future.

            Now, given the hacker nature of the Linux community, in my opinion the free driver developers should put more effort into making efficient GPGPU computing available on Linux. Think of all the opportunities widespread OpenCL support could bring to the community.
            I would be happier with 50% OpenGL and 85% OpenCL performance than vice versa.
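
To make the GPGPU point concrete, this is roughly how an application probes for OpenCL: it enumerates platforms and asks each one for GPU devices, and on a stack without an OpenCL driver the GPU simply never shows up. A minimal sketch:

```c
/* Minimal OpenCL probe: list platforms and the GPU devices they expose.
 * Without an OpenCL-capable driver, no GPU device will be reported.
 * Build: cc cl_probe.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);
    if (num_platforms > 8)
        num_platforms = 8;

    for (cl_uint p = 0; p < num_platforms; p++) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("platform: %s\n", name);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;
        if (num_devices > 8)
            num_devices = 8;

        for (cl_uint d = 0; d < num_devices; d++) {
            char dev_name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(dev_name), dev_name, NULL);
            printf("  GPU device: %s\n", dev_name);
        }
    }
    return 0;
}
```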



            • #26
              Thunderbird: Indeed! So far, we've seen that the memory clock is often a big bottleneck.

              Michael: If you want to upclock the cards to their maximum frequency, please contact me and I'll tell you what to do.
              By the way, could you provide the clock speeds each card was running at compared to its maximum clocks? In-game performance is almost linearly bound to the memory speed.
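
A back-of-the-envelope calculation shows why the memory clock dominates: frame buffer and texture traffic scale with resolution and overdraw, while available bandwidth scales linearly with the memory clock. The numbers below are illustrative assumptions, not measurements from the article:

```c
/* Back-of-the-envelope: how the memory clock caps achievable framerate.
 * All figures below are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    /* Assumed card: 128-bit GDDR3 bus; three memory clocks to compare. */
    double bus_bits         = 128.0;
    double mem_clock_mhz[3] = { 400.0, 700.0, 1000.0 };

    /* Assumed per-frame traffic: 1920x1080, 4 bytes/pixel, ~4x overdraw
     * (color + depth reads/writes), doubled again for texture fetches. */
    double frame_bytes = 1920.0 * 1080.0 * 4.0 * 4.0 * 2.0;

    for (int i = 0; i < 3; i++) {
        /* Bandwidth = bus width in bytes * clock * 2 (DDR). */
        double bandwidth_gbs = bus_bits / 8.0 * mem_clock_mhz[i] * 2.0 * 1e6 / 1e9;
        double fps_ceiling   = bandwidth_gbs * 1e9 / frame_bytes;
        printf("mem clock %4.0f MHz -> %5.1f GB/s -> ~%5.1f fps ceiling\n",
               mem_clock_mhz[i], bandwidth_gbs, fps_ceiling);
    }
    return 0;
}
```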



              • #27
                Originally posted by phoronix
                The VDrift racing game continues to be peculiar. In our ATI comparison, the lower-end Radeon GPUs on the Gallium3D driver had outperformed the Catalyst driver, and that was similarly the case on the NVIDIA side. All four GeForce 8 graphics cards ran VDrift at 1920 x 1080 much faster with the Gallium3D driver than with the NVIDIA driver. The NVIDIA driver results led us to believe there is an issue within NVIDIA's driver or Mesa-specific optimizations within VDrift itself.
                It always amuses me that when you're clueless about what's going on, you don't investigate but just come up with some wild theory. I would have expected a little more clue from somebody who has been reporting on Linux graphics for so many years.

                My take on it is that VDrift tries to use GLSL shaders, and when they fail to compile (as is the case with Mesa) it falls back to a different rendering path, which is faster at the cost of lower graphics quality.
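
The fallback described here is easy to picture: the game compiles its GLSL shaders at startup and checks the compile status, dropping to a simpler path if the driver rejects them. A sketch of that check (not VDrift's actual code; GL 2.0 entry points may need an extension loader depending on the setup):

```c
/* Sketch of the kind of check a game can do before choosing between its
 * GLSL path and a simpler fallback path. Not VDrift's actual code. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <stdio.h>

/* Returns 1 if the fragment shader source compiles on the current driver. */
int glsl_path_usable(const char *fragment_src)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &fragment_src, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "GLSL unusable, falling back: %s\n", log);
    }
    glDeleteShader(shader);
    return ok == GL_TRUE;
}
```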



                • #28
                  Originally posted by monraaf View Post
                  It always amuses me that when you're clueless about what's going on, you don't investigate but just come up with some wild theory. I would have expected a little more clue from somebody who has been reporting on Linux graphics for so many years.

                  My take on it is that VDrift tries to use GLSL shaders, and when they fail to compile (as is the case with Mesa) it falls back to a different rendering path, which is faster at the cost of lower graphics quality.
                  Compared to other journalists, I think Michael is very good.
                  I understand he lacks the time for it, but a little OpenGL programming wouldn't hurt him.



                  • #29
                    Originally posted by r1348 View Post
                    How are the nouveau devs supposed to make a performant driver without any proper documentation?
                    They shouldn't. Why do the hard work for a company that clearly doesn't want you to do it?
                    They should work on the radeon drivers instead. They are clearly highly skilled, so they could probably speed up radeon development considerably once they got used to it.
                    Then, when a really good open source radeon driver exists, and Wayland is about to replace X in some distributions used for embedded devices etc., nvidia might consider releasing documentation, and THEN they could do their work there.



                    • #30
                      ChrisXY: I felt that way until I was given a laptop with an nVidia card. I tried nouveau and liked it to the point of contributing.

                      Actually, we are having some fun trying to understand the craziness of nVidia's hardware design. My heart is firmly on the AMD side, but there is a real challenge with nVidia too.

                      I guess if new developers came and actively contributed to radeon, more documentation could be released and more work could be done under NDA before the release of new GPUs.

                      This isn't something we can do with nVidia. We are trying to catch up and have good support for when the blob stops supporting your card.

                      As for the reasons nVidia would release documentation: I don't think Wayland is incompatible with the current blob architecture. Embedded devices? Well, having the blob or an open source driver isn't going to change anything user-wise, so...

