Mixed Feelings Over The PSCNV Nouveau Driver Fork

  • Mixed Feelings Over The PSCNV Nouveau Driver Fork

    Phoronix: Mixed Feelings Over The PSCNV Nouveau Driver Fork

  • #2
    Originally posted by phoronix
    From Martin's comment, PathScale is not only interested in GPGPU on NVIDIA cards, but also in increasing performance. Martin says their 2D Fermi support is up to 50% faster than NVIDIA's official proprietary driver.
    Well, I said that about my version of libdrm, which isn't working properly at the moment, and it was on my nv86. I have no nvcx to try it with.
    The other libdrm supporting pscnv is slower but works much better.

    • #3
      "The OpenGL4 comment was only a personal one. I would really love to see a community of developers make a todo list, plan and try to tackle this. I think within a year's time if it's broken down into small manageable pieces with good docs it can be done."
      Erm, yeah, sure... The problem is not the OpenGL extensions, since they can't be _that_ hard to implement. The problem is the extremely steep learning curve for driver development in general, and for TGSI. The different "IL" layers in NVIDIA and AMD don't make it much easier, either.

      Anyway, 2D performance that is 50% better than the proprietary NVIDIA driver? Compared to what, Windows or Linux? Even then, that must be a "special" way of doing things, so perhaps that "special" way could also be implemented in the r600c/g drivers? ^_^

      • #4
        @Michael

        • #5
          Originally posted by markg85
          Erm, yeah, sure... The problem is not the OpenGL extensions, since they can't be _that_ hard to implement. The problem is the extremely steep learning curve for driver development in general, and for TGSI. The different "IL" layers in NVIDIA and AMD don't make it much easier, either.

          Anyway, 2D performance that is 50% better than the proprietary NVIDIA driver? Compared to what, Windows or Linux? Even then, that must be a "special" way of doing things, so perhaps that "special" way could also be implemented in the r600c/g drivers? ^_^
          It was tested on Linux. The blob is terrible at 2D stuff. I guess the reason pscnv is so fast at 2D is that command submission is done from userspace rather than from kernel space, the way the blob and nouveau do it.
          Also, unlike nouveau and the blob, I was using a single big buffer to send the commands and treated it as a ring buffer. This proves useful when we are sending a shitload of commands: we wait less, so it is more efficient on both the CPU and the GPU (a rough sketch of the idea is at the end of this post).

          Anyway, don't trust this number too much; it comes from experimental code that may not work very well later on. I have stopped working on libdrm and moved on to PM (power management), as I was kind of stuck on some issues I couldn't solve (I'm a noob). Someone else (Christoph Bumiller) did a quick libdrm port, and we are using it to get accelerated X on nv50+.

          As for whether this is possible on Radeon cards, I asked Alex Deucher at XDS. I can't really remember his exact answer, but I seem to recall it was impractical due to hardware design.
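
          A rough sketch of the ring-buffer scheme described above, assuming nothing about the real pscnv/libdrm interfaces: commands go into one large, persistently mapped buffer treated as a ring, and userspace only stalls when the ring is full instead of calling into the kernel for every submission. All names here (ring_t, ring_emit, hw_read_pointer) are hypothetical.

          /* Hypothetical ring-buffer command submission, illustration only.
           * The buffer is mapped once and reused: the GPU consumes from its
           * GET pointer while the CPU writes new words at PUT.             */
          #include <stddef.h>
          #include <stdint.h>

          typedef struct {
              uint32_t *base;   /* persistently mapped command buffer       */
              size_t    size;   /* total size in 32-bit words               */
              size_t    put;    /* CPU write position                       */
          } ring_t;

          /* Hypothetical: where the GPU's fetch (GET) pointer currently is.
           * A real driver would read this from a register or a fence page. */
          extern size_t hw_read_pointer(const ring_t *r);

          static size_t ring_space(const ring_t *r)
          {
              size_t get = hw_read_pointer(r);
              /* Free words between PUT and GET, keeping one word unused so
               * "full" and "empty" stay distinguishable.                   */
              return (get + r->size - r->put - 1) % r->size;
          }

          /* Copy len command words into the ring, waiting only when full.  */
          static void ring_emit(ring_t *r, const uint32_t *cmds, size_t len)
          {
              while (ring_space(r) < len)
                  ;   /* busy-wait for the GPU to drain commands (sketch)   */

              for (size_t i = 0; i < len; i++) {
                  r->base[r->put] = cmds[i];
                  r->put = (r->put + 1) % r->size;
              }
              /* A real implementation would now bump the hardware PUT
               * register (a doorbell write) so the GPU sees the new work.  */
          }

          The win Martin describes comes from the wait happening only when the ring is genuinely full, rather than on every flush.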

          • #6
            Originally posted by M?P?F
            It was tested on Linux. The blob is terrible at 2D stuff. I guess the reason pscnv is so fast at 2D is that command submission is done from userspace rather than from kernel space, the way the blob and nouveau do it.
            Also, unlike nouveau and the blob, I was using a single big buffer to send the commands and treated it as a ring buffer. This proves useful when we are sending a shitload of commands: we wait less, so it is more efficient on both the CPU and the GPU.

            Anyway, don't trust this number too much; it comes from experimental code that may not work very well later on. I have stopped working on libdrm and moved on to PM (power management), as I was kind of stuck on some issues I couldn't solve (I'm a noob). Someone else (Christoph Bumiller) did a quick libdrm port, and we are using it to get accelerated X on nv50+.

            As for whether this is possible on Radeon cards, I asked Alex Deucher at XDS. I can't really remember his exact answer, but I seem to recall it was impractical due to hardware design.
            In fact, the blob does userspace command submission too.

            Also, it seems like the slow 2D may only be Qt-related.

            • #7
              Qt is well known for having bad XRender support.

              • #8
                Who knows.


                Maybe Intel was right to abandon TTM and go with their own GEM solution. Maybe TTM really does suck that much, or maybe it's just more practical to have a memory management solution for each hardware type.

                As long as these forks don't get all ideological and are willing to work together to find the best solution, I don't see anything wrong with it.

                May the best video memory management scheme win!
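
                For context, here is a hedged, generic illustration (not PSCNV, GEM, or TTM internals) of what any of these kernel memory managers ultimately exposes to userspace: an ioctl that hands back a buffer-object handle which can be mapped and filled. It uses the generic DRM "dumb buffer" ioctls via libdrm; the device path /dev/dri/card0 is an assumption.

                /* Sketch: allocate, map, and free a buffer object through the
                 * generic DRM dumb-buffer ioctls. Build against libdrm, e.g.
                 * gcc sketch.c $(pkg-config --cflags --libs libdrm)          */
                #include <fcntl.h>
                #include <stdio.h>
                #include <string.h>
                #include <unistd.h>
                #include <sys/mman.h>
                #include <xf86drm.h>
                #include <drm_mode.h>

                int main(void)
                {
                    int fd = open("/dev/dri/card0", O_RDWR);  /* assumed node */
                    if (fd < 0) {
                        perror("open /dev/dri/card0");
                        return 1;
                    }

                    /* Ask the kernel's memory manager for a 256x256, 32 bpp
                     * buffer object; it returns a handle, pitch and size.   */
                    struct drm_mode_create_dumb create = {
                        .width = 256, .height = 256, .bpp = 32,
                    };
                    if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create)) {
                        perror("DRM_IOCTL_MODE_CREATE_DUMB");
                        close(fd);
                        return 1;
                    }

                    /* Get an mmap offset for the handle and map the buffer. */
                    struct drm_mode_map_dumb map = { .handle = create.handle };
                    if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map) == 0) {
                        void *ptr = mmap(NULL, create.size,
                                         PROT_READ | PROT_WRITE, MAP_SHARED,
                                         fd, map.offset);
                        if (ptr != MAP_FAILED) {
                            memset(ptr, 0, create.size);  /* CPU-side clear  */
                            munmap(ptr, create.size);
                        }
                    }

                    /* Release the buffer object. */
                    struct drm_mode_destroy_dumb destroy = { .handle = create.handle };
                    drmIoctl(fd, DRM_IOCTL_MODE_DESTROY_DUMB, &destroy);
                    close(fd);
                    return 0;
                }

                Whichever manager wins underneath (TTM, GEM's own allocator, or something PSCNV-specific), this handle-plus-mmap contract is roughly what userspace drivers build on.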

                • #9
                  TTM was designed for graphics and we believe it's quite good at it, but PathScale wants a memory manager for compute. If their solution turns out to be good enough even for graphics, we might end up switching over to it for Radeons. Let's see.
