Radeon R600 Gallium3D MSAA Performance Update

  • Radeon R600 Gallium3D MSAA Performance Update

    Phoronix: Radeon R600 Gallium3D MSAA Performance Update

    It's been a while since last checking out the multi-sample anti-aliasing (MSAA) performance of the open-source Radeon Gallium3D driver. However, with the recent release of Mesa 9.2, here are new benchmarks of the MSAA Radeon Gallium3D performance from three different AMD graphics cards on Xubuntu Linux.

    http://www.phoronix.com/vr.php?view=19105

  • #2
    I've been playing Left 4 Dead 2 on my Radeon 4850 (Core i7 2.8 GHz, 12 GB RAM) this past week and the performance is waaaaay below what I get on Windows. But it's actually playable if I lower the resolution to 1680x1050 and set all the detail settings to Low.
    Could we possibly get some benchmarks on L4D2? Either on different AMD cards, a comparison with Windows, or Catalyst vs. the FOSS driver.

    • #3
      Originally posted by Azpegath View Post
      I've been playing Left 4 Dead 2 on my Radeon 4850 (Core i7 2.8 GHz, 12 GB RAM) this past week and the performance is waaaaay below what I get on Windows. But it's actually playable if I lower the resolution to 1680x1050 and set all the detail settings to Low.
      Could we possibly get some benchmarks on L4D2? Either on different AMD cards, a comparison with Windows, or Catalyst vs. the FOSS driver.
      IMHO the usefulness of the Phoronix benchmarks is very limited. Why no Unigine? Why no HL2: Lost Coast? Why not a single Steam game? Because it's not open source?

      • #4
        Originally posted by Kemosabe View Post
        IMHO the usefulness of the Phoronix benchmarks is very limited. Why no Unigine? Why no HL2: Lost Coast? Why not a single Steam game? Because it's not open source?
        I don't think that's the issue, since Michael has previously mentioned that Valve was going to add the ability to run pre-defined demos (scripted sequences) from the command line, which is what he needs to run the benchmarks automatically. Doom 3 is used every now and then, but that one is open source, so I guess it's not a good counterexample.

        • #5
          He does run Unigine when possible, but the Unigine benches are buggy (not GL spec compliant) and so don't run on Mesa without the app-specific workarounds.

          • #6
            Originally posted by curaga View Post
            so don't run on Mesa without the app-specific workarounds.
            Which are bundled with Mesa itself.
            ## VGA ##
            AMD: X1950XTX, HD3870, HD5870
            Intel: GMA45, HD3000 (Core i5 2500K)

            • #7
              Yeah, and they're not necessarily installed by distro packages.

              Hell, I install from source and I don't install them either. Screw workarounds, fix the culprits.

              • #8
                Originally posted by Kemosabe View Post
                Why no HL2: Lost Coast? Why not a single Steam game? Because it's not open source?

                No: http://www.phoronix.com/scan.php?pag...tem&px=MTQxMzY
                Michael Larabel
                http://www.michaellarabel.com/

                • #9
                  You should put a link to that in big bold letters with magenta highlighted text on the first and last page of every GPU benchmark article.

                  • #10
                    What is the technical reason behind 2x/4x and 6x/8x being almost identical in performance for a few scenarios?

                    Also, while the benchmarks show the performance hit, what do we gain visually? A screenshot of each MSAA level for a particular frame, turned into a GIF that cycles through the levels (0x, 2x, 4x, 6x, 8x, 0x, ...), would be very welcome.

                    • #11
                      Originally posted by FourDMusic View Post
                      What is the technical reason behind 2x/4x and 6x/8x being almost identical in performance for a few scenarios?
                      MSAA adds another processing step to the rendering pipeline: the resolve. This pass combines the subsamples of each pixel into a single value, which adds a roughly constant cost to every frame. Beyond that, GPUs use various schemes to compress the depth, stencil and color buffers, so that increasing the sample count does not add much memory bandwidth in typical cases (after all, only pixels on geometry edges can have differing subsamples, and GPUs take advantage of that). Therefore, performance is often quite similar across the different MSAA levels.
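
                      To illustrate the resolve, here is a minimal software-only sketch (an illustration only, not Mesa code; the buffer layout and function name are assumptions). It averages the subsamples of each pixel and touches every pixel exactly once regardless of scene content, which is why the resolve shows up as a roughly fixed per-frame cost:

                      Code:
                      /* Hypothetical software MSAA resolve (illustration only, not Mesa
                       * code): average `samples` RGBA8 subsamples per pixel into one
                       * output pixel.  Real GPUs do this in fixed-function hardware and
                       * compress the buffers so that identical subsamples (everything
                       * except geometry edges) cost little extra bandwidth. */
                      #include <stddef.h>
                      #include <stdint.h>

                      static void resolve_msaa_rgba8(const uint8_t *src, uint8_t *dst,
                                                     size_t width, size_t height,
                                                     unsigned samples)
                      {
                          for (size_t p = 0; p < width * height; p++) {
                              unsigned sum[4] = {0, 0, 0, 0};
                              for (unsigned s = 0; s < samples; s++)
                                  for (unsigned c = 0; c < 4; c++)
                                      sum[c] += src[(p * samples + s) * 4 + c];
                              for (unsigned c = 0; c < 4; c++)
                                  dst[p * 4 + c] = (uint8_t)(sum[c] / samples);
                          }
                      }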

                      • #12
                        Disable VSync in 13.04?

                        Hey guys,

                        How have you been disabling vsync in Ubuntu 13.04 with the R600g driver? I have an RV770 (Radeon HD 4870); I have set up a custom xorg.conf, disabled sync to vblank in Compiz, and disabled it in-game, but the frame rate will not exceed 60 FPS.

                        Any ideas?

                        • #13
                          Originally posted by linuxguy View Post
                          Hey guys,

                          How have you been disabling vsync in Ubuntu 13.04 with the R600g driver? I have an RV770 (Radeon HD 4870); I have set up a custom xorg.conf, disabled sync to vblank in Compiz, and disabled it in-game, but the frame rate will not exceed 60 FPS.

                          Any ideas?
                          If you have already disabled SwapbuffersWait in xorg.conf, you may also have to disable vblank with driconf, or manually in the ~/.drirc file with an entry like this:

                          Code:
                          <driconf>
                              <device>
                                  <application name="Default">
                                      ...
                                      <option name="vblank_mode" value="0" />
                                      ...
                                  </application>
                              </device>
                          </driconf>
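                          (As an aside, Mesa can also pick up the vblank_mode option from the environment, so starting the game with vblank_mode=0 set should have the same effect as the drirc entry; which method sticks can depend on the Mesa version and how the game is launched.)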

                          • #14
                            Originally posted by Azpegath View Post
                            I've been playing Left 4 Dead 2 on my Radeon 4850 (Core i7 2.8 GHz, 12 GB RAM) this past week and the performance is waaaaay below what I get on Windows. But it's actually playable if I lower the resolution to 1680x1050 and set all the detail settings to Low.
                            Could we possibly get some benchmarks on L4D2? Either on different AMD cards, a comparison with Windows, or Catalyst vs. the FOSS driver.
                            You're running a fast-paced game at over 1080p on an older (4850) card? What's your monitor's native resolution, that you have to "lower" it to 1680x1050?

                            • #15
                              Originally posted by Ericg View Post
                              You're running a fast-paced game at over 1080p on an older (4850) card? What's your monitor's native resolution, that you have to "lower" it to 1680x1050?
                              Well, maybe he is using the default Ubuntu 13.04 graphics stack or something like that, and of course that will be slow.
