Open-Source ATI R600/700 Mesa 3D Performance

  • #41
    A question to Michael or whoever else cares to provide insight:

    As a non-gamer, I wonder why the framerate in games is so important. The human eye perceives pictures as fluid motion when shown more than about 16 to 24 frames per second. So half the refresh rate of a TFT screen (usually 60Hz), i.e. 30Hz, should be more than enough. Or at most 60Hz when doing stereo 3D, i.e. 30Hz for each eye.
    Why is it not always done that way?
    Why not use the CPU and GPU cycles for improving the picture quality instead of more frames per second?
    Also, is there a way to fix the framerate in games to a certain amount (say 30Hz) and do benchmarking based on (lower is better) CPU and GPU load instead?

    I'm really puzzled by this...

    • #42
      So actually I'm asking why the framerate in games (and GUIs like Compiz, and movies) is not simply locked to the vertical refresh of the screen? Even half of it would be enough, or the whole 60Hz if you need two independent viewpoints for 3D.

      • #43
        Actually you can lock the framerate with the sync-to-vblank option, supported either at the software or the driver level. While in theory 30 fps is enough, I've seen quite a few games that only become really playable at 60 fps or more. The rest is just for gamers' fun.
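
        For what it's worth, here is a minimal sketch of how an application can request sync-to-vblank itself through the GLX_SGI_swap_control extension (assuming the driver actually exposes it; with Mesa's DRI drivers the vblank_mode driconf option can force the same behaviour without touching the application):

```c
/* Minimal sketch: enable sync-to-vblank via GLX_SGI_swap_control.
 * Assumes a GLX context is already current on this connection. */
#include <GL/glx.h>
#include <string.h>
#include <stdio.h>

typedef int (*swap_interval_fn)(int interval);

void enable_vsync(Display *dpy, int screen)
{
    const char *exts = glXQueryExtensionsString(dpy, screen);

    if (exts && strstr(exts, "GLX_SGI_swap_control")) {
        swap_interval_fn swap_interval = (swap_interval_fn)
            glXGetProcAddress((const GLubyte *)"glXSwapIntervalSGI");
        if (swap_interval)
            swap_interval(1);   /* wait for one vblank per buffer swap */
    } else {
        fprintf(stderr, "GLX_SGI_swap_control not available\n");
    }
}
```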

        • #44
          Originally posted by Melcar View Post
          Here is an old radeon vs. fglrx test I did. All those games are rather playable. Would like to know what you did to get Tremulous running though; it's a slide show even under Mesa 7.8, and tends to crash.
          Mmm, I just ran the WoP benchmark on my rig, and I get relatively lower performance than you:


          In your test fglrx is 3.6 times faster, and in mine 4.8 times - but this is at different resolutions (yours is 1680x1050 and mine is 1920x1080). We also have different software stacks (you seem to be running the latest Mesa on a 2009 distro?).

          Anyway, thanks for sharing. Roughly speaking, open-source 3D performance seems to vary between a factor of 2 and a factor of 5 behind fglrx these days.

          • #45
            Originally posted by mendieta View Post
            Mmm, I just ran the WoP benchmark on my rig, and I get relatively lower performance than you:


            In your test fglrx is 3.6 times faster, and in mine 4.8 times - but this is at different resolutions (yours is 1680x1050 and mine is 1920x1080). We also have different software stacks (you seem to be running the latest Mesa on a 2009 distro?).

            Anyway, thanks for sharing. Roughly speaking, open-source 3D performance seems to vary between a factor of 2 and a factor of 5 behind fglrx these days.
            Interestingly enough, using the vanilla 2.6.33-rc6 kernel I get a 25% speedup (the fglrx/radeon ratio drops from 4.8 to 3.9).


            Even more interesting, there is a regression in rc7: no 3D acceleration at all!

            • #46
              I think there are two things that it comes down to:

              1) A game's framerate is VARIABLE. What you need is a WORST CASE framerate that is at LEAST some minimum.

              2) (more importantly) The point of sampling. If your game is running at 500 fps minimum, then it can sample frames at very precise timings, resulting in fluid motion. It obviously isn't going to display 500 fps, since that is beyond the capabilities of your display device, so it is going to pick SOME frames, spaced equally in time, and display those.

              If your game is running at 30 fps and your screen at 60 fps, then you could get, worst case, THREE identical frames back-to-back followed by the next frame being displayed only once (when it should be shown twice). Since things are variable, you expect to get two identical frames, followed by two identical frames, followed by two identical frames, and so on. If the game is a little bit late preparing the third frame, then it shows THREE identical frames, then the FOURTH frame. It is a little difficult to explain without a picture (see the toy example below), but this can lead to choppy-looking output.
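
              Here is a tiny worked example of that judder, as a toy model that ignores double buffering and simply assumes frame n finishes rendering at n * frame_cost: a game whose frames each take slightly longer than 1/30 s, scanned out on a 60Hz screen. Printing which finished frame each vblank picks up makes the uneven pacing visible (with these numbers the first frame stays on screen for three refreshes while the following ones get two):

```c
/* Toy model of a ~29 fps game on a 60 Hz screen: print which rendered
 * frame is the newest one ready at each vblank. Assumes frame n is
 * finished at time n * frame_cost (so frame 0 is ready at t = 0). */
#include <stdio.h>

int main(void)
{
    const double refresh    = 1.0 / 60.0;   /* screen refresh interval */
    const double frame_cost = 1.0 / 29.0;   /* just slower than 30 fps */

    for (int vblank = 0; vblank < 12; vblank++) {
        double scanout_time = vblank * refresh;
        int newest_ready = (int)(scanout_time / frame_cost);
        printf("vblank %2d shows frame %d\n", vblank, newest_ready);
    }
    return 0;
}
```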

              Next, also consider that just because your game is running at "30 fps", it really doesn't mean that it is *actually* running at 30 fps *at this instant*. What it does to calculate the framerate is count how many frames HAVE BEEN generated over some amount of time (which is obviously greater than zero) and divide by that time. So you're looking at the AVERAGE framerate within a particular window of time. If the window happens to be 10 seconds, then it *could* have rendered 500 fps for a few seconds, followed by 0 fps for a few seconds, and it still averages out to 30 fps. This is an extreme example, of course, but it illustrates one of the issues with framerate, since this example would obviously be really ugly to watch.
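
              To make that concrete, here is a rough sketch of that kind of frame counter (not taken from any particular game; frame_presented() is a made-up hook you would call once per displayed frame). It reports both the per-frame time, which is what actually determines smoothness, and the average fps over a 10-second window, into which a long stall simply disappears:

```c
/* Sketch of a frame counter: per-frame time vs. windowed average fps.
 * Call frame_presented() once for every frame that reaches the screen. */
#include <stdio.h>
#include <time.h>

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

void frame_presented(void)
{
    static double window_start = 0.0, last_frame = 0.0;
    static int frames = 0;
    double t = now_seconds();

    if (last_frame != 0.0)
        printf("frame time: %.1f ms\n", (t - last_frame) * 1000.0);
    last_frame = t;

    if (window_start == 0.0)
        window_start = t;
    frames++;
    if (t - window_start >= 10.0) {           /* 10-second averaging window */
        printf("average fps: %.1f\n", frames / (t - window_start));
        frames = 0;
        window_start = t;
    }
}
```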

              In contrast, for filming MOVIES, you basically have an INFINITE framerate (i.e. real world fluidity) that you sample from at a fixed rate. There is no question about when the next frame will happen to finish being rendered -- it is rendered and fully up to date at whatever moment you decide to sample it. And in computer generated videos, you can render at any rate you like, as long as the motion between frames makes sense for the time between them. In other words, there is no constraint that rendering has to be done in real time as it is in a game.

              • #47
                Originally posted by perpetualrabbit View Post
                As a non-gamer, I wonder why the framerate in games is so important.
                Some games clip the displayed framerate to a "sane" value, and/or sync the frames to the monitor's vblank.

                Other games do not use any framerate limit, or deactivate the limit for benchmarking your system. In this case, the framerate is only limited by your video driver/hardware. X11 might also limit the framerate; that's why glxgears is not considered a valid benchmarking tool. So more fps indicate better video drivers, as long as you are not limited by your windowing system.

                The human eye perceives pictures as fluid motion when shown more than about 16 to 24 frames per second. So half the refresh rate of a TFT screen (usually 60Hz), i.e. 30Hz, should be more than enough. Why is it not always done that way? Why not use the CPU and GPU cycles for improving the picture quality instead of more frames per second?
                Your eye does more than receive 24 frames per second. It "integrates" the light intensity for each frame. A video camera usually does the same, but if you render 24 sharp snapshots per second, a fast motion will not appear fluid. That's why 60 fps may appear more fluid even though 24 frames should be enough. You could call this an aliasing effect in time.

                You can counter this effect by simply displaying more frames and letting the eye integrate them again, or by composing multiple frames into a single, motion-blurred frame (sketched below). The first approach does not require any special features of your graphics hardware or additional work in the rendering code, only raw GPU power.
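
                The second approach is roughly what the old OpenGL accumulation buffer was meant for. A rough sketch, assuming a context created with an accumulation buffer and a hypothetical draw_scene(t) hook for the application's own rendering (a modern engine would do the same blend in shaders, but the idea is identical):

```c
/* Sketch: compose several sub-frames into one motion-blurred frame
 * using the legacy OpenGL accumulation buffer. */
#include <GL/gl.h>

/* Hypothetical hook: render the scene as it looks at time t. */
extern void draw_scene(double t);

void render_motion_blurred_frame(double frame_time, double shutter_interval)
{
    const int subframes = 8;                  /* sub-samples blended into one frame */

    glClear(GL_ACCUM_BUFFER_BIT);
    for (int i = 0; i < subframes; i++) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        draw_scene(frame_time + shutter_interval * i / subframes);
        glAccum(GL_ACCUM, 1.0f / subframes);  /* add a weighted copy of the color buffer */
    }
    glAccum(GL_RETURN, 1.0f);                 /* write the blended result back */
    /* swap buffers as usual afterwards */
}
```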

                Also, is there a way to fix the framerate in games to a certain amount (say 30Hz) and do benchmarking based on (lower is better) CPU and GPU load instead?
                Why? Measuring the fps for benchmarking is a simple and reliable approach. You just have to remember that your application should run GPU-limited. So if glxgears gives you 6000 fps, which is probably X11-limited, use a more demanding application that runs at, say, 20 fps and use that number as the benchmark.

                • #48
                  Originally posted by perpetualrabbit View Post
                  A question to Michael or whoever else cares to provide insight:

                  As a non-gamer, I wonder why the framerate in games is so important. The human eye perceives pictures as fluid motion when shown more than about 16 to 24 frames per second. So half the refresh rate of a TFT screen (usually 60Hz), i.e. 30Hz, should be more than enough. Or at most 60Hz when doing stereo 3D, i.e. 30Hz for each eye.
                  FYI: everything below 100Hz is perceivable by the human brain, which is why 100Hz CRT screens appear to be flicker-free and fluorescent (TL) tubes (50Hz each) are usually placed in pairs so you don't see the flicker.

                  And then there is the subconscious, which is not exactly unimportant when competing against other humans online :P And as has been said before: while the conscious part of your brain only gets to process about 30 images/sec, what it receives is the average of all the (more than 30) images that those 30 images/s are composed of.

                  Also, is there a way to fix the framerate in games to a certain amount (say 30Hz) and do benchmarking based on (lower is better) CPU and GPU load instead?
                  It is called frame limiting (often done with vertical sync -> 60 fps) and it is possible (rough sketch below). But yeah, that would be very interesting indeed. Don't know if that's so easily tested, BTW...
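
                  For what it's worth, a frame limiter that doesn't rely on vsync just sleeps out whatever is left of each frame slot. A rough sketch, assuming POSIX clock_nanosleep and with the actual rendering left as placeholder comments:

```c
/* Sketch of a software frame limiter: cap the loop at 30 iterations
 * per second by sleeping until an absolute per-frame deadline. */
#include <time.h>

void run_capped_at_30fps(void)
{
    const long frame_ns = 1000000000L / 30;   /* ~33.3 ms per frame */
    struct timespec next;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        /* render_frame(); swap_buffers();  -- placeholders for the real work */

        next.tv_nsec += frame_ns;             /* advance the absolute deadline */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```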

                  • #49
                    Originally posted by perpetualrabbit View Post
                    A question to Michael or whoever else cares to provide insight:

                    As a non-gamer, I wonder why the framerate in games is so important. The human eye perceives pictures as fluid motion when shown more than about 16 to 24 frames per second.
                    Sorry, that's bollocks. The human eye can detect flicker at frequencies above 100Hz (look at your CRT out of the corner of your eye, if you still have one). Pilots have been able to identify planes after seeing a picture for 1/220th of a second.

                    The only reason 24Hz is acceptable in film is that the filming process introduces motion blur into the picture, which fools the eye into a fluid perception of motion (when it is anything but fluid in reality). Remove the motion blur, as on a PC monitor, and you'll be able to tell the difference between 60, 85 and 120 fps even under best-case conditions (stable frame intervals that match the screen refresh rate 1:1).

                    Edit: http://www.100fps.com/how_many_frame...humans_see.htm

                    • #50
                      Originally posted by droidhacker View Post
                      I think there are two things that it comes down to:

                      1) A game's framerate is VARIABLE. What you need is a WORST CASE framerate that is at LEAST some minimum.

                      2) (more importantly) The point of sampling. If your game is running at 500 fps minimum, then it can sample frames at very precise timings, resulting in fluid motion. It obviously isn't going to display 500 fps, since that is beyond the capabilities of your display device, so it is going to pick SOME frames, spaced equally in time, and display those.

                      If your game is running at 30 fps and your screen at 60 fps, then you could get, worst case, THREE identical frames back-to-back followed by the next frame being displayed only once (when it should be shown twice). Since things are variable, you expect to get two identical frames, followed by two identical frames, followed by two identical frames, and so on. If the game is a little bit late preparing the third frame, then it shows THREE identical frames, then the FOURTH frame. It is a little difficult to explain without a picture, but this can lead to choppy-looking output.

                      Next, also consider that just because your game is running at "30 fps", it really doesn't mean that it is *actually* running at 30 fps *at this instant*. What it does to calculate the framerate is count how many frames HAVE BEEN generated over some amount of time (which is obviously greater than zero) and divide by that time. So you're looking at the AVERAGE framerate within a particular window of time. If the window happens to be 10 seconds, then it *could* have rendered 500 fps for a few seconds, followed by 0 fps for a few seconds, and it still averages out to 30 fps. This is an extreme example, of course, but it illustrates one of the issues with framerate, since this example would obviously be really ugly to watch.

                      In contrast, for filming MOVIES, you basically have an INFINITE framerate (i.e. real world fluidity) that you sample from at a fixed rate. There is no question about when the next frame will happen to finish being rendered -- it is rendered and fully up to date at whatever moment you decide to sample it. And in computer generated videos, you can render at any rate you like, as long as the motion between frames makes sense for the time between them. In other words, there is no constraint that rendering has to be done in real time as it is in a game.
                      @1 Why pick "some" frame at random in a 1/30th-second interval rather than calculate the exact picture in the middle of the interval? I don't see why calculating 500 pictures and then picking one somewhere in the middle of the stack of 500 would be better.

                      @2 OK, so you pick equally spaced pictures out of a large number. That still seems a waste, as you could also use the spare cycles to calculate a picture in the middle of the 1/30th-second interval and apply a good blurring algorithm to it, using the previous, current and next picture, or maybe, if you calculate 60 frames per second, some other intelligent interpolation (blurring) algorithm. I know that Pixar does something like this in their computer-animated movies.
                      Anyway, blurring or interpolation would fall in the category of "using the spare cycles to improve picture quality". Still, you would only display 30 frames per second, which seems to be enough even for Avatar (although that is 30 frames per eye).
