Open-Source ATI R600/700 Mesa 3D Performance


  • V!NCENT
    replied
    Oh and PS: the light intensity is also very important.



  • V!NCENT
    replied
    Originally posted by yotambien View Post
    And how about you take a general physics course instead of relying on random pseudoscience bullshit you found on YouTube? If you really can't stop yourself from falling for it, at least don't try to lecture others.
    Pseudoscience? I was looking for the explanation of time from that famous time video (you know... the one used to teach time at some universities for quantum physics) and came across something quite like it. Of course anything past 4D is pure speculation/theory/whatever, but it lets the viewer think about changes over time, and because motion blur is a problem of time (implement it in a game loop for (semi?)realtime rendering), of simulation, of (unknown?) probability and of movement through 3D space, I considered it extremely helpful...

    Once I got past viewing the fourth dimension, it was already too late... I had posted the second video as well, and too bad the edit window is limited to one minute here...

    Besides... Anything past what we already consider 'real' is seemingly unmeasurable. e=mc^2... heh... yeah... Even the LHC is one big pile of pseudoscience if you look at it that way. What's next? CERN's gravity detector... You have theory, seemingly proven theory and scientifically proven theory. The rest is all science guild 1 versus science guild 2. "They are wrong!" -"No, they are wrong!", "Your theory is pseudoscience!" -"No, you only hate my methods because they differ from the established ones!", and so forth. Bunch of crybabies that will never know for sure; they only insist that they do... History repeats itself, you know? Let me be clear, though, that I am not claiming the omniverse is correct. However, for realtime rendering, time is a single-dimensional problem, and let us leave it at that, pretty please...

    But let's get back to the subject before this ends in a giant flamewar...

    What motion blur are we talking about here? Motion blur as in blurred motion on film. We have HDR and bloom for games. Everything is a frame, so we are talking about the kind of motion blur you get when you photograph a scene that is moving at such speed relative to the camera that multiple images land on one frame. Tada. This is also done with screen captures, which come closest to exact representations of the Desired Effect (TM).

    Yes, when you put enough frames on there you get the smudgy effect thingy, true... But if you look at it, then truly correct motion blur is a mix of older images, and the older they are, the less strongly they appear on the 'photo' that the camera takes, like in real life.
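    The "older images fade out" idea can be sketched as a simple accumulation buffer. This is a hypothetical illustration (one grayscale value per pixel, names made up for the sketch), not any particular engine's implementation:

    ```python
    # Sketch of accumulation-style motion blur: each new frame is blended
    # over a decaying history, so older frames fade out gradually.

    def accumulate(history, frame, decay=0.7):
        """Blend the new frame into the history; higher decay = longer trails."""
        if history is None:
            return list(frame)
        return [decay * h + (1.0 - decay) * f for h, f in zip(history, frame)]

    # A bright pixel that suddenly goes dark leaves a fading trail.
    history = None
    for frame in ([1.0], [0.0], [0.0]):
        history = accumulate(history, frame)
    # history[0] is now 0.49: the old bright value still shows through faintly
    ```

    Real implementations do this on the GPU by blending the current render target with the previous one, but the weighting idea is the same.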



  • yotambien
    replied
    Originally posted by deanjo View Post
    Third alternative is to use an old LCD with a 16ms+ pixel response time. Motion blur without any additional performance hit.
    Ha, you joke, but I remember watching The Lord of the Rings II on a real crap TV with a real crap aerial causing a lot of artifacts and thinking, "uhm, this doesn't actually stink, you know?". Then I watched the third part on DVD on a good TV, only to realize how bad and unrealistic it looked.



  • legume
    replied
    Originally posted by perpetualrabbit View Post
    A question to Michael or whoever else cares to provide insight:

    As a non-gamer, I wonder why the framerate in games is so important. The human eye perceives pictures as fluid motion when shown more than about 16 to 24 frames per second.
    That's not true; film makers are trained to work around the deficiencies of 24fps motion judder, e.g. by not panning too fast and by defocusing the background during pans. Interlaced TV at 50 or 60 fields/sec will look far better for things like fast-paced sports.

    Modern TVs have chips that will interpolate between frames for 24p film so that the perceived motion quality is better.

    Why not use the CPU and GPU cycles for improving the picture quality instead of more frames per second
    Gamers already have to do this - typically there are many settings that improve quality at the expense of fps, and unless you have a very powerful machine or a very old game you have to compromise.

    Also, is there a way to fix the framerate in games to a certain amount (say 30Hz) and do benchmarking based on (lesser is better) CPU and GPU load instead?
    Many games already have a cvar that will cap fps. As has already been said, for gameplay the minimum fps is the important figure. High averages are nice because they mean that when there is lots of action on screen you are likely not to drop too low.
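    The capping mechanism itself is simple: after rendering each frame, sleep out the rest of that frame's time budget. A minimal sketch (the function names and the 30 Hz cap are just for illustration):

    ```python
    import time

    def run_capped(render, cap_fps=30, frames=3):
        """Call render(), then sleep out the rest of each frame's time budget."""
        budget = 1.0 / cap_fps
        durations = []
        for _ in range(frames):
            start = time.perf_counter()
            render()
            elapsed = time.perf_counter() - start
            if elapsed < budget:
                time.sleep(budget - elapsed)   # idle instead of racing ahead
            durations.append(time.perf_counter() - start)
        return durations

    # A trivially fast "renderer" still ends up at ~30 fps, not thousands.
    durations = run_capped(lambda: None, cap_fps=30)
    ```

    With a cap in place you could indeed benchmark CPU/GPU load at a fixed framerate instead of raw fps, as the quoted post suggests.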



  • deanjo
    replied
    Originally posted by droidhacker View Post
    Two alternatives to this are, of course, going for insanely high framerates (leaving any motion blur to the human retina), and even better, a real-motion display device that hasn't been invented yet. You know, the kind of thing where you take the sampling and calculus and throw it out the window in favor of an analog processing and display of real motion.
    Third alternative is to use an old LCD with a 16ms+ pixel response time. Motion blur without any additional performance hit.



  • droidhacker
    replied
    Originally posted by V!NCENT View Post
    You have such a bad understanding (if at all) of time that I'd like you to watch these Flash videos, for your own good (no point intended):
    http://www.youtube.com/watch?v=JkxieS-6WuA
    http://www.youtube.com/watch?v=ySBaY...eature=channel
    Try keeping things separated better. This is clearly not directed at me since it is not related to anything I've said.

    It can not. You still have to render two frames so you can overlap them, which is true motion blur
    Absolutely it can. You do NOT need two frames to overlap into motion blur. In fact, I would consider this a very amateurish approach to motion blur, which really shouldn't be based on rendered frames since it leaves out everything that is not exposed in those two frames and only accounts for the positions of objects in those two frames and not where the object was *between* the frames. What you need for *ultimate* motion blur, is *one* 3d model of the scene and all applicable motions.

    As an example, let's consider the erratic motion of a bright object over a dark background... especially at very low framerates... i.e. a common sparkler. In traditional photography, dark scenes tend to get longer exposures and thus greater motion blur (which happens to have a blurring effect very similar to the overexposure of your retina by bright objects in low-light conditions). The actual motion could involve multiple accelerations in 3-dimensional space between two frames, for example when making a circle with a common sparkler. The motion blur you are looking for would show the curve. The best you can get by generating the motion blur out of two instantaneous frames is a straight line.
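    The straight-line-versus-curve point can be checked with a toy calculation. The circular path here is just an illustrative stand-in for the sparkler's motion:

    ```python
    import math

    def sparkler(t):
        """True sub-frame motion: the sparkler tip tracing a unit circle."""
        return (math.cos(t), math.sin(t))

    # Two instantaneous frames, half a revolution apart.
    t0, t1 = 0.0, math.pi
    p0, p1 = sparkler(t0), sparkler(t1)

    # Blurring between the two frames can only interpolate a straight line,
    # whose midpoint is the circle's center...
    line_mid = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)

    # ...while sampling the actual motion path halfway keeps the curve:
    # the point sits on the circle, nowhere near the straight line.
    path_mid = sparkler((t0 + t1) / 2)
    ```

    `line_mid` lands at the origin while `path_mid` sits at the top of the circle, which is exactly the information two-frame blur throws away.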

    Of course, I'm not suggesting that there is any computationally sane way to process this. I would guess that it would be prohibitive for 3d games. The easiest approach, of course, is to use rendered frames at a rate *FAR FAR FAR* higher than the ultimate framerate and perform the hackish form of motion blur you suggest, but over MANY frames rather than just two....

    Two alternatives to this are, of course, going for insanely high framerates (leaving any motion blur to the human retina), and even better, a real-motion display device that hasn't been invented yet. You know, the kind of thing where you take the sampling and calculus and throw it out the window in favor of an analog processing and display of real motion.



  • yotambien
    replied
    Originally posted by V!NCENT View Post
    You have such a bad understanding (if at all) of time that I'd like you to watch these Flash videos, for your own good (no point intended):
    http://www.youtube.com/watch?v=JkxieS-6WuA
    http://www.youtube.com/watch?v=ySBaY...eature=channel
    And how about you take a general physics course instead of relying on random pseudoscience bullshit you found on YouTube? If you really can't stop yourself from falling for it, at least don't try to lecture others.



  • V!NCENT
    replied
    Originally posted by V!NCENT View Post
    It can not. You still have to render two frames so you can overlap them, which is true motion blur
    OMG I think I've got the answer!!!

    A three-dimensional frame buffer! Or triple buffer...

    Jesus, I can start my own patent troll office by now xD

    Realtime motion blur :P

    Is anyone interested in creating motion blur inverted backwards time travel? xD



  • V!NCENT
    replied
    I'm going to add some things to clarify...

    Originally posted by droidhacker View Post
    Random? No. You pick the most recent frame that has been completely rendered. How do you calculate the exact picture in the middle of an interval? You don't know what the conditions will be in the future, you can only render based on the conditions you know NOW.
    Games are rendered with a game loop. For example, you take the input, update where everything is, what has blown up, what is blowing up, etc. Then shortly afterwards you paint a picture of the scene and put it onto the screen. Then you process the input from where you stopped, look at how much time has passed since, and calculate where everything currently is; when that's done, you paint it to your screen. Then you repeat, over and over and over. What you want is blur in between two pictures. That is possible, but you will always lag one painted frame behind while calculating the frame in between, and so forth. That also adds lag.

    It can be done in a different way though: you calculate how fast the camera moves, say to the left, before you paint a new frame, and add some smudge to the painted frame afterwards. This is not true motion blur, BTW... It also doesn't look realistic. If you want to do it the Right Way (TM), you will always lag one frame of input and processed game state behind.
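    The loop described above (read input, advance the state by the elapsed time, paint) looks roughly like this; all the names here are hypothetical:

    ```python
    import time

    def game_loop(update, draw, frames=3):
        """Classic variable-timestep loop: simulate elapsed time, then paint."""
        state = 0.0
        last = time.perf_counter()
        for _ in range(frames):
            now = time.perf_counter()
            dt = now - last            # wall-clock time since the previous update
            last = now
            state = update(state, dt)  # where is everything *now*?
            draw(state)                # paint the scene and put it on screen
        return state

    # Toy simulation: a position advancing at 1 unit per second.
    final = game_loop(lambda s, dt: s + dt, lambda s: None)
    ```

    The blur-between-two-pictures approach would hold each painted frame back until the next one exists, which is exactly the extra frame of lag the post describes.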

    It's not so much a question of "picking some", it is a question of "picking *THE* (frame that happens to be finished processing at the moment)"
    Thanks for ranting for me. Playing a game is not the same as editing a movie, or you might be 5 seconds behind. You see an enemy and move to the left, only to figure out the enemy has already moved out of sight (to the right) and killed you, but by the time you actually see that on your screen you are already dead.

    You have such a bad understanding (if at all) of time that I'd like you to watch these Flash videos, for your own good (no point intended):
    http://www.youtube.com/watch?v=JkxieS-6WuA
    http://www.youtube.com/watch?v=ySBaY...eature=channel

    Now just to make things a little more fun, motion blur certainly *could* be done. Being a 3d image processor, you certainly do have access to instantaneous motion vectors. Combine that with knowledge regarding how often frames are ACTUALLY being displayed, and you can come up with the appropriate blur to add to each frame. As for how expensive this is in terms of processing, I'm sure that there are different approaches to this that are more or less expensive, but have no idea what the cost would actually look like (I doubt that it would be pretty).
    It can not. You still have to render two frames so you can overlap them, which is true motion blur



  • droidhacker
    replied
    Originally posted by perpetualrabbit View Post
    @1 Why pick "some" frame at random in a 1/30th second interval rather than calculate the exact picture in the middle of the interval?
    Random? No. You pick the most recent frame that has been completely rendered. How do you calculate the exact picture in the middle of an interval? You don't know what the conditions will be in the future, you can only render based on the conditions you know NOW.
    I don't see why calculating 500 pictures and then picking the best somewhere in the middle of the stack of 500 would be better?
    You are COMPLETELY missing the point. You are NOT calculating a set of 500 frames and picking one. You are generating frames as FAST AS YOU CAN and picking the MOST RECENT one at the moment when it is needed. The faster the frames are generated, the more accurately they reflect the current state of the inputs.
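    The "render as fast as you can, display the most recent finished frame" scheme is essentially a mailbox buffer. A minimal sketch (names made up; the lock only matters once the renderer and the display actually run concurrently):

    ```python
    import threading

    class LatestFrame:
        """Mailbox buffer: the renderer overwrites, the display reads the newest."""
        def __init__(self):
            self._lock = threading.Lock()
            self._frame = None

        def publish(self, frame):
            with self._lock:
                self._frame = frame   # older unread frames are simply discarded

        def latest(self):
            with self._lock:
                return self._frame

    box = LatestFrame()
    for n in range(500):          # renderer producing frames as fast as it can
        box.publish(n)
    shown = box.latest()          # display side: always the most recent frame
    ```

    Nothing waits on anything: the faster the renderer runs, the fresher the frame the display happens to grab.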

    @2 Ok, so you pick equally spaced pictures out of a large number. That still seems a waste, as you could also use the spare cycles to calculate a picture in the middle of the 1/30th second interval and apply a good blurring algorithm on it.
    It's not so much a question of "picking some", it is a question of "picking *THE* (frame that happens to be finished processing at the moment)"
    Using the previous, current and next picture, or maybe, if you calculate 60 frames per second, using some other intelligent interpolation (blurring) algorithm.
    You don't know what the previous picture was after it is discarded (unless you keep it -- expensive), and you DEFINITELY don't know what the NEXT frame will be unless you invent a time machine.
    I know that Pixar does something like this in their computer animated movies.
    THEY DON'T RENDER IN REAL TIME!!!!
    They render in WHATEVER TIME it happens to take, even if one frame takes an HOUR to render. It doesn't MATTER how long it takes for them to render, when it is DONE, it just goes on the stack, and only the stack is DISPLAYED in real time.

    In fact, they don't even do their rendering in one pass. They go over the data multiple times, applying successive transformations. And they don't just render into bitmaps. They keep MUCH more data, like motion vectors.
    Anyway blurring or interpolation would fall in the category of "using the spare cycles to improving picture quality".
    Finish reading before making statements like this. If there *is no* frame to blur or interpolate, how can you blur or interpolate it? If the frame *doesn't yet exist*, then you can't do any kind of transformations on it, which means that you need to WAIT for the frame to be generated (at which point it should be *immediately* displayed), or buffer it and perform transformations on it LATER -- but this introduces a LAG (the time it takes to actually perform the transformations), which is real bad when you are trying to make something that instantly responds to user input.

    And no, you can't generate your frames far in advance to give you enough time to transform it before displaying it because the frames are generated as a response to user input. That means that the overall state of things has to be taken *right now* and used to generate *this frame*, and the INSTANT that *this frame* is finished, it takes the current state and begins all over again.

    Now just to make things a little more fun, motion blur certainly *could* be done. Being a 3d image processor, you certainly do have access to instantaneous motion vectors. Combine that with knowledge regarding how often frames are ACTUALLY being displayed, and you can come up with the appropriate blur to add to each frame. As for how expensive this is in terms of processing, I'm sure that there are different approaches to this that are more or less expensive, but have no idea what the cost would actually look like (I doubt that it would be pretty).
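    A rough sketch of that idea in one dimension: smear each pixel along its motion vector, scaled by the frame time. This is an illustrative toy on a tiny list of grayscale values, not a real GPU post-process:

    ```python
    def velocity_blur(image, velocity, frame_time, samples=4):
        """Smear each pixel along its motion vector, scaled by the frame time."""
        width = len(image)
        out = [0.0] * width
        for x, v in enumerate(velocity):
            length = v * frame_time       # distance moved this frame, in pixels
            for i in range(samples):
                # Sample backwards along the motion path and average,
                # wrapping around at the image edges.
                src = int(x - length * i / samples) % width
                out[x] += image[src] / samples
        return out

    image = [0.0, 0.0, 1.0, 0.0, 0.0]     # one bright pixel
    velocity = [2.0] * 5                  # everything moving 2 px per unit time
    blurred = velocity_blur(image, velocity, frame_time=1.0)
    # the bright pixel is now smeared across its motion path
    ```

    The cost scales with the sample count per pixel, which matches the guess above that a careful version would not be cheap.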

