The Ideal (Hypothetical) Gaming Processor


  • #31
    Originally posted by Qaridarium
    its not my problem if the people are dump and without knowledge.
    if you do have a real-time-ray-tracing engine then your interacting time with the system is always fast and always the same!
    Time to feed the troll.
    I've been following this thread and I'm not really following your logic (if it is logical).

    If you're going to render an object once, then a fixed amount of CPU time will be consumed doing the ray calculations for a single image to be loaded into the framebuffer.
    If you're then going to animate the object, each of those frames must be completed within a certain amount of time. The longer the render time for each frame, the lower the frame rate is going to be.

    All systems are not equal, so comparing what a 286 and an i7 can do is comparing two different things altogether. One can execute far more instructions per second than the other, and can therefore do more ray calculations in the same amount of time.

    So take, for example, the ray-traced video with the car that the camera can rotate around. Clearly the system there produces a low frame rate, or lag between frames, when rotating. This is because the ray calculations for each frame take longer than the framebuffer refresh interval. Now, they do some tricks to make that animation faster, but the image quality clearly drops. This 'murmuring', which I've yet to see you actually try to explain, is (if valid, I assume) just another technique to cut down on calculations in order to produce a final image faster, but at lower quality. At least the noisy TV-static image you posted is what I would consider garbage, if that is what this 'murmur technique' actually produces.

    I think you would do better for yourself if you actually took the time to explain your ideas, rather than posting shock statements.



    • #32
      Pickle: I'm already feedin' him.

      Qaridarium: "in both cases your engine is bad." Oh yeah, you're right: all non-real-time ray tracers are very, very bad. Time to show me your ray tracer, I want to learn from the best :3
      Last edited by Geri; 12 March 2012, 02:54 PM.



      • #33
        Originally posted by Qaridarium
        Originally posted by Geri View Post
        my ray tracer algorythm doesnt murmurs, can i get pregnant? :P
        sure its possible but then there are 2 possible causes: 1.)= the complexity is to low. 2.)= its not a real-time-ray-tracing engine.
        in both cases your engine is bad.
        I think you just failed a Turing test.

        Originally posted by Pickle View Post
        I've been following this thread and I'm not really following your logic (if it is logical).
        I think I understand what Q is getting at, and from a purely academic point of view his "logic" does have some merit. The raytracer he's proposing simply stops rendering a scene once a fixed amount of time has elapsed. The remaining pixels simply stay black. This way the raytracer is able to maintain interactive frame rates regardless of how powerful the underlying hardware is. Of course, in the worst case you'd simply get a blank image, but at least the engine is still running in "real-time".
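
        For illustration, a minimal sketch of such a time-budgeted renderer (all names here are hypothetical; trace_ray is a placeholder for whatever per-pixel work the engine actually does):

        Code:
        #include <algorithm>
        #include <chrono>
        #include <cstdint>
        #include <vector>

        using Clock = std::chrono::steady_clock;

        // Placeholder for the real per-pixel ray computation.
        uint32_t trace_ray(int x, int y) { return 0xFF000000u | uint32_t(x ^ y); }

        // Render until the per-frame time budget is spent; any pixels not
        // reached in time simply stay black.
        void render_frame(std::vector<uint32_t>& fb, int width, int height,
                          std::chrono::milliseconds budget) {
            const auto deadline = Clock::now() + budget;
            std::fill(fb.begin(), fb.end(), 0u);             // start black
            for (int y = 0; y < height; ++y) {
                if (Clock::now() >= deadline) return;        // budget spent
                for (int x = 0; x < width; ++x)
                    fb[y * width + x] = trace_ray(x, y);
            }
        }

        On slow hardware this "succeeds" by drawing less of the scene each frame, which is exactly the trade-off being argued about here.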



        • #34
          Originally posted by Qaridarium
          YES thank you you understand my Logic ! *Happy*
          OK, but your idea of frame rate is really flawed. I think your point was that you could maintain a frame rate across different CPUs if you lowered the quality (which is also wrong, because sooner or later the system itself won't have time to actually copy the framebuffer).
          You act as if frame rate doesn't matter, but with animation it's everything. Why else would you have had to come up with your murmur idea?

          Your idea of quality is also flawed: your murmur idea reduces the amount of data you have to create the final image from. The result has less detail and lower quality than if the engine were allowed to calculate all of its rays.



          • #35
            Originally posted by log0 View Post
            I am assuming shared/unified memory in my proposal.
            Ah, I appear to have misunderstood that. My bad. So... excellent idea!



            • #36
              Originally posted by log0 View Post
              If I think of a single simulation step:
              Prediction
              Broadphase
              Contact Generation
              Correction/Solver

              Let's say the intermediate results from the last step are available to the CPU to tinker with at the same time. There will be a lag of at least one frame, but for game events it should be negligible.
              You need more than that, though. For instance, quite a few non-trivial games need to have pre-contact callbacks in the contact generator in order to properly ignore contacts. The absolute simplest example is a 2.5D platformer (think Trine) where you can jump up through some objects but still land on them. This is generally implemented by an engine with contact caching (requiring random-access read-write storage in the physics engine, which is not GPU-friendly) and a pre-contact callback that flags the contact as ignored if the surface-player contact normal is not pointing up.

              More complex 3D physics uses those callbacks for more complex needs.

              Physics can be made GPU-friendly, but only in the non-general case. That is to say, certain features of fully-featured physics engines like Havok or PhysX or whatnot simply do not work well with GPUs, and only games that avoid those features can reasonably use GPU-based physics.
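
              As a concrete sketch of the pre-contact callback described above, here is roughly how the one-way platform case looks with Box2D's C++ contact listener (the fixture-identification logic is application-specific and only hinted at; treat this as an illustration, not any particular engine's actual code):

              Code:
              #include <box2d/box2d.h>

              // Pre-solve callback: runs after contact generation, before the
              // solver. Disabling the contact here makes the player pass through
              // the platform for this time step only.
              class OneWayPlatformListener : public b2ContactListener {
                  void PreSolve(b2Contact* contact, const b2Manifold* /*old*/) override {
                      // ... identify which fixture is the platform and which is
                      // the player via user data (application-specific) ...
                      b2WorldManifold wm;
                      contact->GetWorldManifold(&wm);
                      // The manifold normal points from fixture A to fixture B,
                      // so real code must flip the sign depending on ordering.
                      // If the normal is not pointing up, the player is hitting
                      // the platform from below or the side: ignore the contact.
                      if (wm.normal.y < 0.5f)
                          contact->SetEnabled(false);
                  }
              };

              The GPU-hostile part is precisely that this callback runs arbitrary game code in the middle of the physics step, against cached contact state.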

              As for the rest of this thread... why in the fuck are any of you still trying to converse with Qaridarium? There's an Ignore User feature on this forum. Use it.



              • #37
                No matter what techniques and algorithms you apply, your logic loop will take up a fixed amount of time. So you can't render an unlimited number of frames on a CPU that can only do a fixed number of calculations per second.
                Both the time between complete framebuffer updates and the quality of the image within the buffer scale with the speed of the hardware. I don't understand why you just don't say that.
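
                To put rough numbers on that scaling (all figures hypothetical, purely for illustration):

                Code:
                #include <cstdio>

                int main() {
                    const double rays_per_second = 40e6;  // assumed CPU ray throughput
                    const double pixels = 640.0 * 480.0;  // target resolution
                    double rays_per_pixel = 1.0;          // primary rays only
                    std::printf("ceiling: %.0f fps\n",
                                rays_per_second / (pixels * rays_per_pixel));  // ~130 fps
                    rays_per_pixel = 5.0;                 // shadows, reflections, AA
                    std::printf("ceiling: %.0f fps\n",
                                rays_per_second / (pixels * rays_per_pixel));  // ~26 fps
                    return 0;
                }

                Faster hardware raises the ceiling; more rays per pixel (higher quality) lowers it. That is the whole trade-off.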



                • #38
                  Originally posted by elanthis View Post
                  You need more than that, though. For instance, quite a few non-trivial games need to have pre-contact callbacks in the contact generator in order to properly ignore contacts. The absolute simplest example is a 2.5D platformer (think Trine) where you can jump up through some objects but still land on them. This is generally implemented by an engine with contact caching (requiring random-access read-write storage in the physics engine, which is not GPU-friendly) and a pre-contact callback that flags the contact as ignored if the surface-player contact normal is not pointing up.

                  More complex 3D physics uses those callbacks for more complex needs.

                  Physics can be made GPU-friendly, but only in the non-general case. That is to say, certain features of fully-featured physics engines like Havok or PhysX or whatnot simply do not work well with GPUs, and only games that avoid those features can reasonably use GPU-based physics.
                  The simple example could be dealt with on the GPU by passing additional state used to decide whether a contact is added or not. Of course there are limits to this method. I've got some experience with the Bullet physics lib. I've used callbacks, but more out of convenience, to avoid having to adapt the code to my needs, not because there was no other way to implement certain functionality. But that is my (limited) point of view, of course.
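
                  A sketch of what "passing additional state" might look like: precompute per-fixture flags that the contact-generation stage consults directly, instead of calling back into game code (everything here is hypothetical, written as a plain loop standing in for a GPU kernel):

                  Code:
                  #include <cstdint>
                  #include <vector>

                  struct Candidate { int a, b; float normal_y; };  // from narrowphase
                  constexpr uint32_t ONE_WAY = 1u << 0;            // hypothetical flag

                  // Keep a candidate contact unless one body is a one-way platform
                  // being hit from below or the side. No callback into game code.
                  std::vector<Candidate> filter_contacts(const std::vector<Candidate>& in,
                                                         const std::vector<uint32_t>& flags) {
                      std::vector<Candidate> out;
                      for (const Candidate& c : in) {
                          const bool one_way = (flags[c.a] | flags[c.b]) & ONE_WAY;
                          if (one_way && c.normal_y < 0.5f)
                              continue;  // drop the contact, as the callback would
                          out.push_back(c);
                      }
                      return out;
                  }

                  This covers fixed rules like the platform case, but not arbitrary game logic, which is exactly the limit conceded above.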



                  • #39
                    Originally posted by Qaridarium
                    because the realtime engine skip any ray and frame if the calculating time is over.
                    OK. WE UNDERSTAND.

                    Originally posted by mangobrain View Post
                    You could write a raytracer which worked hard to try and guarantee a particular frame rate, and stop casting any more rays when the time budget for the current frame is elapsed ...
                    Originally posted by Wildfire View Post
                    The raytracer he's proposing simply stops rendering a scene once a fixed amount of time has elapsed.
                    THERE. HAPPY? WE GET IT. But you know what? It's not magic. Of course you can render hundreds of frames per second if you only cast one ray, but if you only cast one ray, you're only going to get one colour value. You NEED TO CAST ENOUGH RAYS TO RENDER A RECOGNISABLE SCENE.

                    Also, I say again: IF YOU DO NOT RENDER AT HIGHER QUALITY THAN A POLYGON ENGINE, NOBODY WILL USE YOUR RAYTRACER. Simple. You need to cast a lot of rays to get a decent image, which requires a lot of computing power for high resolutions/complex scenes.

                    Have another look at your precious Intel video:

                    [embedded video]
                    Pause it at 0:08 and look at how blocky the car looks. It looks that blocky because the engine has, on that particular frame, cast far less than one primary ray per pixel, and has just filled in the rest of the screen by repeating the colour values of those rays. THIS IS ONLY ONE OBJECT - IMAGINE AN ENTIRE GAME WHICH LOOKS LIKE THAT WHEN IT'S MOVING. Also, this is on one of Intel's fantastic multi-core beasts; imagine how bad it would have to look on a typical dual-core home computer in order to run at a decent speed!
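
                    That fill-in trick is simple to sketch (hypothetical names; trace_ray is a placeholder for the real per-pixel work):

                    Code:
                    #include <cstdint>
                    #include <vector>

                    uint32_t trace_ray(int x, int y);  // placeholder, defined elsewhere

                    // Cast one ray per n-by-n block and repeat its colour across the
                    // block: 1/(n*n) of the ray work, and exactly this blockiness.
                    void render_subsampled(std::vector<uint32_t>& fb,
                                           int width, int height, int n) {
                        for (int y = 0; y < height; y += n)
                            for (int x = 0; x < width; x += n) {
                                const uint32_t c = trace_ray(x, y);
                                for (int by = y; by < y + n && by < height; ++by)
                                    for (int bx = x; bx < x + n && bx < width; ++bx)
                                        fb[by * width + bx] = c;
                            }
                    }

                    With n = 4 that's one sixteenth of the primary rays, which is how the engine keeps the frame rate up while the camera moves.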

                    Another video, raytracing with 80(!) threads on one machine:

                    [embedded video]
                    Start watching at 2:40. The scene looks fantastic, doesn't it? Then the camera moves, and THE QUALITY GOES TO SHIT.

                    Happy now? Go away.



                    • #40
                      Qaridarium:
                      How would this real-time ray-tracing of yours, with "infinite" FPS, handle a constantly changing scene in games or animations?
                      If you don't 'freeze' the scene for a fixed amount of time between rendered frames, the scene will change after the renderer has cast only a few rays, no matter how fast the CPU is.

                      To solve this issue, you can use 1 / (monitor refresh rate) (or some other predefined time) as the screen update interval, to get something that LOOKS like your real-time ray tracer with murmuring and constant draw times.
                      Or you could use (rays per second) / (rays needed per scene to look good enough) to get constant image quality (like in games today) with varying (and long) draw times.
                      And if the scene for the next image isn't ready before the current image is updated to the screen, you should stop the raytracer, because it would just waste CPU time.

                      The ray tracer could of course run all the time, just rendering the scene it has, and the engine could update the display at the refresh rate with whatever the raytracer has rendered so far. (Is this the real-time raytracer you keep talking about?)
                      But this would be stupid: the raytracer would waste resources rendering an obsolete frame after it has already been sent to the screen, and if rendering had only just started when the display updated, it would cause annoying flickering. And the frames per second is still defined by the engine, because the scene must stay unchanged while the raytracer is rendering it.
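
                      A sketch of that refresh-synced scheme, as described above (all names are hypothetical stand-ins for engine pieces):

                      Code:
                      #include <chrono>

                      using Clock = std::chrono::steady_clock;

                      struct Scene {};        // game state, frozen per frame
                      struct Framebuffer {};  // pixel storage

                      Scene snapshot_scene() { return {}; }                 // stand-in
                      void trace_some_rays(const Scene&, Framebuffer&) {}   // small batch of rays
                      void present(const Framebuffer&) {}                   // flip to display

                      void run(Framebuffer& fb, std::chrono::microseconds refresh) {
                          for (;;) {
                              const Scene frozen = snapshot_scene();  // scene must not change
                              const auto deadline = Clock::now() + refresh;  // e.g. 1/60 s
                              while (Clock::now() < deadline)
                                  trace_some_rays(frozen, fb);  // partial result accumulates
                              present(fb);  // show whatever exists; more work on this
                                            // snapshot would be wasted on an obsolete frame
                          }
                      }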

                      How would you solve this problem with your 'real-time ray-tracing' while keeping the frames per second unlimited?

