The Ideal (Hypothetical) Gaming Processor


  • Pyre Vulpimorph
    replied
    Sweet mother of John DeLancie... what the hell happened to my thread?!

    Oh, right, got hijacked by Q. Spirit of Chaos and Disharmony.



  • Petteri
    replied
    Sorry, but it doesn't work at all if the scene changes ALL THE TIME. The raytracer gets the scene and traces a few rays -> the scene changes, the tracer traces a few more rays -> the scene changes, and so on.
    The result is a huge amount of wasted compute power and a mess as output.
    Why not do it the traditional way?
    The raytracer gets the scene, traces lots of rays for x milliseconds, and after that the 'finished' frame is drawn to the screen and the loop starts again. If the rendering time is shorter than the display refresh interval, you can't notice any difference in smoothness.
    By defining frames per second you lose nothing, and you get better-quality output and smaller resource usage.

    It's just bad programming to run code without speed limits if it doesn't gain you anything. If the screen refresh rate is x Hz, it is just stupid and pointless to run the game engine more than x times per second.
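The "traditional way" described above can be sketched as a time-budgeted loop. This is only an illustration: `trace_one_ray` and the dict-based framebuffer are hypothetical stand-ins, not any particular engine's API.

```python
import time

def render_frame(scene, budget_s, trace_one_ray):
    """Trace rays into a fresh framebuffer for at most budget_s seconds,
    then return whatever was accumulated, ready to present to the screen.
    trace_one_ray is a hypothetical helper returning (pixel, colour)."""
    framebuffer = {}
    deadline = time.monotonic() + budget_s
    rays_cast = 0
    while time.monotonic() < deadline:
        pixel, colour = trace_one_ray(scene, rays_cast)
        framebuffer[pixel] = colour
        rays_cast += 1
    return framebuffer, rays_cast
```

If `budget_s` is chosen below the display refresh interval, the presented frame is always "finished" in the sense Petteri means: the scene stays frozen for the whole trace.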



  • AnonymousCoward
    replied
    Originally posted by Petteri View Post
    Qaridarium:
    How would your real-time ray-tracing with "infinite" FPS handle a constantly changing scene in games or animations?
    If you don't 'freeze' the scene for a fixed amount of time between every rendered frame, the scene will change after the renderer has cast only a few rays, no matter how fast the CPU is.
    Just keep the previous frame and overwrite only those pixels you have calculated. It will probably look like trying to decode a video with missing data; people should be used to that by now from digital TV.
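The keep-and-overwrite idea above, as a minimal sketch. The row-major framebuffer and the `(x, y) -> colour` sample map are assumptions made for illustration:

```python
def overwrite_computed_pixels(previous_frame, new_samples):
    """Keep last frame's pixels and overwrite only those the tracer
    actually managed to compute before the scene changed.
    previous_frame: row-major list of rows of colour values.
    new_samples: dict mapping (x, y) -> colour for traced pixels."""
    frame = [row[:] for row in previous_frame]  # copy; don't mutate input
    for (x, y), colour in new_samples.items():
        frame[y][x] = colour
    return frame
```

Pixels never revisited simply keep their stale value, which is exactly the "missing data" look being described.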



  • Wildfire
    replied
    Originally posted by Petteri View Post
    Qaridarium: How would your real-time ray-tracing with "infinite" FPS handle a constantly changing scene in games or animations?
    Originally posted by Qaridarium
    a black screen without any output is unlimited rendering of frames per definition ! and my definition of "Rendering" is 100% valid with a "Black-Screen" without any output. because the engine internal runs in "Real-time" in the definition of "Real-Time" there is no need to push any rays to the "screen"
    There, that should answer your question. He simply defines his raytracer as valid even if it doesn't display anything. Incidentally, I just finished implementing the world's smallest, fastest and most interactive raytracer ever

    On a more serious note, I'm with mangobrain. If you want an interactive raytracer that is actually usable, you'll have to wait a few more years. Intel's very simple demo is semi-interactive on a 64-core machine. Now take a look at what an "inferior" rasterizer can do right now:

    [embedded video]

    I've yet to see an interactive raytracing demo that looks even remotely comparable. Yes, raytracers can do better. Much better, even. Just not right now (if we're talking interactive).



  • Geri
    replied
    Originally posted by mangobrain View Post
    Did you try reversing the polarity of the neutron flow?



  • mangobrain
    replied
    Originally posted by Qaridarium
    you can fix the murmuring by scaling the resolution in a zone based interlacing algorithm!
    Did you try reversing the polarity of the neutron flow?



  • Petteri
    replied
    Qaridarium:
    How would your real-time ray-tracing with "infinite" FPS handle a constantly changing scene in games or animations?
    If you don't 'freeze' the scene for a fixed amount of time between every rendered frame, the scene will change after the renderer has cast only a few rays, no matter how fast the CPU is.

    To solve this issue you can use 1/(monitor refresh rate) (or another pre-defined time) as the screen update interval, to get something that LOOKS like your real-time ray-tracer, with murmuring and constant draw times.
    Or you could use (rays per second) / (rays needed per scene to be good enough) to get constant image quality (like in games today) with varying (and long) draw times.
    And if the scene for the next rendered image isn't ready before the image is updated to the screen, you should stop the raytracer, because it would just waste CPU time.

    The ray tracer could of course run all the time, just rendering the scene it has, while the engine updates the display at the refresh rate with whatever the raytracer has rendered. (Is this the real-time raytracer you keep talking about?)
    But this would be stupid, because the raytracer would waste resources (rendering an obsolete frame after it has already been shown on screen), and if rendering had only just started before the display was updated it would cause annoying flickering. And the frames per second are still defined in the engine, because the scene must stay unchanged while the raytracer is rendering it.

    How would you solve this problem with your 'real-time ray-tracing' while keeping frames per second unlimited?
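The "display updates at refresh rate with whatever the raytracer has rendered" variant can be simulated offline to show the wasted work. The list of frame completion times is a hypothetical stand-in for a real tracer:

```python
import bisect

def frames_presented(tracer_frame_times, refresh_interval, duration):
    """At each display refresh, return the index of the latest tracer
    frame completed by that moment (None if nothing is finished yet).
    tracer_frame_times: sorted completion times of finished frames."""
    shown = []
    t = refresh_interval
    while t <= duration:
        i = bisect.bisect_right(tracer_frame_times, t)
        shown.append(i - 1 if i > 0 else None)
        t += refresh_interval
    return shown
```

With completion times `[1, 2, 3, 15]` and a refresh every 10 time units, only frames 2 and 3 are ever shown; frames 0 and 1 are exactly the wasted work being described.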



  • mangobrain
    replied
    Originally posted by Qaridarium
    because the realtime engine skip any ray and frame if the calculating time is over.
    OK. WE UNDERSTAND.

    Originally posted by mangobrain View Post
    You could write a raytracer which worked hard to try and guarantee a particular frame rate, and stop casting any more rays when the time budget for the current frame has elapsed ...
    Originally posted by Wildfire View Post
    The raytracer he's proposing simply stops rendering a scene once a fixed amount of time has elapsed.
    THERE. HAPPY? We GET IT. But you know what? It's not magic. Of course you can render hundreds of frames per second if you only cast one ray, but if you only cast one ray, you're only going to get one colour value. You NEED TO CAST ENOUGH RAYS TO RENDER A RECOGNISABLE SCENE.

    Also, I say again: IF YOU DO NOT RENDER AT HIGHER QUALITY THAN A POLYGON ENGINE, NOBODY WILL USE YOUR RAYTRACER. Simple. You need to cast a lot of rays to get a decent image, which requires a lot of computing power for high resolutions/complex scenes.

    Have another look at your precious Intel video:

    [embedded video]

    Pause it at 0:08 and look at how blocky the car looks. It looks that blocky because the engine has, on that particular frame, cast far fewer than one primary ray per pixel, and has just filled in the rest of the screen by repeating the colour values of those rays. THIS IS ONLY ONE OBJECT - IMAGINE AN ENTIRE GAME THAT LOOKS LIKE THAT WHEN IT'S MOVING. Also, this is on one of Intel's fantastic multi-core beasts; imagine how bad it would have to look on a typical dual-core home computer in order to run at a decent speed!

    Another video, raytracing with 80(!) threads on one machine:

    [embedded video]

    Start watching at 2:40. The scene looks fantastic, doesn't it? Then the camera moves, and THE QUALITY GOES TO SHIT.

    Happy now? Go away.
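A quick back-of-envelope check of the "you need to cast a lot of rays" claim, assuming only a single primary ray per pixel and nothing else:

```python
def primary_rays_per_second(width, height, fps, rays_per_pixel=1):
    """Minimum primary-ray throughput needed to cover every pixel on
    every frame. Shadow rays, reflections and anti-aliasing all
    multiply this figure further."""
    return width * height * fps * rays_per_pixel

# 1080p at 60 fps: 1920 * 1080 * 60 = 124,416,000 primary rays
# per second, before a single secondary ray is cast.
```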



  • log0
    replied
    Originally posted by elanthis View Post
    You need more than that, though. For instance, quite a few non-trivial games need to have pre-contact callbacks in the contact generator in order to properly ignore contacts. The absolute simplest example is a 2.5D platformer (think Trine) where you can jump up through some objects but still land on them. This is generally implemented by an engine with contact caching (requiring random-access read-write storage in the physics engine, which is not GPU-friendly) and a pre-contact callback that flags the contact as ignored if the surface-player contact normal is not pointing up.

    More complex 3D physics uses those callbacks for more complex needs.

    Physics can be made GPU-friendly, but only in the non-general case. That is to say, certain features of fully-featured physics engines like Havok or PhysX or whatnot simply do not work well with GPUs, and only games that avoid those features can reasonably use GPU-based physics.
    The simple example could be dealt with on the GPU by passing additional state to decide whether a contact is added or not, though of course there are limits to this method. I've got some experience with the Bullet physics library. I've used callbacks, but more out of convenience, to avoid having to adapt the code for my needs, not because there was no other way to implement certain functionality. But that is my (limited) point of view, of course.
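A minimal sketch of the pre-contact callback elanthis describes, for the one-way platform case. The normalised 2D contact normal, the +y-is-up convention, and the function names are assumptions for illustration, not Bullet's actual callback API:

```python
def pre_contact_callback(contact_normal, up_threshold=0.7):
    """One-way platform filter: accept the contact only when the
    surface-to-player contact normal points (mostly) up, so the player
    lands on top of the platform but passes through it from below.
    contact_normal is a normalised (x, y) tuple; +y is up."""
    _, ny = contact_normal
    return ny > up_threshold

def resolve_contacts(contacts):
    """Keep only the contacts the callback accepts; the engine would
    then generate collision response for these alone."""
    return [c for c in contacts if pre_contact_callback(c)]
```

The GPU-unfriendliness elanthis mentions comes from the fact that this decision runs arbitrary user code per contact, often against cached contact state, rather than a fixed data-parallel kernel.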



  • Pickle
    replied
    No matter what techniques and algorithms you apply, your logic loop will take up a fixed amount of time. So you can't do unlimited rendering of frames on a CPU that is fixed in how many calculations per second it can do.
    Your time between complete framebuffer updates, and the quality of the image within the buffer, scale with the speed of the hardware. I don't understand why you just don't say that.
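Pickle's point reduces to simple arithmetic, here using rays as the (assumed) unit of work:

```python
def frame_time_seconds(rays_needed, rays_per_second):
    """With a fixed amount of work per complete framebuffer update,
    the time per frame is set entirely by hardware throughput;
    no scheduling trick makes it 'unlimited'."""
    return rays_needed / rays_per_second

# 2 million rays per frame on a CPU tracing 100 million rays/s
# takes 0.02 s per frame, i.e. at most 50 complete frames/s.
```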

