The Ideal (Hypothetical) Gaming Processor


  • #41
    Originally posted by Qaridarium
    you can fix the murmuring by scaling the resolution with a zone-based interlacing algorithm!
    Did you try reversing the polarity of the neutron flow?



    • #42
      Originally posted by mangobrain View Post
      Did you try reversing the polarity of the neutron flow?



      • #43
        Originally posted by Petteri View Post
        Qaridarium: How would this real-time ray-tracing of yours with "infinite" FPS handle a constantly changing scene in games or animations?
        Originally posted by Qaridarium
        a black screen without any output is unlimited rendering of frames, per definition! And my definition of "rendering" is 100% valid with a "black screen" without any output, because the engine internally runs in "real time"; by that definition of "real-time" there is no need to push any rays to the "screen".
        There, that should answer your question. He simply defines his raytracer as valid even if it doesn't display anything. Incidentally, I just finished implementing the world's smallest, fastest and most interactive raytracer ever

        On a more serious note, I'm with mangobrain. If you want to have an interactive raytracer that is actually usable, you'll have to wait a few more years. Intel's very simple demo is semi-interactive on a 64-core machine. Now take a look at what an "inferior" rasterizer can do right now:



        I've yet to see an interactive raytracing demo that looks even remotely comparable. Yes, raytracers can do better. Much better even. Just not right now (if we're talking interactive).



        • #44
          Originally posted by Petteri View Post
          Qaridarium:
          How would this real-time ray-tracing of yours with "infinite" FPS handle a constantly changing scene in games or animations?
          If you don't 'freeze' the scene for a fixed amount of time between every rendered frame, the scene will change every time after the renderer has cast only a few rays, no matter how fast the CPU is.
          Just keep the previous frame and overwrite only those pixels you have calculated. It will probably look like trying to decode a video with missing data; people should be used to that by now from digital TV.
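
          A rough sketch of what I mean (trace_pixel is just a stand-in, not from any real renderer):

          #include <chrono>
          #include <cstdint>
          #include <vector>

          struct Pixel { std::uint8_t r, g, b; };

          // Stand-in for the real tracer: shades pixel (x, y) somehow.
          Pixel trace_pixel(int x, int y) { return { std::uint8_t(x & 255), std::uint8_t(y & 255), 128 }; }

          // Overwrite only the pixels we manage to trace before the deadline;
          // everything else keeps last frame's value in the persistent framebuffer.
          void refresh(std::vector<Pixel>& framebuffer, int width, int height,
                       std::chrono::milliseconds budget)
          {
              using clock = std::chrono::steady_clock;
              const auto deadline = clock::now() + budget;
              static int next = 0;                        // resume where we stopped last frame
              while (clock::now() < deadline) {
                  framebuffer[next] = trace_pixel(next % width, next / width);
                  next = (next + 1) % (width * height);
              }
              // The display then shows the buffer as-is; untouched pixels still hold
              // the previous frame, like a video decoded with missing data.
          }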



          • #45
            Sorry, but it doesn't work at all if the scene changes ALL THE TIME. The raytracer gets the scene and traces a few rays -> the scene changes, the tracer traces a few more rays -> the scene changes again, and so on.
            The result is a huge amount of wasted compute power and a mess as output.
            Why not do it the traditional way?
            The raytracer gets the scene, traces lots of rays for x milliseconds, and after that the 'finished' frame is drawn to the screen and the loop starts again. If the rendering time is shorter than the display refresh interval, you can't notice any difference in smoothness.
            By defining frames per second you lose nothing, and you get better-quality output and lower resource usage.

            It's just bad programming to run code without speed limits if it doesn't gain you anything. If the screen refresh rate is x Hz, it is just stupid and pointless to run the game engine more than x times per second.
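
            In pseudo-C++ the loop would look roughly like this (Scene, Image and the helper functions are placeholders, not any real engine's API):

            #include <chrono>
            #include <thread>

            struct Scene {};                                         // frozen copy of the world
            struct Image {};                                         // one finished frame

            Scene copy_current_scene()                               { return {}; }
            void  update_game_logic(std::chrono::milliseconds)       {}
            Image raytrace(const Scene&, std::chrono::milliseconds)  { return {}; }  // trace for the whole budget
            void  present(const Image&)                              {}

            void game_loop()
            {
                using namespace std::chrono;
                const auto refresh_interval = milliseconds(16);      // ~60 Hz screen
                for (;;) {
                    const auto frame_start = steady_clock::now();
                    Scene snapshot = copy_current_scene();           // scene is 'frozen' for this frame
                    update_game_logic(refresh_interval);             // game logic runs once per frame
                    Image frame = raytrace(snapshot, refresh_interval);
                    present(frame);                                  // one complete frame per refresh
                    // Running faster than the screen gains nothing, so sleep out the rest of the slot.
                    std::this_thread::sleep_until(frame_start + refresh_interval);
                }
            }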



            • #46
              Sweet mother of John DeLancie... what the hell happened to my thread?!

              Oh, right, got hijacked by Q. Spirit of Chaos and Disharmony.



              • #47
                Originally posted by Pyre Vulpimorph View Post
                Sweet mother of John DeLancie... what the hell happened to my thread?!

                Oh, right, got hijacked by Q. Spirit of Chaos and Disharmony.
                I have to admit that Q is very good. An amazing ability to steer a thread off topic at full power, and everyone follows.

                Originally posted by Qaridarium
                LOL, this is my point, not "their" point! For me the x Hz of the screen is the time deadline of the real-time raytracer!

                The real point is that the "others" want fewer FPS than the screen refresh rate, and I always push the screen refresh rate!
                Ah, I think I get the point now (again). The concept of frames per second still applies to raytracing; this real-time ray-tracing just means that it is kept at the display refresh rate at all times, and the raytracer keeps refining the frame for as long as it can before the next screen refresh.
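
                Something like this sketch, in other words (placeholder types and helpers, nothing real):

                #include <chrono>

                struct Scene {};                                    // frozen scene snapshot
                struct Image {};                                    // framebuffer being refined

                Image coarse_first_pass(const Scene&)       { return {}; }  // cheap first estimate
                void  shoot_more_rays(const Scene&, Image&) {}               // each call improves quality

                // Keep refining the same frame until the vsync deadline, then hand it over.
                Image render_until_vsync(const Scene& frozen,
                                         std::chrono::steady_clock::time_point vsync)
                {
                    Image frame = coarse_first_pass(frozen);   // there is always something to show
                    while (std::chrono::steady_clock::now() < vsync)
                        shoot_more_rays(frozen, frame);        // quality grows with the time left
                    return frame;                              // FPS stays pinned to the refresh rate
                }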



                • #48
                  Here, ACTUAL CONTENTS OF THIS THREAD:

                  Originally posted by Wildfire View Post
                  I think you're looking at this the wrong way around. You're looking at what current games are doing and you're then trying to design a CPU that is optimal for that. In terms of progress you would need to design a CPU that offers things that current CPUs are lacking (in terms of performance for gaming) so that future games can be optimized for that. Let the software adapt to your CPU not the CPU to your software.
                  Originally posted by AnonymousCoward View Post
                  If floating-point math is handled by your CPU, make sure it is deterministic. http://nicolas.brodu.numerimoire.net...lop/index.html describes the problems you can encounter on current PCs.
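
                  (A tiny illustration of why that matters, not taken from the linked article: merely re-associating the same sum changes the double result, which is exactly what differing compilers and FPU modes do behind your back.)

                  #include <cstdio>

                  int main()
                  {
                      double a = 0.1, b = 0.2, c = 0.3;
                      std::printf("%.17g\n", (a + b) + c);  // prints 0.60000000000000009
                      std::printf("%.17g\n", a + (b + c));  // prints 0.59999999999999998
                  }
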
                  Originally posted by log0 View Post
                  To get back to the topic: assume one would use OpenCL for physics and rendering. I think you could get away with 2-4 simple RISC cores (without FPU). The cores would be there to feed the GPU with data and take care of game logic, interrupts and other boring stuff. Make them as fast as you can afford. Make sure there are no bottlenecks or large latencies between CPU cores and GPU. Throw 8 GB of shared memory with enough bandwidth into the mix and you should be good to go.

                  And make sure to keep the production costs low and yields high. No experiments a la PS3.
                  Originally posted by mirv View Post
                  If physics takes place entirely on the GPU, then your bi-directional communication between CPU and GPU needs to be rather good. Physics will generally trigger game-logic events (depending on the game, of course), so while the GPU can handle physics calculations faster, it's the need for a feedback system that destroys it for anything more than eye candy on current architectures. I have been curious how well AMD's Fusion systems can be made to work with that, but I don't really have time to delve into it in more than a theoretical capacity. At least, I don't have the time yet.
                  Originally posted by log0 View Post
                  If I think of a single simulation step:
                  Prediction
                  Broadphase
                  Contact Generation
                  Correction/Solver

                  Let's say the intermediate results from the last step are available to the CPU to tinker with at the same time. There will be a lag of at least one frame, but for game events it should be negligible.
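
                  (A rough sketch of that step, with invented names standing in for the actual GPU kernels:)

                  struct Contacts {};                          // contact points / normals from the solver

                  // Placeholder for the GPU side; each call would really be a kernel launch.
                  struct PhysicsGpu {
                      void prediction()         {}             // integrate velocities
                      void broadphase()         {}             // find candidate pairs
                      void contact_generation() {}
                      void solve()              {}             // correction / solver
                      Contacts previous_step_results() { return {}; }  // already sitting in shared memory
                  };

                  struct GameLogic {
                      void handle_contacts(const Contacts&) {} // turn contacts into game events
                  };

                  // The GPU advances step N while game logic consumes step N-1,
                  // so events arrive with the one-frame lag described above.
                  void simulation_step(PhysicsGpu& gpu, GameLogic& logic)
                  {
                      gpu.prediction();
                      gpu.broadphase();
                      gpu.contact_generation();
                      gpu.solve();
                      logic.handle_contacts(gpu.previous_step_results());
                  }
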
                  Originally posted by mirv View Post
                  Reading back from the GPU is quite costly. You certainly want to avoid it as much as possible - unless you can share the memory with a zero-copy buffer (in theory). Sure, it's getting easier with current architectures and bus speeds for data readback, but I'm pretty sure it's still costly enough that you don't want to do it. This is why most games will only use particle effects, or physics-related calculations that are classified as "eye candy" and don't directly affect gameplay logic.
                  Also, graphics cards still need to do graphics.
                  I guess it depends on the game, how many physics calculations you need to affect game logic (those are generally very simplistic compared to, say, cloth simulation) and where your bottleneck will be (calculations vs. data transfer). It would be interesting to see just what kind of balance point can be found... maybe something like ants (for path-update AI code) combined with "dodge the particles". Sucks having a day job and not being able to explore such ideas properly.
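
                  (The zero-copy route in OpenCL would look roughly like this; error handling omitted, and whether the copy is really avoided depends on the hardware:)

                  #include <CL/cl.h>

                  // Ask the runtime for host-accessible memory; on a shared-memory
                  // (Fusion-style) part this is what makes zero-copy possible.
                  cl_mem make_shared_buffer(cl_context ctx, size_t bytes)
                  {
                      return clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                                            bytes, nullptr, nullptr);
                  }

                  // Map the buffer instead of enqueueing a read: the CPU can then peek at
                  // the physics results without an explicit GPU->CPU transfer (unmap when done).
                  void* map_results(cl_command_queue queue, cl_mem buffer, size_t bytes)
                  {
                      return clEnqueueMapBuffer(queue, buffer, CL_TRUE, CL_MAP_READ, 0, bytes,
                                                0, nullptr, nullptr, nullptr);
                  }
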
                  Originally posted by log0 View Post
                  I am assuming shared/unified memory in my proposal.
                  Originally posted by mirv View Post
                  Ah, I appear to have misunderstood that. My bad. So.....excellent idea!
                  Originally posted by elanthis View Post
                  You need more than that, though. For instance, quite a few non-trivial games need to have pre-contact callbacks in the contact generator in order to properly ignore contacts. The absolute simplest example is a 2.5D platformer (think Trine) where you can jump up through some objects but still land on them. This is generally implemented by an engine with contact caching (requiring random-access read-write storage in the physics engine, which is not GPU-friendly) and a pre-contact callback that flags the contact as ignored if the surface-player contact normal is not pointing up.

                  More complex 3D physics uses those callbacks for more complex needs.

                  Physics can be made GPU-friendly, but only in the non-general case. That is to say, certain features of fully-featured physics engines like Havok or PhysX or whatnot simply do not work well with GPUs, and only games that avoid those features can reasonably use GPU-based physics.

                  As for the rest of this thread... why in the fuck are any of you still trying to converse with Qaridarium? There's an Ignore User feature on this forum. Use it.
                  I'm new here... is there a "Block User From My @#$%! Thread" feature?

                  Originally posted by log0 View Post
                  The simple example could be dealt with on the GPU by passing additional state to decide whether a contact is added or not. Of course there are limits to this method. I've got some experience with the Bullet physics lib. I've used callbacks, but more out of convenience, to avoid having to adapt the code to my needs, and not because there was no other way to implement certain functionality. But that is my (limited) point of view, of course.
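
                  (To make the platformer example concrete, a minimal sketch with invented names rather than Bullet's actual callback API:)

                  struct Vec3 { float x, y, z; };

                  struct Contact {
                      Vec3 normal_on_platform;   // normal pointing from the platform toward the player
                      bool ignored = false;
                  };

                  // Called by the (hypothetical) physics engine before the contact is solved:
                  // keep the contact only when the player hits the platform from above, so you
                  // can jump up through it but still land on it.
                  void pre_contact_callback(Contact& contact)
                  {
                      if (contact.normal_on_platform.y <= 0.0f)
                          contact.ignored = true;   // solver skips it, player passes through
                  }
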
                  Apologies if I missed other relevant posts.

                  The point of this thread, really, was so I can learn more about how modern games work. Specifically, what the CPU is left doing while the GPU is busy rendering frames. So, let's shoot for the moon and say my client's system will include a Radeon HD 7870 (Pitcairn) GPU, and "normal" output resolution is going to be 1920x1080.

                  System memory will be 2-4 GiB of DDR3 CAS-6 @ 1600 MHz; framebuffer memory will be 2 GiB of GDDR5 @ 6000 MHz.

                  I decide to build an 8-core MIPS64 that's 4-lane superscalar, but with no SMT. It has dynamic out-of-order execution, speculative execution, register renaming, and a fairly long (for RISC) instruction pipeline with aggressive branch prediction. Each core has 512-bit deterministic floating-point units, 128 KiB of L1 cache, and 512 KiB of private L2 cache; 16 MiB of L3 cache is shared across the 8 cores (2 MiB per core).

                  The chip has a quad-channel memory controller, and talks directly to the GPU via a 32-bit HyperTransport 3.1 link.

                  --------

                  Again, while I don't presume to know everything there is to know about CPU design, the goal is to make the chip as small and cheap as possible without ever bottlenecking the GPU, while still providing advanced functionality (like accurate real-time physics). So, all that junk I just spit out might not be an "optimal" design.

                  Any thoughts?



                  • #49
                    To derail this thread further: did anyone ever try motion-compensated raytracing? I was amazed to realize they are using the last frame as an initial guess for the new rendering, so you get free motion blur. What I'd do is, from the known motion vectors, interpolate the next frame and use that as the initial guess for the raytracer. I think that would reduce the "murmuring" a lot without creating too much motion blur. Also, if the interpolation were fast enough, an intelligent raytracer could shoot more rays in those sections that are unknown from the interpolation: if a moving object reveals what lay behind it, shoot rays there, because the interpolated image can't know what should be there.
                    Kind of like what was done with SW Force Unleashed, but to increase picture quality rather than frame rate. (One could of course render at half the FPS and interpolate to double the FPS, but that's not the idea here.)
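
                    Roughly what I have in mind, as a sketch (all the names are invented):

                    #include <vector>

                    struct Image       { int w, h; std::vector<float> rgb; };   // 3 floats per pixel
                    struct MotionField { int w, h; std::vector<int> dx, dy; };  // per-pixel motion vectors

                    // Warp last frame along the motion vectors to guess the new frame, and
                    // collect the pixels the warp could not fill (revealed areas) so the
                    // raytracer can spend its first rays there.
                    void motion_compensated_guess(const Image& prev, const MotionField& mv,
                                                  Image& guess, std::vector<int>& unknown)
                    {
                        guess = prev;                                     // fall back to old pixel values
                        std::vector<bool> covered(prev.w * prev.h, false);
                        for (int y = 0; y < prev.h; ++y)
                            for (int x = 0; x < prev.w; ++x) {
                                const int i = y * prev.w + x;
                                const int nx = x + mv.dx[i], ny = y + mv.dy[i];
                                if (nx < 0 || ny < 0 || nx >= prev.w || ny >= prev.h) continue;
                                const int j = ny * prev.w + nx;
                                for (int c = 0; c < 3; ++c)
                                    guess.rgb[3 * j + c] = prev.rgb[3 * i + c];   // move pixel along its vector
                                covered[j] = true;
                            }
                        unknown.clear();
                        for (int i = 0; i < prev.w * prev.h; ++i)
                            if (!covered[i]) unknown.push_back(i);        // nothing moved here: shoot rays first
                    }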

                    Also, a question for Qaridarium: I think most of us understand what you're trying to do, but you're not a native speaker, are you? (Neither am I.) I think much of the misunderstanding is because people don't understand what you say or you don't understand what others say. Everyone knows how real-time raytracing works, but at least to my knowledge you use uncommon terms like "murmuring", and you are very much fixated on those terms. Are there any technical papers that use the same terms as you do?



                    • #50
                      Pyre Vulpimorph: like 2girls1cup. You can't stop... you just keep watchin' it.

                      Spirit of Chaos and Disharmony
                      There is no such thing. Only the murmuring rate of the topic has been increased.
                      Last edited by Geri; 13 March 2012, 05:53 PM.
