The Ideal (Hypothetical) Gaming Processor


  • The Ideal (Hypothetical) Gaming Processor

    Hi everyone. I've been wondering what the "best" types of gaming CPUs would be, and I would like to know what it takes to make an idealized gaming-oriented processor.

    Suppose I had a fabless semiconductor company, and I was contracted to design the CPU for a new game console. The GPU has already been determined to be something relatively powerful, like a die-shrunk Radeon HD 6870. The goal is to make the chip as small and cheap as possible while never bottlenecking the graphics processor.

    What sort of difference does a processor's ISA make? Suppose I had licenses for MIPS, ARM, SPARC, and other RISC-based architectures. Is the ISA really that important, or is it just the micro-architectural "plumbing" underneath?

    What types of instructions do modern video games make the most use of? Do games find integer performance most important, floating point performance, or both? If floating-point calculations can be offloaded to the GPU, can the CPU's floating-point units be excised to make the chip smaller and cheaper, or would that harm system performance? If FP power is still important, would adding complex 256- or even 512-bit units be beneficial to total system performance, or just a waste of space?

    How important is cache size? Intel's SNB i5, i7, and SNB-E i7 processors have 1.5, 2.0, and 2.5 MiB of L3 cache per core, but looking at benchmarks from Anandtech and other places, there doesn't seem to be too much difference. At least, not enough difference to justify the added space and expense. How much cache per core is "enough"?

    As for the core count itself, would it be best to make a quad-core chip and call it a day? I know most game engines today simply do not scale past four cores, and simultaneous multithreading is pretty much useless for games. But, since consoles are expected to last around 5 years, would making a 6- or 8-core CPU prove beneficial in the future, so long as the chip stayed within the client's budget?
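    On the core-scaling question, a back-of-the-envelope Amdahl's law estimate shows why engines see diminishing returns past four cores. (The 70% parallel fraction below is a made-up illustrative number, not measured game-engine data.)

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical engine where 70% of frame time parallelizes:
for cores in (1, 2, 4, 6, 8):
    print(cores, round(amdahl_speedup(0.70, cores), 2))
```

    With these assumed numbers, going from 4 to 8 cores buys only about a 22% speedup, which is why extra cores may only pay off if future engines raise the parallel fraction.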

    I know this is just a lot of speculation, but I'm just curious what makes games tick.

  • #2
    Originally posted by Qaridarium
    what kind of game and what technique do you prefer for the game?
    The idea is to get the best possible performance out of existing, non-halo hardware. Hardware rasterization is still the best way to get very good graphics @ 60 frames per second. Whether using Direct3D, OpenGL, or an API much closer to the silicon, rasterization is still the way to go.

    Originally posted by Qaridarium
    if you use raytracing over openCL and bulledphysik over openCL you are fine with many tiny cores.
    While I've been very impressed with GPU-accelerated ray tracing and path tracing so far, these rendering algorithms are still much too slow to be useful in a gaming system without large sacrifices in image quality or a prohibitively expensive system. I'm rather skeptical of regular CPU-based tracing ever being viable for real-time rendering; a CPU with dozens or hundreds of cores would have to be humongous.

    Originally posted by Qaridarium
    maybe 2000 pieces of 64bit mibs cores big cache is only needed if your ram and ram interface is slow this means you need 512bit ram interface or bigger and XDR2 ram oder gddr5 ram!
    Um... what? The CPU doesn't need exotic memory or memory controllers. Sandy Bridge-E has a quad-channel (256-bit) memory controller, and it offers no tangible benefit to gaming compared to a dual-channel controller. CAS-7 DDR3 memory @ 1600 MHz is more than adequate. GDDR5 and XDR2 are optimized for GPU usage, and there's no telling how expensive the latter is.
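    For a sense of scale, peak theoretical bandwidth is just channels × bus width × transfer rate. A quick sketch comparing the dual- and quad-channel DDR3-1600 configurations mentioned above:

```python
def peak_bandwidth_gb_s(channels, bus_bits, transfers_per_s):
    """Theoretical peak: channels * (bus width in bytes) * transfers per second."""
    return channels * (bus_bits // 8) * transfers_per_s / 1e9

# DDR3-1600: 1600 MT/s on a 64-bit channel
dual = peak_bandwidth_gb_s(2, 64, 1600e6)   # typical desktop: 25.6 GB/s
quad = peak_bandwidth_gb_s(4, 64, 1600e6)   # Sandy Bridge-E: 51.2 GB/s
print(dual, quad)
```

    Doubling the channels doubles the theoretical peak, but as the benchmarks suggest, games rarely come close to saturating even the dual-channel figure.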

    Originally posted by Qaridarium
    in other words: the amd hd7970 is a good gaming cpu if you use raytracing and openCL.
    Um... what? The Radeon HD 7970 is a graphics processing unit, not a central processing unit. GPUs are incapable of commanding the entire system. They're designed for high-speed rasterizing and math-heavy computing, not general-purpose processing.

    Originally posted by Qaridarium
    anyway why people still buy normal CPUs? only because the games are using obsolete technique and they are bad designed.
    Um... what? Modern games don't need super-expensive CPUs anymore. Nearly all modern games are GPU-bound when played at 1920x1080 and above. A Core i5-2500K costs $200-$225, and it's the most expensive CPU you'd ever need for gaming for the next several years.

    Q, what on earth do you mean when you claim that games are "using obsolete technique" and are "bad designed"? Do you have some divine insight as to how a video game should be programmed that other developers don't? Because developers code their games to work most efficiently on the hardware they're expected to run on.

    Q, I think you missed the entire point of this exercise. The graphics processor (GPU) is already assumed to be of modern architecture and fairly high performance, and everything you've mentioned has to do with GPU features. What I'm concerned with is what the central processor (CPU) is left doing while the GPU is busy pushing polygons into pixels, so the CPU design can be focused around just those functions. Again, the goal is to make the CPU chip as small and cheap as possible without ever bottlenecking the GPU -- the two need to work in synergy.



    • #3
      and hey this year 2012 comes a AAA ray tracing tittle!
      What is its name??
      Seriously, spill it!



      • #4
        my opinion on this (read from: GENERAL THINKING FROM INDUSTRY)
        http://phoronix.com/forums/showthrea...562#post253562



        • #5
          Yes, they're usable for that, of course. There is, for example, the Arauna ray tracing / Brigade path tracing engine by jbikker, which can run on the newest GPUs at around 20 fps in real time. It's probably the fastest ray tracing technology at the moment, but it's still very fugly in some situations -- the GPU under it still isn't a ray tracer, just a shoddy rasterizer. If you move the camera, you can see it. If NVIDIA created a new GPU from scratch with a native ray tracing pipeline, those scenes from jbikker's site would easily run above 400 fps.



          • #6
            Originally posted by Qaridarium
            also you are wrong at the FPS because ray tracing engines never make FPS!
            What?
            FPS means frames per second.



            • #7
              Originally posted by Qaridarium
              also you are wrong at the FPS because ray tracing engines never make FPS!

              they always pushes rays per "minute"
              That has nothing to do with it. With current GPUs you have stuff like "polygons per second", not FPS.

              FPS is how many times per second you can update the whole scene, regardless of whether the underlying metric is "polygons per second" or "rays per minute".



              • #8
                Originally posted by Pyre Vulpimorph View Post
                As for the core count itself, would it be best to make a quad-core chip and call it a day? I know most game engines today simply do not scale past four cores, and simultaneous multithreading is pretty much useless for games. But, since consoles are expected to last around 5 years, would making a 6- or 8-core CPU prove beneficial in the future, so long as the chip stayed within the client's budget? I know this is just a lot of speculation, but I'm just curious what makes games tick.
                I think you're looking at this the wrong way around. You're looking at what current games are doing and then trying to design a CPU that is optimal for that. In terms of progress, you would need to design a CPU that offers things current CPUs lack (in terms of gaming performance) so that future games can be optimized for it. Let the software adapt to your CPU, not the CPU to your software.

                Originally posted by Qaridarium
                ray tracing dosn't work in FRAMES! ray tracing work in ray per minutes! a Real time Ray tracing engine dosn'T have FRAMES per SECOND! because of this all of your writing is wrong!
                Sure it does. Your rays per minute translate back into frames per second. If your average scene requires 1,000,000 rays and your raytracer can do 10,000,000 rays per second (or 600,000,000 per minute) then it's going to require 0.1 seconds for a single image (aka frame). Which means it can render 10 images per second aka 10 fps.
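                The conversion in the post above can be written out directly (the ray counts are the poster's own illustrative numbers, not measurements):

```python
def fps_from_rays(rays_per_second, rays_per_frame):
    """Frames per second = ray throughput / rays needed for one frame."""
    return rays_per_second / rays_per_frame

# 10,000,000 rays/s throughput, 1,000,000 rays per scene -> 10 fps
print(fps_from_rays(10_000_000, 1_000_000))  # 10.0
```

                Rays per minute is just the same throughput in different units; dividing by the per-frame ray budget always lands you back at frames per second.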



                • #9
                  Qaridarium: every image-processing mechanism has a frames-per-second value; ray tracing does too.

                  this is just wrong! because realtime Raytracing engine do NEVER handle any pixel in a frame!
                  oh jesus, stop posting nonsense



                  • #10
                    Sorry, Q, but I've *written* a raytracer and you are very confused. It is true that you can dynamically alter the performance of a raytracer by changing the number of rays cast, but that does not magically mean raytracers are somehow completely divorced from the concept of FPS. It's also worth noting that casting fewer than one primary ray per pixel* will have a negative impact on quality: what you see in the Intel video is that when the camera moves, the engine casts fewer rays to keep the framerate at a level suitable for interacting with the scene; then, when the camera stops, the engine ramps the ray count back up again, because there's no need for a high framerate when nothing is moving.

                    It's a clever trick, but there's no magic which means you can scale from 1 FPS to 1000 FPS with no impact on quality.

                    * A primary ray is one traced out from the camera's viewpoint, usually through the centre of a pixel in the image plane, to see what part of the scene it intersects. "Shadow" rays, reflected rays, refracted rays etc. are then cast from a primary ray's point of impact, and contribute to the final colour value rendered at the appropriate pixel, but if you don't cast at least one primary ray per pixel then you have to fill in the gaps with some sort of interpolation. Casting more than one primary ray per pixel is a great way to do anti-aliasing, but can be very expensive in terms of performance.
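                    That footnote implies a simple lower bound: at least width × height primary rays per frame for full quality, more with supersampled anti-aliasing. A sketch (the resolution and sample counts are illustrative):

```python
def primary_rays_per_frame(width, height, samples_per_pixel=1):
    """Minimum primary rays: one per pixel, times any supersampling factor."""
    return width * height * samples_per_pixel

base = primary_rays_per_frame(1920, 1080)     # 2,073,600 rays per frame
aa4x = primary_rays_per_frame(1920, 1080, 4)  # 8,294,400 rays per frame
# At an assumed 10M rays/s of throughput, 4x AA alone nearly consumes
# the budget for a single frame per second -- before counting any
# shadow, reflection, or refraction rays.
print(base, aa4x)
```

                    This is why the adaptive trick above works: dropping below one primary ray per pixel during camera motion cuts the per-frame ray budget, trading quality for framerate.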
                    Last edited by mangobrain; 12 March 2012, 08:27 AM.

