
The Ideal (Hypothetical) Gaming Processor


  • #11
    If you want to make a technology FPS-less (even rasterization) you can do it. The problem is that when you move the 3D world around, you need new calculations (at a fixed point in time: a new frame). Frames are fixed points in the flow of information, and they are only needed if you want them. The best processor is from OpenCores and needs only 1 million transistors for a typical 48 MACs/Hz or 2.5 DMIPS/MHz. That is 13 times fewer transistors than an ARM Cortex-A9, for slightly better speed. See http://en.wikipedia.org/wiki/Loongson (the 3C part), which has 3D instructions as emulated instructions, and I believe those are important for every processor. Also see the ZMS, which has a semi-FPGA that doubles or even quadruples the CPU speed for about 30% more energy; you can't measure the ZMS in GFLOPS because these are special operations. Anyway, a mix of technologies (preferably free ones) could give us 10x the TFLOPS/watt.


    • #12
      No, you can't make things magically FPS-less. It's simple.

      The maximum FPS is determined by how long it takes to gather one frame's worth of information at the lowest acceptable quality. For example, in an interactive raytracer, you may decide that casting primary rays every 10 pixels (in a grid) is acceptable, but there is no point in displaying a new frame until you have cast at least that many rays. This takes a non-zero amount of time, so you can only do it so many times per second. If the camera has moved, there is no point updating the screen individually for each ray cast, because THE ENTIRE PREVIOUS FRAME'S WORTH OF INFORMATION IS NOW INVALID.

      If the camera is stationary, then some of the information in the previous frame may still be valid, so you can increase image quality by simply casting more rays into the existing scene, filling in the gaps in the previous frame. However, if there are moving objects - or, in the worst case, moving light sources - in the scene, then again, you have to discard at least some of the information from the previous frame. Again, gathering enough new information to make a screen update worthwhile will take a non-zero amount of time, so you can only do it so many times per second.

      If the camera, objects and light sources are all stationary, and you have already cast enough rays to satisfy the upper bounds of your image quality settings, then you can stop casting until something changes. This doesn't mean your FPS becomes "infinite"; it just means that a raytracer with a well-written dynamic image quality system effectively devolves to a static image viewer under optimal conditions.
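
      The bound described above can be sketched in a few lines. This is a toy calculation, not anyone's actual renderer, and every number in it (resolution, grid spacing, rays per second) is a made-up illustration:

      ```python
      # Sketch: an interactive raytracer's maximum FPS is bounded by how long
      # it takes to cast one frame's worth of primary rays at the lowest
      # acceptable quality. All numbers here are hypothetical.

      def max_fps(width, height, grid_step, rays_per_second):
          """Upper bound on FPS when casting one primary ray every
          grid_step pixels (in both directions) at minimum quality."""
          rays_per_frame = (width // grid_step) * (height // grid_step)
          seconds_per_frame = rays_per_frame / rays_per_second
          return 1.0 / seconds_per_frame

      # 1920x1200 screen, one primary ray every 10 pixels, and a (hypothetical)
      # machine that casts 2 million rays per second:
      print(max_fps(1920, 1200, 10, 2_000_000))  # ~86.8 FPS, tops
      ```

      Casting rays faster, spacing them further apart, or accepting a lower resolution raises the bound; it never disappears.
      
      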


      • #13
        Originally posted by Pyre Vulpimorph View Post
        What types of instructions do modern video games make the most use of? Do games find integer performance most important, floating point performance, or both? If floating-point calculations can be offloaded to the GPU, can the CPU's floating-point units be excised to make the chip smaller and cheaper, or would that harm system performance? If FP power is still important, would adding complex 256- or even 512-bit units be beneficial to total system performance, or just a waste of space?
        If floating points are handled by your CPU, make sure they are deterministic. http://nicolas.brodu.numerimoire.net...lop/index.html describes the problems you can encounter on current PCs.
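        As a toy illustration of why FP determinism is tricky (Python here, but the same applies to C/C++ game code where the compiler may reorder or fuse operations): floating-point addition is not associative, so the same values summed in a different order can give a different result.

        ```python
        # Floating-point addition is not associative: evaluating the same sum
        # in a different order can change the result. If compilers or hardware
        # on two machines pick different orders, identical game logic diverges.

        a, b, c = 1e16, -1e16, 1.0

        left = (a + b) + c   # a + b cancels exactly, so this is 0.0 + 1.0
        right = a + (b + c)  # 1.0 is below the precision of b, so b + c == b

        print(left, right)   # 1.0 0.0
        print(left == right) # False
        ```
        
        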


        • #14
          Q, please pull your head out of your ass. Seriously. I already posted what FPS is. You don't understand what this is about and, worse, you are unable to admit it.


          • #15
            Originally posted by Qaridarium
            wen you start talking about framerate in a raytracing tropic you just LIE!

            the Framerate doesn't matter you can force 1000fps! by GOD LAW! it just doesn't matter!

            but you get a lot of "murmuring"

            so please stop spamming bullshit to me.

            Real Time Ray-tracing is all about "murmuring" and not frames per second.
            Dude CALM DOWN. If you actually READ my posts, I'm describing how quality scaling in interactive raytracers is usually done. I know you can scale the quality up/down to hit a particular FPS target - I SAY THAT IN MY POST, FOR CRYING OUT LOUD!

            Seriously though, if you expect to hit 1000FPS on current commodity hardware, you're going to be throwing out very few rays. That means either a very low resolution image, an incredibly low-complexity scene, or lots of large gaps between your primary rays. Worst case scenario would be that you only actually have the power available to cast 1000 rays per second, which means the first 999 of your 1000 frames will be unrecognisable garbage, or - if the camera is moving - you have 1000 frames rendered from a grand total of one ray each, i.e. each image is just one huge rectangle of a single colour value.
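
            The worst case above is just budget arithmetic. A quick hypothetical sketch (the 1,000 rays/second figure is the deliberately absurd worst case from the paragraph, not a real benchmark):

            ```python
            # With a fixed ray budget per second, forcing a higher frame rate
            # only divides the same information across more frames.

            def rays_per_frame(rays_per_second, fps):
                return rays_per_second // fps

            print(rays_per_frame(1_000, 1_000))  # 1 ray per frame at 1000 FPS
            print(rays_per_frame(1_000, 1))      # 1,000 rays per frame at 1 FPS
            ```
            
            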

            It's true that you can't really compare FPS in a polygon engine to FPS in an interactive raytracer, especially one with dynamic quality control, but it doesn't mean you can magically raytrace complex scenes in real time on a fucking 286.


            • #16
              Originally posted by Qaridarium
              yes you can raytrace complex scenes in real time on a fucking intel286 with a murmuring rate of 99,99999x%
              Which means you're only casting a couple of rays per frame, which means only a couple of pixels have colour values based on the scene itself. The rest of the pixel colours will be made up using interpolation. Did you read what I said about casting primary rays? Each primary ray cast gives you one colour value, not one complete image.

              If you're scaling the number of rays per frame down to below the point where a standard polygon engine looks better, then you might as well use a polygon engine. This is why commercial games in the real world still use polygon engines. It doesn't mean raytracing is nonsense. I for one would love to see hardware & software get to the point where raytracers can exceed the quality of polygon engines, at similar frame rates, for the types of complex scenes we see in AAA titles these days - but the industry hasn't got there yet.

              There is no conspiracy to keep raytracing down. If there was, I would not be part of it.



                • #18
                  Originally posted by Qaridarium
                  answer me this question about your suggestion: "The rest of the pixel colours will be made up using interpolation." why?
                  no you don't need interpolation to get a real time raytracing complex scene on a 286
                  and i think the 286 is to slow for the interpolation.
                  A 286 is too slow for interpolation but fast enough for interactive raytracing, are you serious?

                  You need to interpolate because you get one pixel per ray. If you have an image resolution of 1920 x 1200 then you need to shoot 2,304,000 (= 1920 x 1200) rays to get a colour value for each pixel (let's forget about reflection, refraction, etc. for the time being). If you reduce the number of rays to reach interactive speeds this means you no longer get a colour value for each pixel. Hence you need to calculate the remaining pixels by interpolating between those pixels you do have.
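
                  The arithmetic above is easy to check. The per-frame ray budget below is a hypothetical number chosen for illustration:

                  ```python
                  # One primary ray yields one pixel colour, so covering a
                  # 1920x1200 image needs width * height rays (ignoring
                  # reflection and refraction). With fewer rays, the remaining
                  # pixels must be interpolated from sampled neighbours.

                  width, height = 1920, 1200
                  full_coverage = width * height
                  print(full_coverage)  # 2,304,000 rays for one full frame

                  ray_budget = 23_040   # hypothetical per-frame budget
                  sampled_fraction = ray_budget / full_coverage
                  print(f"{sampled_fraction:.1%} of pixels sampled")  # 1.0%
                  ```
                  
                  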

                  Also, your "I can ramp up the fps as much as I want" is only true if you're willing to completely sacrifice image quality. If you want any kind of consistent image quality then you need to use a consistent number of rays. Otherwise there's no point in using a raytracer in the first place.
                  Last edited by Wildfire; 12 March 2012, 10:31 AM. Reason: Typos


                  • #19
                    Originally posted by Qaridarium
                    answer me this question about your suggestion: "The rest of the pixel colours will be made up using interpolation." why?
                    Because if you haven't got the computing power to cast at least one real primary ray per pixel, you either just leave the rest of the pixels black, or you fill them in based on the colours of nearby pixels where you *have* cast primary rays.

                    For example, if you imagine a grid of 10x10 squares on your screen, and cast primary rays at every point where two grid lines meet, you can "guess" the colours of the rest of the pixels by blending the colours from the corners of the squares. You won't have a lot of detail in the resulting image, but it will at least cover the screen. This is just a naive example, there are better ways of deciding where to cast primary rays than a fixed-space grid.

                    Originally posted by Qaridarium
                    and i think the 286 is to slow for the interpolation.
                    Bilinear interpolation can be performed much quicker than the calculations needed to cast a primary ray, unless your scene is really, really simple. When I say interpolation in this context, I basically mean "colour blending".
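
                    For comparison, here is what that "colour blending" amounts to. This is a minimal sketch of standard bilinear interpolation, not code from any particular raytracer: estimating a colour inside a grid square from its four corners takes only a handful of multiplies and adds, versus the ray-scene intersection tests a real primary ray requires.

                    ```python
                    # Bilinear "colour blending": estimate the value at a point
                    # inside a grid square from the four corner values, using
                    # only cheap multiplies and adds.

                    def bilerp(c00, c10, c01, c11, tx, ty):
                        """Blend four corner values; tx, ty in [0, 1] give the
                        point's position within the square."""
                        top = c00 * (1 - tx) + c10 * tx
                        bottom = c01 * (1 - tx) + c11 * tx
                        return top * (1 - ty) + bottom * ty

                    # Point halfway between black (0.0) and white (1.0) corners:
                    print(bilerp(0.0, 1.0, 0.0, 1.0, 0.5, 0.5))  # 0.5
                    ```
                    
                    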


                    • #20
                      I do understand what your so-called "murmuring" is. And this is exactly what I mean by image quality. This "murmuring" is a trick employed on current hardware to be able to reach interactive speeds. For most applications, however, this loss of image quality is _not_ acceptable. Would you want to play a game which looks like the television screen in your example? Which means you need to use a consistent number of rays, all the time. Which means you are no longer able to maintain interactive speeds.
