The Ideal (Hypothetical) Gaming Processor


  • #76
    Originally posted by Pyre Vulpimorph View Post
    Sweet mother of John DeLancie... what the hell happened to my thread?!

    Oh, right, got hijacked by Q. Spirit of Chaos and Disharmony.
    I have to admit that Q is very good. An amazing ability to steer a thread off-topic at full power, and everyone follows.

    Originally posted by Qaridarium View Post
    LOL, this is my point, not "their" point. For me, the Hz of the screen is the deadline for the real-time raytracer!

    The real point is that the "others" want fewer FPS than the screen refresh rate, and I always push for the screen refresh rate!
    Ah, I think I get the point now (again). The concept of frames per second applies to raytracing, and this real-time ray-tracing just means that it is being kept at the display refresh rate at all times, and the raytracer keeps rendering the frame for as long as it can before the next screen refresh.



    • #77
      Here, ACTUAL CONTENTS OF THIS THREAD:

      Originally posted by Wildfire View Post
      I think you're looking at this the wrong way around. You're looking at what current games are doing and you're then trying to design a CPU that is optimal for that. In terms of progress you would need to design a CPU that offers things that current CPUs are lacking (in terms of performance for gaming) so that future games can be optimized for that. Let the software adapt to your CPU not the CPU to your software.
      Originally posted by AnonymousCoward View Post
      If floating points are handled by your CPU, make sure they are deterministic. http://nicolas.brodu.numerimoire.net...lop/index.html describes the problems you can encounter on current PCs.
      Originally posted by log0 View Post
      To get back to the topic. Assume one would use OpenCL for physics and rendering. I think you could get away with 2-4 simple RISC cores (without FPU). The cores would be there to feed the GPU with data and take care of game logic, interrupts and other boring stuff. Make them as fast as you can afford. Make sure there are no bottlenecks or large latencies between CPU cores and GPU. Throw 8GB shared memory with enough bandwidth into the mix and you should be good to go.

      And make sure to keep the production costs low and yields high. No experiments a la PS3.
      Originally posted by mirv View Post
      If physics takes place entirely on the GPU, then your bi-directional communication needs to be rather good between CPU and GPU. Physics will generally trigger game logic events (depending on the game of course), so while the GPU can handle physics calculations faster, it's the need for a feedback system that destroys it for anything more than eye-candy with current architectures. I have been curious how well AMD's Fusion systems can be made to work with that, but I don't really have time to delve into it in more than a theoretical capacity. At least, don't have the time yet.
      Originally posted by log0 View Post
      If I think of a single simulation step:
      Prediction
      Broadphase
      Contact Generation
      Correction/Solver

      Let's say the intermediate results from the last step are available to the CPU to tinker with at the same time. There will be a lag of at least one frame, but for game events it should be negligible.
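      log0's one-frame-lag idea can be sketched in a few lines. This is a toy model, not engine code: all names are hypothetical, and physics_step merely stands in for the four stages listed above. The point is only that game logic always reads the result the solver produced for the previous step, so it runs one frame behind the simulation.

```python
# Toy sketch of the one-frame-lag pipeline described above.
# physics_step is a stand-in for prediction -> broadphase ->
# contact generation -> correction/solver (hypothetical names).
def physics_step(n):
    return {"contacts": n + 1}

def run(frames):
    last_result = None  # nothing solved before the first frame
    seen = []
    for n in range(frames):
        seen.append(last_result)       # game logic reads step n-1's output
        last_result = physics_step(n)  # "GPU" produces step n's output
    return seen

print(run(3))  # [None, {'contacts': 1}, {'contacts': 2}]
```

      On the first frame the game logic has nothing to read, which is exactly the one-frame lag being traded for CPU/GPU overlap.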
      Originally posted by mirv View Post
      Reading back from the GPU is quite costly. You certainly want to avoid it as much as possible - unless you can share the memory with a zero-copy buffer (in theory). Sure, it's getting easier with current architectures and bus speeds for data readback, but I'm pretty sure it's still costly enough that you don't want to do it. This is why most games will only use particle effects, or physics-related calculations that are classified as "eye candy" and don't directly affect gameplay logic.
      Also, graphics cards still need to do graphics.
      I guess it depends on the game, how much physics calculations you need to affect game logic (those ones are generally very simplistic compared to, say, cloth simulation) and where your bottleneck will be (calculations vs data transfer). It would be interesting to see just what kind of balance point can be found...maybe something like ants (for path update AI code) combined with "dodge the particles". Sucks having a day job and not being able to explore such ideas properly.
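      To put mirv's readback cost in rough numbers, here is a back-of-envelope sketch. The bandwidth figure and buffer sizes are assumptions (theoretical PCIe 2.0 x16 throughput, no latency or driver overhead), not measurements:

```python
# Rough cost model with assumed numbers: the fraction of a 60 Hz frame
# spent just copying a buffer from GPU to CPU over PCIe 2.0 x16.
PCIE_BW = 8e9            # bytes/s, theoretical PCIe 2.0 x16
FRAME_BUDGET = 1.0 / 60  # seconds per frame at 60 Hz

def readback_fraction(nbytes):
    """Fraction of the frame budget consumed by moving nbytes CPU-ward."""
    return (nbytes / PCIE_BW) / FRAME_BUDGET

# A small contact list is cheap; a full 1080p float4 buffer is not.
print(f"{readback_fraction(4096 * 64):.4%}")         # ~0.2% of the frame
print(f"{readback_fraction(1920 * 1080 * 16):.2%}")  # ~25% of the frame
```

      Even at theoretical bandwidth, a per-frame full-screen readback eats about a quarter of the frame budget, which is why gameplay-affecting readback is kept small or avoided entirely.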
      Originally posted by log0 View Post
      I am assuming shared/unified memory in my proposal.
      Originally posted by mirv View Post
      Ah, I appear to have misunderstood that. My bad. So.....excellent idea!
      Originally posted by elanthis View Post
      You need more than that, though. For instance, quite a few non-trivial games need to have pre-contact callbacks in the contact generator in order to properly ignore contacts. The absolute simplest example is a 2.5D platformer (think Trine) where you can jump up through some objects but still land on them. This is generally implemented by an engine with contact caching (requiring random-access read-write storage in the physics engine, which is not GPU-friendly) and a pre-contact callback that flags the contact as ignored if the surface-player contact normal is not pointing up.

      More complex 3D physics uses those callbacks for more complex needs.

      Physics can be made GPU-friendly, but only in the non-general case. That is to say, certain features of fully-featured physics engines like Havok or PhysX or whatnot simply do not work well with GPUs, and only games that avoid those features can reasonably use GPU-based physics.

      So far as the rest of this thread... why in the fuck are any of you still trying to converse with Qaridarium? There's an Ignore User feature on this forum. Use it.
      I'm new here... is there a "Block User From My @#$%! Thread" feature?

      Originally posted by log0 View Post
      The simple example could be dealt with on the GPU by passing additional state to decide whether a contact is added or not. Of course there are limits to this method. I've got some experience with the Bullet physics lib. I've used callbacks, but more out of convenience, to avoid having to adapt the code to my needs, not because there was no other way to implement certain functionality. But that is my (limited) point of view, of course.
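      The one-way platform case elanthis describes can be sketched as a pre-contact filter. This is a hypothetical callback, not Bullet's or Havok's actual API: the contact survives only if its normal points up past some threshold.

```python
# Hypothetical pre-contact callback for a Trine-style one-way platform:
# keep the contact only when the player lands on top (normal points up).
def keep_contact(contact_normal_y, threshold=0.7):
    """Return True to solve the contact, False to ignore it."""
    return contact_normal_y >= threshold

print(keep_contact(1.0))   # True: landing on top of the platform
print(keep_contact(-1.0))  # False: jumping up through it from below
```

      log0's point is that this particular test is simple enough to fold into a GPU kernel as extra per-pair state; the general callback mechanism (arbitrary user code with random-access engine state) is what doesn't map well to GPUs.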
      Apologies if I missed other relevant posts.

      The point of this thread, really, was so I can learn more about how modern games work. Specifically, what the CPU is left doing while the GPU is busy rendering frames. So, let's shoot for the moon and say my client's system will include a Radeon HD 7870 (Pitcairn) GPU, and "normal" output resolution is going to be 1920x1080.

      System memory will be 2-4 GiB of DDR3 CAS-6 @ 1600 MHz; framebuffer memory will be 2 GiB of GDDR5 @ 6000 MHz.

      I decide to build an 8-core MIPS64 chip that's 4-lane superscalar, but with no SMT. It has dynamic out-of-order execution, speculative execution, register renaming, and a fairly long (for RISC) instruction pipeline with aggressive branch prediction. Each core has 512-bit deterministic floating-point units, 128 KiB of L1 cache, and 512 KiB of private L2 cache, and 16 MiB of L3 cache is shared across the 8 cores (2 MiB per core).

      The chip has a quad-channel memory controller and talks directly to the GPU via a 32-bit HyperTransport 3.1 link.
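      A quick sanity check on the memory side of this spec. The bus widths are assumptions (64-bit DDR3 channels, and a 256-bit GDDR5 bus as on Pitcairn):

```python
# Back-of-envelope peak bandwidth for the proposed memory system.
ddr3 = 4 * 8 * 1600e6        # 4 channels x 8 bytes/transfer x 1600 MT/s
gddr5 = (256 // 8) * 6000e6  # 32 bytes/transfer x 6000 MT/s effective
print(ddr3 / 1e9, "GB/s system RAM")    # 51.2
print(gddr5 / 1e9, "GB/s framebuffer")  # 192.0
```

      So under these assumptions the GPU's local memory has nearly 4x the peak bandwidth of system RAM, which is one reason keeping physics data GPU-side is attractive.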

      --------

      Again, while I don't presume to know everything there is to know about CPU design, the goal is to make the chip as small and cheap as possible without ever bottlenecking the GPU, and still providing advanced functionality (like accurate real-time physics). So, all that junk I just spit out might not be an "optimal" design.

      Any thoughts?



      • #78
        Originally posted by Petteri View Post
        Ah, I think I get the point now (again). The concept of frames per second applies to raytracing, and this real-time ray-tracing just means that it is being kept at the display refresh rate at all times, and the raytracer keeps rendering the frame for as long as it can before the next screen refresh.
        Wow, now you get it... and if the rendering frame rate always equals the refresh rate, then what sense does an FPS number make? None!

        But yes, I'm happy you finally understand me.

        Originally posted by Petteri View Post
        I have to admit that Q is very good. An amazing ability to steer a thread off-topic at full power, and everyone follows.
        First of all, thank you... but this time you are wrong, because if ray-tracing is the future, you need ray-tracing hardware, and that is the topic! Ray tracing doesn't care about single-thread performance; the fastest hardware for ray-tracing right now is an HD 7970 with 6 GB of VRAM.

        No CPU on earth can beat that card at ray-tracing.

        This means that this time I'm innocent.



        • #79
          Originally posted by Pyre Vulpimorph View Post
          Sweet mother of John DeLancie... what the hell happened to my thread?!

          Oh, right, got hijacked by Q. Spirit of Chaos and Disharmony.
          Yes, I like it very much; I believe in Discordianism!

          But this time I'm innocent! If ray-tracing is the future, you need ray-tracing hardware, not your old-style CPUs.

          Read the thread again; my claim is: the HD 7970 (6 GB VRAM) is the fastest card for ray-tracing!

          This means you don't need a fast CPU... you can buy a quad-core for 50€, or an ARM quad-core for less than 50€, and use a big HD 7970, or 4 of them, and just push the rays.

          This technique would just bankrupt Intel.

          And yes, you really need a lot of RAM for your GPU, 6 GB or more, because this is a bottleneck in raytracing.



          • #80
            To derail this thread further: did anyone ever try motion-compensated raytracing? I was amazed to realize they are using the last frame as an initial guess for the new rendering, so you get free motion blur. What I'd do is, from the known motion vectors, interpolate the next frame and use that as an initial guess for the raytracer. I think that would reduce the "murmuring" a lot without creating too much motion blur. Also, if the interpolation was fast enough, an intelligent raytracer could shoot more rays in those sections that are unknown from the interpolation; for example, if a moving object reveals what lay behind it, shoot rays there, because the interpolated image can't know what should be there.
            Kind of like what was done with SW Force Unleashed, but not to increase framerate - to increase picture quality. (One could of course render at half the fps and interpolate to double the fps, but that's not the idea here.)
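            The interpolation idea above can be shown with a toy 1-D reprojection (a hypothetical helper; real renderers do this per pixel in 2-D, with depth tests): shift the previous frame by the known motion vector and mark disoccluded pixels as unknown, so the raytracer spends its rays there first.

```python
# Toy 1-D motion-compensated guess: shift last frame by `motion` pixels;
# None marks newly revealed (disoccluded) pixels that need fresh rays.
def reproject(prev_frame, motion):
    n = len(prev_frame)
    guess = [None] * n
    for x in range(n):
        src = x - motion  # where this pixel was in the previous frame
        if 0 <= src < n:
            guess[x] = prev_frame[src]
    return guess

print(reproject([10, 20, 30, 40], 1))  # [None, 10, 20, 30]
```

            Pixel 0 has no history after the shift, so an adaptive raytracer would prioritize it, exactly as described above.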

            Also, a question for Qaridarium: I think most of us understand what you're trying to do, but you're not a native speaker, are you? (Neither am I.) I think many of the misunderstandings arise because people don't understand what you say, or you don't understand what others say. Everyone knows how real-time raytracing works, but at least to my knowledge you use uncommon terms like "murmuring", and you are very much fixated on those terms. Are there any technical papers that use the same terms as you?



            • #81
              Pyre Vulpimorph: like 2girls1cup. You can't stop... you just keep watchin' it.

              Spirit of Chaos and Disharmony
              There is no such thing; only the murmuring rate of the topic has been increased.
              Last edited by Geri; 03-13-2012, 05:53 PM.



              • #82
                Originally posted by Mathias View Post
                Are there any technical papers that use the same terms as you?
                I don't know; "murmuring" is the direct English translation of the German word "Rauschen", and the German word fits perfectly.

                And I think the English word is correct too.

                Most people use the word "quality", but that word is technically wrong,

                because the effect of the murmuring on the quality is relative.



                • #83
                  http://dict.leo.org/?lp=ende&search=rauschen
                  Rauschen: noise, hissing (more sound related),...
                  http://dict.leo.org/?lp=ende&search=murmur
                  murmur: murmeln, raunen, ...

                  where does your translation come from? I think the term describing what you want is "noise". Like in "image noise":
                  http://en.wikipedia.org/wiki/Image_noise
                  http://de.wikipedia.org/wiki/Bildrauschen



                  • #84
                    Originally posted by Mathias View Post
                    http://dict.leo.org/?lp=ende&search=rauschen
                    Rauschen: noise, hissing (more sound related),...
                    http://dict.leo.org/?lp=ende&search=murmur
                    murmur: murmeln, raunen, ...

                    where does your translation come from? I think the term describing what you want is "noise". Like in "image noise":
                    http://en.wikipedia.org/wiki/Image_noise
                    http://de.wikipedia.org/wiki/Bildrauschen
                    http://translate.google.de/?hl=de&tab=wT#de|en|rauschen

                    And yes, "image noise" fits very well.

                    And Google Images shows that "murmuring" is rarely used this way:

                    https://www.google.de/search?tbm=isc...l996l11j1l12l0

                    So yes, Google Translate fails here... but maybe the English-speaking people fail?

                    Because if I put "noise" into the translator, completely wrong German words are shown.

                    Noise
                    =
                    Lärm
                    Geräusch
                    Krach
                    Unruhe
                    Getön

                    Most of these terms are just wrong; in my case only "Unruhe" fits,

                    but "Unruhe" in English is "restlessness".

                    So in the end "noise" may be the word that is used, but it points in completely the wrong direction.



                    • #85
                      Maybe they are, but don't try to lecture native speakers about how they use their language incorrectly. There are also countless words in German that are used incorrectly if you go by their original meaning.

                      Another thing I'd like to ask: do you have any references for "a black picture has unlimited fps in a real-time raytracer"? Because to my understanding, a ray-traced image is created by tracing rays. So if no rays are traced, because the 286 is too slow to do any significant number of rays/sec, the image is not a ray-traced image.
                      I mean, everyone agrees that you can produce 60 fps on an averagely slow computer if you sacrifice image quality beyond the image being recognizable at all. But you can hardly argue that it doesn't matter at what framerate you render the image, because the image quality will degrade past the point where it's even recognizable due to missing information. Even if you use some kind of locally adaptive resolution or whatever, on a 286 you won't be able to get even 24 fps with an even remotely recognizable picture. I know you never wanted to write that, but that's what this is about: tracing rays at 24/60 fps and getting a picture out of it that is beyond what rasterizers can achieve. Call your 60 fps 286 raytracer "real time" if you want, but don't show it to anyone; they will laugh in your face.
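                      The framerate argument has a simple arithmetic core. At one primary ray per pixel (an optimistic assumption: no shadow rays, no bounces), a 1080p image at 60 fps already requires:

```python
# Minimum primary-ray throughput for 1080p at 60 fps, one ray per pixel.
rays_per_frame = 1920 * 1080
rays_per_sec = rays_per_frame * 60
print(rays_per_sec)  # 124416000, i.e. ~124 million rays per second
```

                      Any machine far below that budget can only hit 60 fps by tracing a tiny fraction of the pixels, which is exactly the quality collapse described above.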



                      • #86
                        Originally posted by Mathias View Post
                        Call your 60 fps 286 raytracer "real time" if you want, but don't show it to anyone; they will laugh in your face.
                        I think you meant a 60-rays-per-second picture. LoL.

                        And no, I don't think an HD 7970 can deliver Crysis 3 1080p-quality rendering via ray tracing. Unless you enjoy watching pixel garbage on every camera movement. Games are rarely static, thus the whole adaptive crap will be of limited benefit.



                        • #87
                          Kind of related, I think, is this discussion between Tim Sweeney and Andrew Richards. They have very different views on how the future of graphics should or could be done. Andrew favors the traditional rasterization-on-GPUs approach "for the foreseeable future"; Sweeney argues their rasterizer wasn't that much slower than what 3DFX delivered, and that we would have better image quality today if GPUs had never gained market share...
                          Featured by the quite sarcastic and weird Charlie from SemiAccurate:
                          http://semiaccurate.com/2010/09/22/t...ture-graphics/



                          • #88
                            Trying to get back on topic: Before you can design your processor I guess you should first try to figure out what kinds of games you want to run on your platform. Then you'd need to analyze those games in terms of CPU usage. Once you've figured out which CPU functions you need (and which you don't) you could try to design a CPU which offers only the functionality that's actually used. One other thing you'd need to figure out: Is your CPU going to be compatible to existing CPUs so that games can be run without any modifications or would you need to recompile the game to run on your platform?

                            Like I already said, usually software adapts to the hardware, not the hardware to the software. It's easier to change your software to use new features available in hardware than it is to adapt your hardware to the growing needs of your software. Adaptation means specialization, and specialization means it will be very good at what it does, but it also means it may not be able to do everything developers want or need in the future.

                            If you're simply going to try to be as cheap as possible you might get away with using a high-end video card and an existing low-end CPU that's just fast enough so as not to slow down the GPU.

                            Originally posted by Qaridarium View Post
                            So yes, Google Translate fails here... but maybe the English-speaking people fail?
                            I know this is going to derail the thread even further, mea culpa. Words rarely have only one meaning, and their meaning can change based on context. Most automated translators only give you the primary meaning of a word, which doesn't always fit. When applied to sound, "noise" can mean loud, obnoxious, or irritating. When applied to an image, "noise" is also irritating, and the best translation is usually "Rauschen". Take the word "loud", for instance. When applied to sound, it describes a sound that is very noticeable or audible (and sometimes irritating). When applied to color, "loud" means "noticeable" but also "irritating", and the best translation is probably "grell".



                            • #89
                              Originally posted by Mathias View Post
                              Maybe they are, but don't try to lecture native speakers about how they use their language incorrectly. There are also countless words in German that are used incorrectly if you go by their original meaning.
                              Logic is an ultimate truth, and if native speakers are not logical, their thinking is damaged, and this is a fact! This is true for all languages; only in the madhouse is this wrong.

                              "Language" is a living being; you can fix that irrational, illogical stuff.

                              Originally posted by Mathias View Post
                              Another thing I'd like to ask: do you have any references for "a black picture has unlimited fps in a real-time raytracer"? Because to my understanding, a ray-traced image is created by tracing rays. So if no rays are traced, because the 286 is too slow to do any significant number of rays/sec, the image is not a ray-traced image.
                              I mean, everyone agrees that you can produce 60 fps on an averagely slow computer if you sacrifice image quality beyond the image being recognizable at all. But you can hardly argue that it doesn't matter at what framerate you render the image, because the image quality will degrade past the point where it's even recognizable due to missing information. Even if you use some kind of locally adaptive resolution or whatever, on a 286 you won't be able to get even 24 fps with an even remotely recognizable picture. I know you never wanted to write that, but that's what this is about: tracing rays at 24/60 fps and getting a picture out of it that is beyond what rasterizers can achieve. Call your 60 fps 286 raytracer "real time" if you want, but don't show it to anyone; they will laugh in your face.
                              This only depends on your definition. My definition is the definition of the analog camera: if you put a black plastic cloth over your camera lens, you get a black output, right?

                              If you calculate the same thing in a raytracer, no ray hits the frame because of the black plastic cloth, and because of this it's a fully black frame.

                              This is 100% valid! A black screen is technically valid!
                              Last edited by Qaridarium; 03-14-2012, 06:58 AM.



                              • #90
                                Originally posted by log0 View Post
                                I think you meant a 60-rays-per-second picture. LoL.

                                And no, I don't think an HD 7970 can deliver Crysis 3 1080p-quality rendering via ray tracing. Unless you enjoy watching pixel garbage on every camera movement. Games are rarely static, thus the whole adaptive crap will be of limited benefit.
                                Compared to Crysis 3, a raytracing engine produces much higher quality!

                                And much higher quality means much higher hardware requirements if you use a raster graphics engine, right?

                                Many raytracing videos on YouTube look very nice on much weaker hardware.

                                This means yes, an HD 7970 rocks da house! And yes, you can put 4 of them in your PC, and with raytracing you get the full speed of all 4 cards.

