A few notes about Carmack's keynote at QuakeCon 2009

  • #46
    idtech5 won't require gpgpu systems, it's just that gpgpu might help if there's spare processing time to use for a given engine job. They already do things such as additional processing whilst the graphics card is busy just to meet their timing constraints, so they really don't rely upon gpgpu stuff, more just parallel processing (threads, multiple cpu cores, etc.).
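
    To make that concrete, here's a rough sketch of the general idea (made-up names, obviously not id's actual code): kick the CPU-side jobs off first, let them run while the graphics card chews on the frame, then pick up the results once the frame is done.
    Code:
    // Sketch only: overlap CPU-side work with the time the GPU spends on a frame.
    // The "GPU" here is faked with a sleep; a real engine would kick its draw
    // calls and wait on a fence or the buffer swap instead.
    #include <chrono>
    #include <future>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Stand-in for submitting a frame and waiting for the GPU to finish it.
    void submit_frame_and_wait_gpu() {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }

    // Stand-in for an engine job, e.g. transcoding a texture page on the CPU.
    int transcode_page(int page) {
        std::vector<int> data(10000, page);
        return std::accumulate(data.begin(), data.end(), 0);
    }

    int main() {
        // Start the CPU jobs first so they run while the "GPU" is busy.
        std::vector<std::future<int>> jobs;
        for (int page = 0; page < 4; ++page)
            jobs.push_back(std::async(std::launch::async, transcode_page, page));

        submit_frame_and_wait_gpu();   // the GPU renders the frame meanwhile

        for (auto& j : jobs)           // collect results once the frame is done
            std::cout << "page result: " << j.get() << "\n";
    }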

    • #47
      Originally posted by mirv View Post
      idtech5 won't require gpgpu systems, it's just that gpgpu might help if there's spare processing time to use for a given engine job. They already do things such as additional processing whilst the graphics card is busy just to meet their timing constraints, so they really don't rely upon gpgpu stuff, more just parallel processing (threads, multiple cpu cores, etc.).
      Required, no; required for a satisfying experience, more than likely. Really, read the SIGGRAPH presentation.

      –Anticipate CUDA, OpenCL, Larrabee support
      Last edited by deanjo; 08-23-2009, 06:52 AM.

      • #48
        Originally posted by deanjo View Post
        Required, no; required for a satisfying experience, more than likely. Really, read the SIGGRAPH presentation.
        I did read it. I also listened to most of the keynote (it's two hours long, so I did skip some parts). I was merely pointing out that idtech5 isn't about leveraging gpgpu - the paper quite clearly talks about parallel processing and making an engine that can take advantage of a range of possibilities.
        It just seems these days that people think that gpgpu is the answer to everything, but it's really not that useful if your graphics card is too busy drawing graphics to spare the time to compute something else.

        • #49
          Originally posted by deanjo View Post
          It's not so much the engine as the lack of consumer demand and the poor state of linux's chosen graphics subsystems. This does make something very clear though: the future of quality commercial games on linux rests solely on the efforts of projects like wine (sorry Svartalf, but it's true).
          I meant more the ever-increasing range of hardware and software (OS, drivers, apps). Supporting all these different setups is a chore and getting optimal performance out of all of them is nigh impossible. We had this back in the old days when each app had to support everything on its own (like including its own printer drivers or graphics drivers). Back then we figured out that placing an OS between the hardware and the apps solves this problem. But in game development we are still in the stone age, which is the cause of all the problems we currently have with games, engines and porting.

          • #50
            Originally posted by mirv View Post
            I did read it. I also listened to most of the keynote (it's two hours long, so I did skip some parts). I was merely pointing out that idtech5 isn't about leveraging gpgpu - the paper quite clearly talks about parallel processing and making an engine that can take advantage of a range of possibilities.
            It just seems these days that people think that gpgpu is the answer to everything, but it's really not that useful if your graphics card is too busy drawing graphics to spare the time to compute something else.
            You're right, GPGPU isn't the answer to everything. These tasks can be done with raw core speed as well, but right now every major hardware player out there has basically abandoned that route in favor of parallelism. You don't see roadmaps with a 5 GHz CPU anymore; you do however see roadmaps and plans for CPUs with cores in the hundreds. It just happens that the solutions available today that offer the most parallelism, while being efficient at it, are GPUs.

            • #51
              Originally posted by Dragonlord View Post
              I meant more the ever-increasing range of hardware and software (OS, drivers, apps). Supporting all these different setups is a chore and getting optimal performance out of all of them is nigh impossible. We had this back in the old days when each app had to support everything on its own (like including its own printer drivers or graphics drivers). Back then we figured out that placing an OS between the hardware and the apps solves this problem. But in game development we are still in the stone age, which is the cause of all the problems we currently have with games, engines and porting.
              I get what you're saying now, although you can't really blame the software devs. They are just trying to make use of the hardware that is available now.

              • #52
                That's correct. It's also a problem since the large game development companies base their income to a large degree on licensing engines, especially re-selling their engine with new iterations. While good for business, this is bad for solving the problem. But so far every sub-par solution has eventually been overtaken by a proper one, so that day is coming for sure.

                • #53
                  Originally posted by deanjo View Post
                  You're right, GPGPU isn't the answer to everything. These tasks can be done with raw core speed as well, but right now every major hardware player out there has basically abandoned that route in favor of parallelism. You don't see roadmaps with a 5 GHz CPU anymore; you do however see roadmaps and plans for CPUs with cores in the hundreds. It just happens that the solutions available today that offer the most parallelism, while being efficient at it, are GPUs.
                  Multi-core is not a solution either, unfortunately. What you gain with parallelism you lose with synchronization work. And games tend to be highly correlated. Some parts can be done in parallel, like rendering and physics, but that's as far as it gets. I think development is going in the wrong direction there. Trying to parallelize something that does not lend itself well to parallelization is a problem. For graphics and physics I do see the solution, but not for games in general.
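
                  Just to put a rough number on that, here's a toy C++ demo (nothing to do with any real engine, purely illustrative): four threads either hammer one shared counter or each work on their own data. The synchronized version gives back most of what the extra cores bought you.
                  Code:
                  // Toy demo of synchronization cost: the more every step touches shared
                  // state, the more of the parallel gain is lost again. Illustrative only.
                  #include <atomic>
                  #include <chrono>
                  #include <cstdint>
                  #include <iostream>
                  #include <thread>
                  #include <vector>

                  int main() {
                      const int threads = 4;
                      const std::int64_t iters = 2000000;

                      auto time_it = [&](auto work) {
                          auto t0 = std::chrono::steady_clock::now();
                          std::vector<std::thread> pool;
                          for (int t = 0; t < threads; ++t) pool.emplace_back(work, t);
                          for (auto& th : pool) th.join();
                          return std::chrono::duration<double>(
                                     std::chrono::steady_clock::now() - t0).count();
                      };

                      // Highly "correlated" case: every step touches the same shared value.
                      std::atomic<std::int64_t> shared{0};
                      double contended = time_it([&](int) {
                          for (std::int64_t i = 0; i < iters; ++i) shared.fetch_add(1);
                      });

                      // Independent case: each thread works on its own data, merge afterwards.
                      std::vector<std::int64_t> local(threads, 0);
                      double independent = time_it([&](int t) {
                          for (std::int64_t i = 0; i < iters; ++i) ++local[t];
                      });

                      std::cout << "contended:   " << contended << " s\n"
                                << "independent: " << independent << " s\n";
                  }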

                  • #54
                    Originally posted by Dragonlord View Post
                    Multi-core is not a solution either, unfortunately. What you gain with parallelism you lose with synchronization work. And games tend to be highly correlated. Some parts can be done in parallel, like rendering and physics, but that's as far as it gets. I think development is going in the wrong direction there. Trying to parallelize something that does not lend itself well to parallelization is a problem. For graphics and physics I do see the solution, but not for games in general.
                    Well, another potential area that can be highly parallelized is AI. Pre-calculating possible event outcomes has massive benefits. Nothing demonstrates this better than IBM's efforts on their chess platform; without this parallelism their chess engine would be slow as molasses. Apply that capability to general games and the AI can potentially become far more sophisticated than current single-threaded solutions. There is great potential for general gaming in such a scenario.
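
                    Roughly like this, as a sketch (evaluate() is a dummy stand-in, not IBM's code or any real game AI): score each candidate move on its own core and then pick the best one.
                    Code:
                    // Sketch: evaluate candidate moves in parallel, keep the best score.
                    // evaluate() is a placeholder for a real search/outcome simulation.
                    #include <future>
                    #include <iostream>
                    #include <vector>

                    struct Move { int id; };

                    int evaluate(const Move& m) {
                        int score = 0;
                        for (int i = 0; i < 1000000; ++i) score = (score + m.id * i) % 9973;
                        return score;
                    }

                    int main() {
                        std::vector<Move> candidates{{1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}};

                        // One async task per candidate move; they run on whatever cores are free.
                        std::vector<std::future<int>> scores;
                        for (const auto& m : candidates)
                            scores.push_back(std::async(std::launch::async, evaluate, m));

                        // Gather the results and pick the best-scoring move.
                        int best = 0, bestScore = -1;
                        for (int i = 0; i < (int)scores.size(); ++i) {
                            int s = scores[i].get();
                            if (s > bestScore) { bestScore = s; best = i; }
                        }
                        std::cout << "best move: " << candidates[best].id
                                  << " (score " << bestScore << ")\n";
                    }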

                    • #55
                      Originally posted by Dragonlord View Post
                      Multi-core is not a solution either, unfortunately. What you gain with parallelism you lose with synchronization work. And games tend to be highly correlated. Some parts can be done in parallel, like rendering and physics, but that's as far as it gets. I think development is going in the wrong direction there. Trying to parallelize something that does not lend itself well to parallelization is a problem. For graphics and physics I do see the solution, but not for games in general.
                      So if you can't just add more threading, then what should be done?

                      • #56
                        Originally posted by L33F3R View Post
                        So if you can't just add more threading, then what should be done?
                        Then you need more brute calculating capacity. Again. Or figure out something new altogether. Some do think silicon is getting past its expiry date...

                        • #57
                          Originally posted by L33F3R View Post
                          So if you can't just add more threading, then what should be done?
                          That's why companies make a lot of money from licensing engines - someone else takes care of that little problem!
                          It seems to me that most people are going the route of pooling together a group of jobs and then scheduling them on whatever can run them (gpgpu, cpu, whatever) - but this is still new territory, and it will take a while to sort it all out. What the hardware people do will help influence everything as well.
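
                          Something like this, in very rough form (CPU worker threads only here; a real engine would add a gpgpu backend, priorities, dependencies and so on - this isn't any particular engine's scheduler):
                          Code:
                          // Bare-bones job pool: throw jobs in a queue, whichever worker is free runs them.
                          #include <algorithm>
                          #include <condition_variable>
                          #include <functional>
                          #include <iostream>
                          #include <mutex>
                          #include <queue>
                          #include <thread>
                          #include <vector>

                          class JobPool {
                          public:
                              explicit JobPool(unsigned workers) {
                                  for (unsigned i = 0; i < workers; ++i)
                                      threads_.emplace_back([this] { run(); });
                              }
                              ~JobPool() {            // drain the queue, then join the workers
                                  { std::lock_guard<std::mutex> lk(m_); done_ = true; }
                                  cv_.notify_all();
                                  for (auto& t : threads_) t.join();
                              }
                              void submit(std::function<void()> job) {
                                  { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
                                  cv_.notify_one();
                              }
                          private:
                              void run() {
                                  for (;;) {
                                      std::function<void()> job;
                                      {
                                          std::unique_lock<std::mutex> lk(m_);
                                          cv_.wait(lk, [this] { return done_ || !jobs_.empty(); });
                                          if (done_ && jobs_.empty()) return;
                                          job = std::move(jobs_.front());
                                          jobs_.pop();
                                      }
                                      job();          // whichever worker grabbed it runs it
                                  }
                              }
                              std::mutex m_;
                              std::condition_variable cv_;
                              std::queue<std::function<void()>> jobs_;
                              std::vector<std::thread> threads_;
                              bool done_ = false;
                          };

                          int main() {
                              JobPool pool(std::max(2u, std::thread::hardware_concurrency()));
                              for (int i = 0; i < 8; ++i)
                                  pool.submit([i] { std::cout << "job " << i << " done\n"; });
                          }   // the pool's destructor waits for everything to finish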

                          • #58
                            Originally posted by nanonyme View Post
                            Then you need more brute calculating capacity. Again. Or figure out something new altogether. Some do think silicon is getting past its expiry date...
                            I was going to mention that. The proof is in the clock speeds: 4.04 GHz for the POWER7, which isn't even out yet...

                            • #59
                              Originally posted by L33F3R View Post
                              I was going to mention that. The proof is in the clock speeds: 4.04 GHz for the POWER7, which isn't even out yet...
                              Well, architecture has a lot to do with it as well; I don't think anyone would conclude that a 3.8 GHz Pentium 4 HT 672 is still the fastest x86 processor out there.

                              • #60
                                Originally posted by L33F3R View Post
                                So if you can't just add more threading, then what should be done?
                                Be smart. Brute-force solutions tend to be slower than solutions with brains. A good old saying in rendering is "the fastest triangles to render are those you don't render at all". There are many tricks which reduce the workload. But here again the problem with the dated engine design I mentioned comes into play. You have to use so many tricks that you cannot get them all done properly and optimized in the short TTM (time to market) allotted to game development projects. Hence companies try to make up for shortcomings in clever design with brute force. A battle you can't win in the long run.
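
                                A trivial example of that saying (made-up types; real engines test all six frustum planes, use hierarchies, occlusion queries and so on): anything whose bounding sphere is entirely behind the camera plane never gets submitted in the first place.
                                Code:
                                // Skip objects whose bounding sphere is behind a camera plane
                                // instead of handing them to the GPU at all. Illustration only.
                                #include <iostream>
                                #include <vector>

                                struct Vec3   { float x, y, z; };
                                struct Plane  { Vec3 n; float d; };         // n*p + d = 0, n is unit length
                                struct Object { Vec3 center; float radius; };

                                float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

                                // Signed distance of the sphere's center to the plane decides the draw.
                                bool visible(const Object& o, const Plane& p) {
                                    return dot(p.n, o.center) + p.d > -o.radius;
                                }

                                int main() {
                                    Plane nearPlane{{0.f, 0.f, -1.f}, 0.f}; // camera looking down -z
                                    std::vector<Object> scene{
                                        {{0.f, 0.f, -5.f}, 1.f},            // in front of the camera
                                        {{0.f, 0.f,  5.f}, 1.f},            // behind it, never drawn
                                    };
                                    int idx = 0;
                                    for (const Object& o : scene)
                                        std::cout << "object " << idx++
                                                  << (visible(o, nearPlane) ? ": draw\n"
                                                                            : ": culled, costs nothing\n");
                                }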
