
Put the wish list for porting projects HERE...

  • Looking at the updated list, I'm seeing that Natural Selection 2 is in the "being looked at" category. I didn't know Natural Selection was going commercial. From their website, I'm seeing that they're using a new engine too. Can they sell what was originally a mod for Half-Life 1?



    • It has been known for quite some time.

      As for OpenGL 3.0 ... who cares. It's got extensions, so I can use those tricks on OGL2 hardware already if it exposes what I need. No need to fall back into the brain-dead revision-number thinking that DirectX uses.



      • Originally posted by xav1r View Post
        Looking at the updated list, I'm seeing that Natural Selection 2 is in the "being looked at" category. I didn't know Natural Selection was going commercial. From their website, I'm seeing that they're using a new engine too. Can they sell what was originally a mod for Half-Life 1?
        Yes- as long as they're not using Valve's engine for it and aren't using any assets that belong to Valve or anyone else.

        It's a definite prospect, since they're doing a full-on game. It's a good AAA title that we might be able to scoop up without much pain: they've expressed interest and stated that they fully intend to do a port, but don't have the resources (which I could provide, however they'd like to run it...) to guarantee it until after the Windows version comes out.

        I like to see that sort of thing- someone, even if it's not myself, coming through and making it happen.



        • Originally posted by Dragonlord View Post
          It has been known for quite some time.

          As for OpenGL 3.0 ... who cares. It's got extensions, so I can use those tricks on OGL2 hardware already if it exposes what I need. No need to fall back into the brain-dead revision-number thinking that DirectX uses.
          Heh... That's why I commented elsewhere that the people bitching and declaring they're going DX10 just because they didn't get Longs Peak as promised are being more than a bit silly- you only get Vista as a target, and possibly the next generation of Xbox (but NOT the current one- that should be a hint there...).

          If you want to know what the fast path actually IS on OpenGL 2.x (which was one of the main gripes they used as an excuse for acting out like they did...), just look at what you can do with ES 2.0- it's pretty close to the same thing, and with its more robust object-centric model it's more akin to what I was expecting out of Longs Peak.

          I'm disappointed, yes. But you can accomplish the same level of eye candy, with a bit more effort, in OpenGL 2.1 and DirectX 9c- making it "easier" won't do anything other than let them cut corners there or elsewhere because it's "faster" to code for now.



          • Originally posted by xav1r View Post
            BTW, Svartalf, what do you think of the newly released OpenGL 3.0 specs?
            They're "okay".

            I honestly think they should have made the Embedded Subsets the main APIs: let the CAD vendors that didn't want to move their old crufty code use the Safety Critical profile of ES 1.1, let the people that wanted fixed function use the normal profile of ES 1.1, and everyone else could use ES 2.0, like Sony and most anyone coding for a console does right now. You can honestly have three or more API edges for things- what you code to is an API, not a driver. There can be several differently-purposed API sets all comprising what you call "OpenGL".

            And honestly, do you need immediate-mode rendering operations in a game or GIS application? NO. But with OpenGL as it currently stands, you get all the crap you don't need along with what you do, and if you're not paying attention to the spec docs when you choose an API call, you can end up picking the "wrong" one for your application. The biggest gripe is that it's not "obvious" which is which. That, honestly, is bunk- it's someone not wanting to think about what they're doing while they're coding. (See a problem with that? I do.)
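            To make the "wrong call" point concrete, here's a minimal sketch (plain C against stock OpenGL- not from any particular project, and assuming the usual GL headers and extension loading are in place) of the same triangle submitted two ways: the legacy immediate-mode path that's still sitting in the API, and the buffered path a driver can actually put on the fast path:

            Code:
            /* Slow path: immediate mode. Every vertex is a separate driver call. */
            glBegin(GL_TRIANGLES);
            glVertex3f(-1.0f, -1.0f, 0.0f);
            glVertex3f( 1.0f, -1.0f, 0.0f);
            glVertex3f( 0.0f,  1.0f, 0.0f);
            glEnd();

            /* Fast path: upload once into a VBO, then draw from it each frame. */
            GLfloat verts[] = { -1.0f, -1.0f, 0.0f,
                                 1.0f, -1.0f, 0.0f,
                                 0.0f,  1.0f, 0.0f };
            GLuint vbo;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, (void *)0);
            glDrawArrays(GL_TRIANGLES, 0, 3);

            Both render the same triangle; only one of them is a call pattern the hardware is built around.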

            Longs Peak was ambitious and broke all kinds of things, much like the delta from D3D7 to D3D8, or from D3D9c to D3D10, did to DirectX programmers. It would have been nice to have what they promised- but they didn't give us that, for some legitimate reasons. Not having it doesn't relegate us to the "stone age", as the people griping about it all claim: the state machine might be "old", but it still lets you accomplish pretty much everything you need to do. Some things take a little more coding effort in OpenGL than in D3D, but other things are much, much simpler, and pretty much all the D3D functionality is there with the ARB extensions in the first place- even with D3D10, if you think about it. What's useful and relevant is there, or is now going to be there, in an easy-to-use manner.



            • What's the problem with "wrong calls"? The OpenGL API is not the huge mess the DX API is. There are traps along the way, but the same is true for DX. So how can you pick a "wrong call" in OpenGL? That's an honest question from a fellow game dev.



              • Originally posted by Dragonlord View Post
                What's the problem with "wrong calls"? The OpenGL API is not the huge mess the DX API is. There are traps along the way, but the same is true for DX. So how can you pick a "wrong call" in OpenGL? That's an honest question from a fellow game dev.
                Heh... Mostly the "wrong calls" are stupid things that'll stall the pipeline- either on all cards (but you usually find those out REAL quick...) or on just AMD's or NVidia's. It's a bit byzantine, and if you don't read the Red Book carefully you can get zapped in the oddest of ways. There are fewer gotchas than in DX, but when you find them...

                For example:

                You can map a VBO into the application's memory mappings.

                The spec says that this operation may stall the pipeline.
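                In code terms, the operation in question is just this (a minimal sketch- vbo, newVerts, and vertBytes are made-up names for illustration):

                Code:
                /* Map the VBO into the application's address space to rewrite it.
                 * Per the spec, glMapBuffer MAY stall until the GPU is done with
                 * the buffer- and a conforming driver may simply always stall. */
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                void *ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
                if (ptr) {
                    memcpy(ptr, newVerts, vertBytes);  /* write the new geometry */
                    glUnmapBuffer(GL_ARRAY_BUFFER);
                }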

                There's a certain LucasArts-published 3D game in which they took that at face value. (I know this because I worked as a sustaining engineer for one of the Big Two, trying to sort out XP and Vista bugs in their driver code for six months, some two or so years ago- I was led to believe I might get to work on Linux driver stuff later in the gig, which was the only reason I took it on... never happened, for whatever reasons the vendor had.) This had the predictable result that one of the drivers (unfortunately, the one I was chasing the problem on...) took the "may" to be a "shall", because that was easier to implement and still completely within the spec. If you stall on all map operations, you know for positive that you're not corrupting any of the VBOs, and you don't have to go to the extra step of tracking their usage by the GPU. An easy thing, really- and I might have even implemented the driver that way, not expecting anyone to do the "wrong" thing and drop their code out of the fast path.

                Because of this, the game on at least some of the levels would get dragged down to slide-show framerates on a card that should paste pretty much all comers with syrupy-smooth framerates, period. Why? It was recycling VBOs: rendering portions of the scene and then re-recycling the VBO in the middle of the frame render.

                If the studio that LucasArts had retained had simply observed that you need to wait until inter-frame time to do these operations, they could have done their VBO-recycling trick to conserve card memory and never hit a problem. In all honesty, I'm not naming names to protect the inno... er... guilty... Someone over at the studio in question ought to have known better, as they'd been doing OpenGL titles for a while before that game...

                The end result was that the driver was rewritten to track in-flight versus not-in-flight VBOs and only stall the pipeline if we knew the VBO was in use or about to be- basically going from "shall" back to "may" in the use of the map-operation stall...
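                For the curious, the anti-pattern looked roughly like this (a reconstruction for illustration with made-up names, not their actual code), along with the standard "orphaning" trick that sidesteps the stall if you really must reuse a buffer mid-frame:

                Code:
                /* Anti-pattern: remap the same VBO mid-frame while the GPU may
                 * still be reading from it- a conforming driver may stall here. */
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                void *p = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY); /* stall! */
                /* ... write the next batch, unmap, draw again ... */

                /* Safer: "orphan" the old storage first. The driver hands back
                 * fresh memory and keeps the in-flight copy alive until the GPU
                 * is done with it, so the map needn't wait. */
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                glBufferData(GL_ARRAY_BUFFER, vertBytes, NULL, GL_STREAM_DRAW);
                void *q = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);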

                There are other vicious edge cases like the aforementioned. The complainers don't seem to get that while there can be some difficulty finding the actual fast path for things (because it takes careful reading of the Red Book to avoid gotchas like this one...), it's still documented- and even with the wishy-washy definitions, it's better off than much of D3D is right now.

                Longs Peak promised to remove many nuances like that one from the API and bring a smaller, cleaner one. Knowing some of the ARB board members (one of whom was my boss at the time), I'm a bit disappointed to see them back down on this one. Not as much as some of the people screeching about it- but disappointed all the same.
                Last edited by Svartalf; 14 August 2008, 05:29 PM.



                • Yeah, I can imagine such things happening. That said, I never mix HW update code (anything moving data to GPU RAM) with render code: I always do all the update code first and then all the render code. That has prevented all sorts of "odd behaviors" so far. The only gotcha I found (which also resulted in some wtf?! over at gamedev.net) has been the annoying FBO behavior- it took me quite some time profiling around until I located that one. 64µs versus 10ms is a noticeable trap to avoid :O .
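                  In sketch form, the frame structure I mean is just this (the function names are placeholders, not my engine's actual calls):

                  Code:
                  void runFrame(void) {
                      /* Phase 1: push ALL data to the GPU first... */
                      updateVertexBuffers();  /* glBufferData / glMapBuffer work */
                      updateTextures();       /* glTexSubImage2D work */

                      /* Phase 2: ...then issue ALL the draw calls. No uploads past
                       * this point, so nothing touches a buffer the GPU is using. */
                      renderShadowPasses();
                      renderGeometryPasses();

                      swapBuffers();
                  }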

                  I doubt, though, that LP was going to change this a lot. There are so many hacks in place in graphics cards to gain speed for certain tasks that wrapping a more rigid (for the driver developers) system around them won't be easy. That said, it would be nice if some stupid traps were disarmed, like the ATI/nVidia-specific and mutually exclusive FBO render formats <.=.< . Some ARBs would be happy to get some "sane" definitions- after all, OpenGL unfortunately has a tendency toward washy definitions.



                  • Originally posted by Dragonlord View Post
                    Yeah, I can imagine such things happening. That said, I never mix HW update code (anything moving data to GPU RAM) with render code: I always do all the update code first and then all the render code. That has prevented all sorts of "odd behaviors" so far.
                    I've had to clean up some code (including in Bandits AND Ballistics) that was written without adhering to those sorts of coding "rules". I've found that it always helps to hold that line- or, if you can't, to make sure you read the Red Book's description of what you're doing and read every "may" as a "shall".


                    The only gotcha I found (which also resulted in some wtf?! over at gamedev.net) has been the annoying FBO behavior- it took me quite some time profiling around until I located that one. 64µs versus 10ms is a noticeable trap to avoid :O .
                    Ow. That hurts just to read. It's enough to toast any real framerate. So there are some mutually exclusive target formats for FBO work- nice to know. I'll start digging around on gamedev.net for the details, then...

                    I doubt, though, that LP was going to change this a lot. There are so many hacks in place in graphics cards to gain speed for certain tasks that wrapping a more rigid (for the driver developers) system around them won't be easy. That said, it would be nice if some stupid traps were disarmed, like the ATI/nVidia-specific and mutually exclusive FBO render formats <.=.< . Some ARBs would be happy to get some "sane" definitions- after all, OpenGL unfortunately has a tendency toward washy definitions.
                    Heh... It's one of those "washy" definitions that produced the problem I described. And I, too, doubt that LP would have made all that much of a difference- it would have been nice and helped in some areas of coding, but it wouldn't have fixed other things, like the stuff we're talking about right now. It's also why I scratch my head and go "wtf?" when they complain about OpenGL being "in the stone age" or "behind the times" compared to Direct3D. It's not- it's just different. Some tasks are easier in Direct3D- others are definitely NOT. In the end, I'd think the cross-platform ability would outweigh the cases where a little more coding is required, but apparently not.



                    • About the FBO thing, it's in fact fairly simple. For the main render code I use greedy color attachments- ones which grow if a larger render target is demanded by the game but never shrink. I simply do the typical viewport/scissor dance to reuse the same attachments all over the place, since I'm doing deferred rendering. At some point, though, you have to switch to a 1024 or 2048 shadow texture to do those fancy real-time shadows- and that hurt, at 10ms. The culprit was the attachment-switch calls. As it looks, FBOs do not like it at all if you attach something of a different dimension (a different color format or type is fine, though). Hence changing attachments of the same dimensions = fast; changing attachments of different sizes = slow. I then switched to an FBO manager handing out FBOs for a given size request, ensuring that this case never happens. Switch costs are now down to 64µs per FBO change, which is acceptable.

                      The funny part of this story is that one of the moderators at gamedev nearly tipped over, since on Vista an FBO change is hellishly slow but changing attachments of different sizes is not. So I now use 1 shared FBO on Windows and managed FBOs on Linux.
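                      The manager itself is nothing fancy- in sketch form (the names and the fixed-size cache are illustrative, error handling elided, EXT_framebuffer_object entry points assumed), it's just a lookup from requested dimensions to a lazily created FBO:

                      Code:
                      #define MAX_FBO_SIZES 16

                      typedef struct {
                          int width, height;
                          GLuint fbo;
                      } FBOEntry;

                      static FBOEntry cache[MAX_FBO_SIZES];
                      static int cacheCount = 0;

                      /* Hand out one FBO per attachment size so a given FBO only
                       * ever sees attachments of one dimension- formats may still
                       * vary, which is the cheap case. */
                      GLuint acquireFBO(int width, int height) {
                          for (int i = 0; i < cacheCount; i++)
                              if (cache[i].width == width && cache[i].height == height)
                                  return cache[i].fbo;

                          glGenFramebuffersEXT(1, &cache[cacheCount].fbo);
                          cache[cacheCount].width = width;
                          cache[cacheCount].height = height;
                          return cache[cacheCount++].fbo;
                      }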

                      The second one is the attachment woes. Not sure how this is with newer cards, but the problem is that my nVidia cards only allow attaching color targets to FBOs that are of a GL_NV_* type- anything else yields an incomplete FBO. On ATI, on the other hand, GL_NV_* of course yields an incomplete FBO, and there you need the common GL_RGB* formats and company to render to. It can be easily coped with by doing an FBO-incompleteness test at startup, marking the valid formats. No idea, though, why nVidia had the glorious idea to mess up common sense in the first place.
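                      The startup probe is just a dummy attach-and-check loop- a sketch using the EXT_framebuffer_object entry points of the day; the format list and markFormatUsable are illustrative, not anyone's actual code:

                      Code:
                      /* Probe which color formats actually yield a complete FBO on
                       * this driver instead of hard-coding vendor assumptions. */
                      GLenum candidates[] = { GL_RGBA8, GL_RGB8, GL_RGBA16F_ARB };
                      GLuint fbo, tex;
                      glGenFramebuffersEXT(1, &fbo);
                      glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

                      for (int i = 0; i < 3; i++) {
                          glGenTextures(1, &tex);
                          glBindTexture(GL_TEXTURE_2D, tex);
                          glTexImage2D(GL_TEXTURE_2D, 0, candidates[i], 64, 64, 0,
                                       GL_RGBA, GL_FLOAT, NULL);
                          glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT,
                                                    GL_COLOR_ATTACHMENT0_EXT,
                                                    GL_TEXTURE_2D, tex, 0);
                          if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
                                  == GL_FRAMEBUFFER_COMPLETE_EXT)
                              markFormatUsable(candidates[i]);  /* illustrative */
                          glDeleteTextures(1, &tex);
                      }
                      glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
                      glDeleteFramebuffersEXT(1, &fbo);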
