
Valve's L4D2 Linux Presentation Slides


  • #11
    On another note related to the slides, I'd like to see what they did for threading GL. I'm assuming they're using the newer GLX_ARB_create_context extension to make shared contexts, pre-assigning them to any threads doing rendering, and then synchronizing draw calls to the display in the main thread. They might be doing something more funky and creative, though.
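    For what it's worth, binding a shared context on a worker thread with that extension looks roughly like the sketch below. This is purely my own illustration, not anything from the slides; create_shared_context is a made-up helper name and error handling is omitted.

    ```c
    /* Minimal sketch (not Valve's code): create a GL 3.x context that shares
     * objects with an existing one via GLX_ARB_create_context, for use on a
     * worker thread. Assumes the Display and GLXFBConfig already exist. */
    #include <GL/glx.h>
    #include <GL/glxext.h>   /* PFNGLXCREATECONTEXTATTRIBSARBPROC */

    static GLXContext create_shared_context(Display *dpy, GLXFBConfig fbc,
                                            GLXContext share_with)
    {
        PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB =
            (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddressARB(
                (const GLubyte *)"glXCreateContextAttribsARB");

        static const int attribs[] = {
            GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
            GLX_CONTEXT_MINOR_VERSION_ARB, 0,
            None
        };
        /* Contexts created against 'share_with' see the same textures, buffers
         * and shaders, so worker threads can build resources while the main
         * thread keeps the window's context and does the actual swap. */
        return glXCreateContextAttribsARB(dpy, fbc, share_with, True, attribs);
    }

    /* On a render thread, before issuing any GL calls:
     *   glXMakeCurrent(dpy, drawable, create_shared_context(dpy, fbc, main_ctx));
     */
    ```

    Whether they actually do it this way is, of course, just my speculation.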

    Generally the steps I go through these days to get multi-threaded GL rendering to work go something like this (a rough sketch of the pool handling in steps 5-9 follows the list):

    1) Create dummy context to get access to extensions
    2) Create real context for device using ARB_create_context extension (requires GL 3, basically)
    3) Kill dummy context
    4) Create shared context for main display window (shared with context from step #2)
    5) Create a per-thread context cache, generally just a TLS variable
    6) Create a context pool to accelerate step 8 in the common case
    7) Create several shared GL contexts to store in the context pool
    8) Create a function to check that a context has been bound to the current thread, and if not, pull one off the context queue and bind it; signal main thread and block if the pool is empty, wait for main thread to create a new shared context that we can bind
    9) Ensure that all threads that are ending return their cached context (if they have one) to the pool
    10) Write letters to Khronos asking them to just give us explicit device, surface, and context objects like they promised for Longs Peak
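
    Here is what steps 5-9 look like in rough C. Again, this is my own made-up sketch: acquire_thread_context, release_thread_context and the fixed-size pool are just for illustration, and the main thread is assumed to refill the pool with freshly created shared contexts under the same lock.

    ```c
    /* Rough sketch of steps 5-9: a TLS-cached context per thread plus a shared
     * pool, using pthreads. Names are invented for illustration; the main
     * thread is expected to push newly created shared contexts into pool[]
     * (holding pool_lock) and signal pool_cond when it does. */
    #include <pthread.h>
    #include <GL/glx.h>

    #define POOL_MAX 8

    static GLXContext      pool[POOL_MAX];
    static int             pool_count;     /* contexts currently available */
    static pthread_mutex_t pool_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  pool_cond = PTHREAD_COND_INITIALIZER;

    /* Step 5: the per-thread context cache, just a TLS variable. */
    static __thread GLXContext tls_ctx;

    /* Step 8: make sure the calling thread has a context bound, pulling one off
     * the pool and blocking until the main thread provides more if it's empty. */
    GLXContext acquire_thread_context(Display *dpy, GLXDrawable draw)
    {
        if (tls_ctx)                        /* common case: already have one */
            return tls_ctx;

        pthread_mutex_lock(&pool_lock);
        while (pool_count == 0)             /* wait for the main thread to refill */
            pthread_cond_wait(&pool_cond, &pool_lock);
        tls_ctx = pool[--pool_count];
        pthread_mutex_unlock(&pool_lock);

        glXMakeCurrent(dpy, draw, tls_ctx);
        return tls_ctx;
    }

    /* Step 9: exiting threads hand their cached context back to the pool. */
    void release_thread_context(Display *dpy)
    {
        if (!tls_ctx)
            return;
        glXMakeCurrent(dpy, None, NULL);    /* unbind before another thread uses it */
        pthread_mutex_lock(&pool_lock);
        pool[pool_count++] = tls_ctx;
        pthread_cond_signal(&pool_cond);
        pthread_mutex_unlock(&pool_lock);
        tls_ctx = NULL;
    }
    ```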

    The point of the separate context in step 4 is that you sometimes need to destroy and recreate your main window. Since OpenGL oh-so-wonderfully ties your device context (which controls the lifetime of your GPU objects) and the output window into a single object, there's no way to recreate an output window without also destroying all your textures, shaders, buffers, etc., unless you create two shared contexts, which is a relatively new feature and not yet supported everywhere (Mesa is only just getting support for it in 8.1, iirc). Again, I think it's obvious that the D3D approach is much superior here: separate objects for the device and swap chain, which are explicitly managed by the developer.
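    To make the step-4 point concrete, the window-recreation shuffle ends up looking something like the sketch below. Placeholder names again: recreate_window stands for whatever your windowing code does, and create_shared_context is the hypothetical helper from the earlier sketch.

    ```c
    /* Sketch of the two-shared-contexts trick: a long-lived "resource" context
     * owns the share group, and the visible window gets its own context that
     * shares with it. The window and its context can then be destroyed and
     * rebuilt without losing textures, shaders or buffers. */
    #include <GL/glx.h>

    extern GLXContext create_shared_context(Display *dpy, GLXFBConfig fbc,
                                            GLXContext share_with);   /* see sketch above */
    extern Window     recreate_window(Display *dpy, GLXFBConfig fbc); /* app-specific */

    void rebuild_output_window(Display *dpy, GLXFBConfig fbc,
                               GLXContext resource_ctx,  /* never destroyed */
                               GLXContext *window_ctx,   /* in/out: the window's context */
                               Window *win)              /* in/out: the output window */
    {
        /* Only the window's context dies; everything in the share group survives
         * because resource_ctx still holds a reference to it. */
        glXMakeCurrent(dpy, None, NULL);
        glXDestroyContext(dpy, *window_ctx);
        XDestroyWindow(dpy, *win);

        *win        = recreate_window(dpy, fbc);   /* new size, visual, fullscreen, ... */
        *window_ctx = create_shared_context(dpy, fbc, resource_ctx);
        glXMakeCurrent(dpy, *win, *window_ctx);
    }
    ```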



    • #12
      Originally posted by AJSB View Post
      Thanks very much for the slides w/o the darn watermark

      Interesting stuff added to what we already knew... only two more points...

      1. The full title of the presentation is based on the title of the Stanley Kubrick movie "Dr. Strangelove"

      2. Valve is hiring... especially OpenGL, kernel and driver programmers... I have a feeling that soon we will have a new distro called SteamOS... both for PCs and for the Steam console
      Wouldn't mind. I don't know much about game development, but I think Linux is an interesting platform, since you can streamline it
      from head to toe to deliver the best performance. Getting all these people together to exchange ideas that land in the next driver and kernel releases seems a lot more interesting than having Windows on one side (which gets a new display server or kernel every three years), the drivers on the other, and trying to fit your game in between them.

      Let the couch people have their Linux-based consoles; for the grown-ups there is Steam for Linux (any distro) with all the awesome games.

      Really like seeing them hiring people, and I think if you are a talented programmer it is hard to resist, with Valve being a company like this:
      [embedded YouTube video]


      Mr. Newell can also be quite persuasive when you are in his office: [embedded video]
      Last edited by blackout23; 12 August 2012, 05:42 PM.



      • #13
        Originally posted by elanthis View Post
        It's been some time since I've actually had a Linux desktop machine (as you may have already picked up on, I became very disillusioned with Linux as a non-toy desktop OS last year), so I don't have much of an opinion on Mesa right now. What I've read and seen in the code all looks fairly good, but it's hard to say without actually using it for anything serious.

        I am impressed with the general speed of development on Mesa, especially from the Intel team lately, and I enjoy keeping up on the git changesets. I want to make clear that while I very strongly despise the Intel Windows GL driver, I harbor no disrespect for the Intel Linux driver team. Seems like all good work so far.

        I have been mulling getting a small Mac Mini-like machine with Ivy Bridge (maybe the Giada i53 if it's available soon) specifically for Linux, as I'd like to do some porting work, so I'll be experimenting with Mesa's features and quality quite a bit then. If I find problems there, though, expect bug reports rather than forum bitching.

        The only problem I'm aware of with Intel's Linux support right now is that I'd really like Intel to switch to Gallium, so that any "better than OpenGL but not D3D" API/state-tracker experiments I might decide to try could actually be done with Intel hardware and not just softpipe/llvmpipe.

        [edit: I can't remember if you're with AMD or not, but if you are, I also had good experiences with r600. It was definitely very buggy, and the DRI/Mesa model makes it way too easy for bugs to cause kernel oopses, but this was over a year ago, so again I don't have an educated opinion on today's state of the driver.]
        Probably your best bet if you want to experiment with Gallium is AMD. There are some tiny little machines (e.g. Zotac boxes) that are cheap (and probably slow), but if you get serious about your "better than OpenGL but not D3D" idea you'll probably buy something better.



        • #14
          Originally posted by FourDMusic View Post
          Michael, please add some transparency to the "Phoronix" watermark! It's covering parts of the text on the slides. And people have had trouble with the watermark covering up pictures in the past, too.

          Thanks a lot

          Here's the link to the original slides in PDF from the khronos website: http://www.khronos.org/assets/upload...RAPH_Aug12.pdf
          Thank you♥, I wasn't in the mood to open up (and save) all those images (that are covered by the Phoronix logo :@ )



          • #15
            Originally posted by elanthis View Post
            It's been some time since I've actually had a Linux desktop machine (as you may have already picked up on, I became very disillusioned with Linux as a non-toy desktop OS last year), so I don't have much of an opinion on Mesa right now. What I've read and seen in the code all looks fairly good, but it's hard to say without actually using it for anything serious.
            And who cares about a stupid troll's thoughts? If there's a toy OS, it's Windows, and this has been proven many times (nobody serious puts a GUI into ring 0!). It was also proven you're a dumb troll:



            The sad thing is that you're still trolling even after being proven wrong.
            Last edited by kraftman; 13 August 2012, 04:00 AM.



            • #16
              Originally posted by elanthis View Post
              Again, I think it's obvious that the D3D approach is much superior here: separate objects for the device and swap chain, which are explicitly managed by the developer.
              It's not obvious when D3D is so much slower. While D3D was only ever successful on Windows, it seems OpenGL will become the preferred API to use even on M$'s OS.



              • #17
                Originally posted by kraftman View Post
                It's not obvious when D3D is so much slower. While D3D was only ever successful on Windows, it seems OpenGL will become the preferred API to use even on M$'s OS.
                Funny, I've seen the exact opposite. In my experience, on Windows, OGL is slower than DX. [At least as far as OGL 3.0 and DX 9.0c on XP go.]



                • #18
                  Originally posted by gamerk2 View Post
                  Funny, I've seen the exact opposite. In my experience, on Windows, OGL is slower than DX. [At least as far as OGL 3.0 and DX 9.0c on XP go.]
                  Possibly, but I'm talking about L4D2, which was optimized for OpenGL. Normally games are optimized for D3D.



                  • #19
                    Originally posted by kraftman View Post
                    Possibly, but I'm talking about L4D2, which was optimized for OpenGL. Normally games are optimized for D3D.
                    On a different OS, in a different state of development. Do we know for sure every D3D graphical feature has been implemented? How much of the increase is due to the different OS? Does the choice of scheduler make a difference? AMD or NVIDIA?

                    So yeah, I never take the results of one benchmark as meaning anything. Doing so is just silly.

                    Now, it may turn out that Linux is faster than Windows. It's possible [and frankly, it *should* be, given how much Windows does in the background]. But based on years of data, I find it very unlikely OGL would run faster than DX, at least on a Windows-based OS.



                    • #20
                      Originally posted by kraftman View Post
                      And who cares about a stupid troll's thoughts? If there's a toy OS, it's Windows, and this has been proven many times (nobody serious puts a GUI into ring 0!). It was also proven you're a dumb troll:
                      Linking to your own baseless post does not show proof.

                      It does show me that you are now the newest person to be added to my ignore list with the likes of Q and crazycheese and the other loonies that Phoronix for some reason attracts. Bye!

