John R. Hall: Programming Linux Games http://web.archive.org/web/200407161...-this-time.pdf
How Valve Made L4D2 Faster On Linux Than Windows
Originally posted by artivision: 1. OpenGL is faster not because of Linux, but because parts of the protocol are hard-coded inside GPUs (ARB). And you don't need assembly to accelerate.
Originally posted by artivision: 2. OpenGL is super-threaded; it can run on many instruction sets at the same time.
4. OpenGL exists on every GPU driver on the planet and there are no compatibility problems. Compilers don't matter; developers write GLSL code. Even the precompiled, preoptimized bytecode of a console (PS3) can be turned back into GLSL and then into different bytecode, all statically. Wine does that.
Comment
This is me, a non-techie, translating what was said; please correct me.
I think in this case it's OK for Valve to have an abstraction layer; it will help other Source engine developers press that "lazy port" button and get Linux games moved over faster and more often.
It also seems they are making the main parts native and improving the layer, so unless your CPU is pointlessly weak compared to your graphics card, there should be no difference in performance. Yay.
The only thing to cross fingers on now is how well they built that layer.
Comment
Originally posted by artivision: 1. OpenGL is faster not because of Linux, but because parts of the protocol are hard-coded inside GPUs (ARB). And you don't need assembly to accelerate.
2. OpenGL is super-threaded; it can run on many instruction sets at the same time.
In any case, OpenGL is not tied to an instruction set. It's an API. It's just a collection of C function prototypes and #define's. That's it. That is, in fact, the exact same thing Direct3D is, except replace "C" with "C++" and "#define's" with "COM" (which is basically just a specification for standardized C++ vtable layouts for language interop, excluding the other unrelated things also confusingly called COM and which aren't used in D3D).
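To make the "collection of C function prototypes and #define's" point concrete, here is a rough sketch of the two API shapes. The GL lines mirror the real <GL/gl.h>; the ExampleDevice names are invented purely to show the COM-style vtable layout and are not real D3D types.
Code:
/* OpenGL's "API": plain C prototypes plus #define'd tokens. */
typedef unsigned int GLenum;
typedef int          GLint;
typedef int          GLsizei;
#define GL_TRIANGLES 0x0004
void glDrawArrays(GLenum mode, GLint first, GLsizei count);

/* Direct3D's "API": COM interfaces, i.e. structs whose first field points
   to a table of function pointers (the standardized vtable layout described
   above). Every call is an indirect call through that table. */
typedef struct ExampleDevice ExampleDevice;
typedef struct ExampleDeviceVtbl {
    void (*Draw)(ExampleDevice *self, unsigned vertexCount, unsigned startVertex);
} ExampleDeviceVtbl;
struct ExampleDevice {
    const ExampleDeviceVtbl *lpVtbl;
};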
Unfortunately, that OpenGL C API imposes some choices from the 1980s that are very much no longer relevant on today's hardware. Like how your device context and render surface are bound together completely. Or how render surface options like depth buffers are bound at surface creation time, an artifact of the pre-Vista/pre-OSX/pre-X11-compositing era when multiple processes literally shared a single desktop framebuffer, and had to negotiate the layout and features thereof. Or how OpenGL uses magic globals internally so that the entire API is not thread-safe or possible to make thread-safe without pulling voodoo magic tricks inside the window-system interop API and cumbersome client-side hacks to use that voodoo. Or how textures are still treated as mutable objects as if they aren't bound up in the GPU and possibly still being used as a render source or target while you're trying to modify them. etc. etc. etc.
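A minimal sketch of that implicit "current context" global, using GLX; it assumes dpy, win, and ctx were created elsewhere and is not a complete program.
Code:
#include <GL/gl.h>
#include <GL/glx.h>

void render_on_this_thread(Display *dpy, GLXDrawable win, GLXContext ctx)
{
    /* Every GL call is dispatched against a hidden, thread-local "current
       context"; glClear() itself names no context and no drawable -- that
       state lives in the library's magic global. */
    glXMakeCurrent(dpy, win, ctx);      /* bind context + drawable to THIS thread */

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glXSwapBuffers(dpy, win);

    /* A context can be current on only one thread at a time, so moving work
       to another thread means unbinding here and re-binding there -- the
       window-system interop juggling described above. */
    glXMakeCurrent(dpy, None, NULL);
}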
3. OpenGL is royalty-free; there are no fees to implement it. A free specification (how it works), not free code.
4. OpenGL exists on every GPU driver on the planet and there are no compatibility problems. Compilers don't matter; developers write GLSL code. Even the precompiled, preoptimized bytecode of a console (PS3) can be turned back into GLSL and then into different bytecode, all statically. Wine does that.
OpenGL does not exist for every GPU driver on the planet. Take the crazy shit that's in the Nintendo 3DS, for instance. It doesn't have shaders as we know them (it has programmable features, but they're not compatible with the way GLSL or HLSL work), it does not allow for customizable vertex attributes, and it actually requires that you preload a scene in a proprietary memory layout for the hardware to even consider rendering it. It can do cell shading and custom lighting and other cool fancy GPU features, but sure as hell not with OpenGL.
In any case, yes, you can translate between binary and source. You can disassemble CPU assembly to C, you can disassemble GPU assembly to GLSL, HLSL, or any other source language you invent. That's just a basic fact of source-binary translations in the compiler world. You'll necessarily lose some information, such as comments, formatting, and some of the structure, as the compiler throws away the "noise," but you can recover the core of any compiled program.
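An illustration of that point in plain C rather than GPU code; the "decompiled" function below is hypothetical and just shows roughly what survives the round trip.
Code:
#include <math.h>

/* What the original author wrote: */
/* Convert a linear color value to gamma space (cheap 1/2.2 approximation). */
static float linear_to_gamma(float linear)
{
    return powf(linear, 1.0f / 2.2f);
}

/* Roughly what a decompiler recovers from the compiled binary: the logic
   survives, but the names, the comments and the exact expression are gone. */
static float sub_401230(float a1)
{
    return powf(a1, 0.45454547f);
}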
5. http://nvidia.fullviewmedia.com/gtc2...5-B-S0610.html No one will give another piece of their technology to Microsoft, because the monopoly has lost in this area. D3D is over.
The big NVIDIA advances they keep crowing about -- bindless graphics, for instance -- are one of the very reasons that those of us who actually work with graphics fucking hate OpenGL. Yes, NVIDIA gets it, and has extensions to fix it (performance-wise; not horrible-API-wise). They've had these extensions for many years. But Khronos has never accepted them into OpenGL proper. AMD, Intel, and others do not support those extensions, which is the problem with extensions and why nobody wants to deal with them. They are only useful if you're developing an in-house visualization app for a specific hardware spec, and are borderline useless if you want to ship an app to millions of consumers with a wide variety of hardware configurations.
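For reference, a hedged sketch of what the bindless path looks like with NVIDIA's GL_NV_bindless_texture extension. It assumes a current GL context, a bound shader program, a texture object created elsewhere, and a driver that exports these entry points -- and it only runs on NVIDIA hardware, which is exactly the portability complaint above.
Code:
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>   /* NV_bindless_texture tokens and prototypes */

void use_bindless_texture(GLuint tex, GLint uniform_location)
{
    /* Instead of binding the texture to a numbered unit every draw,
       ask the driver for a 64-bit handle once... */
    GLuint64 handle = glGetTextureHandleNV(tex);

    /* ...make it resident so the GPU may dereference it directly... */
    glMakeTextureHandleResidentNV(handle);

    /* ...and pass the raw handle to the currently bound program as a uniform. */
    glUniformHandleui64NV(uniform_location, handle);
}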
Comment
Originally posted by Wildfire: This is a bit like saying Assembler is faster than C++ because Assembler can be directly translated to CPU instructions, while C++ needs a more complex compiler. Any modern optimizing compiler probably generates faster code than your hand-coded Assembler. So I guess it's down to how good you are at writing OpenGL vs. Direct3D, how good the compiler is at optimizing these instructions, and how good the driver is at translating calls to its OpenGL/Direct3D API into GPU instructions.
Uh, what? Correct me if I'm wrong, but AFAIK the OpenGL API doesn't know anything about threads. In fact, if my memory serves, all calls to an OpenGL context must happen in the same thread (the one that created that context). The GPU takes care of parallel execution.
Uh... developers write code in their language of choice, using the OpenGL API provided by that language or one of its libraries. GLSL (OpenGL Shading Language) is used to write shaders.
2. I was speaking of shaders (compute or not). PS3 runs with 1 channel on RSX (200macGflops) and 3 channels on CELL (6 SPEs = 1.8macTflops). As for general code, it is multi-threaded.
3. The graphics code is written in GLSL; the general code is for a specific CPU. The difference is that D3D exists only on x86 and, in the future, on ARM32.
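A minimal sketch of what "the graphics code is written in GLSL" means in practice: the shader ships as plain source text and is compiled at runtime by whatever driver the user happens to have. It assumes a current GL 2.0+ context, and error handling is trimmed to a single check.
Code:
#include <stdio.h>
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

static const char *fragment_src =
    "#version 120\n"
    "uniform vec4 tint;\n"
    "void main() {\n"
    "    gl_FragColor = tint;\n"
    "}\n";

GLuint compile_fragment_shader(void)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &fragment_src, NULL);
    glCompileShader(shader);            /* the user's *driver* compiles it here */

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "GLSL compile failed: %s\n", log);
    }
    return shader;
}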
Comment
Originally posted by elanthis: No, they aren't. There is no "OpenGL protocol." No such thing exists. The methods by which a GPU operates are closer to D3D10/11, which is precisely why Microsoft rewrote the API instead of carrying D3D9 forward, and also why the internals of Gallium3D look a lot like D3D10/11 and why there's a huge API translation layer necessary to accelerate OpenGL using that infrastructure.
Super-threaded doesn't mean what you think it means. Not even close.
In any case, OpenGL is not tied to an instruction set. It's an API. It's just a collection of C function prototypes and #define's. That's it. That is, in fact, the exact same thing Direct3D is, except replace "C" with "C++" and "#define's" with "COM" (which is basically just a specification for standardized C++ vtable layouts for language interop, excluding the other unrelated things also confusingly called COM and which aren't used in D3D).
Unfortunately, that OpenGL C API imposes some choices from the 1980s that are very much no longer relevant on today's hardware. Like how your device context and render surface are bound together completely. Or how render surface options like depth buffers are bound at surface creation time, an artifact of the pre-Vista/pre-OSX/pre-X11-compositing era when multiple processes literally shared a single desktop framebuffer, and had to negotiate the layout and features thereof. Or how OpenGL uses magic globals internally so that the entire API is not thread-safe or possible to make thread-safe without pulling voodoo magic tricks inside the window-system interop API and cumbersome client-side hacks to use that voodoo. Or how textures are still treated as mutable objects as if they aren't bound up in the GPU and possibly still being used as a render source or target while you're trying to modify them. etc. etc. etc.
Well, except for patents. In any event, Direct3D is in the same boat. Legally, there is nothing Microsoft can do to stop anyone from implementing it (aside from the exact same patent issues that affect OpenGL). This is why there are, in fact, various non-Microsoft implementations of D3D: software translation layers, Wine, Gallium, etc.
There are many, MANY compatibility problems. Porting OpenGL apps between OSes and hardware can be a huge pain in the ass. NVIDIA's GLSL compiler is not identical to AMD's, for instance, and code that you write on one is quite likely to either not compile or to compile but render incorrectly on the other. The compilers absolutely do matter. Again, this is a point in D3D's favor, where there is a single compiler frontend in the D3D library itself, and the driver vendors are fed pre-compiled intermediary code, which eliminates shader language interoperability problems (see the sketch after this quote).
OpenGL does not exist for every GPU driver on the planet. Take the crazy shit that's in the Nintendo 3DS, for instance. It doesn't have shaders as we know them (it has programmable features, but they're not compatible with the way GLSL or HLSL work), it does not allow for customizable vertex attributes, and it actually requires that you preload a scene in a proprietary memory layout for the hardware to even consider rendering it. It can do cell shading and custom lighting and other cool fancy GPU features, but sure as hell not with OpenGL.
In any case, yes, you can translate between binary and source. You can disassemble CPU assembly to C, you can disassemble GPU assembly to GLSL, HLSL, or any other source language you invent. That's just a basic fact of source-binary translations in the compiler world. You'll necessarily lose some information, such as comments, formatting, and some of the structure, as the compiler throws away the "noise," but you can recover the core of any compiled program.
The purpose of that video is to tell you why NVIDIA is better than AMD (NVIDIA already supports OpenGL 4.3, has extensions like bindless graphics, etc.), not why OpenGL is better than competing APIs. Demos like that have existed on D3D 11 for a couple years, since D3D has had compute shaders, bindless graphics, texture views, and so on for a long time.
The big NVIDIA advances they keep crowing about -- bindless graphics, for instance -- are one of the very reasons that those of us who actually work with graphics fucking hate OpenGL. Yes, NVIDIA gets it, and has extensions to fix it (performance-wise; not horrible-API-wise). They've had these extensions for many years. But Khronos has never accepted them into OpenGL proper. AMD, Intel, and others do not support those extensions, which is the problem with extensions and why nobody wants to deal with them. They are only useful if you're developing an in-house visualization app for a specific hardware spec, and are borderline useless if you want to ship an app to millions of consumers with a wide variety of hardware configurations.
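A sketch of the D3D side of the compiler point quoted above: HLSL goes through Microsoft's single frontend once and comes out as vendor-neutral bytecode, unlike the GLSL path where each vendor's driver compiles the source text itself. Windows-only, it assumes the d3dcompiler library is available, and the function name and target profile here are just one plausible choice.
Code:
#include <stdio.h>
#include <d3dcompiler.h>   /* link with d3dcompiler.lib */

static const char hlsl_src[] =
    "float4 main() : SV_Target { return float4(1.0, 0.0, 0.0, 1.0); }";

ID3DBlob *compile_pixel_shader(void)
{
    ID3DBlob *bytecode = NULL;
    ID3DBlob *errors   = NULL;

    HRESULT hr = D3DCompile(hlsl_src, sizeof hlsl_src - 1,
                            NULL,              /* source name, only used in error text */
                            NULL, NULL,        /* no macros, no #include handler       */
                            "main", "ps_5_0",  /* entry point and target profile       */
                            0, 0, &bytecode, &errors);
    if (FAILED(hr)) {
        if (errors) {
            /* COM spelled out in C: an indirect call through the vtable. */
            fprintf(stderr, "%s\n",
                    (const char *)errors->lpVtbl->GetBufferPointer(errors));
            errors->lpVtbl->Release(errors);
        }
        return NULL;
    }
    /* `bytecode` is the blob later handed to CreatePixelShader(); no
       per-vendor HLSL parser ever sees the source text. */
    return bytecode;
}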
2. For the last point, I just said that Nvidia and id Tech 6 ditched Microsoft in order to apply their own technology, and that's because OpenGL is open. If they did it for D3D, part of the technology would be Microsoft's.
3. For the technical part read my explanation above.
Comment
Originally posted by entropy: I've been using W7 for gaming for two years and have never experienced a single crash. It just behaves fine.
I'm a Linux/UNIX guy but claiming W7 is a bad OS seems to be heavily biased, really.
Comment
Originally posted by artivision: 1. OpenGL is faster not because of Linux, but because parts of the protocol are hard-coded inside GPUs (ARB). And you don't need assembly to accelerate.
Comment