Intel OpenGL Performance: OS X vs. Windows vs. Linux

  • jrch2k8
    replied
    Originally posted by boast View Post
    So higher opengl version = higher performance? I see...
    Not necessarily. It just means the driver is using a codepath in GL4 that benefits performance.

    For a simple example: if you emulate tessellation with shaders when using GL3, but use the actual dedicated tessellation hardware when GL4 is present, just switching to GL4 lets you do the same work hardware-accelerated. (You can't do that without GL4-class silicon because the units don't exist in earlier hardware, not because of the API version itself.) Another example: GL4-class hardware can support fp64 (at least some of it, not sure if all), so your shader compiler can process fp64 data in one cycle instead of the two fp32 operations (one per cycle, theoretically) needed on GL3-class hardware, which will provide a nice speedup in your shaders, etc. (It's more complex than this, but you get the idea.)
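    The version-dependent codepath jrch2k8 describes can be sketched in C. This is illustrative only; the function names are mine, not from any real driver, and in a real renderer the version would come from glGetIntegerv(GL_MAJOR_VERSION, ...) after context creation.

    ```c
    /* Sketch: same draw request, different codepath depending on the
     * GL major version the driver reports. Names are illustrative. */
    #include <stdio.h>

    /* GL4-class silicon has dedicated tessellation units; GL3-class does not. */
    static int use_hw_tessellation(int gl_major)
    {
        return gl_major >= 4;
    }

    static void draw_terrain(int gl_major)
    {
        if (use_hw_tessellation(gl_major))
            puts("GL4 path: dedicated tessellation hardware");
        else
            puts("GL3 path: tessellation emulated in shaders (slower)");
    }

    int main(void)
    {
        /* Hard-coded here so the sketch runs standalone; a real driver
         * would query the context instead. */
        draw_terrain(3);
        draw_terrain(4);
        return 0;
    }
    ```

    The point is that the speedup comes from the hardware path the newer API exposes, not from the API number itself.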



  • russofris
    replied
    Originally posted by nir2142 View Post
    STOP DOING TESTS JUST ON A MAC AND DO THEM ON A PC AND A MAC IF YOU WANT OSX VS WINDOWS VS LINUX.

    And please stop telling me that it's the same hardware, because I know it's the same hardware, BUT these are not true benchmarks.

    And why don't you create benchmarks between Windows and Linux on a PC and install the official NVIDIA or ATI drivers (NOT Mesa)?
    My hope is that Michael is doing it to annoy you, so that you will move to another forum. I hear that the AnandTech forums are full of like-minded individuals. You could go there. Let me help you. Here's a link. See you around.




  • boast
    replied
    Originally posted by artivision View Post
    1. The Intel HD 4000 is fast. 16 cores * 4 shaders (64-bit) or 8 shaders (32-bit) * FMAC * 1.25 GHz = 170 GFLOPS fp64 (NVIDIA comparison) or 340 GFLOPS fp32 (AMD comparison) or 500 MAC GFLOPS (AMD 6000 series and below, without FMAC).

    2. On Linux it is a little slower. Not because of Linux, but because of OpenGL version support, which is newer on Windows.

    3. On Windows it's only a little newer: yes, it can do OpenGL 3.1 and 4.0, but not 3.2, 3.3, 4.1, 4.2, or 4.3. And it's not a year away; Intel works faster now.

    4. Prefer free software. That way all of them will submit to us.
    So higher opengl version = higher performance? I see...



  • artivision
    replied
    Originally posted by jrch2k8 View Post
    1.) No. In the case of DX it is quite different (for most drivers), and in the OpenGL case you have many variants with platform-specific extensions (AGL, WGL, GLX, among others); in the case of WGL, depending on the driver, it is emulated over DX. So a port takes some analysis depending on its source and destination platforms (this should not happen, but it is like that for many reasons).

    2.) No. Data (API calls, but whatever) from a game is not OS-independent at all (maybe textures), nor is it tool-independent, nor is it independent of the OS graphics stack. You seem to assume OpenGL is a language or some sort of proto-IR, but in the real world it is a library that is amazingly flexible and is used in conjunction with many languages (mostly C++/ASM on x86/ARM), and the driver needs to be very smart about what it can and can't do (or emulate) on each OS. Even if it is true that every OS can manage the hardware, they do it extremely differently (1:1 translation only happens in movies; ID4 comes to mind), which sometimes helps and sometimes forces you to rethink a million lines of code.

    Additionally, you assume that every GL API call is a direct GPU instruction. No: OpenGL is hardware-agnostic (which makes it more complex), so you can use CPUs/GPUs/preprocessors/clusters/etc., and many of those are not possible on Windows while on Linux they are perfectly standard (though not in the OSS drivers for now). You also wrongly assume that every OS API call does the same thing, is called the same, and performs the same, which is not true (it's like saying an F-35 and a helicopter should be the same because both fly; there are many good sites that explain this deeply, google it). Even an algorithm that is efficient on Windows can be terribly slow on Linux/Mac compared to a modified algorithm using that OS's native facilities (many, many examples of this; google is your friend), and this mostly drives you to rewrite half of your GLSL interpreter to try to find a middle ground between OSes.

    And just to name a few more factors that force you to rethink most of that code to meet a performance expectation: the filesystem, the CPU scheduler, vectorization, the I/O subsystem, latency, memory handling, interrupt handling, and general OS flexibility (Windows pretty much allows any dirty hack you can think of, where Linux aborts the compile or segfaults you out), and many more.

    3.) Please explain this suspended-thread thing to me (why you think it is so important), because you have something like six posts going on about it, and after ten years of developing threaded apps (C++) for Linux I have never found a technical reason to suspend threads in efficient code (I always design my apps to be thread-safe, small-portioned, atomics-based, etc.), and in my Windows days I don't remember using them either. So I would like an example or something to get your point here.

    I agree with most of it.



  • artivision
    replied
    1. The Intel HD 4000 is fast. 16 cores * 4 shaders (64-bit) or 8 shaders (32-bit) * FMAC * 1.25 GHz = 170 GFLOPS fp64 (NVIDIA comparison) or 340 GFLOPS fp32 (AMD comparison) or 500 MAC GFLOPS (AMD 6000 series and below, without FMAC).

    2. On Linux it is a little slower. Not because of Linux, but because of OpenGL version support, which is newer on Windows.

    3. On Windows it's only a little newer: yes, it can do OpenGL 3.1 and 4.0, but not 3.2, 3.3, 4.1, 4.2, or 4.3. And it's not a year away; Intel works faster now.

    4. Prefer free software. That way all of them will submit to us.



  • jrch2k8
    replied
    Originally posted by nir2142 View Post
    STOP DOING TESTS JUST ON A MAC AND DO THEM ON A PC AND A MAC IF YOU WANT OSX VS WINDOWS VS LINUX.

    And please stop telling me that it's the same hardware, because I know it's the same hardware, BUT these are not true benchmarks.

    And why don't you create benchmarks between Windows and Linux on a PC and install the official NVIDIA or ATI drivers (NOT Mesa)?
    Because it's a benchmark of the Intel OSS driver on an i5 versus Intel's own drivers on the other OSes??? Too much weed??



  • jrch2k8
    replied
    Originally posted by gamerk2 View Post
    Look at this the way I do:

    The driver is processing some data passed in by some game. That data should be the same regardless of the host platform. As such, the processing within the driver should be identical across all platforms. The ONLY parts of the driver that should be different across OS's are any calls that use OS API's, which ideally would be replaced in a 1:1 manner.

    ...then you get into things OS A supports that OS B doesn't, and you start to see a lot of kludges in the code base to make things work. OS A supports creating a thread in a suspended state; OS B doesn't. And so on and so forth.

    For example: my driver needs to create a thread in a suspended state. On Windows, simply invoke CreateThread() with the CREATE_SUSPENDED flag. Done.

    On Linux, pthread_create() is the obvious choice... except there's no way to suspend the thread at creation. So now you need to kludge the code to approximate the same behavior, often at a performance loss. And of course, it's non-standard behavior between different devs, which can (and will) lead to issues when drivers start talking to each other. [Seriously, POSIX needs to add a parameter to pthread_create() to allow for a suspended start. It causes too many headaches, especially in languages like Ada that separate thread creation from thread start.]

    Now, when you run into problems like that a couple hundred times while writing the driver... you get the idea. The driver can be no better than its interface to the OS.
    1.) No. In the case of DX it is quite different (for most drivers), and in the OpenGL case you have many variants with platform-specific extensions (AGL, WGL, GLX, among others); in the case of WGL, depending on the driver, it is emulated over DX. So a port takes some analysis depending on its source and destination platforms (this should not happen, but it is like that for many reasons).

    2.) No. Data (API calls, but whatever) from a game is not OS-independent at all (maybe textures), nor is it tool-independent, nor is it independent of the OS graphics stack. You seem to assume OpenGL is a language or some sort of proto-IR, but in the real world it is a library that is amazingly flexible and is used in conjunction with many languages (mostly C++/ASM on x86/ARM), and the driver needs to be very smart about what it can and can't do (or emulate) on each OS. Even if it is true that every OS can manage the hardware, they do it extremely differently (1:1 translation only happens in movies; ID4 comes to mind), which sometimes helps and sometimes forces you to rethink a million lines of code.

    Additionally, you assume that every GL API call is a direct GPU instruction. No: OpenGL is hardware-agnostic (which makes it more complex), so you can use CPUs/GPUs/preprocessors/clusters/etc., and many of those are not possible on Windows while on Linux they are perfectly standard (though not in the OSS drivers for now). You also wrongly assume that every OS API call does the same thing, is called the same, and performs the same, which is not true (it's like saying an F-35 and a helicopter should be the same because both fly; there are many good sites that explain this deeply, google it). Even an algorithm that is efficient on Windows can be terribly slow on Linux/Mac compared to a modified algorithm using that OS's native facilities (many, many examples of this; google is your friend), and this mostly drives you to rewrite half of your GLSL interpreter to try to find a middle ground between OSes.

    And just to name a few more factors that force you to rethink most of that code to meet a performance expectation: the filesystem, the CPU scheduler, vectorization, the I/O subsystem, latency, memory handling, interrupt handling, and general OS flexibility (Windows pretty much allows any dirty hack you can think of, where Linux aborts the compile or segfaults you out), and many more.

    3.) Please explain this suspended-thread thing to me (why you think it is so important), because you have something like six posts going on about it, and after ten years of developing threaded apps (C++) for Linux I have never found a technical reason to suspend threads in efficient code (I always design my apps to be thread-safe, small-portioned, atomics-based, etc.), and in my Windows days I don't remember using them either. So I would like an example or something to get your point here.
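    For reference, the "create suspended" kludge gamerk2 describes is usually approximated on POSIX with a condition-variable gate: the new thread parks until it is explicitly released. A minimal sketch (all names here are illustrative, not from any real driver):

    ```c
    /* Approximating Windows CreateThread(CREATE_SUSPENDED) with pthreads:
     * the worker blocks on a condition variable until resume() is called. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int released = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        /* Equivalent of starting suspended: wait until released. */
        pthread_mutex_lock(&mu);
        while (!released)
            pthread_cond_wait(&cv, &mu);
        pthread_mutex_unlock(&mu);

        puts("worker running");
        return NULL;
    }

    /* Analogue of Windows ResumeThread(). */
    static void resume(void)
    {
        pthread_mutex_lock(&mu);
        released = 1;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&mu);
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        /* ... set up state the worker must not touch yet ... */
        resume();
        pthread_join(&t, NULL);
        puts("done");
        return 0;
    }
    ```

    This illustrates both sides of the argument: the behavior is achievable, but it is extra boilerplate that every codebase reinvents slightly differently, whereas the Windows API provides it as a single flag.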



  • ChrisXY
    replied
    Originally posted by curaga View Post
    DX on Windows may have something to do with it
    Almost 2 years ago...
    http://cgit.freedesktop.org/mesa/mes...09c1b8903d438b



  • Hamish Wilson
    replied
    Originally posted by nir2142 View Post
    and why dont you create a benchmarks between windows and linux in PC and install nvidia or ati official drivers (NOT MESA) ??
    Because he already has - not recently, but he has in the past.



  • nir2142
    replied
    STOP DOING TESTS JUST ON A MAC AND DO THEM ON A PC AND A MAC IF YOU WANT OSX VS WINDOWS VS LINUX.

    And please stop telling me that it's the same hardware, because I know it's the same hardware, BUT these are not true benchmarks.

    And why don't you create benchmarks between Windows and Linux on a PC and install the official NVIDIA or ATI drivers (NOT Mesa)?
    Last edited by nir2142; 08-29-2012, 01:10 PM.

