Valve's L4D2 Is Faster On Linux Than Windows
-
Originally posted by Kano:
What logic says you can scale a 300+ fps result down to 60? That's complete bullshit!
-
Originally posted by RealNC:
Hm? What do you mean? If you bench the Linux vs Windows GL implementations, you end up with how much faster one is than the other with the specific game 3D engine you tested. If Linux is 3-4% faster, then obviously that's the impact you can project onto overall GL performance for that game engine. If you run the game on a low-end NVidia GPU that can't deliver more than 30 or 40 FPS with that game, then that translates to a 1 or 2 FPS difference.

I think his point is that these performance figures only give an overall view of CPU+GPU+overhead.
You don't know how much of that performance is variable between the OSes, and how much of it is constant just because the CPU and GPU are that fast.
For example, with a framerate of 60 fps, you have 16.7 ms per frame.
Let's say 5 ms of that is the overhead of the CPU sending commands to the GPU, and the remaining 11.7 ms is the GPU rendering the actual scene.
Now, the GPU won't go any faster than that, regardless of the OS used, so that would mean that less than 30% of the total performance can be affected by the OS at all.
So I think he has a point: you can't just generalize these figures to any arbitrary framerate, since you don't know how much of these figures is caused by OS differences, and how much is constant time spent by the GPU.
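A minimal Python sketch of that frame-budget arithmetic, using the hypothetical 5 ms / 60 fps numbers above (they are illustrative assumptions, not measurements):

```python
# Frame-budget sketch: how much of a 60 fps frame can the OS influence?
# The 5 ms CPU overhead is the post's hypothetical number.
frame_time_ms = 1000 / 60                        # 16.7 ms per frame at 60 fps
cpu_overhead_ms = 5.0                            # assumed CPU/driver overhead
gpu_render_ms = frame_time_ms - cpu_overhead_ms  # ~11.7 ms of pure GPU work

# The GPU time is fixed no matter the OS; only the CPU side can vary.
os_influence = cpu_overhead_ms / frame_time_ms
print(f"GPU render time: {gpu_render_ms:.1f} ms per frame")      # 11.7 ms
print(f"OS can affect at most {os_influence:.0%} of the frame")  # 30%
```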
-
Originally posted by Scali:
I think his point is that these performance figures only give an overall view of CPU+GPU+overhead.
You don't know how much of that performance is variable between the OSes, and how much of it is constant just because the CPU and GPU are that fast.
For example, with a framerate of 60 fps, you have 16.7 ms per frame.
Let's say 5 ms of that is the overhead of the CPU sending commands to the GPU, and the remaining 11.7 ms is the GPU rendering the actual scene.
Now, the GPU won't go any faster than that, regardless of the OS used, so that would mean that less than 30% of the total performance can be affected by the OS at all.
So I think he has a point: you can't just generalize these figures to any arbitrary framerate, since you don't know how much of these figures is caused by OS differences, and how much is constant time spent by the GPU.

Then I guess the only way to tell is if Valve posted numbers using a low-end GPU. Something tells me that if they do that, we'll indeed see differences in the range of just 2 FPS.
-
Originally posted by RealNC:
Then I guess the only way to tell is if Valve posted numbers using a low-end GPU. Something tells me that if they do that, we'll indeed see differences in the range of just 2 FPS.

To look at it another way:
Let's assume the 315 fps is worst-case CPU limited.
So you spend 3.17 ms preparing each frame on the CPU, and rendering is 'free' because the GPU just waits for the CPU all the time.
If you use a GPU that only gets 60 fps, then you are still spending the same 3.17 ms setting up each frame. However, the total frame time is now 16.7 ms rather than 3.17 ms.
So instead of 100% of the time being spent on the CPU, only 19% of the time is related to the CPU; the rest is purely the GPU's delay.
In other words, instead of the OS being able to influence 100% of the performance, it is now only able to influence 19% of the performance.
If the difference was 3% at 315 fps, then the difference is only 3*0.19 = 0.57% at 60 fps.
So you likely won't see any difference at all, because the above assumed that the 315 fps figure was entirely the result of OS overhead, taking any render time out of the equation; that is a theoretical best-case scenario.
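A minimal Python sketch of the same projection, under the post's best-case assumption that the whole 3.17 ms frame at 315 fps is CPU/OS work:

```python
# Project a 3% OS difference from a CPU-bound 315 fps run down to a
# GPU-bound 60 fps. Assumes all 3.17 ms per frame is CPU/OS work.
cpu_ms = 1000 / 315          # ~3.17 ms of CPU work per frame
frame_ms = 1000 / 60         # 16.7 ms total frame time at 60 fps

cpu_share = cpu_ms / frame_ms            # fraction the OS can influence
residual = 0.03 * cpu_share              # the 3% difference, scaled down
print(f"CPU share at 60 fps: {cpu_share:.0%}")        # ~19%
print(f"Remaining OS difference: {residual:.2%}")     # ~0.57%
```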
-
Makes sense, but as always, real numbers are preferred.
Anyway, my actual point is that there's a misconception among people that it works like this:
315 - 303 = 12
Then they expect that if a game runs at 50 FPS on Windows, it would run at 62 on Linux. That's rather unlikely to happen: the overhead on Windows is, at most, a roughly constant amount of time per frame, and a constant time saving translates into a smaller and smaller FPS gain as the framerate drops.
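A minimal Python sketch of what a constant per-frame time saving actually yields at 50 FPS, using the 315 vs 303 figures above:

```python
# 315 vs 303 fps is a 12 fps gap, but only ~0.13 ms per frame.
# Apply that same per-frame saving to a game at 50 fps on Windows.
linux_fps, windows_fps = 315.0, 303.0
saving_ms = 1000 / windows_fps - 1000 / linux_fps   # ~0.126 ms per frame

windows_frame_ms = 1000 / 50                        # 20 ms per frame
projected_linux_fps = 1000 / (windows_frame_ms - saving_ms)
print(f"Per-frame saving: {saving_ms:.3f} ms")                      # 0.126 ms
print(f"Projected Linux framerate: {projected_linux_fps:.1f} fps")  # ~50.3, not 62
```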
So in the end, I think the "Linux is faster" thingy has been blown out of proportion.
-
Originally posted by RealNC:
So in the end, I think the "Linux is faster" thingy has been blown out of proportion.

Yes, that was pretty obvious from the start.
I mean, framerates of 315 vs 303 are very much within the margin of error. If this was a videocard review, then both videocards would be seen as having 'equal performance'.
This was clearly part of the marketing ploy from Valve, who are trying to put Windows (8) in a bad light, and trying to promote linux gaming.
Aside from that, I find the numbers suspect anyway. For starters, they are comparing a 32-bit distribution of Ubuntu vs a 64-bit version of Windows (and L4D2 only has a 32-bit version, so unless they specifically made a 64-bit build, the game was not even running natively on Windows, but through the WOW64 layer). If they did have a 64-bit build, then it's apples-to-oranges. Previous 64-bit versions of their engine were less than spectacular in performance: http://techgage.com/article/half-lif..._get_excited/2
Also, these numbers don't line up with the results I get from my own 3D rendering engines. Obviously I assume my own code is not biased towards any OS or API, so I wonder why Valve gets different results from mine (and they DO seem to have a bias, given Gabe Newell's recent comments on Windows 8).
In fact, the differences between Windows XP, Vista and 7 (x86 and x64) were also more or less within that margin of error: http://www.firingsquad.com/hardware/windows_7_gaming/
-
Originally posted by Scali:
Yes, that was pretty obvious from the start.
I mean, framerates of 315 vs 303 are very much within the margin of error. If this was a videocard review, then both videocards would be seen as having 'equal performance'.
This was clearly part of the marketing ploy from Valve, who are trying to put Windows (8) in a bad light, and trying to promote linux gaming.
Aside from that, I find the numbers suspect anyway. For starters, they are comparing a 32-bit distribution of Ubuntu vs a 64-bit version of Windows (and L4D2 only has a 32-bit version, so unless they specifically made a 64-bit build, the game was not even running natively on Windows, but through the WOW64 layer).

TH has benchmarked L4D1 in the past: with enough RAM, performance is equal.
-
Originally posted by RealNC:
TH has benchmarked L4D1 in the past:
With enough RAM, performance is equal.

As I say, it's within the margin of error. As you can see, the difference ranges from -5 to +5% between the different OSes overall. You can't declare the fastest OS by looking at one game alone... and certainly not on just one benchmark run either.
Note also that performance is roughly equal there, at framerates of 70-110 fps. Valve is testing on a much faster system, in the 300 fps range. Things could be quite different there.