Valve's L4D2 Is Faster On Linux Than Windows


  • Scali
    replied
    Originally posted by entropy View Post
    Unless your codebase is entangled with troublesome 3rd-party code (or generally crappy),
    you can do a proper Linux port without going through hell, obviously.
    Well I don't think that part is the newsworthy part. Developers knew that already.
    I think the newsworthy part is that one of the largest game/engine developers is actually committed to do so.

    In the old days, games were often supported on far more platforms than today, and also with far more differing specs.
    In the late 80s to mid 90s, we had C64, Atari ST, Amiga, PC, Mac, and various consoles. Many games were ported to 5 or more platforms.
    Look at Prince of Persia for example: http://en.wikipedia.org/wiki/Prince_of_Persia
    Apple II, MS-DOS, Amiga, Atari ST, Amstrad CPC, PC-9801, Commodore 64, PC Engine, TurboGrafx-16 CD, SAM Coupé, X68000, PS2, Xbox, Game Boy, NES, SNES, GBC, GameCube, Wii, Mac OS, Master System, Mega-CD, Game Gear, FM Towns, Mega Drive

    So games these days being released for Windows and one or two consoles? Not impressed.
    In the old days you were more free to choose what type of computer you bought, because a lot of games would be available for various computers anyway. You could just pick the computer that had the best ports of the games you liked. These days there is barely any choice in terms of hardware, it's all x86, and either nVidia or AMD GPUs, which aren't that different anyway. And the only OS you can run for most games is Windows.


  • AJSB
    replied
    Even taking into account that it's all within error margins, the fact remains that just because you play on Linux, you won't be penalized in performance: at worst it's in the same ballpark. This is good considering that many said Linux was plagued with bad video drivers (and to some extent this is true, since for starters the Nouveau driver is an underdog). With the blobs, at least, the drivers don't seem that bad after all, and performance is no worse than on Windows.


    As for the fact that the NVIDIA driver has its own per-game configurations on Windows and not on Linux (though that might change very soon, now that there is a big push from Valve working with NVIDIA): first of all, I never really used them on Windows anyway, because I never saw any dramatic change in performance. I usually leave everything at stock and tweak things not per game in the driver, but globally in the driver and then in-game.



  • entropy
    replied
    To me, the only thing that really matters is that Linux seems to be on par with Windows.
    I don't really care if it is 5% faster. The next engine benchmark might show Windows being faster by 7%.

    So what?

    Major problems got solved very quickly; progress was reportedly amazing.
    The effort, starting from an existing Mac OS X port, apparently hasn't had any negative effect on the mood of the developers.

    Hey, THAT'S the real good news!

    Unless your codebase is entangled with troublesome 3rd-party code (or generally crappy),
    you can do a proper Linux port without going through hell, obviously.
    I have to admit, all the FUD sounded quite legitimate sometimes. Not anymore!



  • Scali
    replied
    Originally posted by RealNC View Post
    TH has benchmarked L4D1 in the past:

    http://www.tomshardware.co.uk/64-bit...-31535-11.html

    With enough RAM, performance is equal.
    I've added a link to a more extensive comparison of Windows versions to my post above.
    As I say, it's within the margin of error. As you can see, the difference ranges from -5 to +5% between the different OSes overall. You can't declare the fastest OS by looking at one game alone... and certainly not on just one benchmark run either.

    Note also that performance is roughly equal there, at framerates of 70-110 fps. Valve is testing on a much faster system, in the 300 fps-range. Things could be quite different there.


  • RealNC
    replied
    Originally posted by Scali View Post
    Yes, that was pretty obvious from the start.
    I mean, framerates of 315 vs 303 are very much within the margin of error. If this was a videocard review, then both videocards would be seen as having 'equal performance'.
    This was clearly part of the marketing ploy from Valve, who are trying to put Windows (8) in a bad light, and trying to promote linux gaming.
    Of course this is a good thing. Personally though, I prefer Windows for heavy games, even if they were to appear on Linux. This is simply due to the configurability of the Windows drivers. I'm talking specifically about the game-specific tweaks tab on the NVidia driver panel and the NVidia Inspector and D3DOverrider utilities.

    Aside from that, I find the numbers suspect anyway. For starters, they are comparing a 32-bit distribution of Ubuntu vs a 64-bit version of Windows. L4D2 only has a 32-bit version, so unless they specifically made a 64-bit build, the game was not even running natively on Windows, but through the WOW64 layer.
    TH has benchmarked L4D1 in the past:

    http://www.tomshardware.co.uk/64-bit...-31535-11.html

    With enough RAM, performance is equal.



  • Scali
    replied
    Originally posted by RealNC View Post
    So in the end, I think the "Linux is faster" thingy has been blown out of proportion.
    Yes, that was pretty obvious from the start.
    I mean, framerates of 315 vs 303 are very much within the margin of error. If this was a videocard review, then both videocards would be seen as having 'equal performance'.
    This was clearly part of the marketing ploy from Valve, who are trying to put Windows (8) in a bad light, and trying to promote linux gaming.

    Aside from that, I find the numbers suspect anyway. For starters, they are comparing a 32-bit distribution of Ubuntu vs a 64-bit version of Windows. L4D2 only has a 32-bit version, so unless they specifically made a 64-bit build, the game was not even running natively on Windows, but through the WOW64 layer. And if they did make a 64-bit build, then it's apples-to-oranges, since previous 64-bit versions of their engine were less than spectacular in performance: http://techgage.com/article/half-lif..._get_excited/2
    Also, these numbers don't line up with the results I get from my own 3d rendering engines. Obviously I assume my own code is not biased towards any OS or API, so I wonder why Valve gets different results from mine (and they DO seem to have a bias, given Gabe Newell's recent comments on Windows 8).

    In fact, the differences between Windows XP, Vista and 7 (x86 and x64) were also more or less within that margin of error: http://www.firingsquad.com/hardware/windows_7_gaming/


  • RealNC
    replied
    Makes sense, but as always, real numbers are preferred.

    Anyway, my actual point is that there's a misconception among people who expect it to work like this:

    315 - 303 = 12

    Then they expect that if a game runs at 50 FPS on Windows, it would run at 62 FPS on Linux. That's rather unlikely to happen: a fixed per-frame overhead on Windows produces a large FPS gap at very high framerates but only a tiny one at low framerates.
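
    Here's a quick sketch of the frame-time arithmetic, using only the numbers from this thread (the 50 FPS case is a hypothetical, not a measured result):

    ```python
    # Convert the reported framerates into per-frame times (milliseconds).
    linux_ms = 1000.0 / 315.0    # ~3.17 ms per frame on Linux
    windows_ms = 1000.0 / 303.0  # ~3.30 ms per frame on Windows
    overhead_ms = windows_ms - linux_ms  # ~0.13 ms extra per frame on Windows

    # Apply that same per-frame overhead to a hypothetical 50 FPS game.
    heavy_game_ms = 1000.0 / 50.0                     # 20 ms per frame on Windows
    linux_estimate = 1000.0 / (heavy_game_ms - overhead_ms)
    print(round(linux_estimate, 1))                   # ~50.3 FPS, nowhere near 62
    ```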

    So in the end, I think the "Linux is faster" thingy has been blown out of proportion.



  • Scali
    replied
    To look at it another way:
    Let's assume the 315 fps is worst-case CPU limited.
    So you spend 3.17 ms preparing each frame on the CPU, and rendering is 'free' because the GPU just waits for the CPU all the time.
    If you use a GPU that only gets 60 fps, then you are still spending the same 3.17 ms setting up each frame. However, the total frame time is now 16.7 ms rather than 3.17 ms.
    So instead of 100% of the time being spent on the CPU, only 19% of the time is related to the CPU, the rest is purely the delay of the GPU.
    In other words, instead of the OS being able to influence 100% of the performance, it is now only able to influence 19% of the performance.
    If the difference was 3% at 315 fps, then the difference is only 3*0.19 = 0.57% at 60 fps.

    So likely you won't see any difference at all, because the above was under the assumption that the 315 fps figure was completely the result of OS overhead, taking any render-time out of the equation, which would be a theoretical best-case scenario.
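
    A minimal sketch of that scaling (assuming, as above, that the full 3.17 ms of CPU time is OS-sensitive and that the OS difference is 3%):

    ```python
    cpu_ms = 1000.0 / 315.0  # CPU setup time per frame (~3.17 ms), constant
    os_delta = 0.03          # assumed 3% OS difference in the CPU-side cost

    for gpu_fps in (315, 120, 60):
        frame_ms = 1000.0 / gpu_fps    # total frame time at this framerate
        cpu_share = cpu_ms / frame_ms  # fraction of the frame the OS can influence
        print(gpu_fps, f"{cpu_share:.0%}", f"{os_delta * cpu_share:.2%}")
    # 315 fps -> 100% CPU share -> 3.00% possible OS difference
    # 120 fps ->  38% CPU share -> 1.14%
    #  60 fps ->  19% CPU share -> 0.57%
    ```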



  • Scali
    replied
    Originally posted by RealNC View Post
    Then I guess the only way to tell is if Valve posted numbers using a low-end GPU. Something tells me that if they do that, we'll indeed see differences in the range of just 2 FPS.
    I'm not so sure. The slower the GPU is, the more time the CPU spends waiting on the GPU (the CPU time per frame is constant; only the rendering time is variable). With a bit of luck, the difference in overhead is masked out entirely by the fact that the driver queues up commands. The slower OS would spend a bit more time preparing and queuing each command, but as long as each command is in the buffer before the GPU has completed rendering, it doesn't matter: you are completely GPU-limited, and just as a faster CPU won't yield better framerates in that situation, a faster OS won't either.
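
    A toy model of that masking effect (assuming the simplest case, where command submission fully overlaps with rendering, so the frame time is whichever of the two costs is larger):

    ```python
    # Frame time is dominated by the CPU submit time or the GPU render time,
    # whichever is larger, because the driver queues commands while the GPU draws.
    def fps(cpu_submit_ms: float, gpu_render_ms: float) -> float:
        return 1000.0 / max(cpu_submit_ms, gpu_render_ms)

    # Fast GPU (CPU-limited): the slower OS shows up directly in the framerate.
    print(fps(3.17, 1.0), fps(3.30, 1.0))    # ~315 vs ~303 FPS

    # Slow GPU (GPU-limited): the same OS overhead is hidden completely.
    print(fps(3.17, 16.7), fps(3.30, 16.7))  # ~59.9 vs ~59.9 FPS
    ```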



  • RealNC
    replied
    Originally posted by Scali View Post
    I think his point is that these performance figures only give an overall view of CPU+GPU+overhead.
    You don't know how much of that performance is variable between the OSes, and how much of it is constant just because the CPU and GPU are that fast.
    For example, with a framerate of 60 fps, you have 16.7 ms per frame.
    Let's say 5 ms of that is the overhead of the CPU sending commands to the GPU, and the remaining 11.7 ms is the GPU rendering the actual scene.
    Now, the GPU won't go any faster than that, regardless of the OS used, so that would mean that less than 30% of the total performance can be affected by the OS at all.

    So I think he has a point: you can't just generalize these figures to any arbitrary framerate, since you don't know how much of these figures is caused by OS differences, and how much is constant time spent by the GPU.
    Then I guess the only way to tell is if Valve posted numbers using a low-end GPU. Something tells me that if they do that, we'll indeed see differences in the range of just 2 FPS.

