Originally posted by kollo
What I am having trouble with is some of the Windows-based benchmarks at other websites. Either they are just "test runners" who know nothing about what they run, or they are not very inclined to investigate their findings. I have seen several give a shoulder-shrug "oh well" to Windows testing anomalies. It's one thing to publish a result; it's another to be unable to explain why you got it. One site did publish a Windows and Linux comparison side by side on a WX system, and Windows "won" hands down, but the transparency was lacking and I couldn't get certain details on the test setup.
So, like you, I started examining the variances between how an app native to Windows behaves and how an app native to Linux behaves: dependencies, libraries, compiler settings, and so on.
For example, I work with a vendor application stack that is written and compiled for Windows first, then recompiled for Linux. For years we ran this app stack on Linux, and it took many months to tune it. When the Linux OS version became obsolete, we went back and worked with the vendor on a target-OS plan. That is when they informed us that their stack was compiled Windows first and Linux second, and that the Windows version ran 10-15% faster because of it. The "why" was never revealed due to internal policies, and since we wanted that 10-15%, we switched the stack to Windows.
This is why I am looking "under the covers" to see where people are getting their material. Because PTS is 100% open source, I can examine it all.