The Importance Of Benchmark Automation & Why I Hate Running Linux Games Manually
Michael Larabel
https://www.michaellarabel.com/
Originally posted by CleanCut
Michael, as a young indie game dev shop, how exactly would you like something like "./mygame --benchmark" to behave? I'm thinking about this from the context of Skyfire, the game we are developing, which I would love to release on Linux first (all prototypes are available for free on Linux, Mac, and Windows): https://agileperception.com/skyfire We're using Unreal Engine 4, if that makes any difference.
My biggest obstacle on Linux, currently, is that I don't own a Linux desktop. We have Linux servers, naturally, but nothing with a real graphics card. So I cross-package a Linux build (native packaging for UE4 requires a real graphics card) from a Windows VM on my Mac Pro workstation.
Either through a VM or dual boot. No destruction of your current desktop required.
Originally posted by PsynoKhi0
I can understand both sides of the argument. On Michael's side it really boils down to manpower.
On the other side of the spectrum, the following [H]ardOCP article should be a must-read for anyone interested in hardware benchmarks IMO. Canned benchmarks have the inherent risk of merely benchmarking benchmark-"optimized" drivers.
Ironically enough, OpenBenchmarking should be a great platform to crowd-source benchmarks. Sure, the hardware configurations vary wildly, but with enough data, one should be able to extrapolate results, correct?
In the Time section he said:
...but this is actually not even close to being the most important reason for test/benchmark automation...
Originally posted by Michael
Basically just a way to automatically fire up the game (ideally with CLI switches, or at least a config file I can modify, to specify the resolution and other settings), run the test without needing any interaction, and then dump the results to either standard output or a file.
What kind of information would you like in the results?
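To make the wish list above concrete, here is a minimal sketch of what such a benchmark mode could look like. Everything in it is an assumption for illustration: the flag names, the JSON result fields, and the fake frame-time source are hypothetical, not anything Skyfire or UE4 actually provides.

```python
# Hypothetical sketch of a "./mygame --benchmark" mode: parse a few CLI
# switches, run a canned pass with no interaction, and dump results to
# standard output or a file. All names here are illustrative assumptions.
import argparse
import json
import statistics
import sys


def run_benchmark(width, height, frames):
    """Stand-in for the real render loop: returns per-frame times in ms.

    A real game would render `frames` frames at the requested resolution
    and record actual timings; this just fabricates a repeating pattern.
    """
    return [16.0 + (i % 5) * 0.5 for i in range(frames)]


def main(argv=None):
    parser = argparse.ArgumentParser(prog="mygame")
    parser.add_argument("--benchmark", action="store_true",
                        help="run the canned benchmark and exit")
    parser.add_argument("--resolution", default="1920x1080",
                        help="render resolution, e.g. 2560x1440")
    parser.add_argument("--frames", type=int, default=300,
                        help="number of frames in the benchmark pass")
    parser.add_argument("--output", default="-",
                        help="result file, or '-' for standard output")
    args = parser.parse_args(argv)

    if not args.benchmark:
        print("starting interactive game...")  # normal launch path
        return

    width, height = map(int, args.resolution.split("x"))
    times = run_benchmark(width, height, args.frames)
    results = {
        "resolution": args.resolution,
        "frames": args.frames,
        "avg_fps": round(1000.0 / statistics.mean(times), 2),
        "min_frame_ms": min(times),
        "max_frame_ms": max(times),
    }
    out = sys.stdout if args.output == "-" else open(args.output, "w")
    json.dump(results, out, indent=2)
    out.write("\n")
    if out is not sys.stdout:
        out.close()


if __name__ == "__main__":
    main()
```

A machine-readable format like JSON on stdout would let a harness such as the Phoronix Test Suite parse average FPS and frame-time extremes without scraping free-form log text.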
Originally posted by liam
In the Time section he said:
And then goes on to eight other reasons that don't have to do with manpower.
Last edited by PsynoKhi0; 07 June 2016, 03:19 PM.
Originally posted by PsynoKhi0
And maybe I should stop glossing over articles... The problem with canned benchmarks still stands, though. Now it might even infect hardware design too; case in point, the GTX 1080.
Apitrace seems like the way to go (and he uses it), since it lets you reliably reproduce an arbitrary stretch of gameplay, but apitrace has requirements that make it a non-optimal solution (you really need the full call stack as well).