Finally, Team Fortress 2 Benchmarks For Phoronix!


  • #41
    Originally posted by Kivada View Post
    No more is necessary when talking about the OSS FPS titles; they aren't worth keeping around. If you can't see that, you have not played ANY closed-source freeware FPS games in the last decade. Crap games like GunZ are better made than the OSS titles.

    Fact: OSS FPS game devs are as useless at producing games as the OSS teams making content creation software.

    Just as there is no viable OSS match to something like Pro Tools or Avid, likewise there is nothing in the OSS FPS world that can be considered an equal to any game made in at least the last decade.

    Just look at what is being rendered versus the framerate compared to professionally made games. The quality and optimization just isn't there.

    So yeah, your eyes and frontal lobes, you should try using them sometime instead of praising that which is just a clone of a clone of a clone just because it's open source. When all things are near enough to equal, then yeah, go open source, but when the OSS product has been out for a decade and is STILL YEARS behind what is given away for free, there is something inherently wrong with the OSS product.
    I don't need you to tell me the obvious: the visual disparity between the Valve Tech Demo and Xonotic.
    Why don't you start to address your claim that DarkPlaces is poorly doctored?

    Comment


    • #42
      Originally posted by duby229 View Post
      No, I'm really not wrong. I'm right. I'm not arguing against repeatable benchmarks. They certainly have their place, but benchmarking a game in a repeatable way with things like timedemos defeats the purpose of benchmarking a game.
      No, you're wrong. If it's not repeatable, then there is no purpose in benchmarking the game, because it would tell you nothing at all about any future attempts to run the game. There wouldn't be any point to having run it in the first place.

      Maybe you should try explaining what you think the purpose of a benchmark is, since it obviously differs from what everyone else expects. Just to see whether the driver can run the game at all? Nope - that only matters if it's repeatable, because if not, the next run-through could hit a bug you didn't see before and crash.

      Comment


      • #43
        Originally posted by smitty3268 View Post
        No, you're wrong. If it's not repeatable, then there is no purpose in benchmarking the game, because it would tell you nothing at all about any future attempts to run the game. There wouldn't be any point to having run it in the first place.

        Maybe you should try explaining what you think the purpose of a benchmark is, since it obviously differs from what everyone else expects. Just to see whether the driver can run the game at all? Nope - that only matters if it's repeatable, because if not, the next run-through could hit a bug you didn't see before and crash.
        Gameplay isn't exactly repeatable. Structuring game benchmarks to be repeatable forces the benchmark to be worthless. The data that you collect that way doesn't mean anything. You don't play timedemos. You never will. Data collected with timedemos cannot represent gameplay. If you want to benchmark a game, then play it. You can measure all kinds of neat things while playing, with the benefit of the data actually representing real-world, useful information.

        The purpose of benchmarks is to measure differences in performance under actual use cases.

        EDIT: The point of benchmarking is not to get results that you think should be representative of future runs. It is to get results for the current run. Period. You can do what you want with the data that you collect, and if you need information from another run, then do another run.

        Benchmarking is not about collecting expected data. It's about collecting -actual- data, whether it's expected or not. Data that represents the facts as they are. If you need corroborating data, then you need to employ scientific methodologies to obtain it. Constraining a game to always artificially return exactly the same data is not scientific and does not represent gameplay. Each time I play through a game, performance -will- be different. Game benchmarks should be made to collect real data that represents actual performance as it is played. Because that is what you do with games... You play them... So play should be benchmarked...
        Last edited by duby229; 20 July 2013, 07:49 PM.
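
        To make the "measure while you actually play" approach concrete: it amounts to logging per-frame timings during a normal session and summarizing them afterwards. A minimal sketch of the summary step, assuming a hypothetical frametime.log with one frame duration in milliseconds per line (the file name, the format, and whatever overlay or engine hook writes it are assumptions for illustration):

        import statistics

        # Per-frame durations in milliseconds, one per line, from a hypothetical log
        # written while actually playing (not from a timedemo).
        with open("frametime.log") as f:
            frame_ms = [float(line) for line in f if line.strip()]

        fps = [1000.0 / ms for ms in frame_ms]                     # instantaneous FPS per frame
        avg_fps = len(frame_ms) / (sum(frame_ms) / 1000.0)         # time-weighted average FPS
        low_1pct = sorted(fps)[max(0, int(len(fps) * 0.01) - 1)]   # rough 1st-percentile ("1% low") FPS

        print(f"frames rendered: {len(frame_ms)}")
        print(f"average FPS:     {avg_fps:.1f}")
        print(f"1% low FPS:      {low_1pct:.1f}")
        print(f"frametime stdev: {statistics.stdev(frame_ms):.2f} ms")

        As duby229 says, two such logs are only loosely comparable unless the sessions were similar, but each one does describe what that particular player actually experienced.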

        Comment


        • #44
          Originally posted by duby229 View Post
          Gameplay isn't exactly repeatable. Structuring game benchmarks to be repeatable forces the benchmark to be worthless. The data that you collect that way doesn't mean anything. You don't play timedemos. You never will. Data collected with timedemos cannot represent gameplay. If you want to benchmark a game, then play it. You can measure all kinds of neat things while playing, with the benefit of the data actually representing real-world, useful information.

          The purpose of benchmarks is to measure differences in performance under actual use cases.
          Yes, but it doesn't make sense to measure the difference between two runs with an unrepeatable benchmark, because you simply can't compare the results. It's like a car race where each racer chooses a different track with a different length, weather, and other conditions. You can measure the time for each racer, but you can't compare them, and that in fact makes such a race completely meaningless.

          Whatever track you choose in each case, a single run still gives you the result only for that track and for some specific conditions, so it isn't anything more universal; you are still using some finite subset of preconditions, exactly like in a typical race/benchmark. The fact that you change conditions between two runs doesn't make the result of any single run more representative. You are losing the ability to compare the results without gaining anything.

          It looks like you are thinking of some sort of "universal" benchmark, something like a race over all existing roads with every possible weather and other conditions, but obviously that's not possible in reality; any single run is performed with some specific finite subset of the infinite set of all possible conditions. And if you want to be able to compare results and measure the difference between them, you have to use the same subset of conditions for all runs that you are going to compare. Of course, you might want to choose the most representative subset - but that's another question. If you think that one of your test runs would be more representative than the existing timedemo, just record it as a new timedemo and use it for subsequent benchmarks; then you'll be able to compare them and measure the difference.

          What you propose is basically the same as recording a new demo for each run and then benchmarking it, so why do you think that it will make the result of any single run more representative?
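
          To put the comparability argument in concrete terms: with a fixed workload you can repeat the run and attribute the gap between two configurations to the configuration itself rather than to the route taken through the level. A rough sketch, assuming average-FPS figures from several repeated runs of the same timedemo on two driver versions (the numbers are placeholders, not real measurements):

          import statistics

          # Average FPS from repeated runs of the *same* timedemo; placeholder values.
          driver_a = [57.2, 56.8, 57.5, 57.0]
          driver_b = [63.1, 62.7, 63.4, 62.9]

          def summarize(name, runs):
              mean = statistics.mean(runs)
              spread = statistics.stdev(runs)
              print(f"{name}: {mean:.1f} FPS +/- {spread:.1f} over {len(runs)} runs")
              return mean

          mean_a = summarize("driver A", driver_a)
          mean_b = summarize("driver B", driver_b)

          # Because the workload is identical, the run-to-run spread stays small and
          # the difference between the means can be credited to the driver change.
          print(f"difference: {mean_b - mean_a:.1f} FPS")

          With unrepeatable runs, the spread within each list would swamp the difference between them, which is exactly the car-race point above.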

          Comment


          • #45
            Originally posted by vadimg View Post
            Yes, but it doesn't make sense to measure the difference between two runs with an unrepeatable benchmark, because you simply can't compare the results. It's like a car race where each racer chooses a different track with a different length, weather, and other conditions. You can measure the time for each racer, but you can't compare them, and that in fact makes such a race completely meaningless.

            Whatever track you choose in each case, a single run still gives you the result only for that track and for some specific conditions, so it isn't anything more universal; you are still using some finite subset of preconditions, exactly like in a typical race/benchmark. The fact that you change conditions between two runs doesn't make the result of any single run more representative. You are losing the ability to compare the results without gaining anything.

            It looks like you are thinking of some sort of "universal" benchmark, something like a race over all existing roads with every possible weather and other conditions, but obviously that's not possible in reality; any single run is performed with some specific finite subset of the infinite set of all possible conditions. And if you want to be able to compare results and measure the difference between them, you have to use the same subset of conditions for all runs that you are going to compare. Of course, you might want to choose the most representative subset - but that's another question. If you think that one of your test runs would be more representative than the existing timedemo, just record it as a new timedemo and use it for subsequent benchmarks; then you'll be able to compare them and measure the difference.

            What you propose is basically the same as recording a new demo for each run and then benchmarking it, so why do you think that it will make the result of any single run more representative?
            I have a massive amount of respect for you, and as such I don't want to sound as if I am disagreeing with you. So I'll keep this short by referring to your race track analogy. Each time a racer makes a lap on that same exact track, he will not exactly follow the path of the last lap. Each lap will be unique. To keep the analogy going, each lap can be considered a benchmark run, where each run was played through the same game with the same settings, following very similar paths. But each run will be unique, just as each lap will be unique.

            It is in this way that benchmarks can be truly representative, because each run will represent actual gameplay.

            EDIT: I'm not talking at all about recording timedemos. Instead I'm talking about recording performance data while actually playing the game. That way the data collected is real-world. It won't be repeatable, but it will be real.

            EDIT2: Two different people playing the same game on the same settings with the same hardware are very likely to get different performance simply due to their own style of play. What I'm suggesting gives each individual the opportunity to see how a game performs in the real world for them.
            Last edited by duby229; 21 July 2013, 04:04 PM.

            Comment


            • #46
              Originally posted by duby229 View Post
              EDIT2: Two different people playing the same game on the same settings with the same hardware are very likely to get different performance simply due to their own style of play. What I'm suggesting gives each individual the opportunity to see how a game performs in the real world for them.
              And guess what, everyone else plays differently than Michael, thus such results would be meaningless to anyone but Michael.

              The point is that timedemos *are* representative of gameplay. Timedemos are actual runs of actual gameplay. Your playstyle may be different, but it won't change the performance you get too much from the timedemo. If anything, there is more difference between people in how they perceive FPS (for some people 30 FPS is unbearable, for others 15 is fine). But doing benchmarks while not using a timedemo would just yield useless data (oh hey, an old card gave me more FPS than a new one, never mind that I spent the whole time looking at a wall with it!)
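
              For reference, a timedemo in Source (as in the Quake lineage it descends from) is literally a recording of someone playing, replayed as fast as the machine can render it while the engine reports frames and average FPS. A minimal launcher sketch; the binary path, demo name, and launch options below are illustrative assumptions rather than a verified TF2 invocation:

              import subprocess

              # Placeholder path and demo name; Source games generally accept console
              # commands prefixed with '+' on the command line, so a demo recorded
              # earlier with `record` can be replayed in benchmark mode at launch.
              game_binary = "/path/to/tf2/hl2_linux"   # assumption, adjust for your install
              demo_name = "my_recorded_match"          # hypothetical demo recorded during play

              subprocess.run([game_binary, "-game", "tf", "-novid", "+timedemo", demo_name])

              Replaying the same recorded run across drivers or cards is what makes the resulting numbers comparable.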

              Comment


              • #47
                Originally posted by duby229 View Post
                Gameplay isn't exactly repeatable. Structuring game benchmarks to be repeatable forces the benchmark to be worthless. The data that you collect that way doesn't mean anything. You don't play timedemos. You never will. Data collected with timedemos cannot represent gameplay. If you want to benchmark a game, then play it. You can measure all kinds of neat things while playing, with the benefit of the data actually representing real-world, useful information.

                The purpose of benchmarks is to measure differences in performance under actual use cases.

                EDIT: The point of benchmarking is not to get results that you think should be representative of future runs. It is to get results for the current run. Period. You can do what you want with the data that you collect, and if you need information from another run, then do another run.

                Benchmarking is not about collecting expected data. It's about collecting -actual- data, whether it's expected or not. Data that represents the facts as they are. If you need corroborating data, then you need to employ scientific methodologies to obtain it. Constraining a game to always artificially return exactly the same data is not scientific and does not represent gameplay. Each time I play through a game, performance -will- be different. Game benchmarks should be made to collect real data that represents actual performance as it is played. Because that is what you do with games... You play them... So play should be benchmarked...
                /smh

                This is idiotic. I'm sorry, but it is.

                No, benchmarking a timed demo run won't be exactly what you get while playing the game. The WHOLE POINT of it is to be a useful approximation though. If it's not a good approximation, then yes, the benchmark would be pointless. If it is a good approximation, then that makes it a good benchmark.

                If all you are doing is collecting benchmarks for 1 run without any expectation of repeating it, THERE IS NO POINT. NONE. ZERO. ZILCH. Why in the world would you even think that could possibly be useful if it has no ability to predict future results?

                Comment


                • #48
                  Originally posted by GreatEmerald View Post
                  And guess what, everyone else plays differently than Michael, thus such results would be meaningless to anyone but Michael.

                  The point is that timedemos *are* representative of gameplay. Timedemos are actual runs of actual gameplay. Your playstyle may be different, but it won't change the performance you get too much from the timedemo. If anything, there is more difference between people in how they perceive FPS (for some people 30 FPS is unbearable, for others 15 is fine). But doing benchmarks while not using a timedemo would just yield useless data (oh hey, an old card gave me more FPS than a new one, never mind that I spent the whole time looking at a wall with it!)
                  Michael is also not the only one who uses PTS. It would be a valuable feature for many people.

                  If someone spends his game time looking at a wall, then benchmarks while looking at a wall would be useful to him, and a timedemo would be pointless.

                  Comment


                  • #49
                    Originally posted by smitty3268 View Post
                    /smh

                    This is idiotic. I'm sorry, but it is.

                    No, benchmarking a timed demo run won't be exactly what you get while playing the game. The WHOLE POINT of it is to be a useful approximation though. If it's not a good approximation, then yes, the benchmark would be pointless. If it is a good approximation, then that makes it a good benchmark.

                    If all you are doing is collecting benchmarks for 1 run without any expectation of repeating it, THERE IS NO POINT. NONE. ZERO. ZILCH. Why in the world would you even think that could possibly be useful if it has no ability to predict future results?
                    Of course you can repeat it. The data you collect, though, won't be comparable to the last run. And THAT is the point. The data you collect is representative of THAT run. Period. In this way the data collected is representative of what you actually did during that run. A.K.A. it's real-world data. It's not artificially limited by a timedemo.

                    EDIT: Benchmarking is not -at all- about predicting future results. It's all about collecting current real-world data. The environment that is being benchmarked should represent real-world usage as closely as possible. A timedemo does not do that.
                    Last edited by duby229; 21 July 2013, 04:40 PM.

                    Comment


                    • #50
                      Originally posted by Ramiliez View Post
                      I don't need you to tell me the obvious: the visual disparity between the Valve Tech Demo and Xonotic.
                      Why don't you start to address your claim that DarkPlaces is poorly doctored?
                      Look at the games that have picked it over competing engines. It's literally that fucking obvious.
                      Browse and find games created with the DarkPlaces engine at ModDB.


                      Tell me now, which games there are worth a damn compared to games of similar scale, number of devs, etc.? Why is it not more popular than so-called similar engines?

                      Again, as I said previously, it's like the complete lack of open-source professional content creation tools. Whiny people like yourself would rather expound the virtues of F/OSS instead of looking at what the pros are doing and copying it until there is nothing left to copy and you can start innovating.

                      I want things to progress to greatness, you want things to stay the same so they can wither and die.

                      Comment
