Finally, Team Fortress 2 Benchmarks For Phoronix!


  • #31
    Originally posted by Kivada View Post
    How is it misleading? Larabel wrote and posted the article, the Phoronix news bot didn't.
    You took the quote tag from another post. There is a difference between the two:

    Originally posted by Michael View Post
    Finally, Team Fortress 2 Benchmarks For Phoronix!
    Originally posted by Michael
    Finally, Team Fortress 2 Benchmarks For Phoronix!

    Comment


    • #32
      Why are you so hung up on the origins of the engine, calling for it to be dumped for a newer one?

      It doesn't matter that Xonotic hails from Tech1 if it's been upgraded far beyond it, just like XReaL is far beyond Tech4 despite originating from Tech3. By your logic, anything using XReaL should dump it for Tech4, since it's "newer".

      Comment


      • #33
        Originally posted by mmstick View Post
        Xorg Edgers PPA does not compile mesa/gallium3d with GLAMOR/s3tc so no radeonsi. It only carries fglrx which is completely broken beyond all repair on radeonsi. Oibaf PPA carries updated stuff but it's also broken on radeonsi and source games cannot be launched without crashing or spewing a ton of errors due to missing files/packages (and even if you manage to figure out all the files you need, it still explodes as soon as you launch a map). There is another ppa which has glamor drivers for radeonsi without bleeding edge stuff, but it doesn't have s3tc texture support so no source games.

        In the end, people with HD 7xxx cards like myself are stuck until something is done.
        I don't use Ubuntu for these and similar reasons.

        The fact is that on just about any other distro it isn't that hard to build. You can go from thinking about doing it to playing a game in less than an hour.

        Comment


        • #34
          Originally posted by curaga View Post
          Why are you so hung up on the origins of the engine, calling for it to be dumped for a newer one?

          It doesn't matter that Xonotic hails from Tech1 if it's been upgraded far beyond it, just like XReaL is far beyond Tech4 despite originating from Tech3. By your logic, anything using XReaL should dump it for Tech4, since it's "newer".
          Indeed. By his logic, TF2 and HL2 shouldn't be benchmarked, because it's just the same old Quake 1 engine.

          Comment


          • #35
            The OSS games should be benchmarked, and it's good for us that they are. But this world is full of games, and it is about time we got a little of that variety benchmarked on Phoronix. In the real world, gaming has nothing to do with repeatability; trying to create a game benchmark that is repeatable is pointless. Gaming is about gameplay, and it is never the same twice.
            Last edited by duby229; 07-19-2013, 11:41 AM.

            Comment


            • #36
              Originally posted by duby229 View Post
              The OSS games should be benchmarked, and it's good for us that they are. But this world is full of games, and it is about time we got a little of that variety benchmarked on Phoronix. In the real world, gaming has nothing to do with repeatability; trying to create a game benchmark that is repeatable is pointless. Gaming is about gameplay, and it is never the same twice.
              I disagree with this. Repeatability is important. Automation is not.

              Comment


              • #37
                Originally posted by Kivada View Post
                So you think that the OSS games currently used for benchmarking look as good as professional-class games that have sold in the millions? So you think a poorly doctored-up original Quake engine can compete with what Valve did years ago?

                There's a reason nobody outside of Linux and BSD users has ever given a shit about the OSS games even though they are free: people flock to freeware or paid games instead. The OSS titles are not, and have never been, good, and the only people who played them are the captive community of Linux and BSD users who basically had no other choice.

                So again, look it up yourself.
                You hide behind a lot of words but say nothing useful.

                Comment


                • #38
                  Originally posted by Ramiliez View Post
                  You hide behind a lot of words but say nothing useful.
                  No more is necessary when talking about the OSS FPS titles; they aren't worth keeping around. If you can't see that, you have not played ANY closed-source freeware FPS games in the last decade. Crap games like GunZ are better made than the OSS titles.

                  Fact: OSS FPS game devs are as useless at producing games as the OSS teams making content creation software.

                  Just as there is no viable OSS match for something like Pro Tools or Avid, likewise there is nothing in the OSS FPS world that can be considered the equal of any game made in at least the last decade.

                  Just look at what is being rendered versus the framerate, compared to professionally made games. The quality and optimization just aren't there.

                  So yeah, your eyes and frontal lobes: you should try using them some time instead of praising what is just a clone of a clone of a clone because it's open source. When all things are near enough to equal, then yeah, go open source; but when the OSS product has been out for a decade and is STILL YEARS behind what is given away for free, there is something inherently wrong with the OSS product.

                  Comment


                  • #39
                    Originally posted by duby229 View Post
                    The OSS games should be benchmarked, and it's good for us that they are. But this world is full of games, and it is about time we got a little of that variety benchmarked on Phoronix. In the real world, gaming has nothing to do with repeatability; trying to create a game benchmark that is repeatable is pointless. Gaming is about gameplay, and it is never the same twice.
                    Wrong, benchmarking is all about repeatability of results. However, it's unlikely that any minor patches made to the Linux versions of Valve's old games will have much to do with performance, since they seem to have taken their time and put together very good ports. That, and the majority of their library will run well even on yesterday's "meh"-level hardware.

                    What we don't need are benchmarks that run in the hundreds of frames per second; they tell us nothing meaningful. We also don't need benchmarks that spit out an arbitrary number, like 3DMark/Passmark/FutureMark.

                    However, we do need stupidly heavy "burn test" benchmarks like Furmark, which just renders a hairy ball but does so in such detail that it can max out your GPU completely, letting you put a worst-case scenario to the power management and cooling parts of the drivers.

                    Comment


                    • #40
                      No, I'm really not wrong; I'm right. I'm not arguing against repeatable benchmarks. They certainly have their place, but benchmarking a game in a repeatable way with things like timedemos defeats the purpose of benchmarking a game.

                      Comment


                      • #41
                        Originally posted by Kivada View Post
                        No more is necessary when talking about the OSS FPS titles; they aren't worth keeping around. If you can't see that, you have not played ANY closed-source freeware FPS games in the last decade. Crap games like GunZ are better made than the OSS titles.

                        Fact: OSS FPS game devs are as useless at producing games as the OSS teams making content creation software.

                        Just as there is no viable OSS match for something like Pro Tools or Avid, likewise there is nothing in the OSS FPS world that can be considered the equal of any game made in at least the last decade.

                        Just look at what is being rendered versus the framerate, compared to professionally made games. The quality and optimization just aren't there.

                        So yeah, your eyes and frontal lobes: you should try using them some time instead of praising what is just a clone of a clone of a clone because it's open source. When all things are near enough to equal, then yeah, go open source; but when the OSS product has been out for a decade and is STILL YEARS behind what is given away for free, there is something inherently wrong with the OSS product.
                        I don't need you to tell me the obvious: there is a visual disparity between a Valve tech demo and Xonotic.
                        Why don't you start by addressing your claim that DarkPlaces is poorly doctored?

                        Comment


                        • #42
                          Originally posted by duby229 View Post
                          No I'm really not wrong. I'm right. I'm not arguing against repeatable benchmarks. They certainly have their place, but benchmarking a game in a repeatable way with things like timedemos defeats the purpose of benchmarking a game.
                          No, you're wrong. If it's not repeatable, then there is no purpose in benchmarking the game, because it would tell you nothing at all about any future attempts to run the game. There wouldn't be any point to having run it in the first place.

                          Maybe you should try explaining what you think the purpose of a benchmark is, since it obviously differs from what everyone else expects. Just to see whether the driver can run the game at all? Nope - that only matters if it's repeatable, because if not the next run through could hit a bug you didn't see before and crash.

                          Comment


                          • #43
                            Originally posted by smitty3268 View Post
                            No, you're wrong. If it's not repeatable, then there is no purpose in benchmarking the game, because it would tell you nothing at all about any future attempts to run the game. There wouldn't be any point to having run it in the first place.

                            Maybe you should try explaining what you think the purpose of a benchmark is, since it obviously differs from what everyone else expects. Just to see whether the driver can run the game at all? Nope - that only matters if it's repeatable, because if not the next run through could hit a bug you didn't see before and crash.
                            Gameplay isn't exactly repeatable. Structuring game benchmarks to be repeatable renders the benchmark worthless; the data you collect that way doesn't mean anything. You don't play timedemos. You never will. Data collected with timedemos cannot represent gameplay. If you want to benchmark a game, then play it. You can measure all kinds of neat things while playing, with the benefit that the data actually represents real-world, useful information.

                            The purpose of benchmarks is to measure differences in performance under actual use cases.

                            EDIT: The point of benchmarking is not to get results that you think should be representative of future runs; it is to get results for the current run. Period. You can do what you want with the data you collect, and if you need information from another run, then do another run.

                            Benchmarking is not about collecting expected data. It's about collecting -actual- data, whether it's expected or not: data that represents the facts as they are. If you need corroborating data, then you need to employ scientific methodology to obtain it. Constraining a game so that it always artificially returns exactly the same data is not scientific and does not represent gameplay. Each time I play through a game, performance -will- be different. Game benchmarks should collect real data that represents actual performance as the game is played. Because that is what you do with games: you play them. So play is what should be benchmarked.
                            Last edited by duby229; 07-20-2013, 07:49 PM.

                            Comment


                            • #44
                              Originally posted by duby229 View Post
                              Gameplay isn't exactly repeatable. Structuring game benchmarks to be repeatable renders the benchmark worthless; the data you collect that way doesn't mean anything. You don't play timedemos. You never will. Data collected with timedemos cannot represent gameplay. If you want to benchmark a game, then play it. You can measure all kinds of neat things while playing, with the benefit that the data actually represents real-world, useful information.

                              The purpose of benchmarks is to measure differences in performance under actual use cases.
                              Yes, but it doesn't make sense to measure the difference between two runs of an unrepeatable benchmark, because you simply can't compare the results. It's like a car race where each racer chooses a different track with a different length, weather, and other conditions. You can measure the time for each racer, but you can't compare them, and that in fact makes such a race completely meaningless.

                              Whatever track you choose in each case, in a single run you still get the result only for that track and for some specific conditions, so it's nothing more universal: you are still using some finite subset of preconditions, exactly as in a typical race/benchmark. The fact that you change conditions between two runs doesn't make the result of any single run more representative. You are losing the ability to compare the results without gaining anything.

                              It looks like you are thinking of some sort of "universal" benchmark, something like a race over all existing roads under every possible weather and other conditions, but obviously that's not possible in reality; any single run is performed with some specific finite subset of the infinite set of all possible conditions. And if you want to be able to compare results and measure the difference between them, you have to use the same subset of conditions for all runs you are going to compare. Of course, you might want to choose the most representative subset, but that's another question. If you think some of your test runs would be more representative than the existing timedemo, just record one as a new timedemo and use it for subsequent benchmarks; then you'll be able to compare them and measure the difference.

                              What you propose is basically the same as recording a new demo for each run and then benchmarking it, so why do you think it will make the result of any single run more representative?

                              Comment


                              • #45
                                Originally posted by vadimg View Post
                                Yes, but it doesn't make sense to measure the difference between two runs of an unrepeatable benchmark, because you simply can't compare the results. It's like a car race where each racer chooses a different track with a different length, weather, and other conditions. You can measure the time for each racer, but you can't compare them, and that in fact makes such a race completely meaningless.

                                Whatever track you choose in each case, in a single run you still get the result only for that track and for some specific conditions, so it's nothing more universal: you are still using some finite subset of preconditions, exactly as in a typical race/benchmark. The fact that you change conditions between two runs doesn't make the result of any single run more representative. You are losing the ability to compare the results without gaining anything.

                                It looks like you are thinking of some sort of "universal" benchmark, something like a race over all existing roads under every possible weather and other conditions, but obviously that's not possible in reality; any single run is performed with some specific finite subset of the infinite set of all possible conditions. And if you want to be able to compare results and measure the difference between them, you have to use the same subset of conditions for all runs you are going to compare. Of course, you might want to choose the most representative subset, but that's another question. If you think some of your test runs would be more representative than the existing timedemo, just record one as a new timedemo and use it for subsequent benchmarks; then you'll be able to compare them and measure the difference.

                                What you propose is basically the same as recording a new demo for each run and then benchmarking it, so why do you think it will make the result of any single run more representative?
                                I have a massive amount of respect for you, and as such I don't want to sound as if I am disagreeing with you, so I'll keep this short by referring to your race track analogy. Each time a racer makes a lap on that same exact track, he will not follow the path of the last lap exactly; each lap will be unique. To keep the analogy going, each lap can be considered a benchmark run, where each run is played through the same game with the same settings, following very similar paths. But each run will be unique, just as each lap will be unique.

                                It is in this way that benchmarks can be truly representative, because each run will represent actual gameplay.

                                EDIT: I'm not talking about recording timedemos at all. Instead, I'm talking about recording performance data while actually playing the game. That way the data collected is real-world. It won't be repeatable, but it will be real.

                                EDIT2: Two different people playing the same game on the same settings with the same hardware are very likely to get different performance simply due to their own styles of play. What I'm suggesting gives each individual the opportunity to see how a game performs in the real world for them.
                                Last edited by duby229; 07-21-2013, 04:04 PM.

                                Comment
