14-Way AMD vs. NVIDIA Linux Gaming Performance For DiRT Showdown


  • #31
    Originally posted by Michael View Post

    I tried experimenting with something like that before, but it never got quite good, and I haven't had the time to touch it in the last two years or so.

    Of course I will not assemble such things manually, but I am able to determine the launch year/quarter for the majority of hardware out there by querying when component XYZ first started appearing in the OpenBenchmarking.org databases. That has worked reasonably well and accurately in my tests.

    I do have access to the Amazon shopping API, but the last time I was toying with it, I still had issues getting accurate prices/matches for graphics cards and other components. Sometimes they would turn up right, but other times searching for a GPU would just show a complete computer, or there would be some bogus listing.
    All the "real-time" hardware price systems I see on the internet are inaccurate or click-bait for resellers.

    So I definitely think the launch price is more accurate (and much simpler to manage, as it is fixed and can be added in descriptions).

    Moreover, it is interesting to see that, for example, you can get an NVIDIA GTX 960 card today for $200 that performs as fast as a $600 card from two years ago.
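Michael's approach above, inferring a launch quarter from when a component first shows up in benchmark result uploads, can be sketched roughly like this. The function name and the first-appearance dates are illustrative assumptions, not actual OpenBenchmarking.org data:

```python
from datetime import date

def launch_quarter(first_seen: dict, component: str) -> str:
    """Estimate a component's launch quarter from the date its name
    first appeared in result uploads (a proxy for the real launch)."""
    d = first_seen[component]
    return f"Q{(d.month - 1) // 3 + 1} {d.year}"

# Hypothetical first-appearance dates, purely for illustration:
first_seen = {
    "GeForce GTX 960": date(2015, 1, 22),
    "Radeon R9 285": date(2014, 9, 2),
}
print(launch_quarter(first_seen, "GeForce GTX 960"))  # Q1 2015
```

The proxy only works for hardware that gets benchmarked soon after release, which is presumably why it covers "a majority" rather than all components.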



    • #32
      Originally posted by Passso View Post

      All the "real-time" hardware price systems I see on the internet are inaccurate or click-bait for resellers.

      So I definitely think the launch price is more accurate (and much simpler to manage, as it is fixed and can be added in descriptions).

      Moreover, it is interesting to see that, for example, you can get an NVIDIA GTX 960 card today for $200 that performs as fast as a $600 card from two years ago.
      But it's not like I maintain a database of launch prices for each graphics card, etc. Plus, due to the OpenBenchmarking.org integration, it would need to be automated. I could potentially add a wiki-type list of launch price data on OpenBenchmarking.org that people could contribute to. But it would likely fall low on the priority queue unless someone was helping out or a commercial client wanted it for a contract.
      Michael Larabel
      https://www.michaellarabel.com/
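A contributable launch-price list of the sort Michael describes could feed something as simple as a keyed table used to normalize results. The entries, prices, and function below are illustrative examples only, not curated data or any actual OpenBenchmarking.org feature:

```python
# Illustrative launch-price table (quarter, USD MSRP); example values only.
LAUNCH_PRICES_USD = {
    "GeForce GTX 960": ("Q1 2015", 199),
    "GeForce GTX 780 Ti": ("Q4 2013", 699),
}

def perf_per_launch_dollar(fps: float, component: str) -> float:
    """Normalize a benchmark result by the card's launch price."""
    _, price = LAUNCH_PRICES_USD[component]
    return fps / price

print(round(perf_per_launch_dollar(60.0, "GeForce GTX 960"), 3))  # 0.302
```

Because launch prices are fixed, such a table never goes stale the way scraped "current" prices do, which is the advantage Passso is arguing for.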



      • #33
        Originally posted by grigi View Post
        Michael

        Just curious: there is a flag for eON wrappers, --eon_disable_catalyst_workarounds, and apparently Catalyst 15.7 has fixed the bug that causes eON wrappers to run slower. I know The Witcher 2 runs about 20% better with that flag now. Can you test whether DiRT shows any performance difference with that flag?

        I'm curious if eON is aware that Catalyst 15.7 fixed the ARB_texture_storage issue?
        The Witcher 2 has a profile in Catalyst 15.7, so it should run fine... but yeah, maybe that game switch is needed too.

        DiRT obviously does not have a profile yet, but it might perform fine with one meant for another game.
        Last edited by dungeon; 18 August 2015, 10:40 AM.



        • #34
          Originally posted by Michael View Post

          But it's not like I maintain a database of launch prices for each graphics card, etc. Plus, due to the OpenBenchmarking.org integration, it would need to be automated. I could potentially add a wiki-type list of launch price data on OpenBenchmarking.org that people could contribute to. But it would likely fall low on the priority queue unless someone was helping out or a commercial client wanted it for a contract.
          IMHO a shortlist of the 20 CPUs/GPUs you actually use in your everyday benchmark articles could be enough.

          Not sure you need a complete database, which would mean far more work, as you pointed out.



          • #35
            Originally posted by nightmarex View Post


            Again, this is an eON wrapper game... By that logic I could post Windows games running through Wine with Gallium Nine where Radeon blows NVIDIA out of the water, and claim that means AMD is the best gaming experience on Linux? You see how stupid that is? You're blind if you take results like this as an indication of anything more than emulation. And AMD still has the corner on DX9 games, which probably outnumber all native Linux games. This logic of testing eON junk is dumb: the $1000 NVIDIA cards are capped out the same as the $300 ones, and it's the same story with the AMD cards; the high end and low end all run the same. It's not a game for benchmarking, it's a hack so Linux users can play it, and nothing more.
            This article is clearly headlined "14-Way AMD vs. NVIDIA Linux Gaming Performance For DiRT Showdown". It is not headlined "NVIDIA best gaming experience on Linux", and it is not headlined "Gaming performance comparison of Wine games using Gallium Nine". It is merely there to show Phoronix readers what performance they can expect on which hardware at this point when running this specific game. Nothing more, nothing less, so all your babbling is not only stupid but utterly off-topic.



            • #36
              Originally posted by boxie View Post
              I am hoping that the similarities between DX12 and Vulkan will mean the end of eON-style ports, faster ports, and less driver tuning.
              If anything, they'll be even more viable, for more than one reason (and probably even outperform the original):
              - in these ports the CPU is the bottleneck (Vulkan addresses that)
              - gains in performance are achieved by a multi-threaded approach to DirectX (again, Vulkan addresses that far better than OpenGL)
              - shaders, which are probably the biggest problem in ports like this, would only need HLSL bytecode translated to SPIR-V

              Having something like Gallium Nine, but written on top of Vulkan, could do wonders... or not. I don't have enough knowledge to judge that, but based on what has been said, I would say it is quite safe to assume. One thing is sure, though: unlike Gallium Nine, it would work on all vendors and all drivers.



              • #37
                Originally posted by justmy2cents View Post
                If anything, they'll be even more viable, for more than one reason (and probably even outperform the original):
                - in these ports the CPU is the bottleneck (Vulkan addresses that)
                If current poor performance is due to the additional CPU overhead of translating each draw call, and Vulkan and DX12 favour even more draw calls, this could go the other way really fast. I don't know whether that's actually the case; I'm just saying there are scenarios that could make this less than an automatic win across the board. Personally, I'm expecting nothing and will take any gain as a welcome bonus.
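bug77's worry, that a fixed per-draw-call translation cost grows with the draw-call count the new APIs encourage, can be illustrated with a toy model. All the per-call costs below are made-up assumptions, not measurements of any real wrapper:

```python
def frame_time_ms(draw_calls: int, native_cost_us: float,
                  translation_overhead_us: float) -> float:
    """Toy model: per-frame CPU time when every draw call pays both its
    native submission cost and a fixed wrapper-translation overhead."""
    return draw_calls * (native_cost_us + translation_overhead_us) / 1000.0

# Hypothetical per-call costs in microseconds, purely illustrative:
for calls in (2_000, 20_000):  # modest workload vs draw-call-heavy workload
    native = frame_time_ms(calls, 2.0, 0.0)
    wrapped = frame_time_ms(calls, 2.0, 3.0)
    print(calls, native, wrapped)
```

In this sketch the wrapper's absolute penalty is 10x larger in the draw-call-heavy case, which is the "could go the other way" scenario; whether better multi-core submission offsets it is exactly the open question in the thread.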



                • #38
                  Originally posted by bug77 View Post

                  If current poor performance is due to the additional CPU overhead of translating each draw call, and Vulkan and DX12 favour even more draw calls, this could go the other way really fast. I don't know whether that's actually the case; I'm just saying there are scenarios that could make this less than an automatic win across the board. Personally, I'm expecting nothing and will take any gain as a welcome bonus.
                  Agreed. That was the reason for the "... or not"; as I said, it could go both ways and I really can't say which, since there is also the fact that submission of draw calls could scale over more cores in a much more efficient way.

                  I guess time will tell.
                  Last edited by justmy2cents; 18 August 2015, 01:12 PM.



                  • #39
                    Originally posted by bug77 View Post

                    If current poor performance is due to the additional CPU overhead of translating each draw call, and Vulkan and DX12 favour even more draw calls, this could go the other way really fast. I don't know whether that's actually the case; I'm just saying there are scenarios that could make this less than an automatic win across the board. Personally, I'm expecting nothing and will take any gain as a welcome bonus.
                    Heh. We're already seeing DX12 controversy with Ashes of the Singularity. I guess we knew it wouldn't take long. Developers are claiming the performance issues are on the drivers' side, of course (of all places, in an API designed to circumvent the driver as much as possible).

                    MS was able to get a lot of people to upgrade to Windows 10 though.



                    • #40
                      Originally posted by johnc View Post

                      Heh. Already we're seeing DX12 controversy with Ashes of the Singularity. I guess we knew it wouldn't take long. Developers are claiming performance issues are on the drivers' side of course (of all places, in an API designed to circumvent the driver as much as possible).

                      MS was able to get a lot of people to upgrade to Windows 10 though.
                      Yeah, well, Ashes was a Mantle demo for a while, and DX12 is about putting more control in the hands of developers. I guess you still can't simply optimize for DX12 in general and have to go for specific targets instead. Which is sad.
