NVIDIA/AMD Linux Gaming Performance For Hitman 2 On Steam Play

  • #31
    Originally posted by birdie View Post

    I'll wait for screenshots. I can't imagine AMD Windows drivers being this bad. We are talking about up to a 20% difference, which is just huge.
    20% in some games, in some situations. It can just be a corner case that is CPU bound (0% slower on really fast CPUs but 200% slower on others). There are too many special cases and configurations in graphics to freak out about one unusual result in one benchmark. And if you have a smaller development team than your competitor, it is the corner cases that are going to be the most uneven: the bigger team has the resources to optimize not just the common cases but most of the corner cases as well. So in the main cases you are battling over 1-2%, but in the corner cases things are much more uneven.

    It could also be a missing feature, or an unsafe optimization, but I would not rule out random chance making DXVK faster in a few benchmarks/games, especially with AMD.
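
    As a rough illustration of that CPU-bound corner case, here is a toy Python model (all numbers invented, not taken from any benchmark): delivered FPS is capped by the slower of the CPU side (including any driver overhead) and the GPU side, so a fixed per-frame driver cost can be invisible on a very fast CPU and substantial on a slower one.

    # Toy model: delivered FPS is limited by whichever of the CPU (plus driver
    # overhead) or GPU takes longer per frame. All numbers are invented.

    def fps(cpu_frame_ms: float, gpu_frame_ms: float, overhead_ms: float) -> float:
        """FPS bounded by the slower of CPU-side (incl. driver) and GPU-side frame time."""
        return 1000.0 / max(cpu_frame_ms + overhead_ms, gpu_frame_ms)

    GPU_MS = 10.0       # hypothetical GPU cost per frame -> 100 fps ceiling
    OVERHEAD_MS = 6.0   # hypothetical extra per-frame driver cost

    for cpu_ms in (2.0, 8.0, 20.0):   # very fast, borderline, and slow CPU
        base = fps(cpu_ms, GPU_MS, 0.0)
        hit = fps(cpu_ms, GPU_MS, OVERHEAD_MS)
        print(f"CPU {cpu_ms:4.1f} ms/frame: {base:5.1f} -> {hit:5.1f} fps "
              f"({(base / hit - 1) * 100:4.1f}% slower)")

    On the fast CPU the GPU stays the bottleneck and the overhead never shows up (0% slower), while on the slower CPUs the very same overhead costs a large chunk of FPS, which is exactly why one CPU-bound benchmark result says little on its own.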



    • #32


      On Techspot during the GTX 1660 launch:

      Before we begin, a few quick notes on our test system. We tested with a Core i9 9900K clocked at 5 GHz with 32GB of DDR4-3200 memory. We used Adrenalin 2019 Edition 19.2.3 drivers for the Radeon GPUs and Game Ready 419.35 WHQL for the GeForce GPUs.



      • #33
        You know, another interesting tidbit. I've used Ubuntu for quite a while, but installed the KDE edition (a clean install of Kubuntu) earlier today. I played a single game with Proton, 7 Days to Die, and got anywhere from 5-25% better FPS versus Ubuntu. Ubuntu has become a dog as of late. So far, none of the issues I had with Ubuntu are present with Kubuntu. The only remaining issue is a slight audio latency with my Bluetooth headset that I'm still trying to figure out. I've also discovered that KDE has a huge software ecosystem. I fell out of love with KDE back when they jumped to... 3.0, I believe? However, the current Kubuntu release (19.04) has made me love Linux again. You guys should try testing Kubuntu vs Ubuntu in more games. I may try a few more myself tomorrow. So far KDE (on Kubuntu) is looking like a winner in my book. I haven't even upgraded to kernel 5.1 yet.



        • #34
          Originally posted by M@GOid View Post


          On Techspot during the GTX 1660 launch:
          It is interesting that there is no fixed percentage of overhead; each card has a different amount of degradation.
          The Vega 56 and 64 are using the same drivers as far as I know, and yet they have different "overhead".

          Vega64: 89.74/109= 0.82
          Vega56: 81.79/107= 0.76
          RX590: 71.76/80 = 0.89
          RX580 : 64.71/76 = 0.85
          I'm beginning to suspect memory bandwidth issues again for the Vega 56.

          RTX2070: 89.41/110 = 0.81
          GTX1080: 89.14/109 = 0.82
          GTX1070: 73.79/105 = 0.70
          GTX1060: 54.17/72 = 0.75
          GTX1660ti: 70.16/102 = 0.69
          GTX1660: 62.54/93 = 0.67

          The GTX 1660 (Ti) seems to underperform on Steam Play. (Edit: I can't make such a broad statement; it underperforms in THIS comparison.)
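
          For anyone who wants to check or extend the arithmetic, here is a small Python sketch that just recomputes the ratios above from the posted numbers (Linux FPS from this article divided by the Windows FPS quoted from Techspot); no new data involved.

          # Linux FPS (this article) divided by the Windows FPS quoted from the
          # Techspot GTX 1660 review. Pure arithmetic on the numbers posted above.
          results = {
              "Vega 64":     (89.74, 109),
              "Vega 56":     (81.79, 107),
              "RX 590":      (71.76, 80),
              "RX 580":      (64.71, 76),
              "RTX 2070":    (89.41, 110),
              "GTX 1080":    (89.14, 109),
              "GTX 1070":    (73.79, 105),
              "GTX 1060":    (54.17, 72),
              "GTX 1660 Ti": (70.16, 102),
              "GTX 1660":    (62.54, 93),
          }

          for card, (linux_fps, windows_fps) in results.items():
              ratio = linux_fps / windows_fps
              print(f"{card:11s}: {linux_fps:6.2f} / {windows_fps:3d} = {ratio:.2f}")
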
          Last edited by Raka555; 03 May 2019, 03:53 AM.



          • #35
            Here are my benchmark results:

            DX11 1440p: 58 fps
            DX11 720p: 96 fps
            DXVK 1440p: 56 fps
            DXVK 720p: 86 fps

            Settings: (screenshot)
            As expected, there are no magic performance gains from running it through DXVK (wine-esync). In fact, the performance hit in the CPU limit is bigger than in the GPU limit. The performance hit in the GPU limit is sensationally small, though.

            Don't believe stupid shit on the internet, at least not without actual screenshots.
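
            To put those numbers in relative terms (treating 720p as the roughly CPU-limited case and 1440p as the roughly GPU-limited one), a quick Python recalculation of the posted results:

            # Relative DXVK performance hit from the results posted above.
            # 720p ~ CPU-limited, 1440p ~ GPU-limited.
            runs = {
                "1440p (GPU limit)": (58, 56),   # (DX11 fps, DXVK fps)
                "720p (CPU limit)":  (96, 86),
            }

            for label, (dx11, dxvk) in runs.items():
                hit = (1 - dxvk / dx11) * 100
                print(f"{label}: {dx11} -> {dxvk} fps ({hit:.1f}% slower under DXVK)")

            That works out to roughly a 3% hit at 1440p versus roughly a 10% hit at 720p, which matches the point about the CPU limit being where DXVK costs the most here.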



            • #36
              Originally posted by aufkrawall View Post
              Here are my benchmark results:

              As expected, no magic performance gains by running it in DXVK (wine-esync). In fact, the performance hit in CPU limit is bigger than in GPU limit. The performance hit in GPU limit is sensationally small though.

              Don't believe stupid shit on the internet, without actual screenshots at least.
              Thanks for this, but can you please state what CPU, GPU, etc. you used?

              If it is in the settings screenshot, it is too small for me to read.
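
              Not a substitute for the poster's answer, but for anyone wanting to include that information with a result, here is a small Linux-only Python snippet (assuming the standard lspci tool is installed) that prints the CPU model and any VGA/3D devices:

              # Print CPU model and GPU(s) on Linux. Assumes lspci is available.
              import subprocess

              with open("/proc/cpuinfo") as f:
                  cpu = next(line.split(":", 1)[1].strip()
                             for line in f if line.startswith("model name"))
              print("CPU:", cpu)

              lspci = subprocess.run(["lspci"], capture_output=True, text=True).stdout
              for line in lspci.splitlines():
                  if "VGA" in line or "3D controller" in line:
                      print("GPU:", line.split(":", 2)[-1].strip())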



              • #37
                Originally posted by birdie View Post
                in Windows works close to the kernel and in X.org works as a user process in Linux).
                You do know that nvidia.ko is loaded by the kernel and runs at kernel level, right?
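
                To check that for yourself, here is a trivial Linux-only Python snippet that reads /proc/modules and lists any loaded NVIDIA kernel modules (nvidia, nvidia_drm, nvidia_modeset, ...):

                # List loaded NVIDIA kernel modules by reading /proc/modules;
                # these .ko modules run in kernel space like any other.
                with open("/proc/modules") as f:
                    nvidia_mods = [line.split()[0] for line in f
                                   if line.split()[0].startswith("nvidia")]

                print("Loaded NVIDIA kernel modules:",
                      ", ".join(nvidia_mods) if nvidia_mods else "none")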



                • #38
                  You should throw your fan-less 1030 in there next time for some perspective.



                  • #39
                    Originally posted by Mike Frett View Post
                    You should throw your fan-less 1030 in there next time for some perspective.
                    Don't have a 1030...
                    Michael Larabel
                    https://www.michaellarabel.com/



                    • #40
                      Originally posted by kokoko3k View Post
                      Do you know that nvidia.ko is used by the kernel and runs at kernel level right?
                      Stop embarrassing yourself, please.

                      Originally posted by debianxfce View Post
                      GTX 1650 faded away in a week. That is pretty fast. Use google before you buy something to save your money.
                      Are you an idiot? What does your statement have to do with the article being discussed?
