BioShock Infinite Runs Much Faster For RadeonSI On Mesa Git: ~40%

  • #31
    Originally posted by theriddick View Post
    I'll get excited when I see 150+++ fps from these AMD cards, I mean if a NVIDIA 950 can get 142fps then AMD should be able to achieve at least that with allot of these cards.

    The real question is whether AMD can migrate these performance improvements over to all other games in due time, they need general platform/opengl improvements not just game specific tweaks.
    The improvements here by Marek were general platform/opengl improvements. And the performance from NVidia that you are asking for was almost certainly achieved (at least partially) using game-specific tweaks. Though it's true that Mesa could really use some multi-threading improvements.
    Last edited by smitty3268; 14 August 2016, 06:06 AM.


    • #32
      Originally posted by starshipeleven View Post
      Does nouveau even run the game? On what hardware?
      Yes, since one week after the Linux release.


      • #33
        Originally posted by atomsymbol
        I think it is slightly more accurate to say it is "driver bound".
        It is driver-developer vocabulary; from that point of view we say BioShock is CPU-bound.

        From the user's POV, whatever drives the GPU to 100% is better, as on Windows.

        And running uncapped like that serves no purpose other than benchmarking; in reality the game would play better and more smoothly when you cap the frame rate with FRTC (Frame Rate Target Control).
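FRTC does this capping inside the driver; purely as an illustration of the idea, a user-space frame limiter is just a fixed per-frame time budget with a sleep for whatever is left of it. The function and parameter names below are invented for this sketch, not part of any driver API:

```python
import time

def run_capped(render_frame, target_fps=60, frames=10):
    """Call render_frame in a loop, sleeping out the rest of each
    frame's time budget so the loop never exceeds target_fps.
    Returns the total elapsed wall-clock time."""
    frame_budget = 1.0 / target_fps
    start = time.monotonic()
    for _ in range(frames):
        frame_start = time.monotonic()
        render_frame()
        # Sleep away whatever remains of this frame's budget.
        remaining = frame_budget - (time.monotonic() - frame_start)
        if remaining > 0:
            time.sleep(remaining)
    return time.monotonic() - start

# With a trivial "render", 10 frames at a 100 fps cap take about 0.1 s.
elapsed = run_capped(lambda: None, target_fps=100, frames=10)
```

The benefit the poster describes comes from the even frame pacing: frames arrive at a steady interval instead of as fast as the CPU-limited driver can push them.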


        • #34
          Originally posted by drSeehas View Post
          ???
          Since when is R7 370 Volcanic Islands?
          It is Southern Islands, not even Sea Islands. No Sea Islands cards were tested at all: the HD 7790, HD 8770, R7 260, R7 260X, and R7 360 are missing. Hawaii/Grenada has a regression.
          Oops, my mistake. I confused it with the 380, which is VI. I thought the Hawaii regression was when using AMDGPU, though? I'm not yet using AMDGPU with my 290 for that reason and haven't noticed any regression.


          • #35
            Originally posted by ResponseWriter View Post
            ... haven't noticed any regression.
            Do you use kernel 4.7?


            • #36
              Originally posted by atomsymbol

              - How did you put the stats in the upper left corner of the screen?
              It is probably MSI Afterburner.


              • #37
                Originally posted by pal666 View Post
                To be honest, you don't need the HUD to see that; just the same results with all cards.
                Yes, I know. I'm just curious what the limiting factor is. Maybe cache/RAM latencies?


                • #38
                  Originally posted by drSeehas View Post
                  Do you use kernel 4.7?
                  Not yet. I like to wait for the .1 release, as the obvious regressions (including those with certain filesystems that have had issues in the past) are usually fixed by then. I guess that's why I didn't see this one.

                  Looks like there's been some progress anyway: https://bugs.freedesktop.org/show_bug.cgi?id=97260


                  • #39
                    Originally posted by ResponseWriter View Post
                    Not yet. ... regressions ... I guess that's why I didn't see this one. ...
                    Exactly!


                    • #40
                      Originally posted by atomsymbol
                      - If I were buying an i3-6100 Skylake, I would be deciding between the i3-6100T (35 W, 3.2 GHz) and the i3-6100 (51 W, 3.7 GHz). The i7-6700T also looks nice, but it's a different market segment (+200€). Based on your experience with the i3-6100, does a 35 W CPU make sense in a desktop machine? 51-35=16 W is tiny compared to a GPU's >=150 W.
                      It is not my video, but yes, there you can see that a 2-core, high-IPC CPU like the i3-6100 can sustain an RX 480 GPU in BioShock Infinite at its max.

                      That is only possible when everything is perfectly optimized: API, game, and drivers... On Linux, of course, there is no way with the same hardware. On Linux you basically need a CPU one tier higher with NVIDIA, or both a CPU and a GPU tier higher with the AMD blob... and with the open-source driver, well, that is probably a thing of the future.
