Linux 5.5-rc7 Kernel Released

  • #21
    Originally posted by anarki2 View Post

    I meant 3900X, whatever.

    It works perfectly fine on 5.4, it fails consistently on 5.3. How on Earth did you come to the conclusion that it's a hardware issue then? Elementary logic.
    Well, I did work in hardware retail in the distant past, and troubleshooting is an art. As I pointed out to you with that clip, there are strange hardware issues out there with the Ryzen 3000 series, and I just wanted to raise that as a possibility since it could have been voltage related. Every platform has its own quirks, of course; I have seen some with my X58 rig and now on AM4 as well. But it is also entirely possible that the problems come from the software side of things.
    Last edited by ms178; 20 January 2020, 04:44 PM.



    • #22
      So many AMD GPU-related patches, fixes, and other work going on. I hope BIG NAVI doesn't end up being a flop; it'd be nice to get off this 1080 Ti (which has served me well). I like the idea of open-source drivers and software. You know, progress.



      • #23
        Originally posted by aufkrawall View Post
        Yes, also timing tuning. I could run some tests with a Ryzen 3600X, and Shadow of the Tomb Raider saw +30% more fps in a totally CPU-bound scenario with 3600 tuned RAM vs. 3200 XMP CL14. Overclocking the CPU just runs into voltage and clock walls.
        So that got me curious and I ended up spending some time on this.
        Originally I did not want to spend much time on tweaking, as a few months ago I spent a lot of time overclocking my i7, so when I got my 3800X I just increased the RAM frequency, dropped the major timings a bit and called it good enough.

        Today I ran STR's benchmark at DCP@3000, my timings@3600 and the bloody long list of timings offered by calculator@3600 with the higher DRAM voltage.
        The game's average FPS didn't change at all, but that makes sense since the game says I'm GPU bound at 99% (I have a 580, running at 1080p).
        Out of curiosity, I thought of comparing the CPU numbers too.
        The first jump gives around 10% improvement for both CPU game and CPU renderer.
        The second jump, so purely timings, gives about 1-2%. I'm not yet convinced that was worth the time and the higher voltage, but I'm only using the fast settings; maybe extreme would give more.

        Based on this, I am curious how you could get +30% anywhere. Did you run the native STR or with Proton?
        Last edited by geearf; 21 January 2020, 01:44 AM.
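        A minimal Python sketch of the comparison described above: the fps values are assumed placeholders, and the run labels simply mirror the three configurations mentioned (DCP@3000, own timings@3600, calculator@3600). Plugging in the CPU Game / CPU Render numbers the benchmark reports for each run gives the percentage jumps discussed.

```python
# Rough sketch: percent uplift of the CPU-side results across three RAM configs.
# All fps values are assumed placeholders -- substitute the CPU Game / CPU Render
# numbers that the Shadow of the Tomb Raider benchmark reports for each run.

runs = {
    "DCP@3000 (baseline)": {"cpu_game": 100.0, "cpu_render": 110.0},  # assumed
    "own timings@3600":    {"cpu_game": 110.0, "cpu_render": 121.0},  # assumed (~+10% jump)
    "calculator@3600":     {"cpu_game": 112.0, "cpu_render": 123.0},  # assumed (~+1-2% more)
}

baseline = runs["DCP@3000 (baseline)"]
for name, result in runs.items():
    for metric in ("cpu_game", "cpu_render"):
        uplift = (result[metric] / baseline[metric] - 1.0) * 100.0
        print(f"{name:22} {metric}: {result[metric]:6.1f} fps ({uplift:+.1f}% vs baseline)")
```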



        • #24
          Originally posted by R41N3R View Post

          Where AMD really struggles is with APUs... I don't know why, but it took AMD really, really long to get them mostly stable; just some weeks ago I suffered from GPU resets on my Ryzen 5 2500U just by starting Plasma. And the Ryzen 3 2200 had many issues as well (firmware bugs, ..
          I have an AMD Athlon 200GE - got it a few weeks after release. There have been issues with the graphics on older kernels (<4.19). I'm running Debian stretch on it and build the kernel myself according to the Gentoo wiki page. It ran stable with 4.19+, with minor dmesg errors related to ACPI and gfx, but they subsequently vanished. Since 5.3 only one is left, which is related to Asus... but the issues were not as bad as people have written. Just use the latest BIOS updates, the latest firmware and the latest stable kernel...


          I don't get why some people want to have the most recent hardware but run it on old kernels...
          Last edited by CochainComplex; 21 January 2020, 04:39 AM.



          • #25
            Originally posted by theriddick View Post
            So many AMD GPU-related patches, fixes, and other work going on. I hope BIG NAVI doesn't end up being a flop; it'd be nice to get off this 1080 Ti (which has served me well). I like the idea of open-source drivers and software. You know, progress.

            Well, their gfx drivers are open source and in the kernel, so the changes are transparent - no wonder. If it were closed source, nobody would take any notice of it. I like to see that they are working on their products to make them more reliable and better.



            • #26
              Originally posted by geearf View Post
              Based on this, I am curious how you could get +30% anywhere.
              You need a place with high CPU demands (e.g. many NPCs and a high view range, like some spots in Paititi), and with an RX 580 you need to lower the resolution and some GPU details further, as the game has high GPU demands too. It would be more relevant with a 2080 Ti, for playing this game at up to 144 Hz. Not a very common scenario, but not a synthetic benchmark either. I never said this gain is what to expect all the time and in every game.



              • #27
                Originally posted by aufkrawall View Post
                You need a place with high CPU demands (e.g. many NPCs and a high view range, like some spots in Paititi), and with an RX 580 you need to lower the resolution and some GPU details further, as the game has high GPU demands too. It would be more relevant with a 2080 Ti, for playing this game at up to 144 Hz. Not a very common scenario, but not a synthetic benchmark either. I never said this gain is what to expect all the time and in every game.
                Well, I don't know about that. If the CPU numbers didn't improve by more than 10%, I doubt dropping graphics settings would let the GPU numbers improve by more than 10%; it just does not make sense. But yeah, maybe it could happen in a different setting where the game is CPU bound, I guess. I assume the benchmark covers the most generic cases though, so I'm still not convinced of the usefulness of tweaking these things, but I'd be more than happy to be convinced otherwise.

                I am curious about trying that with ffmpeg, yuzu and maybe rpcs3.
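                A minimal sketch of the bottleneck reasoning being debated here, assuming delivered fps is capped by the slower of the CPU and GPU sides; all figures are made up purely for illustration.

```python
# Rough model: the delivered frame rate is limited by whichever side is slower,
# so a CPU-side gain from RAM tuning only becomes visible once the GPU is no
# longer the bottleneck. All figures below are assumed, purely illustrative.

def delivered_fps(cpu_fps: float, gpu_fps: float) -> float:
    """Effective frame rate, capped by the slower stage."""
    return min(cpu_fps, gpu_fps)

cpu_stock, cpu_tuned = 100.0, 130.0   # assumed: +30% CPU-side from RAM tuning
gpu_heavy = 60.0                      # assumed: RX 580-class load at 1080p, GPU bound
gpu_light = 160.0                     # assumed: settings lowered until CPU bound

print(delivered_fps(cpu_stock, gpu_heavy))  # 60.0  -> tuning gain invisible
print(delivered_fps(cpu_tuned, gpu_heavy))  # 60.0
print(delivered_fps(cpu_stock, gpu_light))  # 100.0 -> gain now shows up
print(delivered_fps(cpu_tuned, gpu_light))  # 130.0 (+30%)
```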
