
The Current RadeonSI Benefits Of Switching To Mesa 13.1-dev Git On Ubuntu 16.10


  • #11
    OpenGL core profile version string: 4.5 (Core Profile) Mesa 13.1.0-devel (git-2e2562c)
    when did that happen?



    • #12
      Originally posted by Detructor:

      when did that happen?
      A couple weeks ago. After 13.0 branched, they exposed 4.5 in git master for easier testing.



      • #13
        ohh okay. So it's only in git but not in the actual release?



        • #14
          Originally posted by Detructor:
          ohh okay. So it's only in git but not in the actual release?
          Correct. The 13.0 release only exposes 4.3, but all the functionality is there, so if you set the right environment variable it can run 4.5 programs just fine. Git just flipped the default up to 4.5.
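          For anyone on the 13.0 release who wants to try this, a minimal sketch using Mesa's version-override environment variables (the variable names are from Mesa's envvars documentation; the glxinfo check assumes mesa-utils is installed):

```shell
# Force Mesa 13.0 to advertise OpenGL 4.5 / GLSL 4.50 even though
# the driver defaults to reporting a lower version.
export MESA_GL_VERSION_OVERRIDE=4.5
export MESA_GLSL_VERSION_OVERRIDE=450

# Verify what applications will now see (requires mesa-utils):
# glxinfo | grep "core profile version"
```

          Note this only changes the advertised version; if an application uses a feature the driver genuinely lacks, it will still fail.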



          • #15
            Originally posted by peppercats:
            I don't use mesa so my comment may be ignorant, but has anyone tried compiling it with native optimizations + LTO and done benchmarks?
            https://copr.fedorainfracloud.org/coprs/che/mesa/
            I'm not aware of any benchmarks.



            • #16
              Originally posted by Creak:
              peppercats I don't think that would change the results much. The major improvements come from better management of memory, or better handling of the OpenGL extensions, etc. These kinds of things aren't very CPU intensive; they just need to be well thought out. One thing that can be CPU intensive is shader compilation, but I think that's done with LLVM, not Mesa.
              I'm sure the question means Mesa+LLVM, for drivers that use LLVM. It would improve all the results that are CPU-dependent (the ones that are faster on Michael's Xeon than on other people's CPUs).



              • #17
                Originally posted by lumks:
                Articles like this always sound like "why you should not use Ubuntu, but Arch Linux"
                Using the newest binaries from PPAs is a lot faster and more convenient than compiling everything from AUR. Just don't use an ancient Ubuntu, and it'll be fine.



                • #18
                  Originally posted by lumks:
                  Articles like this always sound like "why you should not use Ubuntu, but Arch Linux"

                  Manjaro; Linux 4.8; Mesa 13; Blender 2.78a; R9 380 - no crash.
                  I personally love the shit out of Arch, but it's becoming more niche on my network to the point I barely use it right now (right now doesn't mean I'm not coming back) because:

                  a) It's a rolling distro
                  b) It's a rolling distro
                  c) It's a rolling distro

                  Every time I get my setup just right, along comes an update that blows everything to pieces. The last straw for me was Nextcloud on Apache. I had gotten it up and running, everything was syncing nicely both locally and Internettingly, and then along came an update and it all fell to pieces and never came back online properly. It was like it took brain damage. The time I spent resolving the issue was unacceptable. I run Nextcloud on other installs and they run swimmingly, no worries, ace!

                  If the chappy is having issues with Ubuntu, a distro based on release cycles, then what chance does he have with Arch, a fast-moving system? All he need do is find his problem and, theoretically, have it fixed. With Arch, he could fix the problem and then along comes another issue with the next update. It's nonsense to use it when you need to get shit done and can't risk the downtime without thorough testing. And that would mean wasted time and resources anyway (or simply a dual-boot).

                  I am well aware of pinning versions, but after a while, what's the bloody point of running a rolling distro if you're pinning everything?

                  Plus, he might also simply just like/prefer Ubuntu.



                  • #19
                    Originally posted by peppercats:
                    I don't use mesa so my comment may be ignorant, but has anyone tried compiling it with native optimizations + LTO and done benchmarks?
                    Having PGO/FDO could be nice too; maybe Solus does that.

                    Originally posted by eydee:

                    Using the newest binaries from PPAs is a lot faster and more convenient than compiling everything from AUR. Just don't use an ancient Ubuntu, and it'll be fine.
                    We also have binary repositories for Arch; LordHeavy builds mesa-git for us more than daily.



                    • #20
                      Originally posted by debianxfce:
                      The Debian packaging system is more advanced than Arch Linux's
                      You mean bloated? Just compare the run time of apt-get update; apt-get upgrade to pacman -Syu for a similar-sized update. I know apt is doing more work, but the result is that it takes much, much longer.

                      I don't think compiler optimizations are doing a lot for Mesa right now. Even with -O0 -ggdb3, real games are only somewhat slower. Even small improvements in how Mesa handles OpenGL will most likely have a bigger impact, so I believe looking into hardcore compiler optimizations will only make sense once Mesa is really well optimized through and through.

                      That said, I have tried -Ofast and LTO and did not notice major differences in games. I did notice major differences in compile time, though: LTO easily triples the time it takes to build Mesa for (currently) very little benefit.
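                      For reference, a rough sketch of what such a build looked like with the autotools build system Mesa used at the time; the configure options shown are assumptions and are left commented out, only the flag variables from the post are actually set:

```shell
# Optimization flags discussed above; per the post, -flto roughly
# triples Mesa's build time for little runtime benefit.
CFLAGS="-Ofast -flto"
CXXFLAGS="-Ofast -flto"
LDFLAGS="-flto"
export CFLAGS CXXFLAGS LDFLAGS

# Hypothetical configure/build invocation (not run here):
# ./autogen.sh --with-gallium-drivers=radeonsi
# make -j"$(nproc)"
```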
