Vega 10 Huge Page Support, Lower CS Overhead For AMDGPU In Linux 4.14


  • #11
    Originally posted by agd5f View Post
    This is mainly to take advantage of large pages by saving a level in the page walker for GPUVM if the page is large. Additionally the GPUVM hardware in vega10 supports 4 levels instead of 2 (previous asics) so the page tables should take less memory overall.
    Another big benefit is that if your application has a large memory footprint it can exceed the coverage of the TLBs (Translation Lookaside Buffers, aka translation cache entries) in the GPUVM block, so that relatively more of your memory accesses will also require a second memory access (or more) to fetch the missed translation from the page tables and reload it into a TLB.

    Adding support for 2M pages allows each TLB entry to cover a 512x larger address range, which more or less eliminates TLB thrashing as a performance problem. I believe we mix 4K and 2M pages as needed (big allocations get 2M pages) rather than just forcing everything to 2M and wasting memory, but I haven't looked at that part of the final code (I will).
    Last edited by bridgman; 18 August 2017, 07:23 PM.
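
    A quick sanity check of the 512x figure above (not part of the original post, just the arithmetic): a 2M page maps 512 times as many bytes as a 4K page, so a single TLB entry covers 512 times the address range.
    Code:
    # bytes per 2M page divided by bytes per 4K page
    echo $(( (2 * 1024 * 1024) / (4 * 1024) ))   # prints 512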



    • #12
      I'm currently using that branch, and while I am completely comfortable with compiling "bleeding edge" kernels, I wonder whether there is a better way to track this repository than by removing the whole directory and cloning the whole repository again.

      But when I just do a "make clean; git pull" I usually end up with all sorts of weird "merge conflict" messages from git (despite not having changed any source file). I tried different commands like "git reset --hard; git pull" etc., but still get merge conflicts from git, so I seem to be missing the right combination of git commands that would let me track the changes of your repository by pulling just incremental updates - is there a trick to this?



      • #13
        Originally posted by dwagner View Post
        I'm currently using that branch, and while I am completely comfortable with compiling "bleeding edge" kernels, I wonder whether there is a better way to track this repository than by removing the whole directory and cloning the whole repository again.
        Yes, there is such a way: use a prebuilt package for your distro. For Fedora, mystro256 has a Copr; for other distros, use Google.



        • #14
          Originally posted by pal666 View Post
          Use a prebuilt package for your distro.
          That is certainly not what I would consider a better way.



          • #15
            Originally posted by dwagner View Post
            I'm currently using that branch, and while I am completely comfortable with compiling "bleeding edge" kernels, I wonder whether there is a better way to track this repository than by removing the whole directory and cloning the whole repository again.

            But when I just do a "make clean; git pull" I usually end up with all sorts of weird "merge conflict" messages from git (despite not having changed any source file). I tried different commands like "git reset --hard; git pull" etc., but still get merge conflicts from git, so I seem to be missing the right combination of git commands that would let me track the changes of your repository by pulling just incremental updates - is there a trick to this?
            I think the problem is that you're working on and building your kernel inside your git tree.
            I am not sure exactly how it does it, but with a PKGBUILD there is a clean separation between the two.
            The working folder is created anew every time, but I don't have to pull the whole repository again, only the diff, because the git data is stored in another folder.
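
            Not from the original post, but one kernel-native way to get a similar separation without a PKGBUILD is an out-of-tree build with make O=<dir>, which keeps the git tree free of generated files; a minimal sketch, with example paths:
            Code:
            SRC=$HOME/src/linux        # your git clone (example path)
            OBJ=$HOME/build/linux      # separate output directory (example path)
            mkdir -p "$OBJ"
            cd "$SRC"
            make O="$OBJ" defconfig              # or drop your own .config into $OBJ
            make O="$OBJ" -j"$(nproc)"
            sudo make O="$OBJ" modules_install install
            # the source tree stays clean, so git fetch / reset never hits conflicts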



            • #16
              Don't forget that while the kernel module might support a card like Vega, chances are it will not provide accelerated graphics/libs. Hoping for Mesa support soon.



              • #17
                Originally posted by pcxmac View Post
                don't forget that while the kernel module might support a card like vega, chances are it will not provide accelerated graphics/libs. hoping for Mesa support soon.
                Are you just saying that the latest Mesa, libdrm, etc. are also required? If so then yes... the kernel driver is only one part of the graphics stack.
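
                Not part of the original reply, but a quick way to check which pieces of the stack you are actually running (tool and package names vary by distro; glxinfo typically comes from mesa-utils or mesa-demos):
                Code:
                # kernel side: confirm the amdgpu module bound to the card
                dmesg | grep -i amdgpu | head
                # userspace side: which Mesa driver/version is in use
                glxinfo | grep -iE "opengl (renderer|version)"
                # installed libdrm version
                pkg-config --modversion libdrm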



                • #18
                  Originally posted by bridgman View Post

                  Another big benefit is that if your application has a large memory footprint it can exceed the coverage of the TLBs (Translation Lookaside Buffers, aka translation cache entries) in the GPUVM block, so that relatively more of your memory accesses will also require a second memory access (or more) to fetch the missed translation from the page tables and reload it into a TLB.

                  Adding support for 2M pages allows each TLB entry to cover a 512x larger address range, which more or less eliminates TLB thrashing as a performance problem. I believe we mix 4K and 2M pages as needed (big allocations get 2M pages) rather than just forcing everything to 2M and wasting memory, but I haven't looked at that part of the final code (I will).
                  Linux supports transparent huge pages, so you will end up with a mix depending on allocations and memory fragmentation.
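
                  Not from the original post, but for reference, the CPU-side transparent huge page policy (a separate mechanism from the GPUVM page tables discussed above) can be inspected like this:
                  Code:
                  # the bracketed value is the active policy: [always], [madvise] or [never]
                  cat /sys/kernel/mm/transparent_hugepage/enabled
                  # how much anonymous memory is currently backed by huge pages
                  grep AnonHugePages /proc/meminfo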



                  • #19
                    Originally posted by dwagner View Post
                    I'm currently using that branch, and while I am completely comfortable with compiling "bleeding edge" kernels, I wonder whether there is a better way to track this repository than by removing the whole directory and cloning the whole repository again.

                    But when I just do a "make clean; git pull" I usually end up with all sorts of weird "merge conflict" messages from git (despite not having changed any source file). I tried different commands like "git reset --hard; git pull" etc., but still get merge conflicts from git, so I seem to be missing the right combination of git commands that would let me track the changes of your repository by pulling just incremental updates - is there a trick to this?
                    I don't think you are doing it right. Use `git fetch <remote>` to get the latest changes from the remote. Replace <remote> with whatever you called the remote when you added it; if it's the tree you cloned from, it will be 'origin'. That branch rebases, so it's probably easier to just delete your current tracking branch and recreate it when I push a new one.

                    Something like:
                    Code:
                     # to do the initial clone
                     git clone <url>
                     cd <repo directory>
                     git checkout -b <local branch name> <remote>/<remote branch name>
                    
                    # to build
                    make
                    sudo make modules_install
                    sudo make install
                    
                    # to update your tree
                    git fetch <remote>
                    git branch -D <local branch name>
                    git checkout -b <local branch name> <remote>/<remote branch name>



                    • #20
                      Originally posted by dwagner View Post
                      is there a trick to this?
                      You can try something like this:

                      Code:
                      git fetch
                      git reset --hard origin/amd-staging-4.12
                      The first line is like "pull", but it only downloads the new data from the repo without applying it to your branch.
                      The second line resets the local branch (discarding any changes you may have made) so you simply get a fresh copy of the remote branch.

                      I usually run
                      Code:
                      git fetch --all
                      as I track many sources for the same project (for instance you might track Linus's, Alex's, etc. forks of Linux).
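
                      Not in the original post, but as an illustration of that multi-remote setup (the remote names below are just examples; substitute the URL of whichever tree you follow):
                      Code:
                      # add several upstream trees to the same clone
                      git remote add torvalds https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
                      git remote add agd5f <url of the amd-staging tree>
                      # fetch every remote at once
                      git fetch --all
                      # then hard-reset your local branch to whichever remote branch you want to build
                      git reset --hard agd5f/amd-staging-4.12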

                      Anyway, let me know how it goes.

                      Edit: Well, bested by agd5f!
