Ryzen 3 2200G Video Memory Size Testing On Linux


  • #21
    Originally posted by [email protected] View Post
    Anyway, 8GB minus 1GB is still a very good amount of memory for today's office use, unless you are one of those freaks with hundreds of tabs open in a browser.
    This is a Linux forum; virus-hoover system requirements do not apply. Debian testing with Xfce uses 240MB after booting, Win10 1GB. Binary packages and memory usage in Debian are a lot smaller than on the virus hoover. With 2GB RAM and a dGPU your office applications work just fine. A gaming machine requires 4GB, where 1GB goes to the iGPU. With APUs you play mostly at 720p, so lower hardware specs are enough.

    Comment


    • #22
      Ethereum is extremely dependent on RAM speed, and Monero is even more dependent on it. DDR4 is simply too slow.

      Comment


      • #23
        Originally posted by Etherman View Post
        Ethereum is extremely dependent on RAM speed, and Monero is even more dependent on it. DDR4 is simply too slow.
        I don't know much about cryptocurrency generation, but I definitely agree DDR4 isn't fast enough. I still believe Rambus was on the right track and JEDEC should have followed through with competitive technology.

        Comment


        • #24
          Originally posted by dungeon View Post

          Yeah, the BIOS also allows setting sizes. But the thing is, it does not work well when set so low - some apps get broken. See this:

          On 64MB in BIOS:


          With 256MB it becomes normal:


          So it seems the app requires a 256MB minimum to render fine (while the engine allows less, so it degrades to crap), so there should be a way to fake those 256MB (or at least up to Auto or something safe like that) on a system with a 64MB default.
          It depends on how the game queries that information. GL does not provide it in a standardized way. There are a number of vendor-specific GL extensions to get this info (Mesa implements several of them). Some games use those; others use the renderer string or other things to pick a size. Assuming they use one of the vendor GL extensions, you can hack Mesa to return whatever value you want.

          Comment


          • #25
            I guess another possibility is that the bug comes from the game not querying the VRAM size at all and just assuming there is more than 64MB?

            Comment


            • #26
              Interesting results that should be re-examined every six months or so, the obvious reason being the expectation that driver, distro-support and other defects will be ironed out!

              By the way, folks, these new Ryzen APUs are still very new, and as a result even under Windows you will see fairly constant driver updates. I'm in the process, for example, of installing four driver updates from HP for Windows.

              Now, do I wish for better support under Linux - certainly! I'm just trying to point out that sometimes the bleeding edge is a bloody place to be. I just hope that AMD recognizes the importance of well-supported chips under Linux and allocates resources to resolve the issues.

              Comment


              • #27
                agd5f, thank you for the explanations. I hadn't remembered the continuity of the memory pools; that and the cache access cost are good points.
                Good luck enabling that last bit of work needed for Raven Ridge, plus debugging the rest of the issues which are apparently happening.

                bridgman, like agd surmised, fake as in report only, and only up to, say, min(max(installed mem - 1GB, 64MB), 2GB). This should be enough to fix the issue dungeon reported. But if there are as many ways to query the amount of VRAM as agd indicated, I'm not sure how successful it would be.

                In any case, keep up the great work and many thanks for being this forthcoming in a public forum.

                Comment


                • #28
                  Originally posted by agd5f View Post

                  It depends on how the game queries that information. GL does not provide it in a standardized way. There are a number of vendor-specific GL extensions to get this info (Mesa implements several of them). Some games use those; others use the renderer string or other things to pick a size. Assuming they use one of the vendor GL extensions, you can hack Mesa to return whatever value you want.
                  I know, but we can say the standard is to use vendor extensions: on Mesa it should use GLX_MESA_query_renderer, or on the blob GL_ATI_meminfo...

                  The game itself (it is a 2014 game, on Linux in 2015) requires 256MB (though the engine itself will actually start with even less and give you those crappy textures) and just assumes the size, since it has no way to know: it uses Unity 4.3.4 (a 2013 engine version, with this minor revision from January 2014), which does not have the ability to check it our way - that was only added as of version 5 (March 2015+):

                  Linux: Query Mesa driver for amount of video memory when feasible.
                  https://unity3d.com/unity/qa/patch-releases/5.0.0p3

                  So at least in this case it does not look like it could be fixed with the Mesa query, but it could be if it is possible to adjust the GL_ATI_meminfo values, since the game supports those.

                  But anyway (I used this game only as an example to explain what the issue is about, though I am sure I could find more), for the Mesa way of checking via GLX_RENDERER_VIDEO_MEMORY_MESA there should be a way to officially fake this reporting with some variable, especially for APUs of course, to report more memory (up to some safe precalculated value).

                  So if someone sets 64MB UMA in the BIOS, he could have, say, 512MB fake-reported to a GL app; that should work, I guess. Or whoever sets UMA to anything below Auto should be able to adjust it up to Auto or something like that, whatever is considered safer.
                  Last edited by dungeon; 02-20-2018, 10:07 PM.

                  Comment


                  • #29
                    Originally posted by fahrenheit View Post
                    But if there are as many ways to query the amount of VRAM as agd indicated, I'm not sure how successful it would be.
                    Nearly 100% successful if the values of GLX_MESA_query_renderer and GL_ATI_meminfo are adjustable for APUs... maybe something really, really old or weird checks only GL_NVX_gpu_memory_info - with that covered as well it would be a pure 100% successful.

                    But in this Oddworld case adjusting only the GL_ATI_meminfo values would fix the issue, since pre-2015 Unity engine versions missed Mesa's query.

                    Maybe someone should promote a new token to the GL_ATI_meminfo extension, something like APU_ADJUSTABLE_FREE_MEMORY_ATI, so we could not call this a hack.
                    Last edited by dungeon; 02-20-2018, 11:34 PM.

                    Comment


                    • #30
                      Originally posted by agd5f View Post

                      We support this on Linux just fine. The missing piece is enabling display from system memory (rather than carve out "vram") on Raven. So far we've only enabled this by default on carrizo and stoney.
                      So also on Linux, with my Carrizo APU (A10-8700P), if an application needs more than 512MB, does the GPU automatically use system RAM? Is there any tool to see how many MB of RAM my integrated GPU is using?
                      Thanks

                      Comment
