
New Low-Memory-Monitor Project Can Help With Linux's RAM/Responsiveness Problem


  • #61
    Originally posted by RomuloP View Post
    Wrong date.

    Christoph Lameter — http://linux.conf.au/schedule/presentation/130/ — A key technology in the kernel is the ability to move objects around. This is essential for p...


    You need 2017 or later. Any documentation from 2016 like you just quoted is wrong.

    Over time, most Linux systems today require a reboot because performance suffers as memory becomes more and more fragmented.
    This is not a problem specific to server hardware. Fragmentation kills your IO performance, sometimes quickly, sometimes slowly.

    Sorry, I don't understand why you are getting a single upvote, as everything you are writing is out of date and wrong.

    Really, the inability to allocate huge pages hurts performance even on the desktop. Yes, the default slab allocator in the Linux kernel uses huge pages, so huge-page usage is part of default Linux kernel operation.

    HP/THP being a server-only workload thing is bull crap. HP/THP is in use on all systems running Linux. Degradation can happen within hours to days. So someone using hibernation on a laptop does not have to do this many times to reach days of effective uptime.

    Methods added to the Linux kernel since 2017 to deal with memory fragmentation use swap. Setting swappiness to 0 will not prevent the kernel's defrag functions from using swap to do their work.

    Yes, people miss that user-space page migration in Linux started out implemented as a push to swap and a pull back into another location.

    New Linux kernel memory defragmentation methods are always implemented first as a push to swap and a pull back into a different memory location, and only later optimised not to use swap. So disabling swap always disables the newest Linux kernel memory defragmentation methods. We really do need a different duct-tape solution for this if people wish to switch swap off.

    So create a 4 MB swap on a ramdisk, set swappiness to 0, and all Linux kernel memory defragmentation methods can run.
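For anyone wanting to try that recipe, here is a sketch of the commands involved, expressed as a small Python helper. It assumes the in-kernel brd ramdisk driver (which exposes the disk as /dev/ram0 and takes rd_size in KiB) and the usual util-linux tools; the exact module parameters and device name are assumptions, and the commands themselves need root:

```python
def ramdisk_swap_commands(size_mb=4):
    """Return the (root-only) shell commands for a tiny ramdisk-backed swap.

    Assumes the brd driver: rd_size is in KiB, so a 4 MB disk needs 4096,
    and the first ramdisk shows up as /dev/ram0.
    """
    return [
        f"modprobe brd rd_nr=1 rd_size={size_mb * 1024}",  # create one ramdisk
        "mkswap /dev/ram0",        # format it as swap space
        "swapon /dev/ram0",        # enable it
        "sysctl vm.swappiness=0",  # discourage ordinary swapping onto it
    ]

for cmd in ramdisk_swap_commands():
    print(cmd)
```

The point of the tiny size is that it gives the kernel a swap target to satisfy code paths that want one, while swappiness 0 keeps ordinary anonymous pages from being pushed there.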

    Basically, you are not understanding how much Linux kernel performance depends on memory being defragmented.

    Yes, memory fragmentation will make being in swap hell worse. So you might avoid swap thrash by disabling swap, but you might also slow disk writes to 10% or less of what is possible. So you change your dice roll by disabling swap.

    "Works for me" is the disable-swap option. There are people who will disable swap and see their IO performance tank, because by disabling swap they disabled a memory defragmentation feature that they in fact needed.



    • #62
      Originally posted by oiaohm View Post
      Wrong date.

      Christoph Lameter — http://linux.conf.au/schedule/presentation/130/ — A key technology in the kernel is the ability to move objects around. This is essential for p...


      You need 2017 or later. Any documentation from 2016 like you just quoted is wrong.
      The video points out, after the step-by-step slides, that nowadays this is isolated from the swap subsystem and a direct copy is possible, and that this was already stable upstream. Just because it can move pages in the swap area, since it maps things there the same way it does RAM in terms of NUMA nodes, does not mean it needs swap. It also points out that it could do better by not invalidating pointers, just moving them, since it is not really touching storage.

      Originally posted by oiaohm View Post
      Over time, most Linux systems today require a reboot because performance suffers as memory becomes more and more fragmented.
      This is not a problem specific to server hardware. Fragmentation kills your IO performance, sometimes quickly, sometimes slowly.
      Well, I disagree. /proc/buddyinfo after triggering /proc/sys/vm/compact_memory shows a completely different story, with or without swap, in any setup where you have practically nonexistent huge pages.
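For readers following along: /proc/buddyinfo lists, per zone, the number of free blocks at each order (order n means 2^n contiguous pages), so the fraction of free memory sitting in high-order blocks is a rough fragmentation indicator. A sketch of how to read it that way (the sample line below is made up):

```python
def parse_buddyinfo(text):
    """Map (node, zone) -> list of free-block counts, one entry per order."""
    zones = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0] != "Node":
            continue
        node = parts[1].rstrip(",")   # "0," -> "0"
        zone = parts[3]               # e.g. "Normal"
        zones[(node, zone)] = [int(n) for n in parts[4:]]
    return zones

def high_order_free_fraction(counts, min_order=4):
    """Fraction of free pages that sit in blocks of at least min_order."""
    total = sum(c << order for order, c in enumerate(counts))
    high = sum(c << order for order, c in enumerate(counts) if order >= min_order)
    return high / total if total else 0.0

sample = "Node 0, zone   Normal   4096   2048   1024    512    256    128     64     32     16      8      4\n"
zones = parse_buddyinfo(sample)
print(high_order_free_fraction(zones[("0", "Normal")]))
```

On a real box you would feed it `open("/proc/buddyinfo").read()` before and after writing 1 to /proc/sys/vm/compact_memory and compare the fractions.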



      Originally posted by oiaohm View Post
      Sorry, I don't understand why you are getting a single upvote, as everything you are writing is out of date and wrong.
      You don't have to be sorry; it is simply because people think differently and have their own opinions. Maybe you are wrong or maybe I am; you don't need to agree, and neither do I, nor they.

      Originally posted by oiaohm View Post
      Really, the inability to allocate huge pages hurts performance even on the desktop. Yes, the default slab allocator in the Linux kernel uses huge pages, so huge-page usage is part of default Linux kernel operation.

      HP/THP being a server-only workload thing is bull crap. HP/THP is in use on all systems running Linux. Degradation can happen within hours to days. So someone using hibernation on a laptop does not have to do this many times to reach days of effective uptime.
      No, it is not a thing on the desktop... I use the default Fedora setup with THP set to madvise, which is quite permissive, and anonymous hugepages do not make up more than 8 MB out of a huge amount of memory used.
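A quick way to check this yourself is to pull the hugepage-related counters out of /proc/meminfo. A sketch (the sample text below is made up, and note the caveat raised later in this thread: this can only show counters the kernel chooses to export to userspace):

```python
def hugepage_fields(meminfo_text):
    """Extract every counter whose name mentions 'Huge' from /proc/meminfo text."""
    fields = {}
    for line in meminfo_text.splitlines():
        name, _, rest = line.partition(":")
        if "Huge" in name:
            fields[name] = rest.strip()
    return fields

sample = """MemTotal:       16315428 kB
AnonHugePages:      8192 kB
ShmemHugePages:        0 kB
HugePages_Total:       0
Hugepagesize:       2048 kB
"""
print(hugepage_fields(sample))
```

On a live system you would call it on `open("/proc/meminfo").read()` instead of the canned sample.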



      Originally posted by oiaohm View Post
      Methods added to the Linux kernel since 2017 to deal with memory fragmentation use swap. Setting swappiness to 0 will not prevent the kernel's defrag functions from using swap to do their work.

      Yes, people miss that user-space page migration in Linux started out implemented as a push to swap and a pull back into another location.

      New Linux kernel memory defragmentation methods are always implemented first as a push to swap and a pull back into a different memory location, and only later optimised not to use swap. So disabling swap always disables the newest Linux kernel memory defragmentation methods. We really do need a different duct-tape solution for this if people wish to switch swap off.

      So create a 4 MB swap on a ramdisk, set swappiness to 0, and all Linux kernel memory defragmentation methods can run.

      Basically, you are not understanding how much Linux kernel performance depends on memory being defragmented.
      I think the same about you, because you are supposing that those techniques really depend on swap being in the middle, when echo 1 > /proc/sys/vm/compact_memory works, touches DMA without swap in any setup, is instant, and greatly reduces fragmentation, as shown by just looking at /proc/buddyinfo. As, again, stated on LWN and in the video you posted, it is just a matter of changing the appropriate page-table entries and adequately updating pointers in kernel space (moving a page from one NUMA node to another). Yes, it started as a swap-dependent thing, and it turned into the FUD of "every new defrag technique starts by using swap". It does not depend on it today.

      Originally posted by oiaohm View Post
      Yes, memory fragmentation will make being in swap hell worse. So you might avoid swap thrash by disabling swap, but you might also slow disk writes to 10% or less of what is possible. So you change your dice roll by disabling swap.

      "Works for me" is the disable-swap option. There are people who will disable swap and see their IO performance tank, because by disabling swap they disabled a memory defragmentation feature that they in fact needed.
      I disagree; swap is entirely about anonymous pages, not about files, but sure, nobody needs to agree and you can think I'm wrong. Let's put forward a valid argument then: there will be people who hit a hard OOM harder or quicker when dealing with high memory pressure. Totally true, but this is not an argument for "swap is always nice".




      • #63
        Originally posted by RomuloP View Post
        The video points out, after the step-by-step slides, that nowadays this is isolated from the swap subsystem and a direct copy is possible, and that this was already stable upstream. Just because it can move pages in the swap area, since it maps things there the same way it does RAM in terms of NUMA nodes, does not mean it needs swap. It also points out that it could do better by not invalidating pointers, just moving them, since it is not really touching storage.
        The 2017 video tells you where huge pages are used. The 2018 video mentions, in different sections, where they are using swap as duct tape in the hard areas.

        Originally posted by RomuloP View Post
        Well, I disagree. /proc/buddyinfo after triggering /proc/sys/vm/compact_memory shows a completely different story, with or without swap, in any setup where you have practically nonexistent huge pages.
        This just shows you are in a lucky state.

        Originally posted by RomuloP View Post
        No, it is not a thing on the desktop... I use the default Fedora setup with THP set to madvise, which is quite permissive, and anonymous hugepages do not make up more than 8 MB out of a huge amount of memory used.
        One problem: the THP/HP used by the slab allocator in the Linux kernel does not show up as anonymous hugepages. Heck, they don't show up under madvise either, because they are kernel pages. There is a third class of huge pages that does not show up in your meminfo. The 2017 video tells you that the current slab allocator has already allocated at least one huge page on startup. Guess what: it's not listed in meminfo. Fun fact: details of Linux kernel internally allocated memory do not come out to userspace, and this leads to people thinking huge pages are not used as much as they in fact are.

        Originally posted by RomuloP View Post
        I think the same about you, because you are supposing that those techniques really depend on swap being in the middle, when echo 1 > /proc/sys/vm/compact_memory works, touches DMA without swap in any setup, is instant, and greatly reduces fragmentation, as shown by just looking at /proc/buddyinfo. As, again, stated on LWN and in the video you posted, it is just a matter of changing the appropriate page-table entries and adequately updating pointers in kernel space (moving a page from one NUMA node to another). Yes, it started as a swap-dependent thing, and it turned into the FUD of "every new defrag technique starts by using swap". It does not depend on it today.
        I did not say every. I said new memory defragmentation methods. This is a duct-tape method that those working on Linux kernel memory defragmentation keep returning to. So yes, compact_memory works with the old, mature methods without swap. Yes, if you have swap disabled, it will still improve things. But with swap disabled the new methods are not used, and if your memory fragmentation needs them to clean it up, that is not happening until you enable swap.

        In the 2018 video you have a subsystem maintainer in the crowd mention using swap.

        Originally posted by RomuloP View Post
        I disagree; swap is entirely about anonymous pages, not about files, but sure, nobody needs to agree and you can think I'm wrong. Let's put forward a valid argument then: there will be people who hit a hard OOM harder or quicker when dealing with high memory pressure. Totally true, but this is not an argument for "swap is always nice".
        Swap is not 100 percent used by anonymous pages at the moment. Some of the kernel structures pushed to swap to allow structure defragmentation are not anonymous. Of course, after the Linux kernel developers have worked out how to safely move those pages without using swap, that code ceases to use swap. The problem is that this is a cycle.

        1) Linux kernel developers target an X structure causing memory fragmentation.
        2) They implement a system using swap to defragment it.
        3) They extend this method to the point where swap becomes a temporary validation that the method is right.
        4) They cease using swap to defragment this structure, then go back to 1 and choose another X structure.

        This method was in fact detailed at the end of the 2018 video, as they were deciding how to deal with the remaining structures that were still causing fragmentation.

        This cycle will keep repeating until all Linux kernel structures are moveable/support defragmentation. So one day this cycle will stop; it has not yet.

        This is why we constantly have a stack of new kernel memory defragmentation methods needing swap to function.

        Basically, I guess you never watched the 2018 video all the way through, because you dismissed it out of hand since it was about huge pages. The 2017 video tells you where huge pages are in fact used, so they are required essentially all the time. That lets you understand the 2018 video's title correctly, and how important it is.



        • #64
          oiaohm

          You're saying that without SWAP many memory defragmentation techniques are disabled/can't work. Can you devise/show a reproducible test case which shows (severe) performance degradation due to memory pages being extremely fragmented?

          The SWAP option in the kernel does not carry any mention of performance being affected by disabling SWAP. Also, I've just grepped linux-5.2/mm/* for CONFIG_SWAP and I don't see anything scary. Granted, CONFIG_SWAP builds page_io.o, swap_state.o, swapfile.o and swap_slots.o, but a cursory look at these files doesn't reveal any routines implicated in memory defragmentation. There aren't that many options in the kernel which match "depends on.*SWAP" (that's a regexp) either.

          Also, I've been running without CONFIG_SWAP for over 15 years now and I haven't seen a single instance of performance degradation due to this option being disabled. And I run various benchmarks which use lots of memory. Performance varies by less than 1%, even after running my PC for days.
          Last edited by birdie; 27 August 2019, 02:39 PM.



          • #65
            Originally posted by birdie View Post
            You're saying that without SWAP many memory defragmentation techniques are disabled/can't work. Can you devise/show a reproducible test case which shows (severe) performance degradation due to memory pages being extremely fragmented?
            I am pointing to LG messing around with hibernation, because their mistake is a good way to produce highly fragmented memory on demand.

            Originally posted by birdie View Post
            The SWAP option in the kernel does not carry any mention of performance being affected by disabling SWAP. Also, I've just grepped linux-5.2/mm/* for CONFIG_SWAP and I don't see anything scary. Granted, CONFIG_SWAP builds page_io.o, swap_state.o, swapfile.o and swap_slots.o, but a cursory look at these files doesn't reveal any routines implicated in memory defragmentation. There aren't that many options in the kernel which match "depends on.*SWAP" (that's a regexp) either.

            Really, you need to look closer. CONFIG_SWAP in this file is kind of important. All the functions from page_io.o, swap_state.o, swapfile.o and swap_slots.o are stubbed out when you build with CONFIG_SWAP disabled.

            None of the memory defragmentation code is in fact wrapped in CONFIG_SWAP. Yes, you are right that none of the swap code appears to have a defragmentation role.



            The kernel configuration menu does not cover all the places where swap is in fact used. Heck, I guess it did not tell you that it stubs out the functions when you disable it, so you can code as if swap functionality is always built into the system, and your modules will build and work as long as they are designed to cope with a full or turned-off swap at runtime.

            Yes, swap.h is used all over the place without any CONFIG_SWAP guard. Yes, migrate.c is at times used in the memory defrag code, and without swap, whether you built without it or disabled it, that path basically does not function. Please note I said at times. It tries to move stuff around without using swap, then attempts to use swap. So the setup with swap gets a second chance to move the page successfully, and therefore a second chance to successfully defragment using the code here.

            There are more items like this spread all over the kernel. Disabling swap has effects spread broadly across the Linux kernel. The question is whether the workload you run hits any of the areas that are degraded by swap being disabled. This is very hard to answer absolutely without benchmarking. It is simpler to keep a small 4 MB RAM swap, which provides enough for that scattered code to work.

            Originally posted by birdie View Post
            Also, I've been running without CONFIG_SWAP for over 15 years now and I haven't seen a single instance of performance degradation due to this option being disabled. And I run various benchmarks which use lots of memory. Performance varies by less than 1%, even after running my PC for days.
            You have to remember that 15 years ago performance degradation due to memory fragmentation was basically normal. It is from 2017 onwards that we have seen this cease to be normal, but those who have disabled swap in their kernel builds have not seen all the benefits.

            Yes, I will give you that getting fragmentation to happen on demand in a benchmark can be as tricky as hell. Yet a person can have a particular way of doing their job that successfully causes rapid memory fragmentation.

            Swap off, whether at build time or at runtime, means fewer options for the Linux kernel to move memory around. These fewer options weaken the effectiveness of memory defragmentation, and for some of the newer items being defragmented they disable that item's defragmentation completely. If you happen to have a workload that well and truly fragments memory, you can be looking at very slow IO.

            Turning swap off completely has to be understood: if you see massively screwed-up IO performance after running with no swap, there is every chance your workload pattern is a memory-fragmenting one and you have effectively disabled whatever method would have defragmented it by disabling swap. Yes, you indirectly killed it.

            I don't recommend zero swap without serous benchmarking/monitoring of IO performance.

            Yes, zram and zswap can be useful for different reasons. But a 4 MB ramdisk swap has a much lower IO/CPU overhead than anything else you can use for swap. A lot of people who test swap enabled versus disabled are not considering the IO overhead of the swap option. Yes, I have been told a few times that using swap on a ramdisk without compression is stupid, until I benchmark 10 to 15% better on particular internal workloads with it present and also see better memory fragmentation numbers. zram and zswap hurt you in CPU time consumed by compression. Disk swap hurts you going through the buses.

            CONFIG_SWAP basically affects far more files than you listed, as there are many things using the swap functions with or without CONFIG_SWAP enabled.

            Stubbing out the functions of disabled parts really does quite effectively hide how broadly a feature is in fact used, and makes it hard for people to quickly see how large a side-effect area changing a feature can have.



            • #66
              oiaohm

              Testcase please. :-)

              You've said quite a lot, and my feeling is that a SWAPless kernel works just fine, because I haven't found a single case where I could measure any performance degradation in over 15 years.



              • #67
                Originally posted by Britoid View Post

                To be fair, your post just proved why swap is important and why you shouldn't remove it.
                Incorrect. The issue appears long before swap is exhausted, when it has barely been touched.

                Second, running out of physical RAM, swap or no swap, should not cause a system lock up. Period, end of story.

                I suspect there is a scheduling bug or something in the kernel, because even a very high level of disk IO should not leave virtually no time at all for other processes.

                This is actually not necessarily an issue that can be addressed with the OOM killer, because swap is untouched; only physical RAM is low, and we are still a long way from the OOM killer being triggered. The lockup is resolved by killing a memory-hog app, but low memory should not cause the system to lock up.

                As for an out-of-memory situation, the user should be able to configure what works best for them: kill a process automatically, select beforehand which processes to kill and which to protect, or get an early-warning dialog asking whether they want to save work and shut down a few apps gracefully before it gets to the point of killing. Giving the user a heads-up before memory exhaustion is reached, and some time to save work, makes sense on some desktop systems but is not best everywhere.
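The early-warning idea described above is roughly what the low-memory-monitor project in the article automates. A minimal user-space version could poll MemAvailable and apply thresholds; a sketch, where the threshold values and action names are made up for illustration:

```python
def parse_mem_available_kb(meminfo_text):
    """Read the MemAvailable counter (in kB) out of /proc/meminfo-style text."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    return None

def memory_action(available_kb, warn_kb=512 * 1024, kill_kb=128 * 1024):
    """Decide what a simple monitor should do at this level of available memory."""
    if available_kb <= kill_kb:
        return "kill"   # e.g. kill a pre-selected memory hog
    if available_kb <= warn_kb:
        return "warn"   # give the user time to save work and close apps
    return "ok"

sample = "MemTotal: 16315428 kB\nMemAvailable: 300000 kB\n"
print(memory_action(parse_mem_available_kb(sample)))  # → warn
```

A real daemon would loop over `open("/proc/meminfo").read()` on a timer and would also want to protect critical processes (the X server, for instance) from its own kill action.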

                By default, a number of processes need to be protected, such as X.



                • #68
                  birdie

                  Don't waste your time. He will take "lucky" as an argument when you show screenshots of real production systems; he will point you to a video of people actually saying that current defragmentation techniques no longer depend on swap and claim they do, because the word "swap" is mentioned there; and he will point to an experimental LG hibernation setup that failed because they foolishly did deduplication on the filesystem, and call it a case of RAM fragmentation, even though it is about fragmentation in a completely different place, in FS deduplication.

                  I think there are enough arguments here for people to draw conclusions by themselves.



                  • #69
                    I find it interesting that I only started having issues with RAM filling up and the system freezing a few weeks before this issue came up. Before then my system would get slower but it would not totally freeze and have to be reset. Is it possible there is a recent kernel bug that is being overlooked?



                    • #70
                      Originally posted by keantoken View Post
                      I find it interesting that I only started having issues with RAM filling up and the system freezing a few weeks before this issue came up. Before then my system would get slower but it would not totally freeze and have to be reset. Is it possible there is a recent kernel bug that is being overlooked?
                      This is just a guess, but I believe the kernel does not take all the deep actions needed to guarantee the OOM killer can drop pages quickly when the OOM condition is reached. Probably it is hitting a race condition, as Neraxa mentioned, with one or more of the memory-management subsystems (moving, compacting, re-flagging, splitting, etc.); many paths in those systems lock pages and even isolate them from the LRU, making them prone to race conditions.

