Way-Cooler Is Still Around As An i3-Inspired Wayland Compositor Written In Rust

  • #11
    Originally posted by svanheulen

    What do you consider bloated? I'm using i3 and checking in htop it's only using 112M virtual, 12M resident, 10M shared.
    Despite multiple efforts and lots of searching I still haven't figured out how to interpret the memory usage columns that htop provides.

    You have real physical memory. You have swap (disk-backed) memory. You have virtual memory, which represents both combined. Presumably the kernel can even hand out virtual memory beyond the sum of physical and swap, by simply recording unused, non-backed virtual pages in a table of memory ranges rather than backing them with anything.

    You've also got memory pages shared between multiple apps. You've got memory that is considered part of the executable data and code regions. You've got stack and heap memory (are tools like top and htop even able to see the difference?).

    htop seems to show the same data for threads belonging to a process and the process itself. So I guess threads can't have their own memory. I don't know.

    You've also got shared libraries, which I guess use some type of shared memory: they show up in each process's virtual address space, may or may not be backed by physical memory, and are definitely used by multiple apps, so they can't fairly be counted as part of any individual app's memory footprint.

    There's probably copy-on-write, memory-saving shenanigans going on too which likely further skews the numbers.


    My current approach to managing my unending confusion in this matter is threefold:
    1. Put lots of physical memory in my machine.
    2. Try to use apps that claim to be lightweight when I don't need the heavyweight equivalents (e.g. Transmission instead of Vuze for torrenting, and Audacious instead of Rhythmbox for music).
    3. Pray daily to the silicon gods that they do not send their reaper to reap my processes. (They say his name is the OOM Killer, and he acts without mercy and without consent.)
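One number that addresses the "shared pages can't fairly be counted against one app" problem is the kernel's Pss ("proportional set size"), which splits each shared page evenly among the processes mapping it. A small sketch reading it for the current process (assumes Linux 4.14+ for /proc/<pid>/smaps_rollup; not part of the original post):

```python
import re

def smaps_rollup_kb(field):
    # smaps_rollup (Linux 4.14+) sums the per-mapping stats from smaps
    with open("/proc/self/smaps_rollup") as f:
        m = re.search(rf"^{field}:\s+(\d+)\s+kB", f.read(), re.MULTILINE)
    return int(m.group(1))

rss = smaps_rollup_kb("Rss")  # resident pages, shared pages counted in full
pss = smaps_rollup_kb("Pss")  # shared pages split evenly among the sharers

print(f"Rss: {rss} kB  Pss: {pss} kB")
```

Pss is always at most Rss; the gap between the two is roughly how much of the process's resident memory is shared with other processes.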



    • #12
      Originally posted by KellyClowers View Post

      It won't fit on his 4MB 386, must be bloated!
      Wait ... can the 386 even address 4MB of memory? I thought it was far more limited than that. I am not too knowledgeable about the exact details of early x86 addressing modes, though. Maybe I am confusing it with the 286.



      • #13
        Originally posted by cybertraveler View Post

        Despite multiple efforts and lots of searching I still haven't figured out how to interpret the memory usage columns that htop provides. [...]
        tl;dr: Look at Resident Memory. That is the closest you can get to actual RAM usage. Also, extra memory for disk caches is important.

        Turns out, memory management on a modern OS is ... complicated. There is no such easily defined thing as "memory usage".

        Every process gets its own "virtual memory space". That is, the OS configures the CPU/MMU in such a way that each process can only see its own data/code and any other resources it needs (library code, data coming from files, etc), and nothing else, and lives in the illusion that it is the only program running on the system. The only way for the program to find out information about the outside world is by asking the kernel, using system calls or reading files from /sys or /proc. All threads of the same process share the same virtual address space (that is the distinction between threads and processes: processes have their own virtual memory space). The size of this virtual address space is ~3GB on 32-bit systems and insanely massive on 64-bit systems.

        How much of that virtual address space is actually used for something by the process is what programs like htop show under the "virtual memory" column. That includes application code, libraries, data, stacks for each thread, heap allocations, any mmap-ed files, shared memory, possibly sparse allocations (for example, a virtual machine could "allocate" a large amount of memory at once for the guest OS, but ask the kernel not to actually reserve physical RAM until it gets used), and so on. Some of it might be backed by RAM, some by swap, some by files from your filesystem, and some by absolutely nothing (say, the kernel pretends it is all zeroes if the process tries to read it).

        So, the "virtual memory" column tells you nothing about your hardware or your system. It tells you how much data a given process is dealing with (in any shape or form).
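You can see both columns for yourself straight from the kernel; htop reads the same file. A quick sketch (assumes Linux's /proc interface; not from the original post):

```python
def status_kb(field):
    # /proc/self/status has lines like "VmRSS:     12345 kB"
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])

virt = status_kb("VmSize")  # what htop calls VIRT: total mapped address space
res = status_kb("VmRSS")    # what htop calls RES: pages resident in RAM

print(f"VIRT: {virt} kB  RES: {res} kB")
```

Even for a tiny script, VIRT is typically many times larger than RES, which is exactly the gap the paragraphs above describe.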

        Even normal memory allocations (e.g. with the `malloc` function in C) might not actually be backed by RAM. The Linux kernel often delays allocating physical RAM until the process actually touches the memory. This is done because a lot of large software (think virtual machines, the Java JVM, databases, etc) preallocates large amounts of memory ahead of time and prefers to micro-manage it internally. If the kernel didn't do this and you ran this kind of software, all your RAM would be gone in no time without actually being used for anything (yet?), just sitting there idle, allocated. Instead, Linux will wait for you to actually need this memory, and use it for something else until then.

        This is why, on Linux, if you try to `malloc` more memory than is free on your system, `malloc` will actually succeed; but if you then try to use it all, the OOM killer will come after you and kill your process (or another process, if the kernel believes something else out there is less important than you and should be killed instead to free memory for you) when the system runs out of memory. Note that you can control all of this using sysctl (see vm.overcommit_memory and vm.overcommit_ratio). You can disable the overcommit, make the kernel commit memory straight away, and cause `malloc` to fail if there isn't any (avoiding the OOM killer), but I don't recommend it. This is all done for good reasons. It might seem bizarre at first glance if you don't know much about how modern operating systems work, but it lets you make better use of your hardware.
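A tiny experiment makes this lazy allocation visible (a sketch, assuming Linux and Python 3; an anonymous mmap behaves like a large malloc here, and pages are only faulted in when written):

```python
import mmap
import re

def vm_rss_kb():
    # Resident set size of the current process, in kB
    with open("/proc/self/status") as f:
        return int(re.search(r"VmRSS:\s+(\d+)\s+kB", f.read()).group(1))

SIZE = 256 * 1024 * 1024  # ask for 256 MiB of anonymous virtual memory

before = vm_rss_kb()
buf = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
after_map = vm_rss_kb()

# Write one byte in each page of the first 16 MiB; only now must the
# kernel back those pages with real page frames.
for off in range(0, 16 * 1024 * 1024, 4096):
    buf[off] = 1
after_touch = vm_rss_kb()

print(f"RSS growth after mmap:  {after_map - before} kB")
print(f"RSS growth after touch: {after_touch - before} kB")
```

The 256 MiB "allocation" barely moves RSS at all; only the pages actually written become resident.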

        You can see now why tracking memory is such a complicated and difficult task. The kernel reports the amount of a process's memory that is backed by actual RAM as Resident Memory (RSS). (To the best of my knowledge, RSS counts only pages currently in RAM; pages that have been swapped out are reported separately, e.g. as VmSwap in /proc/<pid>/status.)

        This is why if you want an estimate of how much physical RAM you need for your computing needs, you should look at the processes' Resident Memory. Even then, it is an approximation, as there is also Shared Memory, any `tmpfs` you might have mounted on /tmp or elsewhere, the kernel itself and all your drivers, etc. etc. etc. Look at `free -h` and `df -h` for tmpfs filesystems to get summaries.
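The system-wide numbers that `free -h` summarizes all come from /proc/meminfo, which you can also parse directly. A sketch (assumes Linux 3.14+ for the MemAvailable field; not from the original post):

```python
def meminfo():
    # Parse /proc/meminfo into a {field: kB} dictionary
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            field, value = line.split(":")
            info[field] = int(value.split()[0])  # values are in kB
    return info

m = meminfo()
# MemAvailable estimates how much memory could be handed to applications
# without swapping; it is usually far larger than MemFree, because most
# of the page cache is reclaimable on demand.
print(f"MemTotal:     {m['MemTotal']} kB")
print(f"MemFree:      {m['MemFree']} kB")
print(f"MemAvailable: {m['MemAvailable']} kB")
print(f"Cached:       {m['Cached']} kB")
```

MemAvailable, not MemFree, is the number to look at when deciding whether a box is "out of memory".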

        You should also leave some extra RAM for filesystem caching. Linux will use all RAM that is not used for anything else to cache data and metadata from your filesystem. It makes a massive difference to the overall performance and responsiveness of your system, especially if you don't have super fast SSDs. Even if you don't actually plan to use it in any software applications, having extra RAM in your computer is a good thing, as Linux will find uses for it to speed up everything on your computer. In fact, by default, as your RAM starts to fill up, the kernel might prefer to start swapping some stuff (even though your RAM is not actually full yet), to make extra space for the caches. This is usually a *good thing*, as some RAM allocations by your applications might be very rarely used, so it wouldn't hurt performance too much to swap them out, and extra cache can improve performance of your whole system. Again, if you believe that this is not the case on your particular system, you can always change it using sysctl (look at vm.swappiness).
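The current swappiness setting can be read straight out of procfs without root (a one-liner sketch; /proc/sys/vm/swappiness mirrors the sysctl of the same name):

```python
# vm.swappiness tunes how eagerly the kernel swaps application pages out
# to make room for filesystem cache (0..200 on recent kernels, default 60)
with open("/proc/sys/vm/swappiness") as f:
    swappiness = int(f.read())

print(f"vm.swappiness = {swappiness}")
```

Writing to the same file (as root), or `sysctl vm.swappiness=N`, changes it at runtime.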

        Also, if you are using ZFS, note that due to the way the ZFS driver works (it kinda emulates Solaris on top of Linux, it is not fully native), ZFS's caches will show up as part of the used memory (the same as memory from your applications) rather than disk cache memory. This is not actually a problem, as it will still be managed properly (ZFS will free it when your applications try to allocate), but it does make memory statistics/reporting seem weird (looks like a lot more memory is used than actually is).

        Source: I may or may not have spent a lot more hours of my free time than I'd like to admit being fascinated by memory management and reading relevant kernel docs and lkml threads and lwn articles.



        • #14
          Originally posted by tajjada View Post
          Wait ... can the 386 even address 4MB of memory?
          Yes. https://en.wikipedia.org/wiki/RAM_limit



          • #15
            I'm guessing i3 means something other than Intel's entry level desktop processor family.



            • #16
              Originally posted by bachchain View Post
              I'm guessing i3 means something other than Intel's entry level desktop processor family.
              Are you people allergic to google? https://i3wm.org/



              • #17
                Originally posted by tajjada View Post

                Wait ... can the 386 even address 4MB of memory? I thought it was far more limited than that. I am not too knowledgeable about the exact details of early x86 addressing modes, though. Maybe I am confusing it with the 286.
                I picked that because I had a 386 with 4MB of RAM (it was the 486 era, so we got the 386 cheap for the time: under $2k).



                • #18
                  Originally posted by bachchain View Post
                  I'm guessing i3 means something other than Intel's entry level desktop processor family.
                  Is this your first time on Phoronix?



                  • #19
                    Originally posted by tajjada View Post

                    Wait ... can the 386 even address 4MB of memory? I thought it was far more limited than that. I am not too knowledgeable about the exact details of early x86 addressing modes, though. Maybe I am confusing it with the 286.



                    • #20
                      Originally posted by tajjada View Post
                      tl;dr: Look at Resident Memory. That is the closest you can get to actual RAM usage. Also, extra memory for disk caches is important.
                      Thanks for writing all that. Great post. It jibes with what I've learned so far.

                      It is a fascinating area of computing to look into.

