Jemalloc 5.3 Released With Many Speed & Space Optimizations


  • #11
    Originally posted by david-nk View Post
    I used jemalloc 3 a lot and managed to achieve huge memory savings for some applications. However, jemalloc 5 doesn't seem that great anymore. It seems to use way more memory for minuscule performance gains.

    Also, it has some pretty bad behaviors that cause it to retain memory far longer than it should.
    For example, if you have 8 worker threads processing images and each of them, at some point, loads an image that uses up 10 GB of memory, that means the application will keep 80 GB allocated forever. So jemalloc is no longer a good default option like it might have been a few years ago.
    Sounds like this is what occurred for you: https://github.com/jemalloc/jemalloc/issues/1398 ... there is already a workaround for that situation. It should probably be the default, since otherwise idle threads will all retain memory.
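
    For what it's worth, that kind of retention can usually be addressed without rebuilding, via jemalloc's MALLOC_CONF environment variable. A sketch (the `./image_worker` binary and the specific decay values are illustrative, not from the issue):

    ```shell
    # background_thread:true - jemalloc spawns a background thread that
    #   purges unused dirty pages even while worker threads sit idle
    # dirty_decay_ms / muzzy_decay_ms - how long freed pages are retained
    #   before being returned to the OS (0 = return immediately)
    MALLOC_CONF="background_thread:true,dirty_decay_ms:10000,muzzy_decay_ms:0" \
        ./image_worker
    ```
    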



    • #12
      Originally posted by C8292 View Post
      Why isn't jemalloc default?
      As it stands, jemalloc does not always work properly when memory overcommit is disabled, which is often the case on cluster compute nodes. In our experience, we had to disable jemalloc in computational biology software that would not run on a 1TB machine with jemalloc and no overcommit, whereas it would run with the standard glibc allocator and use very little memory. Not that jemalloc made the software use more memory, but it doesn't support no-overcommit well and would complain and crash. It is a known issue.

      At minimum, to become the standard, it would have to behave properly in that setting.



      • #13
        Originally posted by guspitts View Post

        As it stands, jemalloc does not always work properly when memory overcommit is disabled, which is often the case on cluster compute nodes. In our experience, we had to disable jemalloc in computational biology software that would not run on a 1TB machine with jemalloc and no overcommit, whereas it would run with the standard glibc allocator and use very little memory. Not that jemalloc made the software use more memory, but it doesn't support no-overcommit well and would complain and crash. It is a known issue.

        At minimum, to become the standard, it would have to behave properly in that setting.
        Why do you disable overcommit? Do you handle OOM in a particular way?



        • #14
          Originally posted by sinepgib View Post

          Why do you disable overcommit? Do you handle OOM in a particular way?
          On large-memory machines with many users, overcommit has not worked for us. When the OOM killer would trigger, the machine was basically frozen solid and everyone was badly affected. Without overcommit, the process requesting too much memory gets a NULL pointer back and either exits or crashes immediately. Other processes are not affected.

          There are downsides to not having overcommit. Technically, a process using more than 50% of memory could be denied a simple fork+exec. That does not seem to happen in practice. Processes compiled with the address sanitizer (`-fsanitize=address`) do not work.

          For us, not having overcommit made our compute nodes more stable. (On the other hand, I don't disable overcommit on my laptop.)
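
          For reference, the no-overcommit setup described above is typically configured like this on Linux (the ratio value is illustrative; pick one appropriate for your nodes):

          ```shell
          # Strict accounting: allocations beyond CommitLimit fail with ENOMEM
          # instead of the process being killed later by the OOM killer.
          sysctl vm.overcommit_memory=2
          # CommitLimit = swap + overcommit_ratio% of RAM; raise the ratio on
          # swapless big-memory nodes so most of physical RAM is usable.
          sysctl vm.overcommit_ratio=95
          # Put the same keys in /etc/sysctl.conf to persist across reboots.
          ```
          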



          • #15
            Originally posted by bug77 View Post

            That info needs more context to be relevant. It is possible for an allocator to reserve memory in advance when it detects frequent allocations. It will use more memory, but in doing so it may also dramatically improve throughput.

            "More than good enough" is probably a reference to most apps that are not actually memory intensive.
            I used it with icinga2. For some reason it kept allocating more and more memory, which looked like a memory leak; eventually it would OOM.

            I switched it to jemalloc, and memory usage stayed under 3.5GB, as opposed to going well over 12GB.



            • #16
              Michael how about a benchmark of different malloc implementations? Jemalloc vs mimalloc vs gnu malloc
              Last edited by oleid; 17 May 2022, 09:54 AM.
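
              Anyone can run a rough version of that comparison themselves by preloading each allocator into the same binary. A sketch (`./my_benchmark` is a placeholder, and the library paths are typical Debian/Ubuntu locations that vary by distro):

              ```shell
              # Same binary, three allocators, no recompilation needed:
              LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2 ./my_benchmark
              LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libmimalloc.so.2 ./my_benchmark
              ./my_benchmark   # glibc malloc (the default)

              # Peak memory per run ("Maximum resident set size"):
              /usr/bin/time -v ./my_benchmark
              ```
              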



              • #17
                Originally posted by yoshi314 View Post
                I've had cases of apps leaking memory with glibc and not leaking it with jemalloc. The "more than good enough" claim is seriously debatable (personally speaking).
                Or maybe it was not leaking, just a generally higher memory footprint, but the difference was truly dramatic.
                I'm not going to outright call bullshit on this, since it's always possible that you hit a corner case, but I've written daemons that have stable footprints after months of uptime processing terabytes of data.
                There may be an inherent problem with glibc that you managed to encounter, but it's certainly still "more than good enough" for the vast majority of cases. If that weren't true, we'd all be rebooting our servers (or at least restarting all the userspace pieces) every week because of the memory leaks.



                • #18
                  Originally posted by bug77 View Post

                  That info needs more context to be relevant. It is possible for an allocator to reserve memory in advance when it detects frequent allocations. It will use more memory, but in doing so it may also dramatically improve throughput.

                  "More than good enough" is probably a reference to most apps that are not actually memory intensive.
                  Regarding the last paragraph, the major thing holding the glibc allocator back was its single-threadedness, which has improved a lot over the years. It would be interesting to see some benchmarks done here (not exactly trivial, of course), but my feeling is that glibc is much faster and has much lower overhead than it did some 10 years ago.



                  • #19
                    Originally posted by bug77 View Post
                    "More than good enough" is probably a reference to most apps that are not actually memory intensive.
                    I was referring to this: https://www.phoronix.com/scan.php?pa...c-thread-cache
                    With the per-thread malloc cache, the performance difference between jemalloc and the GLIBC allocator was almost eliminated.
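
                    That per-thread cache (tcache, added in glibc 2.26 and on by default) can be toggled at runtime via a glibc tunable, which makes its effect easy to measure. A sketch, assuming a hypothetical `./my_benchmark`:

                    ```shell
                    # Baseline: tcache enabled (the default since glibc 2.26)
                    ./my_benchmark

                    # Disable the per-thread cache to see how much it buys:
                    GLIBC_TUNABLES=glibc.malloc.tcache_count=0 ./my_benchmark
                    ```
                    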



                    • #20
                      Originally posted by sinepgib View Post

                      Why do you disable overcommit? Do you handle OOM in a particular way?
                      With certain applications (no swap, limited memory), it is beneficial to get NULL back on allocation failure instead of getting a valid pointer and driving the system into OOM. But apparently at the other end of the use-case spectrum (big-memory machines with lots of memory hogs and users), disabling memory overcommit is also a solution.
                      Last edited by zboszor; 16 May 2022, 11:42 PM.

