Facebook Developing THP Shrinker To Avoid Linux Memory Waste


  • Alex/AT
    replied
    Originally posted by yump View Post
    Huge pages are great for anything with a big working set and substantially random access pattern. The game Factorio, in particular, absolutely loves huge pages. Apparently the best way is to LD_PRELOAD mimalloc in place of glibc malloc and tell it to use huge pages with an environment variable.
    This completely fits into 'literally helps only a few workloads'.
    And yes, madvise is indeed an ok way to use these. It's nice distros are starting to adopt exactly that way of doing things for problematic stuff.



  • yump
    replied
    Originally posted by mangeek View Post

THP in general does not, though Fedora|CentOS|RHEL have it enabled by default, and it had a very rocky start and left a lot of people with a bad impression on the desktop side. Huge pages help make some workloads much more efficient, so an improvement to THP is good no matter what. If this change works well and mitigates some of the bad effects of huge pages (increased memory consumption, allocation stalls), it'll probably open the door to enabling THP by default, including on desktops.
I'm on Fedora and /sys/kernel/mm/transparent_hugepage/enabled says "madvise". I'm pretty sure that means programs don't get huge pages unless they explicitly request them, so not exactly "transparent". More transparent than statically assigned hugetlbfs, I suppose.
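For anyone who wants to check their own box: the kernel marks the active policy with brackets in that sysfs file, e.g. "always [madvise] never". A minimal sketch to extract it (the file path is standard on Linux; the parsing pipeline here is my own):

```shell
# Print the active THP mode; the kernel brackets the mode in effect,
# so "always [madvise] never" means madvise mode is active.
mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
         | grep -o '\[[a-z]*\]' | tr -d '[]')
echo "THP mode: ${mode:-unavailable}"
```

In madvise mode, only memory regions an application has flagged with madvise(MADV_HUGEPAGE) are eligible for huge pages, which matches the "not exactly transparent" reading.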

    Originally posted by Alex/AT View Post

Not 'unless you're running a server', but unless you 'benchmark it versus idiotic^W synthetic conditions of a static 1GB (!!!) page size in a 2-VM environment'.
    Huge pages are great for anything with a big working set and substantially random access pattern. The game Factorio, in particular, absolutely loves huge pages. Apparently the best way is to LD_PRELOAD mimalloc in place of glibc malloc and tell it to use huge pages with an environment variable.
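For the record, a sketch of that launch incantation (the library path and the game binary are illustrative placeholders; MIMALLOC_LARGE_OS_PAGES is the mimalloc option I believe enables 2 MiB large OS pages, but check your mimalloc version's documentation):

```shell
# Preload mimalloc in place of glibc malloc and ask it to use large OS pages.
# /usr/lib/libmimalloc.so and ./factorio are placeholders for your system.
LD_PRELOAD=/usr/lib/libmimalloc.so \
MIMALLOC_LARGE_OS_PAGES=1 \
./factorio
```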



  • Alex/AT
    replied
    Originally posted by halo9en View Post
    Arch also has THP enabled by default. However, unless you're running a server...
Not 'unless you're running a server', but unless you 'benchmark it versus idiotic^W synthetic conditions of a static 1GB (!!!) page size in a 2-VM environment'.



  • halo9en
    replied
    Arch also has THP enabled by default. However, unless you're running a server...



  • Alex/AT
    replied
As THP literally helps only a few workloads and can heavily hurt most others, it's one more thing that should be disabled by default.



  • mangeek
    replied
    Originally posted by Mitch View Post
    Excuse my ignorance, but am I right in guessing that this doesn't affect normal desktop usage such as Browsers, DE's, Videogames, Video + Photo Editing, engineering?
THP in general does not, though Fedora|CentOS|RHEL have it enabled by default, and it had a very rocky start and left a lot of people with a bad impression on the desktop side. Huge pages help make some workloads much more efficient, so an improvement to THP is good no matter what. If this change works well and mitigates some of the bad effects of huge pages (increased memory consumption, allocation stalls), it'll probably open the door to enabling THP by default, including on desktops.



  • CochainComplex
    replied
    tip for that pic



  • Mitch
    replied
    Excuse my ignorance, but am I right in guessing that this doesn't affect normal desktop usage such as Browsers, DE's, Videogames, Video + Photo Editing, engineering?



  • JEBjames
    replied
    Michael

    Typo/Grammar "Eventually they engineers are hoping that with the THP Shrinker, they" should be something else. Perhaps "Eventually their engineers are hoping that with the THP Shrinker they" would be a bit better.



  • mangeek
    replied
    I'm a big fan of huge pages. I think we should probably be using them wherever possible except in the most restrictive environments. Internal kernel memory structures, the page cache, filesystem tuning for small file affinity within 1MB boundaries to optimize large-page block caching, swap... I'll bet there are a ton of efficiencies to be had when more and more of the overhead is handled in 1-2MB chunks instead of 4K at a time.
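A quick way to see how much of this is already happening on a given machine (assuming a Linux /proc/meminfo; the field names are standard, but this is an observation sketch, not a tuning tool):

```shell
# AnonHugePages counts anonymous memory currently backed by THP;
# the HugePages_* lines describe the static hugetlbfs pool instead.
grep -E 'AnonHugePages|HugePages_(Total|Free)' /proc/meminfo
```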

