Latest Slab Cgroup Memory Controller Patches Saving ~1GB RAM Per Host On Facebook Servers


  • #31
    Originally posted by tuxd3v:
    the thing here is that eventually some app will need big swaths of memory, and it allocates from the free pool, so having swaths of memory available will prevent the machine from swapping (leaving the memory fragmentation issue out of the equation here).

    On some servers we operate, we need to drop caches at regular intervals so that the machine doesn't start swapping, because memory is being wasted on cache.

    Because in a lot of workloads you do need big swaths of RAM in the free pool for applications to use (the desktop scenario doesn't apply here; I am speaking about servers, and again leaving memory fragmentation out of the equation, though it is also a big problem), using cgroups is also a possibility. Facebook is doing that, but it consumes CPU and may increase latency.
    That memory in cache _is_ available to use in that case. If some other process needs more memory and there isn't enough unused memory, pages will be taken from the cache; that data is already safely on mass storage one way or another (in a file, or paged out), so it can be reclaimed without any additional paging.
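    For what it's worth, the kernel itself already accounts for this: MemAvailable in /proc/meminfo is an estimate of allocatable memory that includes reclaimable page cache. A minimal sketch comparing the fields (standard /proc/meminfo keys, Linux only):

    ```python
    # Compare MemFree, Cached and MemAvailable from /proc/meminfo to see how
    # much of the page cache the kernel already counts as reclaimable.

    def meminfo():
        """Parse /proc/meminfo into a dict of kB values."""
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.strip().split()[0])  # values are in kB
        return info

    m = meminfo()
    print(f"MemFree:      {m['MemFree']:>12} kB  (truly unused)")
    print(f"Cached:       {m['Cached']:>12} kB  (page cache, mostly reclaimable)")
    print(f"MemAvailable: {m['MemAvailable']:>12} kB  (estimate incl. reclaimable cache)")
    ```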



    • #32
      Originally posted by rlkrlk:
      That memory in cache _is_ available to use in that case.
      No, _it's_ not.
      It takes a lot of time to become available, and even then the reclaim does a poor job: it doesn't return the desired amounts to the free pool when you need them, for many reasons, the most basic being locks already held on those page cache pages by other processes, and so on.

      You are only looking at the state some time after the request, not at the moment the request for more memory is actually made.
      You can watch the system gradually entering swap (I see this every day, and it annoys me not to have a knob for it like in the 2.4 series), not because there isn't enough memory, but simply because tons of it are cached. If you drop caches you are in a better position to avoid swapping, but even then, dropping caches is another problem that can take a long time depending on how much page cache processes are using at the moment (a sketch of that workaround follows below).
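      For reference, a hedged sketch of that cache-drop workaround, using the kernel's documented drop_caches interface (1 = page cache, 2 = dentries/inodes, 3 = both). It needs root, and it only drops *clean* entries, hence the sync first:

      ```python
      import os

      def drop_caches(level=3):
          """Drop clean page cache and/or dentry/inode caches (needs root)."""
          os.sync()  # flush dirty pages so they become droppable
          with open("/proc/sys/vm/drop_caches", "w") as f:
              f.write(str(level))

      drop_caches()
      ```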

      So using RAM for cache can cut both ways. I am not against its use;
      I am against unlimited amounts of cache pushing the system into swap, which seems ridiculous when you have tons of memory: it's just badly distributed, or the cache is badly limited.

      This problem usually shows up in big databases, and by big I mean databases of some 15-30 TB or more (at least on those I observe it frequently). Machines with lots of data will, at certain times of the month, run special queries that retrieve statistical data for that day, that month, or that year (the horror case). And yes, you have the memory for them, and more; it happens that the kernel does a poor job by not having a "physical barrier" for the page cache.

      A few extra milliseconds of IO don't bother me; what bothers me is when the machine almost freezes
      and needs my constant assistance, due to not having a simple knob for it (though see the tunables sketched below).
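      To be fair, mainline does expose knobs that influence this trade-off, even if not the hard 2.4-style page cache cap: vm.swappiness (how eagerly the kernel swaps anonymous pages versus reclaiming cache) and vm.vfs_cache_pressure (how aggressively dentry/inode caches are reclaimed). A minimal sketch of reading and setting them; the sysctl helper name is mine, and writing needs root:

      ```python
      def sysctl(name, value=None):
          """Read (and optionally write) a sysctl via /proc/sys."""
          path = "/proc/sys/" + name.replace(".", "/")
          if value is not None:
              with open(path, "w") as f:
                  f.write(str(value))
          with open(path) as f:
              return f.read().strip()

      print(sysctl("vm.swappiness"))   # "60" on most distributions
      sysctl("vm.swappiness", 10)      # prefer reclaiming cache over swapping anon pages
      ```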

      If you have a big IO problem, you shouldn't solve it with page cache. Invest some millions in first-class storage and a first-class network instead, then reserve a reasonable amount of RAM for page cache according to the workloads you expect (it's always server specific), and keep the rest in the free pool, because big queries will need that RAM.

      It's a waste of processing power, processing time, and human resources to have to assign a SysAdmin to a server whenever certain queries run, just because the page cache system is broken. And it's very frustrating for the sysadmin to watch tons of page cache while the machine goes down the toilet into swap, despite having lots of RAM (of course I am excluding memory fragmentation, because I know the problem I am describing is not related to it; fragmentation aggravates the problem, but it is not the cause here).

      I hope you can now understand that I am not talking about your laptop workloads, where page cache stays small because you just don't do a lot of IO digging through large amounts of data on disk.
      In those cases you would not experience these problems. But remember, Linux is heavily used in the datacenter, and I hope someone solves this.
      A simple way would be to implement the page cache limit that existed in the 2.4 kernels, so we could tune the system for the workload it will have (the cgroup sketch below approximates this today).
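      Since the article is about Facebook's cgroup work, it's worth noting that cgroup v2 can approximate that per-workload cap today: a group's memory.high counts the page cache its processes instantiate, and above the threshold the kernel reclaims from and throttles the group rather than the whole machine. A hedged sketch; paths assume cgroup v2 mounted at /sys/fs/cgroup, and the group name and PID are hypothetical (needs root):

      ```python
      import os

      CGROUP = "/sys/fs/cgroup/dbwork"  # hypothetical group for the database

      os.makedirs(CGROUP, exist_ok=True)

      # Soft cap at 64 GiB: above this the kernel reclaims aggressively and
      # throttles the group, page cache included, instead of pushing the
      # whole machine into swap.
      with open(os.path.join(CGROUP, "memory.high"), "w") as f:
          f.write(str(64 * 1024**3))

      # Charge the database's memory (including the page cache it reads in)
      # to the group by moving its PID there:
      with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
          f.write("1234")  # hypothetical PID of the database server
      ```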
