Major Rewrite Of Linux's FS-Cache / CacheFiles So It's Smaller & Simpler
Originally posted by tuxd3v: CacheFiles should benefit caching for local filesystems, so it should theoretically also benefit desktops. How big the benefit is, we don't know; it will vary from case to case, depending on the use case. But if CacheFiles is what I am thinking of, I am not sure about this. Maybe someone more educated on the matter can elaborate on the subject.
Originally posted by cl333r: I'm not an expert on any of this, but I was wondering whether there should be a totally new POSIX 2.0 (like we did with Vulkan vs. GL, or Wayland vs. X11). POSIX shows its age with naming conventions from the 1980s and little issues like this: when you list the files in any directory, the system also returns "." and "..", so 99.9% of code has to test each file name against these names:
for (...) {
    if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0)
        // go on
}
You don't need POSIX 2.0; just provide a higher-level API on top of it, implemented in a more powerful language, that does all the preprocessing you want.
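In fact POSIX itself already ships one such higher-level helper: scandir(3) takes a filter callback, so the dot entries can be hidden in one place instead of in every loop. A minimal sketch (the skip_dots name is my own):

Code:
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Filter callback: hide "." and ".." from every caller. */
static int skip_dots(const struct dirent *d)
{
    return strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0;
}

int main(void)
{
    struct dirent **entries;
    int n = scandir(".", &entries, skip_dots, alphasort);
    if (n < 0) {
        perror("scandir");
        return 1;
    }
    for (int i = 0; i < n; i++) {
        puts(entries[i]->d_name);   /* never "." or ".." */
        free(entries[i]);
    }
    free(entries);
    return 0;
}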
Originally posted by flower: I would love to have a way to tell the kernel "this is a process that runs rarely, please don't cache its data after it's done". It would help with backups and very big greps.
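As far as I know there is no such per-process knob in mainline, but there is a per-file hint in that direction: posix_fadvise(2) with POSIX_FADV_DONTNEED asks the kernel to drop a file's cached pages. A minimal sketch of a run-once reader that cleans up after itself:

Code:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    char buf[1 << 16];
    while (read(fd, buf, sizeof buf) > 0)
        ;                       /* consume the file once */
    /* hint: we will not read this again, so drop its cached pages */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    close(fd);
    return 0;
}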
Originally posted by waxhead: Absolutely, but personally I would prefer putting a program that benefits from nocache into a cgroup and setting some memory limits (easy with systemd).
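For illustration, here is roughly what that amounts to at the cgroup v2 filesystem level; the nocache-demo group name and the 512M limit are made up, and systemd-run -p MemoryHigh=512M achieves the same thing with far less ceremony:

Code:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Write a single value into a cgroup control file. */
static void write_file(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f || fputs(val, f) == EOF) {
        perror(path);
        exit(1);
    }
    fclose(f);
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s COMMAND [ARGS...]\n", argv[0]);
        return 1;
    }
    const char *grp = "/sys/fs/cgroup/nocache-demo";   /* hypothetical group */
    if (mkdir(grp, 0755) != 0 && errno != EEXIST) {
        perror("mkdir");
        return 1;
    }
    char path[256], pid[32];

    snprintf(path, sizeof path, "%s/memory.high", grp);
    write_file(path, "512M");               /* soft memory limit for the group */

    snprintf(path, sizeof path, "%s/cgroup.procs", grp);
    snprintf(pid, sizeof pid, "%d", getpid());
    write_file(path, pid);                  /* move ourselves into the group */

    execvp(argv[1], argv + 1);              /* the limit now applies to the command */
    perror("execvp");
    return 1;
}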
Originally posted by sinepgib: While this is entirely true, it is also true that keeping pages you can determine will not be needed again is a waste of memory. Keep in mind your disk is much bigger than your physical memory, which means there will probably be filesystem pages competing for that memory at some point; you want to reduce those conflicts if you can do it easily.
That's where tools like nocache come in handy.
Originally posted by sinepgib: I think they meant outside the process. nocache, for example, lets you tell cp not to cache without actually modifying cp.

Last edited by pal666; 30 November 2021, 03:18 PM.
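If memory serves, the trick behind nocache is LD_PRELOAD interposition: wrap libc's file calls and drop the cached pages on close. A heavily simplified sketch; the real tool hooks many more entry points than this:

Code:
/* build: gcc -shared -fPIC -o mini_nocache.so mini_nocache.c -ldl
 * use:   LD_PRELOAD=./mini_nocache.so cp bigfile /backup/
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>

int close(int fd)
{
    static int (*real_close)(int);
    if (!real_close)
        real_close = (int (*)(int))dlsym(RTLD_NEXT, "close");
    /* ask the kernel to drop this file's cached pages before closing */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    return real_close(fd);
}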
Originally posted by set135: No, this is not how things work. The page cache works two ways. If you start writing to a file, unless you are doing a synchronous operation the data goes into memory; then, if time passes, or memory starts to run low, or you execute a 'sync', it automatically gets flushed to the underlying storage. The other use is reading: when you read any file, the data is stored in the cache so that if you want to read it again, perhaps many times, it does not happen at the glacial speed of disk. As time goes on, pages age and can be dropped in favor of more recent 'hot' data being read. Either way, if something needs system memory, the kernel will start flushing buffered writes, or just drop as much buffered read data as required. There are also caches for other things, like dentries, but generally, unless some settings have been made pathological, manually flushing caches is mostly only useful for things like benchmarks or diagnosing memory leaks.
But according to that, this can't happen:
Code:
$ free -m
              total        used        free      shared  buff/cache   available
Mem:          15000        1887         274         181       12839       12606
Swap:          2047           1        2046
Originally posted by set135: Now, system memory can get used up, and this is generally from misbehaving applications allocating too much and not freeing it, so thrashing can certainly occur. Sometimes, when people get excited about their memory being used, though, they are looking at something like 'free' and seeing that it all seems to have been used up, not realizing that the caches will be flushed or dropped as required.
See the example above.

Oh no, I am not allocating too much:

Code:
find / -type f -exec dd if={} of=/dev/null bs=1M 1>/dev/null 2>&1 \;
Originally posted by set135: I have moderately large storage (16 TB) that I regularly back up and constantly read and write large files to, and my buff/cache stats quickly fill up my 32 GB of RAM. I run 24/7 for months at a time, and I never see thrashing unless one of my damn web browsers (or something) starts pigging up all the available memory.
I can show you another example:
Code:
# free -m
              total        used        free      shared  buff/cache   available
Mem:           2018         246          51           4        1720        1705
Swap:          1009          51         958
And the thing is, at this point I would rather read from disk directly, with no cache, than have pages swapping in and out. Because, as it seems, my "precious" caches are more precious than thrashing, so let's continue thrashing and prevent dropping the caches at all costs..

Last edited by tuxd3v; 30 November 2021, 06:59 PM.
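For what it's worth, "read from disk directly, with no cache" is exactly what O_DIRECT gives you, at the price of alignment rules. A minimal sketch, assuming 4096-byte alignment, which holds for most block devices:

Code:
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* O_DIRECT requires aligned buffers; offsets and lengths must align too */
    void *buf;
    if (posix_memalign(&buf, 4096, 1 << 20) != 0) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }
    ssize_t n;
    while ((n = read(fd, buf, 1 << 20)) > 0)
        ;                       /* data never enters the page cache */
    if (n < 0)
        perror("read");
    free(buf);
    close(fd);
    return 0;
}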