Originally posted by the-burrito-triangle:
F2FS Brings Interesting "Device Aliasing" Feature To Linux 6.13 To Carve Out Partition
Originally posted by [deXter]:
Ooh, seems like they also added a lazytime mount option.
F2FS is starting to look more and more interesting. Anyone here using it as their daily driver?
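For anyone curious what that looks like in practice, a minimal sketch (the device and mountpoint are hypothetical): lazytime keeps atime/mtime/ctime updates in memory and flushes them lazily, which cuts down on small metadata writes to flash.

```shell
# Hypothetical device and mountpoint; requires root.
# lazytime defers timestamp-only inode updates in memory,
# reducing small metadata writes to the flash device.
mount -t f2fs -o lazytime /dev/nvme0n1p2 /mnt

# Or persist it via /etc/fstab:
# /dev/nvme0n1p2  /mnt  f2fs  lazytime  0  2
```

Note that lazytime is a generic VFS mount option, not F2FS-specific, so the news here is presumably just that F2FS now honors it.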
Originally posted by Alexmitter:
No, that's actually BS. Modern TLC/MLC drives are enormously more reliable, especially compared to the early SSDs. It's factually very hard to write a modern quality SSD to death with normal, or even power-user, usage. Yes, the industry wants SLC drives, but that is more of a precaution, and the price paid is both a higher cost per drive and decreased capacity, since those SLC drives are in fact just TLC drives running in SLC mode.
I work in RMA, mate... it is not a precaution. Modern drives fail more often than you think; the write lock is what saves you, so you can usually still recover your data. Modern NAND does have many more spare bits for recovery, but the cells still die: they have to hold many more voltage levels and are therefore prone to more bad cells. They are more volatile by design, and the risk is pushed onto consumers so the manufacturers get bigger margins. That would be okay if prices dropped accordingly, but they don't really.
I still have 64GB Crucial M4 drives running like nothing ever happened to them. The 2015 Samsung 950 Pro was rated 400TBW for the 512GB model, the 960 Pro 800TBW for the 1TB, and then the 990 Pro only 600TBW for the 1TB; where is the progress? The WD Black SN770 1TB is 600TBW. The Corsair MP600 had 1600TBW (!) for the 1TB model, and that's from 2020, with Toshiba 96-layer NAND, while the MP700 now has 700TBW for the 1TB model... so who is telling BS? Enormously more reliable? It is the other way around.
The weird thing: for years now, reviewers only have 1-2TB drives at hand. The tech grows, but the price stays the same, and average SSDs are not getting larger either. They are getting more stupid, for sure, with Gen5 power consumption and useless linear-write stats; we are getting scammed here. 4TB should be the norm by now, but it isn't. The average entry-level laptop is still sold with a 512GB drive, which is pretty nuts considering those are entry-level drives: you cannot even fill them, as they start to crawl.
Last edited by Ferrum Master; 27 November 2024, 09:01 AM.
Originally posted by Ferrum Master:
On the contrary... modern drives have much less endurance, being multi-level and much higher density. They are more failure-prone, unlike the older SLC or MLC drives on a much larger process node, which were much more robust. Enterprise/industrial drives to this day use only SLC-type NAND... guess why?
It is pretty funny, but we are actually being scammed here.
One of the best features of F2FS is that everybody loves it. Look at the comments: no drama over its authors, no fanboyism, no hate.
Originally posted by Quackdoc:
This is bloody cool, I so plan on tinkering with this.
Originally posted by Namelesswonder:
The claim of compression increasing read throughput is really only applicable in scenarios where the balance between CPU power and bus/storage speed is lopsided in favor of the CPU: situations with a modern processor attached to garbage-bin SSDs or eMMC/SD cards.
Originally posted by Chewi:
You've been able to do that forever using a loop device. I think the difference here is that it's more optimised, but I'm really not sure.
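The long-standing approach Chewi mentions looks roughly like this (paths, sizes, and the inner filesystem are all hypothetical): back a "partition" with a regular file and attach it through the loop driver.

```shell
# Carve out space the old way: a file-backed loop device.
# All paths/sizes here are illustrative; requires root.
truncate -s 32G /mnt/f2fs/alias.img         # sparse backing file
losetup --find --show /mnt/f2fs/alias.img   # attaches and prints e.g. /dev/loop0
mkfs.ext4 /dev/loop0                        # format with any filesystem
mount /dev/loop0 /mnt/carved                # use it like a partition
```

As I understand the article, the difference with device aliasing is that the aliased region is a pinned file mapped directly onto the underlying device, so there is no extra block-layer indirection through the loop driver.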
Originally posted by [deXter]:
Compared to BTRFS, it's 2-3x faster.
Compared to the other two, though, it's not much faster (maybe even a bit slower in a couple of cases), but one big advantage over them is that F2FS supports compression.
The main overall advantage, though, was that it's supposed to be flash-storage friendly, but that doesn't mean much these days due to controller abstraction, and because modern drives have high endurance compared to early SSDs. It probably makes more sense for less-complex flash media, like eMMC and SD cards.
F2FS doesn't have useful space-saving compression; it has write-endurance-saving compression. Whatever compression is achieved only reduces the amount of data written to the storage; the free blocks and the bitmap are still accounted in the uncompressed size, meaning you save no space. With a perfect 2x compression ratio, a 256GiB drive using F2FS can have 256GiB of data loaded onto it before it reports itself as full, but only 128GiB of compressed data is actually written.
It is possible to reclaim the saved space, but doing so makes the compressed files immutable, relegating this to write-once cold storage.
The claim of compression increasing read throughput is really only applicable for scenarios where the scale between CPU power and bus speed/storage speed is lopsided in favor of the CPU. Situations with a modern processors being attached to garbage bin SSDs or eMMC/SD cards.
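A rough sketch of what that workflow looks like, assuming LZ4 support in the kernel and the `f2fs_io` helper from f2fs-tools (device, mountpoint, and file names are hypothetical):

```shell
# Hypothetical device/paths; requires root. Compression is
# enabled at mount time and applies only to files that match
# the configured extension (or are flagged for compression).
mount -t f2fs -o compress_algorithm=lz4,compress_extension=log \
      /dev/sdb1 /mnt/f2fs

# Writes to matching files are compressed on disk, but free-space
# accounting still charges the uncompressed size, so df shows no
# savings -- only the write volume to the NAND is reduced.

# Releasing the saved blocks (f2fs_io from f2fs-tools) reclaims
# the space, at the cost of making the file immutable afterwards:
f2fs_io release_cblocks /mnt/f2fs/archive.log
```

This matches Namelesswonder's point: the compression pays off in endurance (and possibly read throughput on slow media), not in visible capacity, unless you accept write-once semantics.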