F2FS Brings Interesting "Device Aliasing" Feature To Linux 6.13 To Carve Out Partition


  • AndyChow
    replied
    Originally posted by the-burrito-triangle View Post
    Is there any benefit of using F2FS over EXT4, XFS, or BTRFS?
    Not unless you're running the OS on a microSD or eMMC module. But if you are, it could have advantages. eMMC modules are very common on single-board computers and industrial hardware.



  • TheCycoONE
    replied
    Originally posted by [deXter] View Post
    Ooh, seems like they also added a lazytime mount option.
    F2FS is starting to look more and more interesting. Anyone here using it as their daily driver?
    I have for years, but only on the devices it was meant for: a converted Chromebook with a microSD card and an eMMC drive. I don't think there's any compelling reason to use it on more sophisticated drives.
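    On the lazytime option quoted above: it's the generic VFS mount option (mount -o lazytime), which keeps atime/mtime updates in memory and flushes them lazily, so it cuts down on tiny metadata writes to flash. Here's a minimal C sketch of mounting an F2FS volume with it programmatically; the device and mountpoint paths are made up for illustration, and it assumes a libc new enough to define MS_LAZYTIME (needs root):

    /* Minimal sketch: mount an F2FS partition with lazytime.
     * The paths below are placeholders; adjust for your system. */
    #include <stdio.h>
    #include <sys/mount.h>   /* mount(2), MS_LAZYTIME */

    int main(void)
    {
        /* MS_LAZYTIME defers atime/mtime updates to the in-memory inode,
         * reducing small metadata writes to the flash device. */
        if (mount("/dev/mmcblk0p2", "/mnt/f2fs", "f2fs", MS_LAZYTIME, NULL) != 0) {
            perror("mount");
            return 1;
        }
        printf("mounted with lazytime\n");
        return 0;
    }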



  • plonoma
    replied
    Wouldn't it be more on point to show the Linux mascot with a label maker?
    Last edited by plonoma; 27 November 2024, 08:47 AM.



  • Ferrum Master
    replied
    Originally posted by Alexmitter View Post

    No, that's actually BS. Modern TLC/MLC drives are enormously more reliable compared especially to the early SSDs. It's factually very hard to write a modern quality SSD to death with normal or even power-user usage. Yes, the industry wants SLC drives, but that's more of a precaution, and the price they pay is both a higher cost per drive and decreased capacity, since those SLC drives are really just TLC drives running in SLC mode.

    I work in RMA, mate... it is not a precaution. Modern drives fail more than you think; the write lock saves you though, so you can usually still get your data off. Modern NANDs do have many more spare bits for recovery, but they still die, since they have to hold many more voltage levels and are thus prone to having more bad cells. They are more fragile by design, and the risk is pushed onto consumers so the manufacturers get bigger margins. It would be okay if the prices dropped accordingly, but they don't really.

    I still have 64GB Crucial M4 drives running like nothing ever happened to them. The 2015 Samsung 950 Pro was rated 400TBW for the 512GB model, the 960 Pro 800TBW for the 1TB, and then the 990 Pro is 600TBW for 1TB; where is the progress? WD Black SN770 1TB: 600TBW. The Corsair MP600 had 1600TBW (!) for the 1TB model, and that's 2020 with Toshiba 96-layer NAND, yet now the MP700 has 700TBW for the 1TB model... who is telling BS? Enormously more reliable? It's the other way around.

    The weird thing is that for years reviewers have only had 1-2TB drives at hand. The tech advances but the price stays the same, and average SSDs aren't getting larger either; they're getting dumber for sure, with Gen5 power consumption and useless linear write stats. We are getting scammed here. 4TB should be the average norm already, but it ain't. The average entry laptop is still sold with a 512GB drive, which is pretty nuts considering those are entry-level drives you can't even fill before they start to crawl.
    Last edited by Ferrum Master; 27 November 2024, 09:01 AM.



  • Alexmitter
    replied
    Originally posted by Ferrum Master View Post

    On the contrary... modern drives have much less endurance, being multi-level and much higher density, so they are more failure-prone, unlike the older SLC or MLC drives on much larger process nodes, which were much more robust. Enterprise/industrial drives to this day still use only SLC-type NAND... guess why?

    It is pretty funny, but we are actually being scammed here.
    No, that's actually BS. Modern TLC/MLC drives are enormously more reliable compared especially to the early SSDs. It's factually very hard to write a modern quality SSD to death with normal or even power-user usage. Yes, the industry wants SLC drives, but that's more of a precaution, and the price they pay is both a higher cost per drive and decreased capacity, since those SLC drives are really just TLC drives running in SLC mode.



  • cynic
    replied
    One of the best features of F2FS is that everybody loves it. Look at the comments: no drama over its authors, no fanboyism, no hate.



  • onlyLinuxLuvUBack
    replied
    Originally posted by Quackdoc View Post
    This is bloody cool, I so plan on tinkering with this
    I'll wait for the Phoronix benchmarks first.



  • Quackdoc
    replied
    Originally posted by Namelesswonder View Post
    The claim of compression increasing read throughput is really only applicable in scenarios where the balance between CPU power and bus/storage speed is lopsided in favor of the CPU, i.e. a modern processor attached to garbage-bin SSDs or eMMC/SD cards.
    It's worth noting that this is pretty good for things like image sequence caches; I found it benefited my game loading speeds quite a bit too.



  • mb_q
    replied
    Originally posted by Chewi View Post

    You've been able to do that already since forever using a loop device. I think the difference here is that it's more optimised, but I'm really not sure.
    A loop device maps its blocks to offsets in the underlying file, but those may move around the underlying block device depending on the filesystem's decisions, especially on filesystems like F2FS or BTRFS which deliberately avoid mutating blocks in place. AFAICT this does a constant offset-to-offset mapping, like a swapfile or LVM, for better performance, but it also bypasses F2FS's reliability (let's say) and wear-levelling mechanisms altogether. F2FS is for smartphones, so this is probably aimed at VM images, since Android is currently evolving in that direction.
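    To make the loop-device point concrete, here is a rough C sketch using the old FIBMAP ioctl from <linux/fs.h>: it prints where the first few logical blocks of a backing file currently sit on the host filesystem (the path is just a placeholder, and FIBMAP needs root). On a log-structured or copy-on-write filesystem those physical block numbers can change after rewrites or garbage collection, which is exactly what a fixed offset-to-offset mapping avoids:

    /* Print the logical-to-physical block mapping of a file with FIBMAP.
     * Unmapped blocks (holes, past EOF) come back as 0. Needs root. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* FIBMAP, FIGETBSZ */

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "/var/lib/vm/disk.img"; /* placeholder */
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        int blksz = 0;
        if (ioctl(fd, FIGETBSZ, &blksz) < 0) { perror("FIGETBSZ"); return 1; }

        for (int logical = 0; logical < 8; logical++) {
            int block = logical;   /* in: logical block number, out: physical block number */
            if (ioctl(fd, FIBMAP, &block) < 0) { perror("FIBMAP"); return 1; }
            printf("logical %d -> physical %d (block size %d)\n", logical, block, blksz);
        }
        close(fd);
        return 0;
    }

    Run it before and after the host filesystem shuffles things around (defrag, GC, a rewrite of the file) and you can watch the mapping move; a swapfile-style constant mapping has to keep those blocks pinned in place instead.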



  • Namelesswonder
    replied
    Originally posted by [deXter] View Post

    Compared to BTRFS, it's 2-3x faster.
    Compared to the other two though it's not much faster (or maybe even a bit slower in a couple of cases), but one big advantage over the two is that F2FS supports compression.

    The main overall advantage though is that it's supposed to be flash-storage friendly, but that doesn't really mean much these days due to controller abstraction and drives having high endurance compared to early SSDs. It probably makes more sense for less-complex flash media though, like eMMC and SD cards.
    https://docs.kernel.org/filesystems/...implementation There are many pitfalls to the compression implementation in F2FS.

    F2FS doesn't have useful space-saving compression, it has write-endurance-saving compression. Whatever compression is accomplished only reduces the amount of writes done to the storage; the free-block accounting and bitmap still reflect the uncompressed size, meaning you save no space. With a perfect 2x compression ratio, a 256GiB drive using F2FS can have 256GiB of data loaded onto it before it reports itself as full, but only 128GiB of compressed data is actually written.

    It's possible to reclaim the saved space, but doing so makes the compressed files immutable, relegating this to only being useful for write-once cold storage.

    The claim of compression increasing read throughput is really only applicable in scenarios where the balance between CPU power and bus/storage speed is lopsided in favor of the CPU, i.e. a modern processor attached to garbage-bin SSDs or eMMC/SD cards.
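    For what it's worth, the reclaim step mentioned above is exposed as an ioctl. Below is a rough C sketch of that write-once workflow, assuming a kernel and headers that ship <linux/f2fs.h> and an F2FS mount with a compress_algorithm= option; the file path is a placeholder, and the per-file flag (the same thing chattr +c sets) can only be applied while the file is still empty:

    /* Sketch: mark a file compressed, then hand the blocks saved by
     * compression back to the filesystem. Per the F2FS docs, releasing
     * the blocks makes the file effectively immutable (write-once). */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>    /* FS_IOC_GETFLAGS, FS_IOC_SETFLAGS, FS_COMPR_FL */
    #include <linux/f2fs.h>  /* F2FS_IOC_RELEASE_COMPRESS_BLOCKS */
    #include <linux/types.h>

    int main(void)
    {
        int fd = open("/mnt/f2fs/archive.bin", O_RDWR); /* placeholder path */
        if (fd < 0) { perror("open"); return 1; }

        /* 1. Enable per-file compression (only allowed on an empty file). */
        int flags = 0;
        if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) { perror("GETFLAGS"); return 1; }
        flags |= FS_COMPR_FL;
        if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) { perror("SETFLAGS"); return 1; }

        /* ... write the file's contents here ... */

        /* 2. Release the blocks saved by compression back to free space.
         *    After this the file can no longer be written to. */
        __u64 released = 0;
        if (ioctl(fd, F2FS_IOC_RELEASE_COMPRESS_BLOCKS, &released) < 0) {
            perror("F2FS_IOC_RELEASE_COMPRESS_BLOCKS");
            return 1;
        }
        printf("released %llu blocks\n", (unsigned long long)released);
        close(fd);
        return 0;
    }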

