So there's a lot of information and some noise going on here, so maybe I can add some more of that
Traditionally, embedded devices (think WRT54G, let's not go much before that) would have a couple of megabytes (I think 4 MB on the first WRT54G, 8 MB if you had the 'bigger' version later) of parallel NAND. Now I never dug too deep into what kind of NAND it was (MLC or SLC, but likely SLC, more on that later). Those worked OK for a while, but did in the end break down. They used JFFS2 or its predecessors, depending on the manufacturer and the Linux in use.
The far more reliable NOR flash was often not used, due to cost of course. As time went on, NOR was probably still more expensive, but not by as much, while being far more reliable and having a much lower pin count. So it made sense for a lot of embedded devices to use it, especially as it became available in bigger sizes too (think BIOS chips; 4 - 16 megabytes was quite achievable back then).
With the need for more storage, MLC NAND chips became more common; think of the early Android phones, but also those very cheap tablets that flooded the market. Most of those had 4 GB of MLC parallel NAND. Smaller modules (64 - 512 megabytes) tended to be SLC flash.
Due to many reliability issues (fs corruption and flash failure) and price reductions on eMMC chips, eMMC is now quite heavily in use in embedded systems that need more than a few megabytes of storage or are not trying to be the absolute cheapest. Parallel NAND is just not common anymore (also due to pin count), but is of course still heavily used 'behind the scenes'.
So as mentioned previously, raw NAND is very poor (in contrast to the far more reliable but relatively very expensive NOR flash). Simply reading from MLC NAND flash causes bit flips, and this can just get worse and worse without you even noticing (bit rot, quite literally). SLC, by the way, suffers from this far, far less (and TLC far more). I don't want to get into the details of the why, but very briefly: SLC distinguishes two voltage levels per cell, say 0 V for a logical 0 and 5 V for a logical 1; MLC (multi-level cell, which in practice means two bits) has to distinguish four voltage levels in the same cell, e.g. 0 V is still 00 and 5 V is 11, with two intermediate levels for 01 and 10. TLC... well, you get the drill. Due to these smaller voltage differences between levels, the cells are simply more sensitive to disturbance (in other words, less tolerance).
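To make the shrinking-margins point concrete, here is a little sketch (illustrative only; the voltages and linear level spacing are made up, not real chip physics): a cell's value is read by snapping its voltage to the nearest reference level, so the more levels you pack into the same 0-5 V range, the less drift it takes to land on the wrong one.

```python
# Decode a flash cell by snapping its voltage to the nearest
# reference level. SLC has 2 levels, MLC 4, TLC 8 -- same voltage
# range, ever-smaller gaps between levels.

def make_decoder(bits_per_cell, vmax=5.0):
    """Return (levels, decode) for a cell storing bits_per_cell bits."""
    n = 2 ** bits_per_cell                  # number of voltage levels
    step = vmax / (n - 1)                   # spacing between levels
    levels = [i * step for i in range(n)]
    def decode(voltage):
        # the index of the closest reference level is the stored value
        idx = min(range(n), key=lambda i: abs(levels[i] - voltage))
        return format(idx, f"0{bits_per_cell}b")
    return levels, decode

_, slc_decode = make_decoder(1)   # 2 levels, 5.0 V apart
_, mlc_decode = make_decoder(2)   # 4 levels, ~1.67 V apart

# The same 0.9 V of drift is harmless on SLC but flips bits on MLC:
print(slc_decode(0.9))   # '0'  -- still closest to the 0 V level
print(mlc_decode(0.9))   # '01' -- drifted into the next level up
```

The same drift that SLC shrugs off pushes the MLC cell into the neighbouring level, which is exactly the "reading causes bit flips" sensitivity described above.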
So going back again: JFFS2 and its predecessors implemented a rudimentary wear-leveling algorithm. Its use and design is for relatively small flashes; it was simply not designed for gigabyte-sized flash. But this wear leveling was needed on both NAND and NOR flash, to avoid wearing out any single flash cell.
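The core idea of wear leveling can be shown in a few lines (a toy allocator of my own, not JFFS2's actual algorithm): keep an erase counter per block and always write to the least-worn block, so no single cell takes all the erases.

```python
# Toy wear-leveling allocator: spread erases evenly across blocks
# by always picking the block with the lowest erase count.

class WearLeveler:
    def __init__(self, num_blocks):
        self.erase_count = [0] * num_blocks

    def pick_block(self):
        # choose the block that has been erased the fewest times
        return min(range(len(self.erase_count)),
                   key=self.erase_count.__getitem__)

    def write(self):
        blk = self.pick_block()
        self.erase_count[blk] += 1   # flash must erase before rewrite
        return blk

wl = WearLeveler(4)
blocks = [wl.write() for _ in range(8)]
print(blocks)            # [0, 1, 2, 3, 0, 1, 2, 3] -- writes rotate
print(wl.erase_count)    # [2, 2, 2, 2] -- evenly worn
```

Without this, eight writes to the same logical location would put all eight erases on one block while the other three sat untouched.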
UBI, with UBIFS on top, came along later to address the shortcomings of JFFS2 and was more of a modern FS. But this is only needed if you want to do reading AND writing from flash.
Now, in theory it IS possible to dump a squashfs image on raw flash, but with the whole 'reading causes bit flips' thing, even then you don't want to do this on NAND. NOR is fine. But then you are still constantly writing into the same location; think of firmware updates, but also: splitting the flash into an 8:2 ratio, for example, means that you are using 80% of your flash statically, without wear leveling (with a squashfs on top), and you'd only be utilizing the wear-leveling algorithm on the remaining 20% of your storage. While this works, it's not very efficient and doesn't make good use of the flash's lifetime. UBI + squashfs, however, is of course a solution that solves this.
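Some back-of-the-envelope math for that 8:2 split (the endurance figure and flash size here are made-up round numbers, just to show the shape of the problem): if all write traffic lands on 20% of the blocks, the device survives only a fifth of the total write volume it could take with wear leveling across the whole chip.

```python
# Lifetime comparison: static 8:2 split vs. fully leveled flash.
# Numbers are illustrative assumptions, not datasheet values.

endurance = 3_000            # assumed erase cycles per block
total_mb = 100               # flash size, arbitrary units
writable_mb = total_mb * 0.2 # only the 20% partition absorbs writes

# total write volume each layout survives before wearing out:
static_split = writable_mb * endurance   # 60,000 MB
fully_leveled = total_mb * endurance     # 300,000 MB

print(f"static 8:2 split : {static_split:,.0f} MB of writes")
print(f"fully leveled    : {fully_leveled:,.0f} MB of writes")
print(f"lifetime ratio   : {fully_leveled / static_split:.0f}x")
```

That 5x gap is the inefficiency the UBI + squashfs layout avoids, by letting UBI level wear across the whole device underneath the read-only image.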
Anyway, back to the SLC/MLC NAND thing again. Back in the day, Linux could talk to these flash chips, and we had wear-leveling algorithms in place, but due to the 'kamikaze' nature of flash (reading causes bit flips) we never knew things were going bad. So only SLC flash was 'really' supported, and while you could talk to an MLC chip, it was only a matter of time before things would fail. Some manufacturers did implement some sort of algorithms in their (GPL-violating) drivers to attempt to address this, but more often than not without huge success; just good enough for the device to outlast its warranty.
Very recently (in the 5.x kernels) the Linux kernel started to 'properly' support MLC, by introducing something of a 'manager' of the flash that does 'magic' in the background (I'm not familiar with the details, I admit, but I guess it simply scans the flash during idle time and repairs corruption based on CRC/ECC bytes). But this was definitely not the case 15 years ago.
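The scrubbing idea guessed at above looks roughly like this sketch (my illustration of the concept, not the kernel's actual implementation): keep a checksum next to each page, re-read pages in the background, and flag any page whose checksum no longer matches so it can be rewritten before the rot accumulates.

```python
# Background scrub sketch: detect silent bit flips via per-page CRCs.

import zlib

# fake "flash": a list of [data, crc] pages
pages = []
for i in range(4):
    data = bytearray(f"page {i} contents".encode())
    pages.append([data, zlib.crc32(data)])

# simulate a read-disturb bit flip on page 2
pages[2][0][0] ^= 0x01

def scrub(pages):
    """Return indices of pages whose stored CRC no longer matches."""
    bad = []
    for i, (data, crc) in enumerate(pages):
        if zlib.crc32(data) != crc:
            bad.append(i)   # a real manager would correct and rewrite
    return bad

print(scrub(pages))   # [2]
```

Note that a plain CRC only *detects* the flip; real flash managers and controllers use ECC codes that can also *correct* a limited number of flipped bits per page, which is what makes the rewrite safe.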
Now, eMMC is 'just' raw NAND flash, but with a microcontroller on board to do the whole flash rejuvenation etc. etc. stuff. It's almost like an M.2/SATA SSD, except those controllers are FAR more sophisticated (SSDs are simply expected to do more, at higher speeds).
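The heart of what that on-board controller does is a flash translation layer (FTL). Here's a heavily simplified toy version (my own illustration of the bookkeeping, not any real controller's firmware): the host addresses stable logical sectors, while every write secretly lands on a fresh physical page.

```python
# Toy FTL: remap logical sectors to physical pages on every write,
# so the host sees an ordinary block device while writes spread out.

class ToyFTL:
    def __init__(self, num_phys):
        self.mapping = {}                  # logical -> physical
        self.free = list(range(num_phys))  # clean physical pages
        self.store = {}                    # physical -> data

    def write(self, logical, data):
        phys = self.free.pop(0)            # always use a fresh page
        if logical in self.mapping:
            # the old page becomes garbage, reclaimable later
            self.free.append(self.mapping[logical])
        self.mapping[logical] = phys
        self.store[phys] = data

    def read(self, logical):
        return self.store[self.mapping[logical]]

ftl = ToyFTL(8)
ftl.write(0, b"v1")
ftl.write(0, b"v2")        # same logical sector, new physical page
print(ftl.read(0))         # b'v2'
print(ftl.mapping[0])      # 1 -- the data physically moved
```

This is why the host never sees the remapping, and also why we have no say in how well it's done: the whole table lives inside the chip.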
So we now have flash available, in the form of eMMC, at a low pin count, which is fully managed and exposed to Linux as a block device, i.e. any block-based fs can go on top of it. (Which I forgot to mention: on raw flash you can't just mkfs or anything, as it's not a block device, it's a character device.) The only 'downside' is that we have no control over the controller inside the eMMC chip. E.g. there is firmware that probably needs to be updated (bugs happen, known ones exist), and we have no control over (or way to improve) its algorithms either.
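The block-vs-character distinction is visible straight from a device node's mode bits. A small helper to classify them (the `/dev/mmcblk0` and `/dev/mtd0` paths in the comments are typical examples; they only exist on matching hardware):

```python
# Classify a device node the way mkfs cares about it:
# block devices take filesystems directly, character MTD
# devices need the mtd/UBI tooling instead.

import stat

def device_kind(mode):
    """Classify an st_mode value from os.stat()."""
    if stat.S_ISBLK(mode):
        return "block"      # eMMC, SSD, SD card: mkfs works here
    if stat.S_ISCHR(mode):
        return "char"       # raw MTD flash: needs mtd/UBI tools
    return "other"

# on real hardware: device_kind(os.stat("/dev/mmcblk0").st_mode) -> "block"
#                   device_kind(os.stat("/dev/mtd0").st_mode)    -> "char"
print(device_kind(stat.S_IFBLK))   # block
print(device_kind(stat.S_IFCHR))   # char
```

(The `mtdblock` driver mentioned further down is exactly a shim that presents such a character MTD device as a block device, without adding any real flash management.)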
And then, how do we format these devices? Well, we can use ext[234], zfs, btrfs etc. on them just fine. But these devices are, relatively speaking, still very slow, and the algorithms/capabilities of the microcontroller are not super advanced, so using a fs that better addresses the needs of such a controller makes sense. It is expected that the longevity of an eMMC device with f2fs is probably better (as is probably the performance) than with ext4/btrfs. It's probably not perfect yet, and every controller is probably slightly different, but one thing we can hope to expect from Samsung, who makes eMMCs and SD cards and did develop this filesystem, is to have done this with purpose.
Could you use f2fs on a piece of spinning rust? Probably. Can you use f2fs on an mtd (character) device via mtdblock? Sure. Is it recommended? Probably not at all. SSDs are an interesting question though: would it be beneficial? Maybe. Would it be useful? Probably not. We have far more powerful, feature-rich filesystems for those devices, so using f2fs there is probably not that great. Vice versa, embedded systems are usually constrained on RAM, so using btrfs/zfs etc. there is probably not a great idea (those filesystems tend to be memory hungry), whereas f2fs is simply 'lighter'.
And then finally: 'but squashfs wtfomg' — yeah, squashfs is great, but it is of course read-only, which makes it not useful at all by itself if you want to persist data. But the combination (squashfs + overlayfs + jffs2/f2fs) is of course 'the golden ticket', and OpenWrt is using that combo as well, for good reason.
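The lookup rule that makes that combo work can be modeled as two dictionaries (a toy model of overlayfs's behaviour only; real overlayfs also handles whiteouts, directories, copy-up, and more): reads fall through to the read-only lower layer unless the writable upper layer has the file, and writes only ever touch the upper layer.

```python
# Toy overlayfs lookup: a writable upper layer over a
# read-only lower layer, upper wins on conflicts.

lower = {"/etc/config": "factory defaults"}   # squashfs: read-only
upper = {}                                    # jffs2/f2fs: writable

def read(path):
    # upper layer shadows the lower one
    return upper.get(path, lower.get(path))

def write(path, data):
    upper[path] = data    # the lower layer is never modified

print(read("/etc/config"))        # factory defaults
write("/etc/config", "user edit")
print(read("/etc/config"))        # user edit
print(lower["/etc/config"])       # factory defaults (untouched)
```

This is also why a factory reset on such a system is trivial: wipe the upper layer and every path falls through to the pristine squashfs again.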