F2FS Preparing To Introduce New "Secure Erase" Functionality

  • Azrael5
    replied
    Originally posted by starshipeleven View Post
    change permissions?
    This issue should happen regardless of F2FS, ext4, XFS or whatever Linux filesystem, as all these filesystems use the same permission system.

    If you just format it with terminal commands or GParted (or KDE Partition Manager), then it is owned by the root user. If you format it with more user-friendly applications like gnome-disks, I think it sets the owner to the user, because that's what 99% of people expect.

    Anyway,

    sudo chown <your username> /path/to/where/you/mounted/the/drive

    should change the owner to your user from a terminal, and should be available in all distros.

    If you have KDE (on openSUSE it works like this): go into the drive with the file manager and click the button to go up one level (the arrow pointing up, on the left of the top button bar). Then right-click the folder of your drive and select "Open With" -> "File Manager Super User Mode"; it will open a new file manager window where you are root. Click the up-one-level button again to get back to the folders, select the folder of your drive, right-click, select Properties and click on the Permissions tab; there you can change the owner of the folder with the file manager GUI, similar to how it works in Windows File Explorer.
    Many thanks, I'm going to try. This issue happens only with F2FS formatting, while it doesn't occur when the device is formatted with the ext4 or NTFS filesystem by KDE Partition Manager.
    Last edited by Azrael5; 08-04-2020, 12:20 PM.



  • starshipeleven
    replied
    Originally posted by Azrael5 View Post
    I formatted a USB stick with the F2FS filesystem. I'm unable to write, paste or copy any file because of a lack of permissions. No problem with ext4. What is the reason for this problem?
    change permissions?
    This issue should happen regardless of F2FS, ext4, XFS or whatever Linux filesystem, as all these filesystems use the same permission system.

    If you just format it with terminal commands or GParted (or KDE Partition Manager), then it is owned by the root user. If you format it with more user-friendly applications like gnome-disks, I think it sets the owner to the user, because that's what 99% of people expect.

    Anyway,

    sudo chown <your username> /path/to/where/you/mounted/the/drive

    should change the owner to your user from a terminal, and should be available in all distros.

    If you have KDE (on openSUSE it works like this): go into the drive with the file manager and click the button to go up one level (the arrow pointing up, on the left of the top button bar). Then right-click the folder of your drive and select "Open With" -> "File Manager Super User Mode"; it will open a new file manager window where you are root. Click the up-one-level button again to get back to the folders, select the folder of your drive, right-click, select Properties and click on the Permissions tab; there you can change the owner of the folder with the file manager GUI, similar to how it works in Windows File Explorer.
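    The chown step above can be sketched as a small script. The mount point below is a hypothetical example; check lsblk or mount to find where your distro actually mounted the stick.

```shell
#!/bin/sh
# Take ownership of a freshly formatted drive's mount point.
# MOUNTPOINT is a hypothetical example path; adjust it to the real one.
MOUNTPOINT="/run/media/$USER/mystick"

# Hand the root directory of the filesystem to your user and group.
sudo chown "$(id -un):$(id -gn)" "$MOUNTPOINT"

# Verify you can now create and delete files on it.
touch "$MOUNTPOINT/.write-test" && rm "$MOUNTPOINT/.write-test" \
  && echo "writable"
```

    Note this only applies to filesystems that store Unix permissions (F2FS, ext4, XFS); FAT/NTFS sticks get their apparent owner from mount options instead.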
    Last edited by starshipeleven; 08-04-2020, 05:05 AM.



  • Azrael5
    replied
    I formatted a USB stick with the F2FS filesystem. I'm unable to write, paste or copy any file because of a lack of permissions. No problem with ext4. What is the reason for this problem?



  • starshipeleven
    replied
    Originally posted by Azrael5 View Post
    Is there any way for a Linux user to choose the kind of filesystem during the installation of the operating system?
    Depends on the distro.

    You usually need to either choose "manual partitioning" when installing, or some other option that isn't "default" when the installer is deciding what to do with the disk.



  • Azrael5
    replied
    Is there any way for a Linux user to choose the kind of filesystem during the installation of the operating system?



  • starshipeleven
    replied
    Originally posted by oliver View Post
    Anyway, most of these parallel raw NAND flashes tend(ed) to be SLC, as MLC didn't exist back then, so there were far, far fewer problems. But it also very heavily depends on the flash controller and its software. A lot of engineers over the years put in effort (in proprietary systems) to make this as reliable as they could. But cheap raw MLC flash (which tends to start at 512 MiB and goes up to several gigabytes) is never used raw in 'more reliable than tablets' consumer stuff.
    Fair enough; most of the NAND I've seen so far wasn't that large, 128-256 MB at most.

    Now you speak of 'network stuff', by which I'm thinking high-end stuff (not your WRT54G and other crappy hardware).
    No, I'm talking about crappy access points and WiFi routers and embedded NAS and whatnot. Consumer stuff, cheap and cheerful, just how we like it. They started with SPI NOR, but starting around 4-5 years ago many pivoted to NAND. Nowadays, if the firmware is bigger than 8 MB it's usually NAND; if less, it's usually NOR.

    Well, "it depends": if you use ubiblock or mtdblock, at least to offer a squashfs partition, why would performance be bad? Why would it kill the flash? So that is nonsense. However, using mtdblock + ext4, for example, yes, that's absolutely devastating to your flash and performance.
    Yeah, whatever; of course a read-only filesystem is fine.

    I'm not too familiar with what OpenWrt x86 does, to be honest. I tried to use it 10 years ago as a 'mini HTTP VM to experiment with' before we did containers. But I didn't have to deal with any of these problems, as I just had a r/w virtual disk.
    Using the squashfs root and overlay allows the "reset to default" feature and is recommended, also because the r/w ext4 images can't upgrade the kernel, among other things (OpenWrt has no facility to do that because of its history; updating the kernel is done with a firmware upgrade procedure, which works only with the squashfs root + overlay).
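    The squashfs-root-plus-overlay layering described here can be sketched with plain mount commands. This is a simplified illustration, not OpenWrt's actual boot code; the device paths are hypothetical, and OpenWrt wires this up automatically via fstools at boot.

```shell
#!/bin/sh
# Sketch of a squashfs root + writable overlay, as used by OpenWrt.
# Device paths are hypothetical examples; run as root on real hardware.
mkdir -p /rom /overlay /mnt/root

# Read-only, compressed root image.
mount -t squashfs /dev/mmcblk0p2 /rom

# Writable partition that holds all changes (F2FS on x86/SD images).
mount -t f2fs /dev/mmcblk0p3 /overlay
mkdir -p /overlay/upper /overlay/work

# Merged view: reads fall through to /rom, writes land in
# /overlay/upper. A factory reset is just wiping the overlay.
mount -t overlay overlay \
  -o lowerdir=/rom,upperdir=/overlay/upper,workdir=/overlay/work \
  /mnt/root
```

    The design point is that the squashfs image is never modified in place, which is also why kernel updates have to go through the firmware upgrade path rather than a package manager.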

    But OpenWrt still generally assumes you use some form of flash-based device, I suppose, but it's a niche even for them, I guess.
    Yes, they expect some flash device, but x86 is not as much of a niche now. In recent years the community packages added Docker, LXC, KVM and a lot of packages that really do not run that well on your average embedded device. It's also being used as a KVM guest by quite a few people; I see every now and then that someone maintains virtualized interfaces and stuff for VM guests.



  • oliver
    replied
    Originally posted by starshipeleven View Post

    Quite frankly, I'd like to strongly disagree with all these claims. I've handled A LOT of embedded devices (network stuff) that had raw NAND, and I've never observed a significant number of devices that just go up in flames because their firmware got corrupted by handling the flash badly.
    Well, then we have my 25+ years vs your 25+ years, and we probably looked at different markets.

    Anyway, most of these parallel raw NAND flashes tend(ed) to be SLC, as MLC didn't exist back then, so there were far, far fewer problems. But it also very heavily depends on the flash controller and its software. A lot of engineers over the years put in effort (in proprietary systems) to make this as reliable as they could. But cheap raw MLC flash (which tends to start at 512 MiB and goes up to several gigabytes) is never used raw in 'more reliable than tablets' consumer stuff.

    As the proof is in the pudding: show me some (a lot of) embedded consumer-grade stuff where this is not true. Obviously I can show you a lot of stuff where it is true. Now you speak of 'network stuff', by which I'm thinking high-end stuff (not your WRT54G and other crappy hardware), as those, for the last few decades, were mostly using SPI NOR flash or SLC NAND flash.

    Originally posted by starshipeleven View Post
    So either the chances of blowing up have been inflated out of proportion or all devices were using some non-upstream flash driver that dealt with any issue.
    Well, let's take Allwinner for example: they had a raw MLC flash driver as part of their SDK that (really badly) tried to address most of these issues. And again, MLC flash support (google it) was only done by Boris Brezillon of Free-Electrons/Bootlin very recently. I know because we hired him 7 or 8 years ago (maybe less, but not much) to help us support the Olimex boards with raw MLC NAND flash. In the end, he said it's possible, but it would be a huge effort, as we don't have MLC support in the kernel. The best he could do in the short time/budget we agreed upon was 'pseudo SLC' support, where we effectively cut capacity in half to treat MLC as SLC. But even that left us with problems where we had corrupted bootloaders after a few hundred/thousand boots. Luckily, this was also the time when eMMC became feasible, and we asked Olimex to re-spin the Lime2 boards with eMMC, and well, the rest is in the puddin', or at least history: https://git.kernel.org/pub/scm/linux...e2-emmc.dts#n3

    Originally posted by starshipeleven View Post
    Phoronix showed some benchmarks about it and it was actually pretty good on mechanical drives.
    Oh, I believe that in a heartbeat. It just wasn't designed with that in mind. But just because of that doesn't mean it's a bad fs for a HDD; if you don't mind losing features but want more raw performance, it might be a (weird, in terms of naming) compromise.

    Originally posted by starshipeleven View Post
    It's foolish to use a block-device filesystem on a raw flash device; performance is bad and it will kill the flash.
    Well, "it depends": if you use ubiblock or mtdblock, at least to offer a squashfs partition, why would performance be bad? Why would it kill the flash? So that is nonsense. However, using mtdblock + ext4, for example, yes, that's absolutely devastating to your flash and performance.

    Originally posted by starshipeleven View Post
    Stock firmware of many devices does not even go as far as doing overlays; it just has a tiny partition with jffs2 or some other proprietary filesystem where it stores a config file, while the rest of the filesystem is a squashfs or is otherwise read-only.
    Oh yeah, for sure; OpenWrt is in that regard 'far superior', by choice too, I'd argue. It allows you to perceive the entire OS as 'rw capable' but with an easy factory reset mechanism.
    "Vendors" don't see the need for that. If it is based on Linux, they do what you say: have a single (or a few) config files on a separate partition, have some proprietary application in RO, and always load this config file. This works fine as well, but as a 'system' it is of course not generic or scalable.

    Originally posted by starshipeleven View Post
    OpenWrt has migrated most devices to use a UBI with a squashfs and a UBIFS overlay; on some devices where they control the bootloader too, the kernel is also in the UBI. Or anyway, that is the default.
    The overlay of the x86 and some SD card images uses F2FS instead (in some other cases it's ext4).
    I'm not too familiar with what OpenWrt x86 does, to be honest. I tried to use it 10 years ago as a 'mini HTTP VM to experiment with' before we did containers. But I didn't have to deal with any of these problems, as I just had a r/w virtual disk.
    But OpenWrt still generally assumes you use some form of flash-based device, I suppose, but it's a niche even for them, I guess.



  • starshipeleven
    replied
    Originally posted by oliver View Post
    Anyway, back to the SLC/MLC NAND thing again. Back in the day, Linux could talk to flash chips, and we had wear-leveling algorithms in place, but due to the 'kamikaze' nature of flash (reading causes bit flips) we never knew things were going bad. So only SLC flash was 'really' supported, and while you could talk to the MLC chip, it was only a matter of time before things would fail. Some manufacturers did implement some sort of algorithms in their (GPL-violating) drivers to attempt to address this, but more often than not without huge success, just good enough for the device to outlast its warranty.

    Very recently (in the 5.x kernels) the Linux kernel started to 'properly' support MLC by introducing something of a 'manager' of the flash that does 'magic' in the background (I'm not familiar with the details, I admit, but I guess they simply constantly scan the flash during idle time and repair corruption based on CRC bytes). But this was definitely not the case 15 years ago.
    Quite frankly, I'd like to strongly disagree with all these claims. I've handled A LOT of embedded devices (network stuff) that had raw NAND, and I've never observed a significant number of devices that just go up in flames because their firmware got corrupted by handling the flash badly.

    So either the chances of blowing up have been inflated out of proportion or all devices were using some non-upstream flash driver that dealt with any issue.

    Could you use f2fs on a piece of spinning rust? probably.
    Phoronix showed some benchmarks about it and it was actually pretty good on mechanical drives.

    Can you use f2fs on an mtd (character) device using mtdblock? Sure. Is it recommended? Probably not at all.
    It's foolish to use a block-device filesystem on a raw flash device; performance is bad and it will kill the flash.

    And then finally, 'but squashfs wtfomg': yeah, squashfs is great, but it is of course read-only, which makes it not useful at all by itself if you want to persist data. But the combination (squashfs + overlayfs + jffs2/f2fs), that is of course 'the golden ticket'.
    Stock firmware of many devices does not even go as far as doing overlays; it just has a tiny partition with jffs2 or some other proprietary filesystem where it stores a config file, while the rest of the filesystem is a squashfs or is otherwise read-only.

    as OpenWrt is using that combo as well, for good reason
    OpenWrt has migrated most devices to use a UBI with a squashfs and a UBIFS overlay; on some devices where they control the bootloader too, the kernel is also in the UBI. Or anyway, that is the default.
    The overlay of the x86 and some SD card images uses F2FS instead (in some other cases it's ext4).



  • oliver
    replied
    So there's a lot of information and some noise going on here, so maybe I can add some more of that.

    Traditionally, embedded devices (think WRT54G; let's not go much before that) would have a couple of megabytes (I think 4 MB was the first WRT54G, 8 MB if you had the 'bigger' version later) of 'parallel NAND'. Now, I never dug too deep into what kind of NAND it was (MLC or SLC, but likely SLC; more on that later). Those worked OK for a while, but did in the end break down. They used JFFS2 and its predecessors, depending on the manufacturer and the Linux in use.

    The far more reliable NOR flash was often not used due to cost, of course. As time went on, NOR probably was still more expensive, but not by as much, and it was far more reliable with a much lower pin count. So it made sense for a lot of embedded devices to use it, especially as those were available in bigger sizes too (think BIOS chips; 4-16 megabytes was quite achievable back then).

    With the need for more storage, 'MLC NAND' chips became a more common thing; think the early Android phones, but also those very cheap tablets that flooded the market. Most of those had 4 GB of MLC parallel NAND chips. Smaller modules (64-512 megabytes) tended to be SLC flash.

    Due to many reliability issues (fs corruption and flash failure) and price reductions on eMMC chips, eMMC is now quite heavily in use in embedded systems that need more than a few megabytes of storage or are not trying to be the absolute cheapest. Parallel NAND is just not common anymore (also due to pin count), but is of course still heavily used 'behind the scenes'.

    So, as mentioned previously as well, raw NAND is very poor (in contrast to the far more reliable but relatively very expensive NOR flash). Simply reading from MLC NAND flash causes bit flips, and this can just get worse and worse without even noticing (bit rot, quite literally). SLC, by the way, suffers from this far, far less (TLC far more?). I don't want to get into the details of why, but very briefly, in summary: SLC uses 0 V to represent a logical 0 and 5 V to represent a logical 1; MLC (multi-level cell, which really just means two bits) can represent 2 bits this way, e.g. 0 V is still 00, but 5 V is actually 11. TLC... well, you get the drill. Due to these smaller voltage differences, the chips are simply more sensitive to changes (iow, less tolerance).
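    The sensitivity argument here is just exponential arithmetic: each extra bit per cell doubles the number of voltage levels the chip must distinguish within the same voltage window, which is why the margins shrink so quickly.

```shell
#!/bin/sh
# Voltage levels a NAND cell must distinguish: 2^bits per cell.
# SLC = 1 bit, MLC = 2 bits, TLC = 3 bits.
for bits in 1 2 3; do
  echo "$bits bit(s)/cell -> $((1 << bits)) voltage levels"
done
# With a fixed voltage window, the margin between adjacent levels
# halves with each added bit, so read disturb and charge drift
# cause misreads much sooner.
```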

    So, going back again: JFFS2 and its predecessors implemented a 'rudimentary' wear-leveling algorithm. Its use and design were for relatively small flashes; it was simply not designed for gigabyte flashes. But this wear leveling was needed for both NAND and NOR flash, to not overly exert a single flash cell.

    UBI, with UBIFS on top, came along later to address the shortcomings of JFFS2 and was more of a modern FS. But this is only needed if you want to do reading AND writing from flash.

    Now, in theory it IS possible to dump a squashfs image on raw flash, but with the whole 'reading causes bit flips' thing, even then you don't want to use this on NAND. NOR is fine. But then you are still constantly writing to the same location; think of firmware updates. Also, 'splitting' the flash into an 8:2 ratio, for example, meant that you were using 80% of your flash statically, without wear leveling (with a squashfs on top), and you'd be utilizing the wear-leveling algorithm on only 20% of your storage. While this works, it's not very efficient and doesn't make good use of the flash's life. UBI + squashfs, however, is of course a solution solving this.
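    A rough sketch of that UBI + squashfs arrangement with mtd-utils follows. The MTD number, volume sizes and file names are hypothetical, and real firmware images are usually assembled offline with ubinize rather than built up live like this; the point is just that both volumes live inside one UBI device, so wear leveling spans the whole partition.

```shell
#!/bin/sh
# Hypothetical sketch: a read-only squashfs volume plus a writable
# overlay volume inside one UBI device. Requires mtd-utils and a real
# raw-NAND MTD partition; run as root.

ubiattach /dev/ubi_ctrl -m 1            # attach MTD partition 1 as ubi0

# Static volume holding the read-only squashfs image...
ubimkvol /dev/ubi0 -N rootfs --type=static -s 64MiB
ubiupdatevol /dev/ubi0_0 rootfs.squashfs

# ...and a dynamic volume taking the remaining space for the overlay.
ubimkvol /dev/ubi0 -N rootfs_data --maxavsize

mount -t ubifs ubi0:rootfs_data /overlay
```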

    Anyway, back to the SLC/MLC NAND thing again. Back in the day, Linux could talk to flash chips, and we had wear-leveling algorithms in place, but due to the 'kamikaze' nature of flash (reading causes bit flips) we never knew things were going bad. So only SLC flash was 'really' supported, and while you could talk to the MLC chip, it was only a matter of time before things would fail. Some manufacturers did implement some sort of algorithms in their (GPL-violating) drivers to attempt to address this, but more often than not without huge success, just good enough for the device to outlast its warranty.

    Very recently (in the 5.x kernels) the Linux kernel started to 'properly' support MLC by introducing something of a 'manager' of the flash that does 'magic' in the background (I'm not familiar with the details, I admit, but I guess they simply constantly scan the flash during idle time and repair corruption based on CRC bytes). But this was definitely not the case 15 years ago.

    Now, eMMC is 'just' raw NAND flash, but it has a microcontroller to do the whole flash rejuvenation etc. etc. stuff. It's almost like an M.2/SATA SSD, but those controllers are FAR more sophisticated (SSDs are simply expected to do more, and at bigger speeds).

    So we now have flash available, in the form of eMMC, at a low pin count, which is fully managed and exposed to Linux as a block device, e.g. any block-based fs can go on top of it (which I forgot to mention: on raw flash, you can't just mkfs or anything, as it's not a block device, it's a character device). The only 'downside' is that we have no control over the controller that is inside the eMMC chip. E.g. there is firmware that probably needs to be updated (bugs happen, known ones exist), and we have no control over (or improvements to) the algorithm either.

    And then, how do we format these devices? Well, we can use ext[234], ZFS, btrfs etc. on them just fine. But these devices are, relatively speaking, still very, very slow, and the algorithms/capabilities of the microcontroller are not super advanced, so using an fs that better addresses the needs of such a controller makes sense. It is expected that the longevity of an eMMC device with f2fs is probably better (as is probably the performance) than using ext4/btrfs. It's probably not perfect yet, and every controller is probably slightly different, but one thing we can hope to expect from Samsung, who makes both eMMCs and SD cards and did develop this filesystem, is to have done this with purpose.
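    Actually putting F2FS on such a device is a one-liner with f2fs-tools. The device path below is a hypothetical example; for experiments without hardware, mkfs.f2fs also accepts a plain file image.

```shell
#!/bin/sh
# Format an eMMC user partition with F2FS and mount it.
# /dev/mmcblk0p2 is a hypothetical device; double-check with lsblk
# first, because mkfs destroys whatever is on the partition.
sudo mkfs.f2fs -f -l data /dev/mmcblk0p2   # -l sets the volume label
sudo mkdir -p /mnt/data
sudo mount -t f2fs /dev/mmcblk0p2 /mnt/data

# For experiments without real hardware, a sparse file image works too
# (F2FS needs a reasonably large image, hence 256M here):
truncate -s 256M /tmp/f2fs.img
mkfs.f2fs -f /tmp/f2fs.img
```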

    Could you use f2fs on a piece of spinning rust? Probably. Can you use f2fs on an mtd (character) device using mtdblock? Sure. Is it recommended? Probably not at all. SSDs are an interesting question, though: would it be beneficial? Maybe. Would it be useful? Probably not. We have far more powerful, feature-full filesystems for those devices, so using f2fs there is probably not that great. Vice versa, embedded systems are usually constrained on RAM, so using btrfs/ZFS etc. there is probably not a great idea (those filesystems tend to be memory hungry), whereas f2fs is simply 'lighter'.

    And then finally, 'but squashfs wtfomg': yeah, squashfs is great, but it is of course read-only, which makes it not useful at all by itself if you want to persist data. But the combination (squashfs + overlayfs + jffs2/f2fs), that is of course 'the golden ticket', as OpenWrt is using that combo as well, for good reason.



  • discordian
    replied
    Originally posted by starshipeleven View Post
    On PC? Much less testing and much less experience with the issues that can happen to it compared to the other filesystems, so its fsck isn't as good in case you are in trouble, compared to ext4/XFS.
    Neither fsck nor extundelete ever helped me with ext4; f2fs ain't as old as ext4, but it's been in use since 2012.
    Originally posted by starshipeleven View Post
    If compared to btrfs/zfs it lacks features.
    I never claimed differently.
    Originally posted by starshipeleven View Post
    A tar.xz can't be mounted and read like a filesystem so you need to waste some RAM to make a ramdisk to decompress it into, and you may or may not have that space, squashfs is a filesystem, you can mount it and read the contents while it is still compressed.
    No shit, and a RO filesystem is a different ballpark from a read/write one.
    Originally posted by starshipeleven View Post
    You are not making consumer stuff I guess.
    Industrial and realtime, hardware + software. I deal with embedded CPUs running Linux, down to STM32 and AVR. No consumer stuff, yes.

    Originally posted by starshipeleven View Post
    I'll send a notice to Zyxel, Netgear, Asus, Synology and a whole list of consumer embedded device manufacturers that discordian said they can't do that.
    NAND "read disturb" exists; do you have an explanation for why this is no issue?
    Is that really "raw" NAND, or does it come with some proprietary blob? Is it some SPI chip that at least does FEC (NAND is bloody useless without it)? How are bad sectors handled?
    I might not be in the consumer branch, but I evaluated several proprietary FTLs and NAND chips (now nearly a decade back).

    Originally posted by starshipeleven View Post
    I've also seen a mtd partition that was a tar.xz flashed raw on NAND, that was decompressed on boot into a RAMdisk (Zyxel NSA31x line of NAS products), how's that for bit errors?
    From my memory, the rate of raw bit errors was around 1 in a million; good luck storing a few MB without FEC.

    Originally posted by starshipeleven View Post
    Ah, and the kernel image is still 100% flashed raw on NAND everywhere; no OEM is using a sane U-Boot that can load stuff from UBI, and in many cases it won't handle bad blocks in the kernel mtd partition either, so if that part of the flash develops a bad block the device is fucked.
    Guess they saved $4 by not using a NOR for the kernel. Sounds really fun.
    Originally posted by starshipeleven View Post
    OpenWrt usually places the squashfs inside a UBI volume, so it's at least protected from bad sectors and such (bad sectors that can happen when doing a firmware upgrade, as that is the moment you are writing a new squashfs image).
    A bad sector is one where there are more errors than the FEC can correct; without FEC you can't reliably do anything (back to why I can't imagine someone using raw NAND).

    Originally posted by starshipeleven View Post
    Consumer embedded. Not smartphones/tablets of course, but IP cameras, the central control station for the cameras, NAS, network devices (routers/access points), more specific devices that can't just recycle smartphone hardware, and a whole lot of IoT stuff for consumers, like the "smart washing machines" that have wifi and can connect to a central server so you can start/stop it remotely (totally safe I swear).
    You have some model numbers of the SoCs used?

    A direct NAND interface is a thing of the past AFAIK; look, for example, at the i.MX283, which had multiple channels for NAND chips where you had to do FEC yourself. Might be that I am out of the loop on that, and SPI chips do that in HW?

