F2FS Gets More Fixes In Linux 4.21 With The File-System Now Supported By Google's Pixel


  • aht0
    replied
    Yeah, I meant the "wrong" Optane. My bad.



  • starshipeleven
    replied
    Originally posted by aht0
    Optane is over-hyped anyway. I'd rather use a RAM disk: more capacity for the money, faster, and no wear. Not to mention no hardware restrictions other than the amount of RAM. Optane has pretty strict requirements.
    That's probably not the Optane we were talking about.

    Intel, in their infinite wisdom, gave their newer SSD technology the same name as their SSD-caching software, which is also the name they used in the past for a failed attempt at a similar SSD cache that was total crap and had significant hardware limitations (like needing its own dedicated slot in a laptop).

    We (pal666 and I) are talking about NVMe SSDs built on 3D XPoint storage memory, which has much lower latency than current SSD flash and is very well suited for cache drives (which is probably why they resurrected the old Optane marketing name and wrote caching software for Windows so you can use them as a cache; unlike Linux/BSD, Windows can't do that on its own)

    While they CAN be used with the Windows-only Optane software as a cache, that's an additional feature on top of being normal SSDs using standard interfaces. Also, the "Kaby Lake or later" processor requirement only applies to that Windows caching software; they have been used on other systems and work fine as normal SSDs, still crushing any other SSD around. For example, here on Phoronix they were tested on an AMD EPYC server board. https://www.phoronix.com/scan.php?page=article&item=intel-optane-900p&num=1

    You could use them as SSD cache drives for ZFS (already a well-known thing with normal SSDs, afaik) or for a Linux RAID with bcache, as long as your system can see and use NVMe SSDs; see the sketch below. You could use them as boot drives too, although their capabilities would probably not matter as much in an OS drive.
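
    [Illustrative sketch, not part of the original post: roughly what the two cache setups above look like, assuming a ZFS pool named "tank", a slow array at /dev/md0, and the Optane drive at /dev/nvme0n1 (all hypothetical names).]

        # ZFS: add the NVMe drive to pool "tank" as an L2ARC read cache
        zpool add tank cache /dev/nvme0n1

        # bcache: use it as a cache in front of a slower backing device
        make-bcache -C /dev/nvme0n1      # format it as a cache device
        make-bcache -B /dev/md0          # format the backing device
        bcache-super-show /dev/nvme0n1   # prints the cache set's cset.uuid
        echo <cset-uuid> > /sys/block/bcache0/bcache/attach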

    While they are relatively expensive, they still cost far less than a system with a comparable amount of free RAM to use as cache (for example, 280GB for $300, and lower capacities are much cheaper as well; the 1.5TB enterprise models cost an arm and a leg, of course, but that's an awful lot of cache, and servers with that much memory to spare are much more expensive than that anyway)
    Last edited by starshipeleven; 03 January 2019, 12:02 AM.



  • aht0
    replied
    Optane is over-hyped anyway. I'd rather use a RAM disk: more capacity for the money, faster, and no wear. Not to mention no hardware restrictions other than the amount of RAM. Optane has pretty strict requirements.
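
    [Illustrative sketch, not part of the original post: the kind of RAM disk described above is typically just a tmpfs mount; the size and mount point below are arbitrary examples.]

        # Create an 8GB RAM-backed filesystem (contents vanish on reboot)
        mkdir -p /mnt/ramdisk
        mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk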



  • pal666
    replied
    Originally posted by starshipeleven
    Optane is 3D NAND, still flash technology; it's just able to use the third dimension, so it is more compact (and fast)
    it has nothing to do with flash. flash has to erase a page before it can write; optane doesn't



  • starshipeleven
    replied
    Originally posted by pal666
    it is designed for flash. not every ssd is flash; optane, for example, isn't
    and it is time to use optane if you value speed
    Optane is 3D NAND, still flash technology; it's just able to use the third dimension, so it is more compact (and fast)



  • pal666
    replied
    Originally posted by stqn
    Is it time to use F2FS on SSDs instead of, say, ext4?
    it is designed for flash. not every ssd is flash; optane, for example, isn't
    and it is time to use optane if you value speed



  • starshipeleven
    replied
    Originally posted by stqn
    Is it time to use F2FS on SSDs instead of, say, ext4? I’m still unsure of the actual benefits of this file system.
    It was designed for flash devices with crappy controllers, like USB flash drives, SD cards, or eMMC. On an actual SSD there is much less difference, as the SSD controller is vastly more powerful and can deal with anything you throw at it.
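
    [Illustrative sketch, not part of the original post: trying F2FS on such a device takes two commands; the device name is an example, and formatting destroys any existing data.]

        # Format an SD card or USB stick with F2FS, then mount it
        mkfs.f2fs -l mydata /dev/mmcblk0
        mkdir -p /mnt/flash
        mount -t f2fs /dev/mmcblk0 /mnt/flash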



  • Vistaus
    replied
    Originally posted by Brane215
    I'm running it on a 500GB EVO870. It works just fine.
    The question was whether there's any noticeable improvement over, say, ext4, not whether it works fine.



  • V10lator
    replied
    Originally posted by discordian

    Benefits should be fewer writes to the underlying flash.
    I'm not sure that's true because, AFAIK, F2FS uses GC and wear-leveling algorithms which move chunks of e.g. long-term read-only data around to extend lifetime, in case the underlying FTL's GC / wear-leveling algorithms aren't good enough. These algorithms might actually cause more writes, especially at idle (so when ext4, for example, does nothing, F2FS starts to move data around).

    The FTLs on modern SSDs should be good enough that these F2FS algorithms bring little to no benefit, but I've been using it for years on different SSDs (and other FTL-based flash media) without any problems.
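
    [Illustrative sketch, not part of the original post: the background GC behavior can be watched and tuned through sysfs; the partition name below is an example, and the available knobs vary by kernel version.]

        # Background GC tunables for a mounted F2FS device (milliseconds)
        cat /sys/fs/f2fs/nvme0n1p2/gc_min_sleep_time
        cat /sys/fs/f2fs/nvme0n1p2/gc_max_sleep_time
        # Make GC wait much longer before kicking in when truly idle
        echo 60000 > /sys/fs/f2fs/nvme0n1p2/gc_no_gc_sleep_time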



  • Brane215
    replied
    I'm running it on a 500GB EVO870. It works just fine.

