"Project Springfield" Is Red Hat's Effort To Improve Linux File-Systems / Storage
Originally posted by elatllat: Sure, but AFAIK they are practically similar (both systems get slow with too many snapshots).
This mid-2018 article shows BTRFS snapshots vs LVM + EXT4 snapshots, where BTRFS performs better, and it's apparently about database workloads (I only looked over the poorly presented figures and the paragraphs that analyze them).
Looking over my notes, there is a claim that BTRFS snapshots can be diffed against one another, while the same isn't possible with LVM snapshots? openSUSE also provides boot-to-snapshot functionality via GRUB, which is useful if something goes bad and you're unable to boot into the system: you can fall back to a snapshot without having to remember or reference a series of commands, which I imagine you'd have to do to achieve something similar with LVM.
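For what it's worth, on openSUSE the snapshot diffing my notes mention is exposed through snapper; a rough sketch (the snapshot IDs are made up):

```shell
# List the snapshots snapper knows about for the root config
snapper list
# Show which files changed between snapshots 42 and 43 (hypothetical IDs)
snapper status 42..43
# Show a unified diff of one file between those two snapshots
snapper diff 42..43 /etc/fstab
```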
I think BTRFS send/receive is also useful here for syncing snapshots to another system, such as for backups. My notes don't reference LVM here, but they state it's faster and more efficient than rsync, which I imagine is what you'd use to achieve something similar without BTRFS/ZFS.
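As a rough sketch of that send/receive workflow (paths, snapshot names, and the backup host are all made up; snapshots must be created read-only to be sendable):

```shell
# Take a read-only snapshot to act as the replication source
btrfs subvolume snapshot -r /data /data/.snapshots/today
# First run: send the full snapshot. Subsequent runs: -p names a parent
# snapshot both sides already have, so only the delta is streamed.
btrfs send -p /data/.snapshots/yesterday /data/.snapshots/today \
    | ssh backup-host btrfs receive /backup/data
```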
Another note mentions that LVM snapshots require you to reserve disk space for them underneath the filesystem. As I don't know much about LVM personally, perhaps you can confirm whether that's more hassle to accommodate than having snapshots as a native feature of the filesystem.
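For comparison, a minimal sketch of the two approaches (the volume group, LV, and mount names are hypothetical):

```shell
# LVM: the snapshot needs its own pre-sized copy-on-write area carved
# out of free extents in the volume group; if writes fill it up, the
# snapshot is invalidated, so you have to guess the size up front.
lvcreate --snapshot --size 5G --name home_snap /dev/vg0/home

# btrfs: a snapshot is just another subvolume sharing extents with the
# original; it consumes space only as the two diverge, no reservation.
btrfs subvolume snapshot /home /home/.snapshots/home_before_upgrade
```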
Originally posted by elatllat: Just last week there were 12 bug fixes. It's fine for most, but a company will want to select a stable default to reduce support and maintain the image of providing a reliable product.
You're combining LVM with a filesystem, yeah? Which one, EXT4 or XFS? Did you bother to search for those on the list? They are still receiving fixes; does that mean they're not stable? Facebook uses BTRFS heavily (millions of servers, apparently), and data is very important to them. openSUSE defaults to BTRFS; they even removed the XFS home partition and are fully BTRFS now. Even Google trusts it for ChromeOS and would use it for Android once native encryption can be supported. Plenty of other companies use it in production as well, including NAS products.
I know BTRFS was not all that reliable years back; I evaluated it for a hardware product at an old job and sadly had to reject it as it wasn't ready. All of the issues from back then have been resolved now, AFAIK.
Originally posted by elatllat: With LVM one can run a database and snapshot the same volume; one should not with the alternatives (AFAIK).
Originally posted by elatllat: But if you want to shuffle filesystems, LVM is your friend, and once you are already using LVM (and cryptsetup) the advantages of btrfs diminish. When btrfs gets encryption and caching, it will make choosing between it and LVM & friends easier.
If you're using LVM and happy with it, that's great. I think the same applies in reverse: if you're already using BTRFS, the advantages of LVM diminish. Each has pros and cons right now depending on your needs. I think encryption and caching support have been said to be a bit challenging for BTRFS to support properly, so ZFS might be a better option depending on how its future pans out with Linux. Meanwhile, this Red Hat effort seems pretty interesting too.
Originally posted by pal666: I use btrfs on LVM, and LVM reduces btrfs's advantages by zero (its snapshots and RAID are inferior to btrfs's). LVM just adds one unique advantage to btrfs.
Originally posted by pal666: I have two separate btrfs filesystems on LVM to compensate for the still-existing limitation of only one RAID level per btrfs filesystem.
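A layout like pal666 describes might look something like this (device, volume group, and LV names are hypothetical):

```shell
# Two LVs pinned to different physical disks, mirrored by btrfs itself
lvcreate -L 100G -n fast_a vg0 /dev/nvme0n1p2
lvcreate -L 100G -n fast_b vg0 /dev/nvme1n1p2
mkfs.btrfs -m raid1 -d raid1 /dev/vg0/fast_a /dev/vg0/fast_b

# A second, independent btrfs filesystem with a different profile,
# since one btrfs filesystem supports only one RAID level at a time
lvcreate -L 500G -n bulk vg0
mkfs.btrfs -m dup -d single /dev/vg0/bulk
```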
Originally posted by intelfx: More so than SATA SSD controllers? What would that be, then? Wasn't the whole point of NVMe to do less in the controller?
1. A SATA connection is slower, but the processing speed of the controller is in fact the same in the SATA and NVMe versions, so a SATA SSD's controller has more processing time to deal with issues than an NVMe SSD's. This is the first half of the problem.
2. The next problem is that flash cells storing 3-5 bits per cell have write speeds slower than what the NVMe link can transfer. So NVMe provides data to the storage device faster than it can be committed to final storage. How do NVMe drives cheat? They use areas of 1- or 2-bit-per-cell storage that are fast to write, then transfer the data back to long-term 3-5-bit storage behind the OS's back. Of course, when that 1-2-bit write buffer fills up, you get the nice stalls/misbehaviour. So your filesystem being faster in the OS can in fact be a downside, as it can result in feeding data to the NVMe drive faster than it can safely process it.
The 3-5-bit-per-cell write speed is still faster than a SATA connection can provide. So a SATA controller has more processing time to deal with problems, due to SATA's slower transfer rate, and 3-5-bit-per-cell flash is still technically fast enough for SATA. The NVMe connection's high transfer speed gives the controller bugger-all time to deal with issues, and modern 3-5-bit-per-cell flash is technically too slow for NVMe transfer speeds. To benchmark well, the controller is basically robbing Peter to pay Paul: using 1-2-bit flash to temporarily store writes and crossing its fingers that the workload eases off enough for it to catch up.
Modern NVMe drives using 3-5 bits per cell basically have all the same problems as device-managed SMR, without SMR's option of switching to host-managed or host-aware mode to inform the block device/filesystem what is going on. So currently the filesystem and block layer above the NVMe drive cannot know they are pushing it too hard until it stalls out or misbehaves somehow.
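The write-cache exhaustion described above can be sketched with a toy back-of-the-envelope model; every figure below is an illustrative assumption, not a measurement of any real drive:

```shell
# Toy model: a 1-2-bit ("pSLC") write cache absorbs incoming writes at
# link speed until full, then throughput drops to the native 3-bit
# program rate. All numbers are made up for illustration.
cache_gb=20          # assumed pSLC cache capacity
bus_mb_s=3500        # assumed NVMe link throughput, MB/s
tlc_mb_s=800         # assumed native 3-bit-per-cell program rate, MB/s

cache_mb=$((cache_gb * 1024))
# The cache fills at (link - native) MB/s, because the controller
# drains it to 3-bit storage in the background while accepting data.
fill_rate=$((bus_mb_s - tlc_mb_s))
seconds_until_stall=$((cache_mb / fill_rate))
echo "cache exhausted after ${seconds_until_stall}s of sustained writes"
echo "throughput then falls from ${bus_mb_s} MB/s to ${tlc_mb_s} MB/s"
```

With these assumed numbers the drive benchmarks at full link speed for only a handful of seconds before falling back to the native flash rate, which is the stall the post describes.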
Originally posted by polarathene: Stability? BTRFS is pretty stable these days; are you basing this on issues from years back, or on features that are marked as unstable (RAID 5/6)?
Last edited by gbcox; 01 July 2020, 10:35 PM.
This may be the most recent from Phoronix, not sure. As you can see, there are perf issues across the board for NVMe.
https://www.phoronix.com/scan.php?pa...esystems&num=2
Originally posted by theriddick: This may be the most recent from Phoronix, not sure. As you can see, perf issues across the board for NVMe.
https://www.phoronix.com/scan.php?pa...esystems&num=2
That is only the case if you are using an Optane 900p 280GB, which is a single-bit-per-cell NVMe drive. So no, that is not a test of general across-the-board NVMe performance, only of single-bit NVMe parts. It's not the type of NVMe drive you put into M.2 slots.