Yeah...
Well, good luck with that. The world waits with bated breath for your solution, I'm sure. The rest of us who actually have work to do already have one.
Linus Torvalds Doesn't Recommend Using ZFS On Linux
-
Originally posted by k1e0x View Post: But in your preferred method, if it's file-based, every time a VM wrote to its drive you would have to calculate the checksum for the entire virtual disk. ZFS does this only for the blocks modified.
If the IMA side is checksumming the data blocks, does it make any sense for the file system to duplicate that work?
The XFS developer is not talking about the IMA checksum being file-based, but about a block-based one calculated and checked above the file system. It used to have to be a full-file one, because IMA could not see the block side through the file system, but that will not be the case once the iomap changes in the Linux kernel are complete.
Originally posted by k1e0x View Post: And besides, nobody does full file-level checksums; it's only used for secure boot really, probably because it's dog slow.
So the first thing needed to make IMA checksums actually work and not be dog slow is to make them block-based. To be block-based, they have to be able to see the blocks that make up the file from above the file system.
Omg block level != file level
-
I was being sarcastic, and no, file-level checksums are wrong, because if that is how you want to do it then you lose the ability to provision a VM block device on top, or put an iSCSI SAN on it, or some other object/block layer. ZFS allows you to format other filesystems onto block devices any way you want in your pool. It can even emulate storage, which is useful for developing things like XFS, I'd imagine. Heh, for VMs it makes things like qcow, qcow2 (and whatever else they come up with next) obsolete. BSD's bhyve has no support for things like that; it doesn't need it, it has ZFS.
If I wanted a checksum on the file, I could just do it today exactly the same way *you* are, because ZFS supports xattrs in exactly the same way. Then I can be "layer complete", lol. (Sometimes I really wonder what the hell you're talking about; it's already in there that way if you want to do it.)
But in your preferred method, if it's file-based, every time a VM wrote to its drive you would have to calculate the checksum for the entire virtual disk. ZFS does this only for the blocks modified. Even if you slice them up, ZFS is still doing less work because it's doing it on the atomic unit. And you get all the other cool features: snapshots, network portability, incrementals, compression, per-dataset encryption, tiered cache on NVMe, writable cloning (duplicate a master image 100 times and only consume the delta), all for free. No development necessary; it's already done, working, and in production today on Linux.
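To make the cost difference concrete, here is a toy sketch (a hypothetical illustration, not actual ZFS internals): with per-block checksums, a write that stays inside one block means rehashing only that block, while a whole-file checksum scheme would have to rehash the entire virtual disk on every write.

```python
import hashlib

BLOCK = 128 * 1024  # hypothetical 128 KiB record size, chosen for illustration

def block_checksums(data: bytes) -> list[str]:
    """Checksum each block independently, as a block-level scheme would."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def update_block(data: bytearray, sums: list[str], offset: int, payload: bytes) -> None:
    """Apply a write and redo only the touched block's checksum.
    Assumes the write stays within a single block."""
    data[offset:offset + len(payload)] = payload
    idx = offset // BLOCK
    sums[idx] = hashlib.sha256(bytes(data[idx * BLOCK:(idx + 1) * BLOCK])).hexdigest()

disk = bytearray(8 * BLOCK)          # a small "virtual disk"
sums = block_checksums(bytes(disk))  # 8 checksums, one per block
update_block(disk, sums, 3 * BLOCK, b"new data")  # rehashes block 3 only
# a whole-file scheme would instead rehash all 8 blocks' worth of data here
```

The atomic unit is the block, so the hashing cost scales with the size of the write rather than the size of the disk image.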
And besides, nobody does full file-level checksums; it's only used for secure boot really, probably because it's dog slow.
And you are right to a point, no file system is perfect. As I've said many times before, I don't care what client systems and home users use. I don't care what file system your phone runs. I care about storage in the enterprise.
I'm also calling BS that you used a Prime computer at your high school. What high school in the '80s owned a minicomputer? Yeah right, imagine the budget. What the hell would you use it for? No, you looked it up on Wikipedia, and that's great that you're trying to impress me. Wow, even using Linux a year longer than me. Tell me, 'cause I forget, what was Linux used for back then? You aren't showing yourself to be very trustworthy. You can just say "ok, the ageism shit was wrong".
Last edited by k1e0x; 23 January 2020, 04:33 AM.
-
Originally posted by k1e0x View Post: Omg block level != file level.
Block level != file level. IRIX was block level + file level; then it became block level + file level + integrity level. This was something fairly unique to the IRIX systems, and it made direct I/O on them work insanely well. The iomap work in the current Linux kernel is bringing this to Linux.
I would guess you have not been watching this work.
Originally posted by k1e0x View Post: You can drop the ageism card to make your point, but I've worked on mainframes (PrimeOS, a non-POSIX OS you've never heard of for a reason).
Originally posted by k1e0x View Post: My first time using Unix was 1988 and my first time using Linux was 1994.
Originally posted by k1e0x View Post: I must not get it because I haven't been around long enough, even though I've managed large storage arrays and worked for tech companies you talk about and use every day.
"Keynote: Drop Your Tools – Does Expertise have a Dark Side?" - Dr Sean Brady, https://lca2020.linux.org.au/schedule/presentation/218/
It would pay you to watch this. In fact, your expertise can be the very reason you are not getting what the XFS file system developer is up to.
Originally posted by k1e0x View Post: I guess checksumming your data makes it harder to check its integrity.
If the IMA side is checksumming the data blocks, does it make any sense for the file system to duplicate that work?
This was in fact an answer from the conference video about teaching the old XFS dog new tricks. A lot of it is really teaching the old XFS dog tricks it used to know.
So the question is where you should be performing the checksums. The places where ZFS performs its checksums may be wrong, and the XFS lead developer does not agree with where ZFS does things. Is the right place the file system code, or should it be system-wide and generic? Integrity means you must generate checksums somewhere.
When you look at the iomap work, which is basically a library shared between Linux kernel file system drivers, you see that in future it gives a different place to put block checksums.
Full-file checksums are needed by the integrity layers, and so are per-block checksums; the block checksum is what locates which section of a full file has actually been tampered with.
ZFS was not really designed to work with system-wide integrity. By system-wide I mean wanting to confirm that file X on two different file systems is in fact the same, say when you are copying to backups and the backups are not ZFS for some reason. With the current ZFS design, if I copy a file to UDF for burning to disc for transfer, the protections are gone.
It's about time you ZFS guys pulled your heads out and worked out that your data protection being restricted to ZFS is a bug.
-
Originally posted by oiaohm View Post: Funny argument here. This is not bolting on; this is using something that in the old design was meant to be used this way. In fact, it is something you admit later that ZFS is doing.
IMA was the example he used, but he also said it did not have to be IMA. Really, there is no reason why a file system cannot have a flag in its superblock saying it has feature X, so the VFS layer knows whether the feature it adds needs to be enabled. It is possible to say that all files in this file system have checksums in xattrs.
See, ZFS is placing a checksum in an xattr for the whole file. This does not have to be implemented in the ZFS file system. If you implemented it where IMA/fs-verity sit, this could be taken out of ZFS and implemented for all file systems that support xattrs, giving whole-file checksumming everywhere. Let's stop designing checksums per file system; it only increases validation work and at times adds unrequired extra processing.
Notice something: if I enable IMA on ZFS, with the current design of ZFS this can mean calculating the same checksum twice and consuming twice the number of bytes in xattrs with no real advantage. Basically, whole-file checksumming needs to get out of the file systems and move to the layer above. This is Linux coming into alignment with the old IRIX ways.
Block-level checksums can be stored in the block layer; they don't have to be stored in the file system. The rework of iomap in Linux is about changing things so checksums on blocks going to storage can be calculated and signed at the VFS/IMA level, independent of the file system, and in fact be more complete end to end. This is Linux coming into alignment with the old IRIX ways again.
That is the problem: you are too young. You have not considered that most of what you are doing with ZFS was done over a decade before. When XFS was ported to Linux, the other bits around it did not come with it: the XVM logical volume manager did not come, and neither did the differences in the VFS bits above XFS for whole-file checksumming. So the XFS we have been looking at is the feature-crippled version. A fully implemented version of XFS is a much stronger competitor, and it also moves a lot of other file systems up as competitive on data protection.
So you are working on the younger tech with ZFS and don't get it. In fact, the way ZFS does whole-file checksums comes from the historic way XFS did it; technology-wise, ZFS is the child of XFS in a lot of the file-protection stuff. On IRIX, XFS checksums on files were done in the layer above the file system, in the VFS layer where IMA now sits. So having full-file checksums in the layer above is a long-standing design choice for whole-file checksums.
Let's say in 10 years' time someone implements ZFS without checksums, and then some other file system comes along claiming a whole stack of advantages over ZFS because it has checksums: are they not idiots? This is exactly the mistake you have made with XFS, totally missing how far ahead it was. The old design has some very interesting points; it's about time you stop claiming this is bolting stuff on. The XFS developer is currently just implementing the stuff that should have been implemented to fully port XFS in the first place.
The original model around XFS was designed to be bolted together to reduce duplication of effort, so that every file system does not have to write its own block layer or validation-layer stuff.
You can drop the ageism card to make your point, but I've worked on mainframes (PrimeOS, a non-POSIX OS you've never heard of for a reason). My first time using Unix was 1988 and my first time using Linux was 1994. I must not get it because I haven't been around long enough, even though I've managed large storage arrays and worked for tech companies you talk about and use every day.
Whatever you think, man. I guess checksumming your data makes it harder to check its integrity; just let firmware do it, what could go wrong (737 MAX?). Open source is bad unless it's GPL, and ZFS is just unstable and loses data all over the place. Since nothing else on Linux is even half baked, I guess you recommend everyone use NetApp. Way to promote software freedom, open source, and Linux in the enterprise. +1
Last edited by k1e0x; 22 January 2020, 11:47 PM.
-
Originally posted by k1e0x View Post: So instead of designing a new proper architecture that fits your needs, you want to bolt on a system designed for something completely different (secure boot) and place calculated checksums into xattr records?
Originally posted by k1e0x View Post: I believe the kernel also needs a list on this to know what files have checksums on them. What could go wrong? (probably a great deal, I have no idea)
Originally posted by k1e0x View Post: ZFS checksums the xattr record too, heh. (This is file-based, not block-based; they actually can be used together, if you want.)
Notice something: if I enable IMA on ZFS, with the current design of ZFS this can mean calculating the same checksum twice and consuming twice the number of bytes in xattrs with no real advantage. Basically, whole-file checksumming needs to get out of the file systems and move to the layer above. This is Linux coming into alignment with the old IRIX ways.
Originally posted by k1e0x View Post: Block-based, however, lets you do this for example as SAN storage.
Originally posted by k1e0x View Post: I've never seen anyone use this before; maybe Android does. It's not default on anything I know of, and they don't do it on entire file systems for sure.
Originally posted by k1e0x View Post: Or you could just use a well-trusted, long-standing technology designed for just that.
Let's say in 10 years' time someone implements ZFS without checksums, and then some other file system comes along claiming a whole stack of advantages over ZFS because it has checksums: are they not idiots? This is exactly the mistake you have made with XFS, totally missing how far ahead it was. The old design has some very interesting points; it's about time you stop claiming this is bolting stuff on. The XFS developer is currently just implementing the stuff that should have been implemented to fully port XFS in the first place.
The original model around XFS was designed to be bolted together to reduce duplication of effort, so that every file system does not have to write its own block layer or validation-layer stuff.
-
Originally posted by oiaohm View Post: Now I get the rabbit hole where you have screwed up.
Are you right? In an XFS deployment, the answer is only maybe.
The Linux system is a sandwich: block layer, file system, Integrity Measurement Architecture (IMA).
If the IMA side is checksumming the data blocks, does it make any sense for the file system to duplicate that work? The XFS lead developer answered exactly this question this way at a Linux Conf AU a few years back: that is why there are no data-block checksums in the XFS file system design. This is why, when you were saying the loopback stuff the XFS developer was doing seemed stupid, it isn't; something else is at play.
Once you consider that the block layer is being made visible through the file system, the IMA layer on top of the file system can do per-data-block checksums in a file-system-neutral way.
The standard minimum requirement for the IMA stuff is that the file system supports extended attributes, but it would be possible to make IMA work without this. So you can have end-to-end integrity and not give a crap about the file system.
Next is from the file system to the block layer. The file system has not been able to tell the block layer useful things, like that all these blocks belong together and should be put on the same storage item, because having half of them would be as useless as having none of them.
Horrible question for both of you: with ZFS, can you in fact maintain your integrity assurance if the file system the data has been moved to is not ZFS? This is something the IMA path offers, and it is one thing that makes ZFS a terrible long-term design.
What ZFS happens to be is the block layer, file system, and IMA glued into one. You guys are too tunnel-visioned to see that the XFS developer is working on making something equal but done in a different way, so that the end-to-end protection does not depend on which file system or underlying block layers you are using.
PS: something else to remember is that the iomap library is only just starting to be deployed across the Linux kernel. There were a lot of things the Linux kernel block layer could not do, a lot of information the file systems could not tell the block layer, and a lot of information that Linux's IMA could not access. That is going to change.
The old block-layer APIs/ABIs in Linux are going to be deprecated, so file systems will not be able to use them at some point in the future.
The Linux kernel is on a very interesting route, and that route is going to make supporting Linux harder for ZFS and also make ZFS's features less important.
I've never seen anyone use this before; maybe Android does. It's not default on anything I know of, and they don't do it on entire file systems for sure. But hey, don't let me stop your dreams and duct-tape fantasies.
Or you could just use a well-trusted, long-standing technology designed for just that... naah. We say NIH to things like that here in Linux and fix everything with sed and python instead.
Last edited by k1e0x; 22 January 2020, 02:40 PM.
-
Originally posted by ryao View Post: https://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf
Originally posted by k1e0x View Post: XFS has checksums also, but only on metadata. They cite that it's too slow to do it on data blocks. (A problem ZFS's "terrible" design allowed them to solve; if only XFS's design were as bad as ZFS's, they could have data-block checksums.)
The Linux system is a sandwich: block layer, file system, Integrity Measurement Architecture (IMA).
If the IMA side is checksumming the data blocks, does it make any sense for the file system to duplicate that work? The XFS lead developer answered exactly this question this way at a Linux Conf AU a few years back: that is why there are no data-block checksums in the XFS file system design. This is why, when you were saying the loopback stuff the XFS developer was doing seemed stupid, it isn't; something else is at play.
Once you consider that the block layer is being made visible through the file system, the IMA layer on top of the file system can do per-data-block checksums in a file-system-neutral way.
The standard minimum requirement for the IMA stuff is that the file system supports extended attributes, but it would be possible to make IMA work without this. So you can have end-to-end integrity and not give a crap about the file system.
Next is from the file system to the block layer. The file system has not been able to tell the block layer useful things, like that all these blocks belong together and should be put on the same storage item, because having half of them would be as useless as having none of them.
Horrible question for both of you: with ZFS, can you in fact maintain your integrity assurance if the file system the data has been moved to is not ZFS? This is something the IMA path offers, and it is one thing that makes ZFS a terrible long-term design.
What ZFS happens to be is the block layer, file system, and IMA glued into one. You guys are too tunnel-visioned to see that the XFS developer is working on making something equal but done in a different way, so that the end-to-end protection does not depend on which file system or underlying block layers you are using.
PS: something else to remember is that the iomap library is only just starting to be deployed across the Linux kernel. There were a lot of things the Linux kernel block layer could not do, a lot of information the file systems could not tell the block layer, and a lot of information that Linux's IMA could not access. That is going to change.
The old block-layer APIs/ABIs in Linux are going to be deprecated, so file systems will not be able to use them at some point in the future.
The Linux kernel is on a very interesting route, and that route is going to make supporting Linux harder for ZFS and also make ZFS's features less important.
Last edited by oiaohm; 22 January 2020, 12:36 AM.
-
Originally posted by nivedita View Post: The CDDL still has the terms allowing Sun (now Oracle) to publish a new version, and for anyone to then choose to use that version.
Sun ceased to exist as a legal entity in 2010. The big problem here is that, legally, Oracle is not the license steward of the CDDL; a dead, non-functional company is.
-
I've tried ZFS for shits and giggles, and it completely obliterated my data, then started insulting me with the vocabulary of a 3rd grader. To hell with that.