Btrfs Updates Sent In For The Linux 4.17 Kernel
-
Originally posted by geearf:
Oh yup, different order of magnitude.
My current slowest partition has 384010, and more to go, as there is too much free space.
-
Originally posted by F.Ultra:
That quote does not mean what you think it does. And this is not the first time you have brought it up either, Kebbabert, and not the first time someone has had to point it out either. Torvalds did not say that quote in relation to any other kernel out there, so you cannot use it as an example of how stable the Linux kernel is compared with others. You just cannot. But of course you will continue to do so, over and over.
But of course, you can continue to deny the quotes from Linus, Andrew Morton, Theo de Raadt, Con Kolivas, etc. over and over.
Last edited by pavlerson; 09 April 2018, 06:38 AM.
-
Originally posted by pavlerson:
What? Did you read the text? There are several Linus quotes there. He says Linux is bloated, too complex, afraid of an error that cannot be evaluated anymore, etc. Does that sound high quality to you? Did you read what Theo de Raadt said about how low quality Linux is? Read the text.
But of course, you can continue to deny the quotes from Linus, Andrew Morton, Theo de Raadt, Con Kolivas, etc. over and over.
Let's pretend that a program has a job: it accepts a number, doubles it, and returns the result. Now if this program uses a signed 16-bit integer to hold that number, it can only work correctly for inputs from 0 to 16383, since doubling anything larger overflows. If nothing calls that program with a value outside that range it works forever and never fails to return the correct result, but if something calls it with a number outside that range the result is bogus.
You might argue that a program that does not take this into account is badly designed, and you'd be right, but as ironic as this may seem, it does not affect stability at all unless the program is used outside its design specification, which may or may not require the number to be within a certain range.
You have to understand that all software has bugs and all software is designed with some limits, and yes, I'll agree with you that when software complexity increases it is often harder to track down problems. However, this does NOT necessarily affect stability as long as the darn thing is tested.
Of course, if you use features that are not as well tested, you by definition take part in the testing. You will probably find bugs, and hopefully you report them so they can be fixed, and thus the stability of the software increases. It may not be well designed, but it can still be very reliable, especially if you stick to a certain subset of features.
I do agree with you that code complexity is an issue. Given enough eyeballs, all bugs are shallow, right? The problem is when you run out of eyeballs.
There are ways around that, of course: modularize and separate the code so it remains manageable for the number of eyeballs available.
http://www.dirtcellar.net
-
Originally posted by waxhead:
Yes of course I am saying that BTRFS is reliable even if I don't use all features.
Would you claim that the Linux kernel is unreliable because BTRFS is a feature of the kernel?!
And YES, I have heard about all the data loss episodes, and it is usually a result of people running old kernels (earlier than 4.4) or a development kernel, or simply that people don't understand how certain features work and get surprised when they lose data because they did not understand how to use BTRFS properly.
-
Originally posted by k1e0x:
I've used ZFS as a root file system on Linux for the past 4 years using Arch, Gentoo, openSuSE and Ubuntu without a single problem. (Of course you'll reply this doesn't exist because -insert dumb fud argument here-.) My only wish is that installers supported it better.
Last edited by pal666; 10 April 2018, 02:38 PM.
-
About BTRFS stability:
I've got a home fileserver that's been running the same BTRFS filesystem since 2012. It started with a pair of 2 TB Western Digital Red drives and is now four 6 TB Reds and two 4 TB Toshibas. I've had a drive fail (one of the 2 TB Reds), and I've converted it from RAID-1 to RAID-5 to RAID-10, all in BTRFS, not MD or LVM.
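For anyone curious what that kind of in-place RAID profile conversion looks like, here is a sketch using standard btrfs-progs commands (the mount point /mnt/storage is an assumption, not the poster's actual setup, and these require root on a real btrfs filesystem):

```shell
# Convert both data and metadata chunks to RAID-10 in place.
# The balance rewrites every chunk, so this can take many hours on a big array.
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/storage

# Watch progress from another terminal.
btrfs balance status /mnt/storage
```

The filesystem stays mounted and usable while the balance runs, which is what makes this kind of live reshaping possible without MD or LVM underneath.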
I have had to scrub and rebalance it a few times in order to fix weird problems, but I've never had to restore it from backup.
I've also run BTRFS on my Dell laptop SSDs. Two laptops so far going back to 2014. I've hit the out of space errors. I've had to use USB sticks to add enough space to the filesystem so I could rebalance. I've had Docker create enough snapshots that I had to clean out the whole Docker subvolume.
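The USB-stick rescue described above can be sketched with standard btrfs-progs commands (device name /dev/sdX and mount point /mnt are placeholder assumptions; run as root on the affected filesystem):

```shell
# Temporarily add a USB stick so the balance has free space to work with.
btrfs device add /dev/sdX /mnt

# Compact data chunks that are less than 50% full, freeing allocated space.
btrfs balance start -dusage=50 /mnt

# Once space is reclaimed, remove the stick; btrfs migrates its data off first.
btrfs device remove /dev/sdX /mnt
```

The `-dusage` filter limits the balance to mostly-empty chunks, which is usually enough to clear the classic BTRFS out-of-space condition without rewriting the whole filesystem.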
But I've never lost any data. I'd say BTRFS is very stable.
-
Originally posted by pavlerson:
What? Did you read the text? There are several Linus quotes there. He says Linux is bloated, too complex, afraid of an error that cannot be evaluated anymore, etc. Does that sound high quality to you? Did you read what Theo de Raadt said about how low quality Linux is? Read the text.
But of course, you can continue to deny the quotes from Linus, Andrew Morton, Theo de Raadt, Con Kolivas, etc. over and over.
But once again: You cannot use that quote in relation to anything else than Linux itself. Linus says it in regards to Linux and not in a comparison with other systems or kernels.
What you seem to not understand here is that Linux could be ten times worse than what Linus says in that quote and still be better than the Solaris kernel. Or it could be ten thousand times better and still way worse than Solaris; there simply is no comparison to any other system in that quote.