But no mention of the other reasons why OpenSolaris emerged in the spring.
There were some serious bugs that could lead to losing data or even the total destruction of pools.
Are these fixed?
If I recall correctly, one of the problems was that, with a nearly full pool and not copious memory, deleting a ZFS dataset would basically hang your machine forever if deduplication was on.
I don't know where I got the idea, but I thought the kernel devs had switched to a model where there is only a one-week merge window and the rest of the development cycle goes into fixing bugs.
Maybe you are correct. Maybe the link I posted is lying? Maybe Andrew Morton also lied when he said that the code quality is bad?
... We need to slow down the merging, we need to review things more, we need people to test their f--king changes!" ...
Wow, this is so exciting. BZZZZZT, Slowaris is long dead. It's a cool name, and it was way rad in the '90s, but it's over; Linux ripped its ass open.
What? Don't you know Linux is a piece of shit compared to a real enterprise Unix like Solaris? Every serious sysadmin knows that Linux has severe problems with stability, scalability and what not. You want to see some links?
Thirty years ago, Linus Torvalds was a 21-year-old student at the University of Helsinki when he first released the Linux kernel. His announcement started, "I'm doing a (free) operating system (just a hobby, won't be big and professional…)". Three decades later, the top 500 supercomputers are all running Linux, as are over 70% of all smartphones. Linux is clearly both big and professional.
"The [Linux source code] tree breaks every day, and it's becoming an extremely non-fun environment to work in.
We need to slow down the merging, we need to review things more, we need people to test their f--king changes!"
"Citing an internal Intel Corp study that tracked kernel releases, Bottomley said Linux performance had dropped about two percentage points at every release, for a cumulative drop of about 12 per cent over the last ten releases. "Is this a problem?" he asked.
"We're getting bloated and huge. Yes, it's a problem," said Torvalds."
"I used to think [code quality] was in decline, and I think that I might think that it still is. I see so many regressions which we never fix.
...
it would help if people's patches were less buggy."
Linux sucks as a file server.
http://www.enterprisestorageforum.com/sans/features/article.php/3749926
"Go mkfs a 500 TB ext-3/4 or other Linux file system, fill it up with multiple streams of data, add/remove files for a few months with, say, 20 GB/sec of bandwidth from a single large SMP server and crash the system and fsck it and tell me how long it takes. Does the I/O performance stay consistent during that few months of adding and removing files? Does the file system perform well with 1 million files in a single directory and 100 million files in the file system?
My guess is the exercise would prove my point: Linux file systems have scaling issues that need to be addressed before 100 TB environments become commonplace. Addressing them now without rancor just might make Linux everything its proponents have hoped for."
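The "million files in a single directory" part of that challenge can at least be sketched at toy scale with a few lines of stdlib Python; `many_files_benchmark` and the file counts here are illustrative stand-ins, not the author's actual 100 TB / 100-million-file methodology, which would need a freshly made filesystem and far larger numbers.

```python
import os
import tempfile
import time

def many_files_benchmark(count):
    """Create `count` empty files in one directory and time a full listing.

    A scaled-down stand-in for the 'million files in a single directory'
    scenario quoted above; a real test would run against a dedicated,
    freshly made filesystem with far larger counts.
    """
    with tempfile.TemporaryDirectory() as d:
        for i in range(count):
            # touch an empty file; zero-padded names keep listings sortable
            open(os.path.join(d, f"f{i:07d}"), "w").close()
        start = time.perf_counter()
        names = os.listdir(d)
        elapsed = time.perf_counter() - start
        return len(names), elapsed

# A modest count; crank this up (and repeat over time) to stress a real filesystem.
n, secs = many_files_benchmark(10_000)
print(f"listed {n} files in {secs:.4f}s")
```

Repeating this while files are continuously added and removed, as the quote suggests, is what exposes directory-index and fragmentation behaviour rather than a single cold run.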
Linux has scaling problems. Sure, Linux runs on the Top500 supercomputers (which are just a bunch of PCs on a fast network) or on a 1024-core SGI Altix machine (which is just some blades on a fast switch), but that is not the same thing as running one large machine. Linux always runs on networks, not on a single large computer.
I have lots of similar links; do you want to see them? Even the Linux kernel devs say that the Linux code quality is bad. So it seems that Linux has code quality problems, is buggy, bloated and has scaling problems. Don't you agree?
Slackware is still on LILO, and a lot of significant distributions such as Arch, Fedora, Gentoo, Mandriva, openSUSE, and PCLinuxOS still default to GRUB. The biggest movement to GRUB2 is among Ubuntu derivatives.
One surprise there is Sabayon 5.3 and 5.4 (a Gentoo derivative); like the 'buntus, it uses GRUB2.
I think of Sabayon as almost a cross between Gentoo and Ubuntu, except that despite being based on Gentoo, it goes further out than base Gentoo does (let alone base Ubuntu); for example, Sabayon is using kernel 2.6.36, which neither Gentoo nor Ubuntu has adopted yet (except for test-case builds). However, like Ubuntu (and unlike Gentoo), it actually has a relatively friendly graphical installer.
Run on Oracle hardware, with an Oracle license, within the Oracle company?
I think it will not make the top ten on DistroWatch. Very possibly not even the top 100.
It doesn't just run on Oracle hardware. It runs on most modern x64 hardware, from laptops to servers. The main purpose of the "Express" release is to let developers ensure that their commercial applications certified to run on Solaris 10 run correctly on Solaris 11.
The company I work for recommends Solaris 10 as the default OS for one of its key products. They also release for Linux, but Solaris is the most common platform among our customers for that product. As a result, the devs are gonna create a whole set of VMs in VMware Lab Manager so that they can ensure everything works fine before Solaris 11 is released next year.
The other key audience for "Express" releases is sysadmins. It gives us time to pick up any new skills required for Solaris 11 before we have to put it into production.