Oracle Plans To Bring DTrace To Linux


  • #31
    Originally posted by bluetomato View Post
    Kebabbert - you seem to love the sound of your own voice. There's a saying "I don't need anger management, you just need to STFU". This applies very much in your case. If Solaris were a blow-up doll, you would be pimping yourself out day and night.
    Thanks for your constructive remark.




    Coming back to important facts - of which you don't have ANY in your posts - BtrFS is superior to ZFS due to pervasive usage of B+ trees.
    So you do have important facts? That BTRFS is superior because it uses B+ trees? Are you serious? Let me see..
    -BTRFS is the best!
    -Why?
    -Because it uses B+ trees!
    -So? Does that make Btrfs superior? Why?
    -Just because it is, here is the important fact: "BTRFS is superior because it uses B+ trees!"
    -Eh?

    Can you show me your important facts? Everyone knows that BTRFS uses B+ trees, but why is that better? Can you show us that important fact?




    They may have similar features and hence high-level users like yourself may think that ZFS is the original, since it came earlier. However, BtrFS (named for B+ TRee FS) has a newer and more original approach, which lets it deliver ZFS features in a simpler and more elegant manner. Makes BtrFS more flexible as well. e.g.
    BTRFS is a ZFS wannabe and ripoff, just like SystemTap is a DTrace wannabe and ripoff. BTRFS is a lesser copy, inferior to ZFS. There are lots of features that ZFS has that BTRFS lacks.

    You sound like "XXX is better, because it is written in C!! And YYY is written in Pascal!!!". Why do I care what language YYY is written in? As long as YYY is more stable, better, and has more features, I don't care if it is written in BASIC.

    If ZFS uses one data structure and BTRFS uses another, why would I care? As long as ZFS is safe, stable and superior, I do not care. Your argument is quite strange, actually: BTRFS is superior because it uses another data structure? Does that automatically make BTRFS better and more stable? No, it is up to the developers. If the developers are bad, then it does not matter which data structure or which programming language they use; the software will still be bad.

    A bad programmer can screw up things, even if he uses the best programming language and data structures. Whereas a good programmer will succeed even if he uses a bad language and bad data structures.




    Can you convert ext4 to ZFS on the fly? No. But you can convert ext4 to BTRFS on the fly. Why? Please do some research on why...
    The reason you cannot convert ext4 to ZFS is that no Solaris user runs that unsafe filesystem called ext4. I hope you know that ext4 is unsafe and might corrupt your data.

    Do you mean that because BTRFS has a specific feature that ZFS does not have, this single feature makes BTRFS better and more stable, etc? Are you serious? There are LOTS of features that ZFS has that BTRFS lacks. Do you want me to write down a list and ask "can BTRFS do this? No? Then it must be bad"?




    I registered just to tell you to either talk like an Engineer or STFU. You fill your empty posts with vast amounts of meandering prose to hide the complete lack of any content. Please don't waste any more electrons - there's a kid somewhere in a developing country who needs it more.
    I don't see you talking like an engineer. You provided no support for your claims; it was just propaganda with no backup. And you insulted me. I, on the other hand, provided links and support for my claims.

    Other than that, welcome to this forum!



    • #32
      BadKebab: BTRFS is superior in design - it is a significant step forward. That is why it is able to support all the important ZFS features, AND MORE; otherwise it would not be a step forward, would it? In that light, ext4 conversion is an example of how that design makes it forward-looking and inclusive. I didn't say it was a feature; I explicitly pointed it out as a helpful side-effect of the superior BTRFS design.

      By your logic, Solaris is a copy of all OS's that came before it because mainframes have always had the features offered by Solaris....

      ZFS is amazing, BTRFS will be even better

      Originally posted by kebabbert View Post
      ... There are lots of features that ZFS has that BTRFS lacks. ...
      Name five that matter?

      Originally posted by kebabbert View Post
      ...
      If ZFS uses one data structure and BTRFS uses another, why would I care? As long as ZFS is safe, stable and superior, I do not care. Your argument is quite strange, actually: BTRFS is superior because it uses another data structure? Does that automatically make BTRFS better and more stable? No, it is up to the developers. If the developers are bad, then it does not matter which data structure or which programming language they use; the software will still be bad.
      ...
      Yes, BTRFS is elegant and flexible due to this CORE design approach. ZFS is powerful in feature list and good in design, but BTRFS is powerful in feature list and elegant in design.

      Originally posted by kebabbert View Post
      A bad programmer can screw up things, even if he uses the best programming language and data structures. Whereas a good programmer will succeed even if he uses a bad language and bad data structures.
      You have obviously never done any systems programming, let alone programming file systems. Bad data structures will eventually kill any project. No discussions. This tells me that you don't understand systems design and hence are intellectually unable to understand my point about B+-trees in BTRFS.

      Originally posted by kebabbert View Post
      The reason you cannot convert ext4 to ZFS is that no Solaris user runs that unsafe filesystem called ext4. I hope you know that ext4 is unsafe and might corrupt your data.
      You do know that any file system can corrupt your data? Even ZFS? What protection does ZFS have against in-RAM corruption? ZERO. Maybe that's why it should be called ZeroFS???

      Originally posted by kebabbert View Post
      Do you mean that because BTRFS has a specific feature that ZFS does not have, this single feature makes BTRFS better and more stable, etc? Are you serious? There are LOTS of features that ZFS has that BTRFS lacks. Do you want me to write down a list and ask "can BTRFS do this? No? Then it must be bad"?
      ...
      You are a dumbass who refuses to listen. BTRFS has one single DESIGN aspect, i.e. PERVASIVE reuse of B+-trees, which makes it a better design. Converting ext4 is a side-effect that happens to show how elegant the design is - resulting in simplicity and therefore flexibility.

      People like you know next to nothing but attach yourselves to a successful product/entity like ZFS and act as if you invented it. It's a means of projection to compensate for a lack of brains in real life, and it's quite rewarding, esp. on the internet, where it's easier for you to be evasive and keep shifting the argument between various meaningless points using misdirection and red herrings (calling ext4 conversion a feature, incapacity to understand why B+ tree usage is so smooth, name-calling ext4 as bad when ZFS can also end up with corrupt data).



      • #33
        Originally posted by kebabbert View Post
        The SGI Altix server with thousands of cores is the same thing. Just look at the benchmarks: they are all embarrassingly parallel workloads, that is, cluster workloads. Not SMP workloads.

        Someone explains:
        "I tried running a nicely parallel shared memory workload (75% efficiency on 24 cores in a 4 socket opteron box) on a 64 core ScaleMP box with 8 2-socket boards linked by infiniband. Result: horrible. It might look like a shared memory, but access to off-board bits has huge latency."

        So, you are wrong. There are no big "SMP" Linux servers on the market today. Of course, there are lots of clusters running Linux, and Linux is very good at running clusters. But for one fat huge server, the biggest Linux server I have seen benchmarks for is 48 cores. There might be bigger. From the article above:
        I'll grant the point, although large SMP systems wouldn't make much sense outside of highly parallel loads, and ScaleMP is not SGI.
        Thus, either you reprogram your workload into a clustered workload, or you get an SMP server: a single fat 8-socket server, or "if you are lucky and can find one, a 16-socket Xeon box". But I don't know if there are any 16-socket Linux boxes today. I know that Oracle sells an 8-socket x86 server, so you could install Linux onto that, but I don't know how well Linux would scale on 8 sockets with 64 cores.
        The reasonable expectation: the same as it scales on a single-image cluster with 64 cores, if not better due to the reduced overhead.
        It's good enough for Oracle to certify RHEL, OEL, SuSE, and Oracle VM (Linux + Xen) on it, though I can't say that proves too much judging by Oracle's past moves.
        Ted Ts'o, the ext4 maintainer, recently explained that until now, 32 cores was considered exotic and expensive hardware to Linux developers, but that is now changing, which is why Ted is working on scaling up to as many as 32 cores. But Solaris/AIX/HP-UX/etc. kernel devs have had access to large servers with many CPUs for decades. Linux devs have only recently gotten access to 32 cores. Not 32 CPUs, but 32 cores. After a decade(?), Linux might handle 32 CPUs too.
        http://thunk.org/tytso/blog/2010/11/...-presentation/


        I think it is well known that Linus has a big ego and can be a prick sometimes. That Linus would name his own creations after himself is quite plausible, then? Stallman said "I am not naming GNU Stallmanix" - criticizing Linus for having a big ego.
        Whoosh.
        In case you didn't notice, the second example (git) means "an obnoxious person" in British English.
        As far as how Linux got the name, I'd refer you to "Just for Fun"--but you probably wouldn't read it anyhow.
        Most cases I know of where he has been criticised were jokes that landed wrong. There may be other cases.


        All this stuff you talk about - I don't know if people consider that tech new and revolutionary. Is it unique, and does everyone drool over it? No.

        Can you name something that everyone drools over and wants? For instance, ZFS, DTrace, etc. Something that is really hyped? I have never heard Solaris or IBM AIX gurus get excited over something that Linux has. Can you name something? But everyone is drooling over ZFS and DTrace, and either porting it, copying it or stealing it.
        First:
        Do you mean exclusively in the server world, or are you unaware of anything else existing?
        Because screen autorotate (first done on Linux) is now mandatory on mobile devices.
        Sun wrote DRM code for Solaris to get acceleration on Intel graphics from Mesa code written for Linux
        (i.e., Sun had to copy Linux features to port 3D acceleration from Linux to Solaris).

        Wake on Wireless LAN isn't something most 'gurus' would know about. It was a new feature in Linux 3.0 (which is a few months old), so few Linux users have heard about it. And perhaps if you're not running laptops, it isn't that interesting. But for a system administrator with mobile computers, or someone who wants action xyz taken when a certain change in the networking takes place (maybe an alert when the router goes offline, or a better wardriving setup), it may be much more valuable.

        Linux compatibility--It's proof that Linux has more applications. And Sun, and the BSDs, and SCO, and HP all wished they had the same application base.
        Similarly, Sun put out a kit to port network drivers from Linux to Solaris, but had to pull it since it couldn't meet the GPL requirements.
        Kernel in userland--Probably not interesting to the average user or sysadmin. It does mean it's easier to test an environment from a different OS, and so on.
        Speaking of which, does Solaris support chroot install from another OS?

        Also, Linux has a 4K stack instead of 128k, making for fewer OOM conditions. From what I've heard, Linux network code is faster, but that could be outdated: can Solaris in a virtual machine saturate a 10G ethernet connection?


        By the way:
        B+ trees make for faster searches. Of course, a faster search for a corrupt file does no good, so I wouldn't say it makes BtrFS 'better' just yet.
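        To illustrate the search-cost point, here is a toy sketch only - a real B+ tree fans out across fixed-size disk blocks, but the per-lookup cost is the same logarithmic idea as binary search over a sorted index:

```python
import bisect

# A sorted list of keys stands in for one level of a B+ tree index.
# Real B+ trees fan out across fixed-size disk blocks, but the lookup
# cost is the same idea: O(log n) steps instead of scanning every entry.
keys = sorted(range(0, 1_000_000, 2))  # even keys only

def contains(sorted_keys, key):
    """Binary search, roughly what a B+ tree node walk does per level."""
    i = bisect.bisect_left(sorted_keys, key)
    return i < len(sorted_keys) and sorted_keys[i] == key

print(contains(keys, 41_000))  # True: even key, present
print(contains(keys, 41_001))  # False: odd key, absent
```

        Half a million keys, and each lookup touches only about 19 of them - which is the whole appeal of a tree-shaped index, corrupt files notwithstanding.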



        • #34
          Originally posted by kebabbert View Post
          Thanks for your constructive remark.





          So you do have important facts? That BTRFS is superior because it uses B+ trees? Are you serious? Let me see..
          -BTRFS is the best!
          -Why?
          -Because it uses B+ trees!
          -So? Does that make Btrfs superior? Why?
          -Just because it is, here is the important fact: "BTRFS is superior because it uses B+ trees!"
          -Eh?
          And do you have any? It is widely known Oracle wants to kill old, crappy, legacy slowlaris, and this is one of the reasons why they're working on the better btrfs. The known fact is you're a dumb troll from OSNews. Btw, btrfs, unlike ZFS, is a 64-bit system, so there's lower overhead.



          • #35
            Originally posted by kebabbert View Post
            The reason you cannot convert ext4 to ZFS is that no Solaris user runs that unsafe filesystem called ext4. I hope you know that ext4 is unsafe and might corrupt your data.
            You knew zfs is also unsafe and can corrupt your data, but you ignored that FACT.



            • #36
              Originally posted by kraftman View Post
              And do you have any? It is widely known Oracle wants to kill old, crappy, legacy slowlaris
              Well, your claim contradicts official Oracle plans and roadmaps.
              http://www.computerworld.com/s/artic...bs_nose_at_IBM
              "The Solaris operating system is by far the best Unix technology available in the market," Ellison said. "That explains why more Oracle databases run on the Sun Sparc-Solaris platform than any other computer system."

              Regarding "slowlaris", I have shown you numerous benchmarks where Solaris is faster than Linux. In fact, what you call "Slowlaris" holds several world records today, beating everyone else. Here are several official benchmarks showing that Solaris is fastest in the world. Just look at some entries here:
              http://blogs.oracle.com/BestPerf/ent...enterprise2010

              Do you have any support for your claim that Oracle wants to kill Solaris, or is it the old FUD again? You confessed that you FUD sometimes, and I would not be surprised if this is just more of your old FUD. Can you show any links that show that Oracle wants to kill Solaris?



              and this is one of the reasons why they're working on better btrfs.
              Well, ZFS makes money for Oracle today. BTRFS does not make money. And BTRFS is really buggy and unstable. If Oracle were really serious about BTRFS, then Oracle would reassign lots of developers to BTRFS. Oracle would kill off ZFS and kill Solaris, and reassign all those Solaris developers. That has not happened.



              The known fact is you're a dumb troll from OSNews.
              Lots of insults from the Linux fans. Why is that? Is it because Linus Torvalds is a prick, calling OpenBSD developers "masturbating monkeys" because they focus on security? With such a master....



              Btw, btrfs, unlike ZFS, is a 64-bit system, so there's lower overhead.
              Holy shit. This can not be true?? Jesus. That is really a bad design choice from the BTRFS team. I really hope this is not true, because that will make BTRFS much worse than I ever imagined. Are you sure?



              • #37
                Originally posted by kraftman View Post
                You knew zfs is also unsafe and can corrupt your data, but you ignored that FACT.
                I am talking about design problems with ext4 that make it vulnerable to data corruption, because ext4 has no protection against data corruption. Ext4, by design, is not able to detect or repair data corruption. Computer science researchers have proved this, with research at universities.

                Regarding ZFS: ZFS, by design, is able to detect and repair data corruption. Thus, the design of ZFS is safe. Of course there are bugs in ZFS, which means that people have had problems even when running ZFS. When the last bugs are gone, ZFS will be a completely safe filesystem. (That will probably never happen, because every complex piece of software has bugs no matter how hard you try to find them.)

                The thing is, ext4 does have bugs, and is also not safe by design - which makes ext4 vulnerable to data corruption. ZFS is safe by design - which has been proven by researchers.



                • #38
                  http://article.gmane.org/gmane.comp....onest+timeline

                  Inside of Oracle, we've decided to make btrfs the default filesystem for
                  Oracle Linux. This is going into beta now and we'll increase our usage
                  of btrfs in production over the next four to six months. This is a
                  really big step forward, but it doesn't cover btrfs in database
                  workloads (since we recommend asm for that outside of the filesystem).

                  What this means is that we absolutely cannot move forward without btrfsck.
                  RH, Fujitsu, SUSE and others have spent a huge amount of time on the filesystem
                  and it is clearly time to start putting it into customer hands.



                  • #39
                    Originally posted by kebabbert View Post
                    I am talking about design problems with ext4 that make it vulnerable to data corruption, because ext4 has no protection against data corruption. Ext4, by design, is not able to detect or repair data corruption. Computer science researchers have proved this, with research at universities.

                    Regarding ZFS: ZFS, by design, is able to detect and repair data corruption. Thus, the design of ZFS is safe. Of course there are bugs in ZFS, which means that people have had problems even when running ZFS. When the last bugs are gone, ZFS will be a completely safe filesystem. (That will probably never happen, because every complex piece of software has bugs no matter how hard you try to find them.)

                    The thing is, ext4 does have bugs, and is also not safe by design - which makes ext4 vulnerable to data corruption. ZFS is safe by design - which has been proven by researchers.
                    This is meaningless. ZFS is not safe by design, because in scenarios where there are long pauses between fs checks it can be corrupted. I showed you this once. You can only say ZFS is safer by default. There are ways that can make Ext4 nearly completely safe. Those 'computer science researchers' didn't prove this; they showed it has some mechanisms that can help in fighting silent data corruption.



                    • #40
                      Originally posted by kebabbert View Post
                      I am talking about design problems with ext4 that make it vulnerable to data corruption, because ext4 has no protection against data corruption. Ext4, by design, is not able to detect or repair data corruption. Computer science researchers have proved this, with research at universities.

                      Regarding ZFS: ZFS, by design, is able to detect and repair data corruption. Thus, the design of ZFS is safe. Of course there are bugs in ZFS, which means that people have had problems even when running ZFS. When the last bugs are gone, ZFS will be a completely safe filesystem. (That will probably never happen, because every complex piece of software has bugs no matter how hard you try to find them.)

                      The thing is, ext4 does have bugs, and is also not safe by design - which makes ext4 vulnerable to data corruption. ZFS is safe by design - which has been proven by researchers.
                      ZFS will always be able to detect data corruption (I believe), but repair will only happen if you are using one of the many RAID-Z configurations. Admittedly though if you're using ZFS and NOT running with RAID-Z then you really shouldn't be managing servers.

                      Comparing ZFS to EXT4 though is rather unfair. EXT4 was as you say never designed for that protection. A more correct comparison would be to compare UFS to EXT4, as they both serve the same purpose. And before you say that you shouldn't ever use UFS, there are times where the overhead of ZFS shouldn't be bothered with, say when you are running many virtual solaris systems with all of their storage already on ZFS. There's no reason to put ZFS inside of ZFS.



                      • #41
                        Originally posted by kebabbert View Post
                        Holy shit. This can not be true?? Jesus. That is really a bad design choice from the BTRFS team. I really hope this is not true, because that will make BTRFS much worse than I ever imagined. Are you sure?
                        Well, I don't care about PR bull and crappy, biased benchmarks against very old Linux systems. A system that has 30% slower binaries at the highest optimization level simply can't be fast, can it? The reality shows slowlaris isn't interesting for Oracle; they keep it because of old Sun (thankfully dead) customers. As for btrfs being a 64-bit system, that's true, and that's a very good design choice.



                        • #42
                          Yes, I know that BTRFS will be default filesystem for Oracle Linux. That is perfectly in order, because Linux is inferior to Solaris, and BTRFS is inferior to ZFS.

                          Larry Ellison said that Linux is for the low end and Solaris is for the high end. I can't find that link now, but if you want, I can google for it. He also said:
                          http://www.pcworld.com/article/21256..._to_sparc.html
                          Ellison now calls Solaris "the leading OS on the planet,"
                          It makes sense to use ZFS for high-end Solaris and BTRFS for low-end Linux. Of course, if Larry were really serious about making Linux as good as Solaris, then Larry would relicense ZFS so Linux could use it. But Larry is not relicensing ZFS - why is that? ZFS is better than BTRFS, and ZFS is mature. Larry wants to keep Solaris for the high end, and Linux for the low end.

                          But I am surprised that DTrace is coming to Linux, because DTrace is much better than anything Linux has. Does Larry really want Linux to be as good as Solaris?

                          There is a huge technical post by a famous DTrace contributor where he compares SystemTap to DTrace. Among other things, he says that SystemTap might crash the server, which makes it unusable for production servers, whereas DTrace is safe and can be used on production servers. He goes on and on, and it seems that SystemTap has missed some of the main points of having a dynamic instrumentation tool like DTrace.
                          http://dtrace.org/blogs/brendan/2011...ing-systemtap/

                          But if DTrace comes to Linux, that is good for Linux. Here is the main architect of DTrace, trying out Linux DTrace.
                          http://dtrace.org/blogs/ahl/2011/10/...is-not-dtrace/



                          • #43
                            Originally posted by simcop2387 View Post
                            ZFS will always be able to detect data corruption (I believe), but repair will only happen if you are using one of the many RAID-Z configurations. Admittedly though if you're using ZFS and NOT running with RAID-Z then you really shouldn't be managing servers.
                            It is not really correct that you need RAID for ZFS to be able to repair data corruption. You can use a single disk and still get repair. You have to specify "copies=2", which stores all data twice on the single disk. This halves the storage capacity of the disk. Someone said that no filesystem (except ZFS) can do repair on a single disk.
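                            A toy sketch of the idea behind "copies=2" - a hypothetical, simplified model, not ZFS's real ditto-block layout: keep two checksummed copies of every block on the one disk, and let a read heal a bad copy from the good one:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class SingleDiskStore:
    """Toy model of 'copies=2': every block is stored twice with one
    expected checksum, so one good copy can repair the other.
    This is an illustration only, not ZFS's real on-disk layout."""
    def __init__(self):
        self.blocks = {}  # name -> [copy0, copy1] on the same "disk"
        self.sums = {}    # name -> checksum taken at write time

    def write(self, name, data):
        self.blocks[name] = [bytes(data), bytes(data)]
        self.sums[name] = checksum(data)

    def read(self, name):
        for copy in self.blocks[name]:
            if checksum(copy) == self.sums[name]:
                # Self-heal: overwrite any bad copy with the good one.
                self.blocks[name] = [bytes(copy), bytes(copy)]
                return copy
        raise IOError("all copies corrupt: " + name)

store = SingleDiskStore()
store.write("file", b"important data")
store.blocks["file"][0] = b"garbage!"  # simulate silent on-disk corruption
print(store.read("file"))  # b'important data', and copy 0 is repaired
```

                            Of course this halves capacity, exactly as described above, and if both copies rot you are still out of luck.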


                            Comparing ZFS to EXT4 though is rather unfair. EXT4 was as you say never designed for that protection. A more correct comparison would be to compare UFS to EXT4, as they both serve the same purpose. And before you say that you shouldn't ever use UFS, there are times where the overhead of ZFS shouldn't be bothered with, say when you are running many virtual solaris systems with all of their storage already on ZFS. There's no reason to put ZFS inside of ZFS.
                            Absolutely, I agree that it is unfair to compare ZFS to ext4. ZFS is modern and detects and repairs data corruption; ext4 has a legacy design and should be compared to the old UFS. In that case, I don't know which is better. I would not be surprised if ext4 is better.



                            • #44
                              Originally posted by kraftman View Post
                              This is meaningless. ZFS is not safe by design, because in scenarios where there are long pauses between fs checks it can be corrupted. I showed you this once.
                              You did? I missed that. Can you please post it again?

                              Regarding long pauses between fs checks: no, that does not matter. Even if you don't do a ZFS fs check for a year (as I have), ZFS still always detects corruption. Detection is an ongoing process that never stops. If ZFS detects corruption, ZFS will automatically repair it (if you have redundancy such as RAID or "copies=2") when you request the data.
                              https://blogs.oracle.com/elowe/entry...ves_the_day_ta
                              "I've been running over a week now with a faulty setup which is still corrupting data on its way to the disk, and have yet to see a problem with my data, since ZFS handily detects and corrects these errors on the fly."

                              This guy has a weak power supply, and ZFS detected corruption in his setup almost immediately. The filesystem he used earlier did not detect these problems: he ran for years without anything telling him that data corruption was occurring all the time. But ZFS notices the slightest data corruption, tells the user, and automatically repairs the corruption.

                              But every once in a while, you should traverse everything on disk and check everything, yes. Do a fs check.





                              Originally posted by kraftman View Post
                              You can only say ZFS is safer by default. There are ways that can make Ext4 nearly completely safe. Those 'computer science researchers' didn't prove this; they showed it has some mechanisms that can help in fighting silent data corruption.
                              If you make ext4 "nearly completely safe", then it has to be heavily modified, and it will end up very similar to the ZFS design in the data-detection aspect. And if there are ways to make ext4 nearly completely safe, why don't they do it?

                              The computer science researchers did prove that ZFS detected every artificial error they injected into the disks. ZFS also repaired all errors when there was redundancy (for instance, RAID). Other computer science researchers also injected artificial errors into disks with common Linux filesystems, and the Linux filesystems did not even detect those errors. How can Linux filesystems repair errors they cannot even detect? The first step is detection.

                              To detect errors, ZFS knows about everything, from RAM down to disk - the whole chain. ZFS has control over everything: ZFS is RAID, filesystem, etc. in one piece of software. This gives ZFS the ability to do end-to-end checksums: "Is the data in RAM identical to what was stored on disk?" This comparison requires ZFS to have control over everything.

                              Have you played that game as a kid? You whisper a word to one child, he whispers it to the next, etc. The last child says the word out loud and everyone giggles, because the starting word and the last word are never the same. You always need to compare the start to the end to be sure the data is identical. ZFS does this: it compares RAM to disk. End-to-end checksums. This is possible because ZFS is one piece of code doing everything.
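                              The whisper-game point can be sketched in code. This is a minimal sketch assuming a simple detached checksum, not ZFS's real block-pointer checksum tree: remember the checksum of what was in RAM at write time, and compare it against whatever comes back from disk at read time:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The start of the chain: what the application handed over in RAM.
in_ram = b"hello, end-to-end integrity"
expected = checksum(in_ram)  # remembered by the filesystem at write time

# The end of the chain: what came back from the storage stack,
# with one flipped bit standing in for silent corruption on the way.
from_disk = bytearray(in_ram)
from_disk[3] ^= 0x01

corrupted = checksum(bytes(from_disk)) != expected
print(corrupted)  # True: the mismatch is detected at read time
```

                              No intermediate layer needs to be trusted: only the start (RAM) and the end (what the read returned) are compared.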

                              And guess what? Linux has several different layers: one RAID layer, one filesystem layer, etc. None of these layers knows about the other layers. There is no one to do end-to-end checksums. No one can compare RAM to disk, because there are several independent Linux layers.

                              Maybe you heard about Linux kernel developers mocking ZFS as a "rampant layering violation"? The reason ZFS can do end-to-end checksums, and Linux cannot, is precisely that ZFS violates layers! That is the main point of ZFS. And for this, Linux kernel developers, not understanding anything about ZFS, said ZFS was a piece of shit because it violates layers. Well, because ZFS violates layers - and Linux does not - ZFS is superior. Again the Linux kernel developers show their ignorance. And since Linux kernel developers themselves complain that the Linux code has low quality - maybe there is a correlation between ignorance and low code quality?

                              ZFS does great things because it violates layers. The ZFS dev team realized the need to violate layers, and discusses this issue in several interviews and documents. Any attempt to clone ZFS will need to violate layers, too. Just like BTRFS - which also violates layers.





                              Question 1) How do you know what computer science researchers say about ZFS? Have you studied the latest research, or what? Which research papers have you read?





                              Question 2) You have many times said "It is widely known Oracle wants to kill old, crappy, legacy slowlaris". Can you provide links, or are you just trying to start evil rumours and spread FUD about Solaris? (Which you have confessed to doing earlier.)





                              I have never seen that Larry Ellison wants to kill Solaris. He says it is the best Unix out there.

                              On the other hand, IBM has officially said they are going to kill off AIX, the Unix version from IBM. This is not an evil rumour nor FUD from me; here is a link quoting IBM executives. Thus, I am not spreading FUD; I speak the truth and always back up my claims. There is substance in my claims. Check my sources yourself:
                              http://www.zdnet.co.uk/news/applicat...e-aix-2129537/
                              "The day is approaching when Linux is likely to replace IBM's version of Unix, the company's top software executive said, an indication that the upstart operating system's stature is rising within Big Blue....Asked whether IBM's eventual goal is to replace AIX with Linux, Mills responded, "It's fairly obvious we're fine with that idea...It's the logical successor."





                              Well, I don't care about PR bull and crappy, biased benchmarks against very old Linux systems. A system whose binaries are 30% slower even at the highest optimization level simply can't be fast, can it? The reality shows slowlaris isn't interesting for Oracle, but they keep it because of old Sun (thankfully dead) customers. As for btrfs being a 64-bit system, that's true, and it's a very good design choice.
                              It does not matter if a filesystem is a bit slower, as long as it is safe. Would you prefer a slower safe filesystem, or a fast filesystem that might corrupt your data?

                              And regarding performance of ZFS vs BTRFS: well, ZFS scales better than BTRFS, and ZFS is faster once you start to use many disks. Here is a benchmark of just 16 SSDs on BTRFS vs ZFS, and we see that out of the box, ZFS is faster than BTRFS. Sure, BTRFS might be faster on a single disk, but Solaris has always targeted big servers and big scalability; it never really cared about a single disk or a quad core. Solaris is for many disks, many CPUs, etc. The more disks you have, the better ZFS does, and the worse BTRFS does. Same with CPUs.
                              http://www.mail-archive.com/linux-bt.../msg05647.html

                              Regarding BTRFS being 64-bit, that is actually quite bad. I don't have time to redo the calculations now, but this is the reasoning from the ZFS developers. They reasoned something like this:

                              Today, we are storing petabytes of data: CERN stores petabytes, as do Facebook, Google, etc. Storing data at that scale requires something like 2^62 bits. In a couple of years all that data will have doubled, requiring 2^63 bits, and a few years later 2^64 bits. After that, a 64-bit filesystem will not do; you need a filesystem that can address more than 2^64. Should ZFS then have been, say, a 72-bit filesystem, only to be bumped again a few years later? Instead the ZFS developers said, "well, let us make it 128 bits, and then we never have to worry again". Thus ZFS is 128-bit and can handle 2^128 bits, which will never need to be increased. If you covered the entire earth, land and sea, with piles of 4TB disks 10 meters high, that would still only be something like 2^100 bits of storage; you would need something like 2^28 earths like that to reach 2^128 bits. Jeff Bonwick even argued you could not fill a 128-bit pool without boiling the oceans. Thus a 128-bit filesystem is all that humankind will ever need. The laws of physics say so.
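                              The orders of magnitude above can be sanity-checked with simple arithmetic. The disk-farm figures below (drive density, stack depth, earth's surface area) are my own illustrative assumptions, not the ZFS team's published calculation:

```python
import math

# 64-bit limit: 2**64 bytes = 16 EiB of addressable storage.
limit64 = 2**64
assert limit64 == 16 * 2**60

# Hundreds of petabytes (CERN/Facebook scale) is around 2**59 bytes,
# so only a handful of capacity doublings remain before 64 bits runs out.
cern_scale = 500 * 2**50                      # ~500 PiB, an assumed figure
print(round(math.log2(limit64 / cern_scale)))  # prints 5 (doublings left)

# By contrast: cover earth's land and sea (~5.1e14 m^2) with 4 TB drives,
# one per square decimetre, stacked 1000 deep -- a deliberately absurd farm.
disks = int(5.1e14 * 100) * 1000              # ~5.1e19 drives
farm_bytes = disks * 4 * 10**12
print(round(math.log2(farm_bytes)))           # prints 107: ~2**107 bytes
assert farm_bytes < 2**128                    # still far below the 128-bit limit
print(round(math.log2(2**128 // farm_bytes))) # prints 21: ~2**21 such earths short
```

                              Whatever the exact figures, the shape of the argument holds: 64 bits is a few doublings away from today's biggest archives, while 128 bits is millions of absurd disk-farms away.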

                              Therefore, some years from now, they will need to redesign BTRFS to address more than 64 bits. That is short-sighted and not future proof. This again shows why BTRFS made some bad design choices and why BTRFS is inferior to ZFS. Heck, even one Red Hat developer said BTRFS is "broken by design".



                              • #45
                                Originally posted by kebabbert View Post
                                You did? I missed that. Can you please post it again?
                                No, you didn't miss it and no, I won't post it once again.

                                Regarding whether there are long pauses between fs checks: no, that does not matter. Even if you go a year without running a ZFS fs check (as I have), ZFS still detects corruption; detection is an ongoing process that never stops. If ZFS detects corruption, it will automatically repair it (if you have redundancy such as raid or "copies=2") when you request the data.
                                That's the problem. You've got to have copies to repair the file system. The same can be done with Ext4. There are also bugs in ZFS, so your data is not completely safe with it either.

                                If you make ext4 "nearly completely safe", then it has to be heavily modified, and it will end up very similar to the ZFS design in the data-detection aspect. And if there are ways to make ext4 nearly completely safe, why don't they do it?
                                Those who want that kind of safety just use a proper system configuration and copies (raid). It's not the file system itself that has to care about data safety. The same goes for detection and recovery.
                                And guess what? Linux has several different layers: one raid layer, one filesystem layer, etc. These layers do not know about each other, so nobody does end-to-end checksums. Nobody can compare RAM to disk, because there are several independent Linux layers.
                                That's not true, and I showed you this one, too. Patches were even sent by Oracle, btw.
                                Question 1) How do you know what computer science researchers say about ZFS? Have you studied the latest research? Which research papers have you read?
                                The one you gave.

                                Question 2) You have many times said "It is widely known Oracle wants to kill old, crappy, legacy slowlaris". Can you provide links, or are you just trying to start evil rumours and spread FUD about Solaris (which you have confessed to earlier)?
                                You spread FUD about Linux, but I say true things about slowlaris. Links are meaningless in this case, which is obvious, but market share, popularity and Oracle's actions clearly show slowlaris is going to end.

                                I have never seen anything suggesting Larry Ellison wants to kill Solaris. He says it is the best Unix out there.
                                He says many stupid things. How many Unixes are out there today?

                                And regarding performance of ZFS vs BTRFS: well, ZFS scales better than BTRFS, and ZFS is faster once you start to use many disks. Here is a benchmark of just 16 SSDs on BTRFS vs ZFS, and we see that out of the box, ZFS is faster than BTRFS. Sure, BTRFS might be faster on a single disk, but Solaris has always targeted big servers and big scalability; it never really cared about a single disk or a quad core. Solaris is for many disks, many CPUs, etc. The more disks you have, the better ZFS does, and the worse BTRFS does. Same with CPUs.
                                http://www.mail-archive.com/linux-bt.../msg05647.html
                                Such old benchmarks against an unstable btrfs version don't matter at all.

                                Regarding BTRFS being 64-bit, that is actually quite bad. I don't have time to redo the calculations now, but this is the reasoning from the ZFS developers. They reasoned something like this:

                                Today, we are storing petabytes of data: CERN stores petabytes, as do Facebook, Google, etc. Storing data at that scale requires something like 2^62 bits. In a couple of years all that data will have doubled, requiring 2^63 bits, and a few years later 2^64 bits. After that, a 64-bit filesystem will not do; you need a filesystem that can address more than 2^64. Should ZFS then have been, say, a 72-bit filesystem, only to be bumped again a few years later? Instead the ZFS developers said, "well, let us make it 128 bits, and then we never have to worry again". Thus ZFS is 128-bit and can handle 2^128 bits, which will never need to be increased. If you covered the entire earth, land and sea, with piles of 4TB disks 10 meters high, that would still only be something like 2^100 bits of storage; you would need something like 2^28 earths like that to reach 2^128 bits. Jeff Bonwick even argued you could not fill a 128-bit pool without boiling the oceans. Thus a 128-bit filesystem is all that humankind will ever need. The laws of physics say so.
                                64-bit is simply enough and will be enough for a long time. CERN, Google and Facebook run Linux, not slowlaris, so 64-bit is and will be enough (nothing suggests they plan to replace Linux with a system whose binaries are 30% slower).

                                Therefore, some years from now, they will need to redesign BTRFS to address more than 64 bits. That is short-sighted and not future proof. This again shows why BTRFS made some bad design choices and why BTRFS is inferior to ZFS. Heck, even one Red Hat developer said BTRFS is "broken by design".
                                No, that's simply not true, and this sounds like Sun's FUD. The Red Hat employee was mistaken, and if you read the discussion you should be aware of this.

