Oracle Plans To Bring DTrace To Linux


  • #46
    Originally posted by kraftman View Post
    No, you didn't miss it and no, I won't post it once again.
    You have confessed that you FUD sometimes, so you have a track record of making things up.

    How can I know you are telling the truth now? If you cannot post a simple link again, then it looks like FUD. Don't you think?

    I know you have shown links earlier, but none of them were relevant. In one case you showed a link comparing an old 800 MHz SPARC Solaris server against a dual-core 2.4 GHz Intel Linux machine, and the link claimed that Linux is faster than Solaris. You thought that link was good and relevant; I say it was not. If you install Linux and Solaris on the same hardware, that is interesting and relevant when we discuss the performance of Solaris vs Linux. So your links are not good. This link you showed earlier: can you repost it, or is it as meaningless as your earlier links?



    That's the problem. You've got to have copies to repair the file system.
    What is the problem? ZFS needs redundancy (RAID or copies=2) to repair corrupted data. Every filesystem works like that; no filesystem can repair corrupted data without redundancy. If you use a single disk with ext4, then ext4 cannot repair corrupted data, for the following reasons:
    1) ext4 has no "copies=2" setting; ext4 needs another disk to be able to fetch the missing data. ext4 must have two disks or more, whereas ZFS can do this on a single disk.
    2) ext4 has no way of detecting corrupted data, let alone repairing it: it has no checksums on data blocks for detecting corruption.

    So I don't see it as a problem that ZFS can repair corrupted data whereas ext4 cannot. What is the problem with ZFS being able to repair corrupted data? Can you explain again?
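    The repair mechanism described above (checksum every block, keep redundant copies, rewrite any copy whose checksum fails) can be sketched in a few lines of Python. This is a conceptual toy, not ZFS's actual implementation; the `store` and `read` helpers are invented for illustration:

    ```python
    import hashlib

    def store(data: bytes, copies: int = 2):
        """Store data with a checksum and redundant copies (like ZFS copies=2)."""
        checksum = hashlib.sha256(data).hexdigest()
        return {"checksum": checksum, "copies": [bytearray(data) for _ in range(copies)]}

    def read(block):
        """Return the first copy whose checksum matches; heal bad copies from it."""
        for copy in block["copies"]:
            if hashlib.sha256(bytes(copy)).hexdigest() == block["checksum"]:
                good = bytes(copy)
                # Self-heal: overwrite every copy with the verified-good data.
                for j in range(len(block["copies"])):
                    block["copies"][j] = bytearray(good)
                return good
        raise IOError("all copies corrupted: unrecoverable")

    block = store(b"important data")
    block["copies"][0][0] ^= 0xFF            # silently corrupt the first copy
    assert read(block) == b"important data"  # detected via checksum, healed from copy 2
    ```

    On a real pool the equivalent knobs would be something like `zfs set copies=2 pool/fs` and `zpool scrub pool`, which walks all blocks and repairs any whose checksum fails.
    
    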



    The same can be done with Ext4.
    Yes, if you heavily modify ext4, then it could detect and repair corrupted data. It would then have a mechanism similar to ZFS's.



    There are also bugs in zfs, so your data is not completely safe with it.
    This is true; there are still bugs in ZFS. On 31 October, ZFS turned 10 years old, and it still has bugs. It takes decades to find all the bugs in a filesystem. When BTRFS reaches v1.0 in a couple of years, it will take another ten years before most of its bugs have been ironed out. There have been cases where people lost data with ZFS, yes. So your data is not completely safe with ZFS, no.

    Many people believe that ZFS is the safest alternative on the market today. Not completely safe, but the safest. ext4 has no data-corruption detection at all, so it is not safe at all. I fail to see how ext4 could be "near completely safe". Can you describe how, or are you FUDing again?



    Those who want such safety just use a proper system configuration and copies (RAID). It's not that the same file system has to care about data safety. The same goes for detection and recovery.
    Do you believe that if you use hardware RAID, then you get a safe storage solution? That is not true, as the links I have shown you earlier demonstrate; I can repost them if you wish. Hardware RAID has no mechanism to detect or repair corrupted data. It does parity calculations to rebuild the array when a disk has crashed, but that is not a checksum against data corruption. Hardware RAID is not built to handle silent data corruption.
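    The difference between parity and checksums can be illustrated with a toy RAID-5-style example in Python. This is a sketch of the principle only, not how any real controller works:

    ```python
    from functools import reduce

    def xor(blocks):
        """Bytewise XOR of equal-length byte blocks (RAID-5-style parity)."""
        return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

    disks = [b"\x01\x02", b"\x03\x04", b"\x05\x06"]
    parity = xor(disks)

    # A *known-failed* disk can be rebuilt from the survivors plus parity:
    lost = 1
    rebuilt = xor([d for i, d in enumerate(disks) if i != lost] + [parity])
    assert rebuilt == b"\x03\x04"

    # *Silent* corruption is another matter: the parity only says "mismatch",
    # it cannot tell which of the N disks holds the bad block.
    disks[0] = b"\xff\x02"       # bits flipped, no I/O error reported
    assert xor(disks) != parity  # detected only if the controller verifies reads
    ```

    A per-block checksum, as ZFS uses, resolves the ambiguity: the disk whose block fails its checksum is the bad one, and the others plus parity can rebuild it.
    
    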



    That's not true, and I showed you this one, too. Patches were even sent by Oracle, btw.
    I suspect you mean the Oracle IDF patches? IDF is not safe. If you look at the spec sheets of high-end Fibre Channel disks that use IDF techniques, those FC disks still have problems with data corruption. Have you looked at those spec sheets? I can post them if you wish. They say something like "1 error in every 10^16 bits". How could read errors happen if IDF were completely safe?
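    A back-of-the-envelope check shows what a spec-sheet rate like "1 error in 10^16 bits" means in practice. The 100 TB workload below is an assumed example, not a figure from the thread:

    ```python
    # Spec-sheet figure: one unrecoverable error per 10^16 bits read
    # (typical of enterprise FC/SAS drive datasheets).
    uber = 1 / 1e16
    bits_read = 100e12 * 8          # reading 100 TB of data = 8e14 bits
    expected_errors = uber * bits_read
    print(expected_errors)          # 0.08 -> roughly an 8% chance of one bad bit
    ```

    At petabyte scale these odds compound: reading a petabyte ten times over makes an unrecoverable error more likely than not, which is why end-to-end checksumming matters for large archives.
    
    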



    The one you gave.
    I did not understand this. Can you explain again? I know I posted links and you denied those links existed. I posted the links again and again, and all you said was "post the research papers you talk about", which I did, again and again, in several posts. And still you denied the research papers existed. So, are you now claiming that those research papers I posted do exist? I don't understand. It seems you accept those papers now? Or are you still denying the research I showed you?



    You spread FUD about Linux,
    If I spread FUD about Linux, can you link to one such FUD post by me? As you know, I only quote Linus T, Andrew M, etc., when they say that Linux is buggy and bloated. Those are not lies or FUD; they are true statements. Or do you deny that the Linux kernel devs said this? Do you want me to repost those interviews and links? I can do that if you wish.



    but I say true things about slowlaris. Links are meaningless in this case, which is obvious, but market share, popularity and Oracle's actions clearly show slowlaris is going to end.
    If you make a controversial claim, that Oracle wants to kill Solaris, then you should back it up somehow. If people do not agree with what you say, you should be able to reason and argue why you are correct. You should not just say "because I say so"; that is FUD.

    Larry has said officially that he is putting far more resources into Solaris than Sun ever did. There will be more developers working on Solaris and the SPARC CPU than Sun ever had.
    http://news.cnet.com/8301-13505_3-10...in;contentBody
    In other words, Larry says he will bet heavily on Solaris. You say he is going to kill Solaris. This shows that you are not correct; Solaris will not be killed. Do you agree that Larry is not interested in killing Solaris? Why would Larry say "…Solaris is overwhelmingly the best open systems operating system on the planet." if he wanted to kill it? No, you are not correct on this. I have shown you numerous links where Larry praises Solaris, and still you say that Larry is going to kill it. Why? Isn't what you do pure FUD and trolling?
    http://cuddletech.com/blog/?p=279



    He says many stupid things. How many Unixes are out there today?
    Yes, Larry has said stupid things. But he is also one of the richest men on earth, so not everything he says is stupid. You, on the other hand, lie and FUD a lot. You have even confessed that you FUD. And some of the things you have said are... not the brightest things men have said.



    Such old benchmarks with an unstable btrfs version don't matter at all.
    Yes, they matter, because you said that 64-bit BTRFS is faster than ZFS. This link shows that you are not correct: BTRFS is not faster than ZFS. I have proved you wrong. Even though BTRFS uses 64 bits, it scales badly and is slow when using many disks. BTRFS might be fast on a single disk, because the developers mainly target desktops, not big servers.



    64bit is simply enough and will be enough for a long time. CERN, Google and Facebook run Linux, not slowlaris, so 64bit is and will be enough (nothing suggests they plan to replace Linux with a system that has 30% slower binaries).
    Yes, 64 bits will suffice for many years still, but 64 bits is not future-proof. CERN today stores 4 petabytes on ZFS for long-term storage at Tier 1 and Tier 2. In a couple of years CERN will pass what 64 bits can hold, and will need more than 64 bits. Maybe that is one of the reasons CERN prefers ZFS?
    https://blogs.oracle.com/simons/entr..._science_means
    "Having conducted testing and analysis of ZFS, it is felt that the combination of ZFS and Solaris solves the critical data integrity issues that have been seen with other approaches. They feel the problem has been solved completely with the use of this technology. There is currently about one Petabyte of Thumper storage deployed across Tier1 and Tier2 sites. That number is expected to rise to approximately four Petabytes by the end of this summer."



    No, that's simply not true and this sounds like sun's FUD. Red Hat employer was mistaken and if you read discussion you should be aware of this.
    Well, it is true that CERN will need more than 64-bit filesystems when their large LHC collider, which cost many billions, becomes active. When the LHC starts operating in a couple of years, it will generate HUGE amounts of data, according to CERN; there will be many petabytes.

    If CERN uses 64 bits, then CERN needs to split the data so that no pool goes beyond 2^64 bits. So CERN will have one data pool of 2^64 bits, another data pool of 2^64 bits, and so on. Thus there will be several data pools, and it will be difficult to examine data spread across different pools. It is better to use one single data pool that holds all the data, because then CERN can run all its calculations without having to split them up. Thus it is true that BTRFS will not be able to handle such big scenarios, which means BTRFS would need to be redesigned to use more than 64 bits. So yes, what I say is true.

    Regarding the Red Hat developer: he said that BTRFS has some issues. That is true, and I do not lie or FUD about this. Do you want to see the post where he writes this?
    Last edited by kebabbert; 11-06-2011, 06:05 PM.



    • #47
      Kebab: ZFS cannot deal with in-RAM corruption - by design. Deal with it.

      And you use Larry Ellison's quote "Solaris is overwhelmingly the best open systems operating system on the planet" for evaluating Solaris' technical status. I'm sure you don't have a day job... and I don't want to know the details of your night job.
      Last edited by bluetomato; 11-10-2011, 11:28 AM.



      • #48
        Originally posted by bluetomato View Post
        Kebab: ZFS cannot deal with in-RAM corruption - by design. Deal with it.
        Yes, I know that ZFS does not repair data corruption in RAM, nor in the CPU's registers, nor in the graphics card, nor on the bus, etc. So? Do you expect a FILE system to repair data in the graphics card's RAM? Or in the CPU's registers? Or in RAM? What kind of filesystem could repair data corruption everywhere in a server? No such filesystem will ever exist. It is the responsibility of the filesystem to correctly handle data stored on disk, the responsibility of RAM to handle RAM errors, the responsibility of the GPU to handle GPU errors, and so on.

        I hope you don't believe that any Linux filesystem such as BTRFS corrects errors in the GPU, or in RAM? Is there ANY Linux filesystem that correctly saves data to disk? Research shows that the most common Linux filesystems are not safe. There is no research on BTRFS yet, but Kraftman says it is in its beta phase, so BTRFS should be ruled out. (I showed benchmarks of ZFS vs BTRFS on SSDs, and Kraftman ruled that benchmark out because BTRFS is not done yet, nor stable.)



        And you use Larry Ellison's quote "Solaris is overwhelmingly the best open systems operating system on the planet" for evaluating Solaris' technical status.
        Yes, I don't deny it. It is dark outside right now. I am upgrading to the latest Solaris 11 right now. Etc. What was your point with such a declarative statement?

        MY point is that I am quoting Larry Ellison on this because "some people" say that Larry wants to kill off Solaris because it is slow ("slowlaris"). Well, it seems that Larry thinks Solaris is the best. And I have shown benchmarks where Solaris holds several world records today. So Larry does bet on Solaris, and Solaris is the fastest in the world on some benchmarks. So I have disproved "some people", yes?



        Am sure you don't have a day job.. and don't wanna know details of your night job.
        Oh, I work in finance, at a large, world-famous company you have heard of. I have a double degree: a Masters in computer science and another in math. Yes, I do have a day job. I don't have a night job, though. Do you work at night or during the day?
        Last edited by kebabbert; 11-10-2011, 02:22 PM.



        • #49
          Re Disk capacities...

          SGI says this about XFS (http://oss.sgi.com/projects/xfs/):
          XFS is a full 64-bit filesystem, and thus is capable of handling filesystems as large as a million terabytes.

          2^63 bytes = 9 x 10^18 bytes = 9 exabytes
          CERN expects to need, per kebabbert's link, 57 petabytes of disk and 43 petabytes of tape storage (100 petabytes total).
          That's WELL under the limit of 64bit FS capacity (1/80), per SGI's numbers.
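          That ratio checks out, assuming binary petabytes and taking the 2^63-byte figure above at face value:

          ```python
          PB = 2 ** 50                  # one binary petabyte, in bytes
          cern_total = 100 * PB         # 57 PB disk + 43 PB tape, per the post above
          fs_limit = 2 ** 63            # the ~9-exabyte figure for a 64-bit FS
          print(fs_limit / cern_total)  # 81.92 -> CERN's 100 PB is ~1/80 of the limit
          ```
          
          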

          XFS is one of the things CERN has gone to great lengths to support/use--it ships with SL by default.
          Hence the reference.
          (Plus it's supposedly the fastest FS for huge data files.)



          • #50
            Originally posted by Ibidem View Post
            SGI says this about XFS (http://oss.sgi.com/projects/xfs/):


            CERN expects to need, per kebabbert's link, 57 petabytes of disk and 43 petabytes of tape storage (100 petabytes total).
            That's WELL under the limit of 64bit FS capacity (1/80), per SGI's numbers.

            XFS is one of the things CERN has gone to great lengths to support/use--it ships with SL by default.
            Hence the reference.
            (Plus it's supposedly the fastest FS for huge data files.)
            Maybe you have missed all the links I posted showing that CERN is worried about data corruption? And I showed another link showing that XFS does not protect against data corruption. Thus, XFS would not be a good choice.

            Also, I have read that some vendor (is it Red Hat?) only supports up to 16 TB RAID sets with XFS. Is this true or false?



            • #51
              Originally posted by kebabbert View Post
              Maybe you have missed all the links I posted showing that CERN is worried about data corruption? And I showed another link showing that XFS does not protect against data corruption. Thus, XFS would not be a good choice.

              Also, I have read that some vendor (is it Red Hat?) only supports up to 16 TB RAID sets with XFS. Is this true or false?
              Not sure.

              I shouldn't have bothered with all the 'clarifications'; I'd hoped to show that 64-bit is FAR from obsolete, not to advocate some other FS. Hence the title I used ("Re disk capacities...", not "XFS is better").

              But yes, XFS in itself is rather vulnerable, and maybe CERN has been moving away from relying on it.
              XFS is intended to be used with LVM, which may significantly improve its resilience; ZFS, by contrast, wraps volume management up in the filesystem itself.

              But if you pay attention, it looks like they're using Solaris as the best fileserver, and not for any actual work. Oracle is attempting to convince them to use Niagaras, but no sign of folding yet...

              As far as whether they'll move to Solaris goes, the SL mailing list shows several people disgusted with Oracle's hardware support:
              > Approach Oracle reseller to get quote about getting additional drives,
              > only to discover that not only did Oracle EOL the J4400 in Dec 2010,
              > they also EOLed accessories so it is no longer possible to get the
              > drives. EOLing the array itself is fair enough, but the drives too,
              > what a way to treat customers that have invested in their products!
              > Apparently this is the case with quite a lot of "older" Sun hardware
              > not just the J4400.

              > If you like Solaris you might check out Nexenta. Runs great on
              > SuperMicro hardware (or even Dell + MD3000's).
              >
              > We are a fairly big "Sun" shop, but the decision to go with more and
              > more Dell/IBM/HP is becoming easier and easier -- especially for
              > running Linux.



              • #52
                Originally posted by kebabbert View Post
                Maybe you have missed all the links I posted showing that CERN is worried about data corruption? And I showed another link showing that XFS does not protect against data corruption. Thus, XFS would not be a good choice.

                Also, I have read that some vendor (is it Red Hat?) only supports up to 16 TB RAID sets with XFS. Is this true or false?
                Misrepresentation.
                Originally posted by Red Hat
                The Scalable File System Add-On for Red Hat Enterprise Linux uses the XFS® file system to provide support for file systems that are between 16 terabytes and 100 terabytes in size.
                RH elsewhere documents that XFS supports up to 16 exabytes: Storage Administration Guide, "The XFS File System".

                Yes, it is rather bad about detecting errors.
                But did you notice that I used the title "Re disk capacities..."? Maybe I shouldn't have added the last comment in the post: it was intended only as proof that a 64-bit FS has more than enough capacity. I had intended to show my logic path as to why I looked at the limits for XFS.

