Tuxera Claims NTFS Is The Fastest File-System For Linux


  • I meant 2007, not 1997 of course. I'm still not used to this whole "new millennium" thing.



    • Originally posted by RealNC View Post
      LOL @ all the clueless people here who think that somehow EXT4 has "more features" than NTFS, which happens to actually be one of the most advanced filesystems in this universe. The only issue with it is that it's proprietary.
      I feel so clueless all of a sudden... please enlighten us a bit about the superiority of NTFS, since so far I don't feel the least bit jealous about not being able to use it...



      • Originally posted by TheBlackCat View Post
        That article was from 2007, as was the research paper it was based on. The research paper looked at flaws that could compromise security in filesystems at the time, including filesystems like NTFS, XFS, and EXT3, amongst many others. The author then went on to propose a way to improve the EXT3 filesystem specifically so it could avoid these risks. As best I can tell, these improvements were incorporated into the EXT4 filesystem. In fact it seems the improvements were implemented in EXT4 almost immediately, since a talk about the then-upcoming EXT4 a few months later discusses the improvements.

        So rather than showing that all filesystems are equal, the article actually suggests that the EXT4 filesystem is superior in terms of data security (unless NTFS also implemented those features, which it might have).
        That PhD thesis (not research paper) looked at only some flaws. It did not do a full check (which would be equivalent to proving mathematically that there are no flaws, which is impossible to do today).

        So, fine, some flaws might have been corrected in ext4, but if you read the PhD thesis (which I have) you will see that only some of the flaws are corrected. Not all of them.

        And besides, hardware raid also has lots of flaws and is not safe either. Probably you knew that as well:
        http://en.wikipedia.org/wiki/RAID#Problems_with_RAID

        In other words, I would not trust NTFS or ext4. There is no research on ext4 that I know of, but that does not prove that ext4 is safe.



        • Originally posted by kebabbert View Post
          Please calm down.
          I tolerate only those who tolerate.

          Originally posted by kebabbert View Post
          Lots of enterprise Unix sysadmins say that ACLs are much more powerful than ordinary Unix read/write control. There are cases when you need ACLs, and when 888 does not cut it.
          Several dozen universities (that's tens of thousands of workstations and thin clients) in Germany used SunOS/Solaris with completely standard 888. In case one needs runtime checks, there are SELinux and AppArmor. I have no idea who uses ACLs, but if the feature exists it's needed - maybe by some rare, huge research institutes with complex group-access collisions.

          Originally posted by kebabbert View Post
          Regarding ext3, it does not really protect your data well.
          It does - with full journaling, and, in the case of ext4, with barriers on, it does. At least at the logical level, down to the physical layer. For the physical layer there is SMART, paired with backups.

          Originally posted by kebabbert View Post
          SMART does not help.
          I wonder why they would build it in, then? Self-Monitoring and Analysis. I only need the Reallocated Sector Count and Spindle Start Retries attributes to cover it all.
          Of course, the FS "could" additionally CRC the data, but that's what happens when you add an encryption system on top. The encryption system on Windows is a failure: the internet is full of users whose system failures (even minimal ones) led to inconsistency and complete data loss due to the inability to decrypt the data back.

          Originally posted by kebabbert View Post
          ext3 and NTFS are equally bad (or good) in protecting your data:
          http://www.zdnet.com/blog/storage/ho...ta-at-risk/169

          "Dr. Prabhakaran found that ALL the file systems [NTFS, ext3, ReiserFS, JFS and XFS] shared
          . . . ad hoc failure handling and a great deal of illogical inconsistency in failure policy . . . such inconsistency leads to substantially different detection and recovery strategies under similar fault scenarios, resulting in unpredictable and often undesirable fault-handling strategies."
          Nice: real-life cases versus a random word-spewing "professor". He should open a bug report if he finds something. Failure policy is not the file system's job. A failure policy is multi-step logic for dealing with failures at many levels; the FS is only a small part of it and deals only with file consistency at mount time or after a failure. It CAN be part of a failure policy, but only if YOU build it. The whole heap of text is useless: set up a script that checks SMART on the drives before mounting (initramfs, init) and you have a failure policy (a rough sketch follows below). Anyway, if ext detects an unclean shutdown it replays the journal or rechecks the FS, and if it detects serious errors it remounts the file system read-only. How is this "inconsistent"?
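          A rough sketch of such a pre-mount check (smartctl from smartmontools is assumed to be installed; the device and mount point names are made-up examples):
          Code:
#!/usr/bin/env python3
# Rough sketch only: check a drive's SMART health before mounting it.
# Assumes smartmontools (smartctl) is installed; the device and mount
# point below are made-up examples.
import subprocess
import sys

DEVICE = "/dev/sda"        # example device
MOUNTPOINT = "/mnt/data"   # example mount point

def smart_healthy(device: str) -> bool:
    # "smartctl -H" prints the drive's overall health self-assessment.
    result = subprocess.run(["smartctl", "-H", device],
                            capture_output=True, text=True)
    return "PASSED" in result.stdout

if smart_healthy(DEVICE):
    subprocess.run(["mount", DEVICE, MOUNTPOINT], check=True)
else:
    sys.exit(f"SMART health check failed for {DEVICE}; not mounting.")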



          • Originally posted by kebabbert View Post
            And besides, hardware raid also has lots of flaws and is not safe either. Probably you knew that as well:
            http://en.wikipedia.org/wiki/RAID#Problems_with_RAID
            Yes, but that is specific to RAID itself and how it's hooked up. Mirror desyncs are not uncommon on garbage cheap chipsets (Dawicontrol and similar).
            It is not relevant to the FS; it does not make ext any less safe.
            But if you happen to trust NTFS more than ext, oh my - well, it's your personal choice )



            • "Regarding ext3, it does not really protect your data well."
              Originally posted by crazycheese View Post
              It does, in full journaling and. in case of ext4, with barriers on, it does. At least at logical level - till physical layer. For physical layer there is SMART paired with backups.
              Maybe you did not read the PhD thesis, but I did. There is actually a lot of research on silent data corruption in ext3 and also in hw-raid.

              For instance, the physics centre CERN did a study on this and found that many of their hardware-raid Linux storage servers showed silently corrupted data:
              http://www.zdnet.com/blog/storage/da...n-you-know/191

              What CERN did was this: they wrote a special bit pattern to their hw-raid storage servers, again and again. After three weeks they checked the entire disks and saw that the bit pattern was no longer correct: some 1s had become 0s, and vice versa. The Linux servers did not even know this and reported no errors. This is called silent corruption - neither the hardware nor the OS is aware that some bits have been flipped at random. They believe everything is correct, but it is not.
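              For illustration, a minimal sketch of that kind of write-then-reread pattern test (the target path, pattern and sizes below are made-up assumptions, not CERN's actual tool):
              Code:
# Minimal sketch of a write-then-reread pattern test: write a known byte
# pattern, come back later, and count any silently flipped bytes.
# The path and sizes are illustrative assumptions only.
import os

PATH = "/mnt/data/pattern.bin"               # example test file
PATTERN = bytes([0xAA, 0x55]) * (1 << 20)    # 2 MiB of alternating bits

def write_pattern(path: str, copies: int = 1024) -> None:
    with open(path, "wb") as f:
        for _ in range(copies):              # 1024 * 2 MiB = 2 GiB total
            f.write(PATTERN)
        f.flush()
        os.fsync(f.fileno())                 # make sure it really hits the disk

def count_flipped_bytes(path: str) -> int:
    bad = 0
    with open(path, "rb") as f:
        while chunk := f.read(len(PATTERN)):
            reference = PATTERN[:len(chunk)]
            if chunk != reference:
                bad += sum(a != b for a, b in zip(chunk, reference))
    return bad

write_pattern(PATH)
# ... wait days or weeks, then re-read and compare:
print("silently flipped bytes:", count_flipped_bytes(PATH))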






              "SMART does not help."
              Originally posted by crazycheese View Post
              I wonder why they would build it in, then? Self-Monitoring and Analysis. I only need the Reallocated Sector Count and Spindle Start Retries attributes to cover it all.
              Of course, the FS "could" additionally CRC the data, but that's what happens when you add an encryption system on top.
              Let me ask you two questions:
              A) Have you heard about ECC RAM? What is it for? Why do servers use ECC RAM? Do you know why?

              B) Have you ever read the specification sheet of a standard enterprise SAS disk? For instance, a Cheetah 15,000 rpm disk:
              http://www.seagate.com/docs/pdf/data...etah_15k_7.pdf
              Read the part about "non recoverable errors"



              • Originally posted by kebabbert View Post
                "Regarding ext3, it does not really protect your data well."

                Maybe you did not read the PhD thesis, but I did. There is actually a lot of research on silent data corruption in ext3 and also in hw-raid.
                WHERE????!

                Originally posted by www.zdnet.com/blog/storage/data-corruption-is-worse-than-you-know/191 View Post
                 * Disk errors. They wrote a special 2 GB file to more than 3,000 nodes every 2 hours and read it back checking for errors for 5 weeks. They found 500 errors on 100 nodes.
                   o Single-bit errors: 10% of disk errors. *1*
                   o Sector-sized (512 byte) errors: 10% of disk errors. *2*
                   o 64 KB regions: 80% of disk errors. This one turned out to be a bug in WD disk firmware interacting with 3Ware controller cards *3*, which CERN fixed by updating the firmware in 3,000 drives.
                 * RAID errors. *4* They ran the verify command on 492 RAID systems each week for 4 weeks. The RAID controllers were spec'd at a Bit Error Rate of one error per 10^14 bits read/written. The good news is that the observed BER was only about a third of the spec'd rate. The bad news is that in reading/writing 2.4 petabytes of data there were some 300 errors.
                 * Memory errors. *5* Good news: only 3 double-bit errors in 3 months on 1,300 nodes. Bad news: according to the spec there shouldn't have been any. Only double-bit errors can't be corrected.
                *1* Platter surface demagnetization errors! SMART detects this.
                *2* Firmware errors! Contact or sue the hardware vendor!
                *3* Firmware errors! Same!
                *4* RAID hardware logic & transfer errors! Same, but for the RAID card/controller/cables!
                *5* RAM bit errors due to high density! Use ECC RAM, position the RAM correctly - follow the MB manufacturer's recommendations - and enclose the hardware correctly in grounded cages!

                Where is LINUX EXTx CORRUPTING YOUR DATA HERE?

                Is a file system DESIGNED to withstand all those errors? Hell, NO.
                It is like blaming Joe from Los Angeles for the Fukushima crisis! He is American, and Americans delivered parts to Nippon, so he is responsible for the nuclear meltdown! He is NOT.
                What is Joe responsible for? Supporting his family and doing it well! There is no point in giving every single Joe a nuclear physicist's education to control the reactor either!

                Projected onto this "analysis": the file system should only do what a file system should do - and do it well.

                Detect file corruption - ext does data block and journal checksumming. Ntfs? I only know of a way via hacks.
                Prevent fragmentation - ext does this, designed with this as a priority. Ntfs does not do it, hence the "speedups".
                Correctly support operating system security requirements - ext does this.
                Support file requirements (time/date, name, reservations) - ext does this, and efficiently, unlike ntfs with an MFT growing past 12-50% of the partition size, without a sane mechanism to change it.
                Maintain consistency over power-downs/cuts - ext does this and can do full data journaling, where ntfs journals only metadata.
                Bad blocks - not applicable to the file system's job, only back in the times of floppy disks. Nevertheless, ntfs tries to apply this in the 21st century.

                And ext is open source! Which means it runs everywhere and has no licensing payments - which means ntfs is GARBAGE.
                Ntfs is used ONLY and ONLY for legacy reasons. THE WHOLE OF MICROSOFT is built around LEGACY REASONS.
                They flood and occupy the market by price dumping, set their own standards, and then they pretty much control EVERYONE.
                THANK YOU CERN, FOR NOT USING MICROCRAP!

                Originally posted by kebabbert View Post
                For instance, the physics centre CERN did a study on this and found that many of their hardware-raid Linux storage servers showed silently corrupted data:
                http://www.zdnet.com/blog/storage/da...n-you-know/191

                What CERN did was this: they wrote a special bit pattern to their hw-raid storage servers, again and again. After three weeks they checked the entire disks and saw that the bit pattern was no longer correct: some 1s had become 0s, and vice versa. The Linux servers did not even know this and reported no errors. This is called silent corruption - neither the hardware nor the OS is aware that some bits have been flipped at random. They believe everything is correct, but it is not.
                Yes, AND? Is a Linux server supposed to correct hardware failures? Linux does not feature artificial intelligence. Yet.
                The guys threw HUGE testing at HUGE-capacity arrays. Of course errors would show up, but of those, none were of Linux or ext origin. Or do you have something else to tell?



                Originally posted by kebabbert View Post
                "SMART does not help."
                Of course it does. It reports when the first physical sector gets remapped or when the drive motor starts showing its age. That is sufficient for a desktop or workstation: replace the drive with a new one.

                Originally posted by kebabbert View Post
                Let me ask you two questions:
                A) Have you heard about ECC RAM? What is it for? Why do servers use ECC RAM? Do you know why?

                B) Have you ever read the specification sheet of a standard enterprise SAS disk? For instance, a Cheetah 15,000 rpm disk:
                http://www.seagate.com/docs/pdf/data...etah_15k_7.pdf
                Read the part about "non recoverable errors"
                No, my head is only good for eating with.
                Of course I know; it just happens that ECC is only available and built for server mainboards, although unofficially some Asus boards seem to support it. Lately, using ECC has started to make sense, with high-density memory modules going to 4 GB and up.
                But it is the manufacturer's job to make sure a component does not break within its designed usage scenario.

                SATA has many SAS functions in it and is sufficient for desktop usage. SAS is too complex and covers an operating environment not normally seen on a desktop: 24/7 massively parallel data exchange with very limited error-correction time, multi-disk and hotswap support. For example, you do not do SAS with 1000x 1 GB drives at home; you buy one 1 TB drive instead.

                The Cheetah is a good drive, but too slow versus an SSD, and too noisy and unreliable versus a normal 7200 rpm drive. The "non-recoverable errors" figure is a statistical mean; many vendors publish it, I guess it is a legal requirement.
                Last edited by crazycheese; 06-27-2011, 02:15 PM.



                • Originally posted by kebabbert View Post
                  In other words, I would not trust NTFS or ext4. There is no research on ext4 that I know of, but that does not prove that ext4 is safe.
                  It seems that's not the case in Linux:

                  http://blogs.oracle.com/linux/entry/...ption_in_linux

                  and this:

                  http://www.betanews.com/article/Orac...ion/1228243294

                  The 2.6.27 Linux kernel got bolstered today by "block I/O data integrity infrastructure" code which is seen by Oracle, the code's contributor, as a first for any operating system.
                  So it seems it was resolved in Linux even before ZFS.
                  Last edited by kraftman; 06-27-2011, 03:52 PM.



                  • Originally posted by kraftman View Post
                    So it seems it was resolved in Linux even before ZFS.
                    So the "block I/O data integrity infrastructure" automagically resolves all data corruption issues? I think not.

                    Originally posted by crazycheese View Post
                    *1* Platter surface demagnetization errors! SMART detects this.
                    Of course I know; it just happens that ECC is only available and built for server mainboards, although unofficially some Asus boards seem to support it.
                    But it is the manufacturer's job to make sure a component does not break within its designed usage scenario.
                    SMART does not tell you whether the sector you just read is correct or corrupt.
                    ECC memory has been supported on all AMD CPUs since Socket 754 days, and only recently did AMD start to screw consumers by dropping it from their Fusion parts. I think the majority of AM2/AM3(+) mobos support it too.
                    If you read the CERN article, you will notice that apart from the memory/firmware problems, all components worked within their specified error rates.



                    • Originally posted by chithanh View Post
                      So the "block I/O data integrity infrastructure" automagically resolves all data corruption issues? I think not.
                      It's about silent data corruption.



                      • Originally posted by kraftman View Post
                        It seems that's not the case in Linux:
                        No, I said something like "I don't know of any RESEARCH on ext4, but lack of research does not prove that ext4 is safe".

                        You show some Oracle engineers talking about data corruption. You don't show any research. Of course, the developers behind ReiserFS, NTFS, ext3, etc. are also engineers, and they also tried to make ReiserFS, ext3 and NTFS safe. But they failed, according to a PhD thesis. You show similar links: some Oracle engineers saying that they tried to make a filesystem safe. But maybe they also failed?

                        Again: I don't know of any research on ext4 - but lack of research does not prove ext4 is safe. You need to provide research that shows that ext4 can handle silent corruption. You show some talks by Oracle engineers saying they want to make Linux safe. Just as Reiser said. But he failed; ReiserFS is not safe, according to the PhD thesis.



                        I agree this looks good for Linux. But I would like to see research on this: have the engineers succeeded, or did they fail? Until I see research (maybe this solution is really bad? Or is it better than ZFS?), I would definitely use this Oracle solution, or ZFS. I would avoid everything else if my data is important.



                        Originally posted by kraftman View Post
                        So it seems it was resolved in Linux even before ZFS.
                        ZFS is much older than this. ZFS was officially announced in 2004 but had been in development for several years before that.

                        One of your Linux links is from last year, 2010. The other is from almost 2009 (December 2008). Thus, almost half a decade after Sun talked about ZFS and silent corruption, everyone else is now aware of silent corruption and tries to develop solutions to protect against it. But are their solutions as good as ZFS?

                        There is recent research on ZFS and silent corruption, showing that ZFS protects against all the different silent-corruption scenarios the research team tried to provoke:
                        http://www.cs.wisc.edu/wind/Publicat...ion-fast10.pdf
                        Thus, initial research shows ZFS to be much safer than any other solution, because ZFS caught all artificially injected errors.

                        I want to see the same kind of research on your Linux links. But there is no such research that I know of. So we have to wait; until then I would use the Linux solution in your links (and hope that it is safe), or I would use ZFS. But it is good that Oracle helps make Linux safer.



                        • Originally posted by crazycheese View Post
                          WHERE????!
                          I showed you the link that proves ext3 is not safe; neither are XFS, JFS, ReiserFS, nor NTFS. Just read the link I posted for you - it leads to the PhD thesis that shows ext3 is not safe.



                          *1* Platter surface demagnetization errors! SMART detects this.
                          *2* Firmware errors! Contact or sue the hardware vendor!
                          *3* Firmware errors! Same!
                          *4* RAID hardware logic & transfer errors! Same, but for the RAID card/controller/cables!
                          *5* RAM bit errors due to high density! Use ECC RAM, position the RAM correctly - follow the MB manufacturer's recommendations - and enclose the hardware correctly in grounded cages!

                          Where is LINUX EXTx CORRUPTING YOUR DATA HERE?
                          You know, there are many more errors than those you listed. The problem is that ext3 does not catch all of those errors. For instance *2*: if there are firmware errors, then the filesystem should catch them. ext3 does not. The question is: is ext4 safer? We don't know; there is no research on ext4. But it seems that ext4 is safer.



                          Is a file system DESIGNED to withstand all those errors? Hell, NO.
                          It is like blaming Joe from Los Angeles for the Fukushima crisis! He is American, and Americans delivered parts to Nippon, so he is responsible for the nuclear meltdown! He is NOT. What is Joe responsible for? Supporting his family and doing it well! There is no point in giving every single Joe a nuclear physicist's education to control the reactor either!
                          Yes. ZFS is designed to withstand all those errors, and many more. There is a research team of computer scientists doing research on ZFS. Read the paper in my post above.



                          Projected onto this "analysis": the file system should only do what a file system should do - and do it well.
                          As Jeff Bonwick says: "The job of the file system is to make sure that the data you wrote is intact, and that the data you get back from the filesystem is the same and has not been altered. Funny though, most filesystems can not do this." Jeff Bonwick is the lead architect behind ZFS.
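                          The idea can be illustrated with a toy sketch (a Python dictionary stands in for disk blocks here; ZFS does the equivalent per block inside the file system itself):
                          Code:
# Toy sketch of end-to-end checksumming: store a checksum at write time,
# verify it on every read, and refuse to hand back silently corrupted data.
# A dict stands in for the disk here; ZFS does the equivalent per block
# inside the file system itself.
import hashlib

store = {}  # name -> (data, checksum)

def put(name: str, data: bytes) -> None:
    store[name] = (data, hashlib.sha256(data).hexdigest())

def get(name: str) -> bytes:
    data, checksum = store[name]
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError(f"silent corruption detected in {name!r}")
    return data

put("report", b"important bits")

# Simulate a bit flipped somewhere between write and read:
data, checksum = store["report"]
store["report"] = (bytes([data[0] ^ 0x01]) + data[1:], checksum)

try:
    get("report")
except IOError as err:
    print(err)   # the corruption is detected instead of silently returned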



                          Detect file corruption - ext does data block and journal checksumming. Ntfs? I only know of a way via hacks.
                          Prevent fragmentation - ext does this, designed with this as a priority. Ntfs does not do it, hence the "speedups".
                          Correctly support operating system security requirements - ext does this.
                          Support file requirements (time/date, name, reservations) - ext does this, and efficiently, unlike ntfs with an MFT growing past 12-50% of the partition size, without a sane mechanism to change it.
                          Maintain consistency over power-downs/cuts - ext does this and can do full data journaling, where ntfs journals only metadata.
                          Bad blocks - not applicable to the file system's job, only back in the times of floppy disks. Nevertheless, ntfs tries to apply this in the 21st century.
                          But still, the research shows that ext3 is not safe. Neither is XFS, nor JFS, nor ReiserFS. So the engineers have not succeeded. Their solution is not safe enough.



                          The guys threw HUGE testing at HUGE-capacity arrays. Of course errors would show up, but of those, none were of Linux or ext origin. Or do you have something else to tell?
                          A safe solution should catch all such errors. ZFS does.



                          Of course it does. It reports when the first physical sector gets remapped or when the drive motor starts showing its age. That is sufficient for a desktop or workstation: replace the drive with a new one.
                          There are cases where SMART is not good enough. For instance, one power supply was bad and some 1s became 0s and vice versa. No one detected this - except ZFS, which detected those errors very quickly. SMART did not notice.


                          Of course I know; it just happens that ECC is only available and built for server mainboards,
                          In RAM sticks, a 1 might become a 0, and vice versa. There are many reasons: power spikes, cosmic radiation, etc.:
                          http://en.wikipedia.org/wiki/Dynamic...ror_correction

                          The reason we use ECC is that ECC protects against some of these errors. The same kinds of errors happen to disk drives: for instance bit rot (after some years a 1 might become a 0, and vice versa), bugs in firmware, etc. A safe filesystem should catch all these errors and protect your data. Hardware raid does not protect your data; there is research on that.


                          SATA has many SAS functions in it and is sufficient for desktop usage. SAS is too complex and covers an operating environment not normally seen on a desktop: 24/7 massively parallel data exchange with very limited error-correction time, multi-disk and hotswap support. For example, you do not do SAS with 1000x 1 GB drives at home; you buy one 1 TB drive instead.
                          I am trying to say that even high-end, safe, enterprise SAS disks, which cost a lot, say in their spec sheets:
                          "every 10^16 bits, there will be errors that are not recoverable".
                          Just read the spec sheet and you will see: every 10^16 bits, there will be some read/write errors that are neither recoverable nor repairable by the disk. And commodity SATA disks have far more such errors than high-end enterprise server SAS disks.
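                          To put that spec-sheet figure into numbers (this is just the arithmetic under the stated 1-per-10^16 and a commonly quoted 1-per-10^14 rate; the workload is a made-up example):
                          Code:
# Rough arithmetic on unrecoverable read error rates from the spec sheets.
bits_per_tb = 8e12                    # bits in one terabyte (decimal TB)

enterprise_rate = 1 / 1e16            # 1 unrecoverable error per 10^16 bits (SAS spec figure)
consumer_rate = 1 / 1e14              # figure commonly quoted for commodity SATA

workload_bits = 2 * bits_per_tb * 100 # e.g. reading a 2 TB array end-to-end 100 times

print("enterprise SAS:", workload_bits * enterprise_rate)  # ~0.16 expected errors
print("commodity SATA:", workload_bits * consumer_rate)    # ~16 expected errors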



                            • There are some Linux fans near a nervous breakdown here ....[*8
                              Normal - they use Linux too much. I joke a little, but as the kernel power bug that Phoronix found and solved has shown, current Linux is far from perfect.
                              Using things from big corporations like Intel and M$ NTFS, and the drivers they build, should be the top priority, at least until Linux is on 80% of all PCs (by now it's at 5%). Maybe that will come: just as there is a choice of distros, there will be a choice of kernels with proprietary patents inside.



                            • Originally posted by kebabbert View Post
                              You show some Oracle engineers talking about data corruption. You don't show any research. Of course, the developers behind ReiserFS, NTFS, ext3, etc. are also engineers, and they also tried to make ReiserFS, ext3 and NTFS safe. But they failed, according to a PhD thesis. You show similar links: some Oracle engineers saying that they tried to make a filesystem safe. But maybe they also failed?
                              If the problem is resolved in the block layer, it should probably make every native Linux file system safe from silent data corruption. If the problem wasn't known before the PhD thesis, it's obvious the file systems had failed in this matter. According to Oracle's blog post, the kernel was patched after the thesis. If there is no newer thesis, then it's a matter of belief and drawing conclusions.

                              Again: I don't know of any research on ext4 - but lack of research does not prove ext4 is safe. You need to provide research that shows that ext4 can handle silent corruption. You show some talks by Oracle engineers saying they want to make Linux safe. Just as Reiser said. But he failed; ReiserFS is not safe, according to the PhD thesis.
                              Like before: there were some changes aimed at the issue. Check this:

                              http://blogs.oracle.com/linux/entry/...ata_corruption

                              There's a white paper about silent data corruption and Linux.
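                              The general idea behind such block-level integrity protection can be sketched in user space (this is only a toy illustrating the concept of a checksum tag travelling with each block, not the actual kernel block-integrity interface):
                              Code:
# Toy sketch of block integrity metadata: a CRC tag travels with each block
# from the writer to the reader, so corruption anywhere along the path is
# caught before bad data is accepted. Concept only, not the kernel API.
import zlib

BLOCK_SIZE = 4096

def protect(block: bytes) -> tuple[bytes, int]:
    # Attach an integrity tag before the block goes down the stack.
    return block, zlib.crc32(block)

def verify(block: bytes, tag: int) -> bytes:
    # Check the tag at the far end instead of trusting the block blindly.
    if zlib.crc32(block) != tag:
        raise IOError("integrity tag mismatch: block corrupted in flight")
    return block

block, tag = protect(b"\x00" * BLOCK_SIZE)
damaged = b"\x01" + block[1:]            # simulate a bit flip in transit
try:
    verify(damaged, tag)
except IOError as err:
    print(err)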

                              I want to see the same kind of research on your Linux links. But there is no such research that I know of. So we have to wait; until then I would use the Linux solution in your links (and hope that it is safe), or I would use ZFS. But it is good that Oracle helps make Linux safer.
                              I like this part.



                              • Originally posted by jcgeny View Post
                                There are some Linux fans near a nervous breakdown here ....[*8
                                Normal - they use Linux too much. I joke a little, but as the kernel power bug that Phoronix found and solved has shown, current Linux is far from perfect.
                                Using things from big corporations like Intel and M$ NTFS, and the drivers they build, should be the top priority, at least until Linux is on 80% of all PCs (by now it's at 5%). Maybe that will come: just as there is a choice of distros, there will be a choice of kernels with proprietary patents inside.
                                There's a bug mainly because of a messed-up BIOS. Ntfs is crap, so I guess nobody will use it on Linux - maybe only when they have to deal with dual boot, because Windows cannot handle Linux file systems (unless you install some third-party tool). Linux is far from perfect, but Windows is the farthest from it. I can only imagine how many patents winblows violates.

