Tuxera Claims NTFS Is The Fastest File-System For Linux


  • Originally posted by crazycheese View Post
    >>1. No one needs an NTFS driver - people usually use it to access stuff on microsht or to repair it, with a Linux box. No need for 10x access speed.
    >1. You just provided your use case while denying it exists. Fancy.
    It is WINDOWS machines that are repaired.

    >> 2. The NTFS permission system is a cumbersome joke! Linux 777 is so simple and efficient!
    >2. You're incredibly wrong. My god, I just have no words.
    Oh, I must be so wrong, typing those cacls incantations in cmd or clicking my way through the permission dialogs and being happy it somehow works, while on my Linux machines chmod/chown or a right click are fantastically efficient. Linux file permissions are a DREAM, FACE IT.
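
    For reference, the Unix model being praised really is a one-liner from code; a minimal sketch, with a made-up path and numeric uid/gid:

    import os
    import stat

    path = "/tmp/report.txt"   # hypothetical file
    open(path, "a").close()    # make sure the demo file exists

    # Owner read/write, group read, others read: one octal number says it all.
    os.chmod(path, 0o644)

    # The same mode built from symbolic constants.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH)

    # Change owner and group by numeric uid/gid (needs root for foreign owners).
    os.chown(path, 1000, 1000)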

    >>3. Consider the efficiency - with ext4 I have never ever had to reinstall: the filesystem ALWAYS recovered safely. Back in the NTFS and Windows XP days I was reinstalling on a monthly basis.
    >3. Reinstalling is unrelated to a filesystem maintaining its integrity.
    WOW, what a noob! I tell him NTFS f!cks up my data and cannot even store its metadata properly, while ext3/4 journal everything, and he insists it is unrelated.
    Try defragmenting and then hitting the reset button!

    >>4. NTFS has badblocks... lols!
    >4. In which you make it obvious that you're a snot-nosed brat with no idea what he's talking about.
    Another "masterpiece" of yours! Badblocks are to be handled ONLY by the device itself.
    1. Badblocks DO NOT belong to filesystem
    2. Drive logic is ony responsible for transparent badblock
    -- detection
    -- recovery
    -- relocation
    3. For gods sake, there is SMART and it is more than enough to handle that.
    One can also use SpinRite or Victoria to detect possibly faulty hardware, but it is unrelated to FS.
    The utility you mentioned is only a simple tool to test each sector by writing and reading from it. It is unrelated to FS.
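
    And such a tool really is that simple; a minimal write-and-read-back sketch, run against a scratch file because pointing it at a raw /dev node would destroy data:

    import os

    PATTERN = b"\xaa" * 512        # one 512-byte "sector" worth of test pattern
    TARGET = "/tmp/testarea.bin"   # scratch file; a real /dev/sdX would be destructive
    SECTORS = 1024

    # Pass 1: write the pattern to every sector and force it onto the medium.
    with open(TARGET, "wb") as f:
        for _ in range(SECTORS):
            f.write(PATTERN)
        f.flush()
        os.fsync(f.fileno())

    # Pass 2: read every sector back and report mismatches.
    bad = []
    with open(TARGET, "rb") as f:
        for i in range(SECTORS):
            if f.read(512) != PATTERN:
                bad.append(i)

    print(f"{len(bad)} bad sector(s): {bad[:10]}")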

    Ext does not store USELESS badblock data, unlike NTFS. Why? Because bad-block handling IS DRIVE LEVEL. What happens when the FS marks a block as BAD but the device has already REMAPPED it? Yes - that "LOGICAL" bad block is now actually USABLE, because it has been REMAPPED by the DEVICE. Yet NTFS plays dumb-arse, just like FAT.

    Epilogue: NTFS has been around Windows systems, which are USELESS. Ext has been around for a decade (since the birth of the Linux kernel?) and is the most polished and most universal filesystem around. It is not an all-in-one FS, hence different FSes exist (NILFS, BTRFS, REISER, JFS, XFS), but it is UNIVERSAL and strong. And if you want ext3 access from Windows, there is a driver - use it.

    >> You should stop making words now.
    Thou shalt sh!t up instead, please?
    Please calm down.

    Lots of enterprise Unix sysadmins say that ACLs are much more powerful than ordinary Unix read/write control. There are cases when you need ACLs and 777 does not cut it.
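
    For readers who have never needed one: an ACL grants rights to extra users and groups beyond the single owner/group/other triple of the classic mode bits. A sketch driving the standard setfacl/getfacl tools; the file, user, and group names are made up:

    import subprocess
    from pathlib import Path

    path = "shared-report.txt"  # hypothetical file
    Path(path).touch()

    # Plain mode bits can name only one owner and one group. An ACL can
    # additionally give the user "alice" and the group "auditors" read
    # access without touching ownership at all.
    subprocess.run(["setfacl", "-m", "u:alice:r", path], check=True)
    subprocess.run(["setfacl", "-m", "g:auditors:r", path], check=True)

    # Print the resulting entries.
    subprocess.run(["getfacl", path], check=True)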

    Regarding ext3, it does not really protect your data well. SMART does not help. ext3 and NTFS are equally bad (or good) at protecting your data:
    56% of data loss is due to system & hardware problems (Ontrack). Data loss is painful and all too common. Why?


    "Dr. Prabhakaran found that ALL the file systems [NTFS, ext3, ReiserFS, JFS and XFS] shared
    . . . ad hoc failure handling and a great deal of illogical inconsistency in failure policy . . . such inconsistency leads to substantially different detection and recovery strategies under similar fault scenarios, resulting in unpredictable and often undesirable fault-handling strategies.
    . . .
    We observe little tolerance to transient failures; . . . . none of the file systems can recover from partial disk failures, due to a lack of in-disk redundancy.




    In a nutshell he found that the all the file systems have

    . . . failure policies that are often inconsistent, sometimes buggy, and generally inadequate in their ability to recover from partial disk failures. "



    • To make a long story short:
      My data on NTFS is just as safe as my data on ext4.
      By the way, my disk works fine, and I do have backups.



      • Originally posted by yogi_berra View Post
        Yeah, awesome joke:

        Researchers at DroneBL have spotted signs of a stealthy router-based botnet worm targeting routers and DSL modems. The worm, called "psyb0t," has been circulating since at least January this year, infecting vulnerable embedded Linux devices such as the Netcomm NB5 ADSL modem (above) and launching denial-of-service attacks on some Web sites.


        http://www.eweek.com/c/a/Security/Th...Botnet-626424/
        You keep citing the same thing:

        Uses multiple strategies for exploitation, including brute-force username and password combinations
        I'm not interested in trojan horses or anything that tries to guess users' passwords. Nothing stops people from making thousands of trojan horses for Linux, but the problem with a trojan horse is that you have to execute it somehow. On Windows it was enough to connect to the Internet or a LAN to catch a virus. I don't know how something like this could happen (maybe broken design, some bug...).



        • Originally posted by kebabbert View Post
          Please calm down.

          Lots of enterprise Unix sysadmins say that ACLs are much more powerful than ordinary Unix read/write control. There are cases when you need ACLs and 777 does not cut it.

          Regarding ext3, it does not really protect your data well. SMART does not help. ext3 and NTFS are equally bad (or good) at protecting your data:
          56% of data loss is due to system & hardware problems (Ontrack). Data loss is painful and all too common. Why?


          "Dr. Prabhakaran found that ALL the file systems [NTFS, ext3, ReiserFS, JFS and XFS] shared
          . . . ad hoc failure handling and a great deal of illogical inconsistency in failure policy . . . such inconsistency leads to substantially different detection and recovery strategies under similar fault scenarios, resulting in unpredictable and often undesirable fault-handling strategies.
          . . .
          We observe little tolerance to transient failures; . . . . none of the file systems can recover from partial disk failures, due to a lack of in-disk redundancy.




          In a nutshell he found that the all the file systems have

          . . . failure policies that are often inconsistent, sometimes buggy, and generally inadequate in their ability to recover from partial disk failures. "
          That article was from 2007, as was the research paper it was based on. The research paper looked at flaws that could compromise data security in filesystems at the time, including filesystems like NTFS, XFS, and EXT3 amongst many others. The author then went on to propose a way to improve the EXT3 filesystem specifically so it could avoid these risks. As best I can tell these improvements were incorporated into the EXT4 filesystem. In fact it seems the improvements were implemented in EXT4 almost immediately, since a talk about the then-upcoming EXT4 a few months later discusses the improvements.

          So rather than showing that all filesystems are equal, the article actually suggests that the EXT4 filesystem is superior in terms of data security (unless NTFS also implemented those features, which it might have).
          Last edited by TheBlackCat; 27 June 2011, 05:23 AM.



            • I meant 2007, not 1997, of course. I'm still not used to this whole "new millennium" thing.



            • Originally posted by RealNC View Post
              LOL @ all the clueless people here who think that somehow EXT4 has "more features" than NTFS, which happens to actually be one of the most advanced filesystems in this universe. The only issue with it is that it's proprietary.
              I suddenly feel so clueless... please enlighten us a bit about the superiority of NTFS, since so far I don't feel jealous at all about not being able to use it...



              • Originally posted by TheBlackCat View Post
                That article was from 2007, as was the research paper it was based on. The research paper looked at flaws that could compromise data security in filesystems at the time, including filesystems like NTFS, XFS, and EXT3 amongst many others. The author then went on to propose a way to improve the EXT3 filesystem specifically so it could avoid these risks. As best I can tell these improvements were incorporated into the EXT4 filesystem. In fact it seems the improvements were implemented in EXT4 almost immediately, since a talk about the then-upcoming EXT4 a few months later discusses the improvements.

                So rather than showing that all filesystems are equal, the article actually suggests that the EXT4 filesystem is superior in terms of data security (unless NTFS also implemented those features, which it might have).
                That PhD thesis (not a research paper) looked at only some flaws. It did not do a full check (which would be the equivalent of proving mathematically that there are no flaws, which is impossible to do today).

                So, fine, some flaws might have been corrected in ext4, but if you read the PhD thesis (which I have) you will see that only some of the flaws are corrected. Not all of them.

                And besides, hardware RAID also has lots of flaws and is not safe either. Probably you knew that as well:


                In other words, I would trust neither NTFS nor ext4. There is no such research on ext4 that I know of, but that does not prove that ext4 is safe.



                • Originally posted by kebabbert View Post
                  Please calm down.
                  I tolerate only those who tolerate.

                  Originally posted by kebabbert View Post
                  Lots of enterprise Unix sysadmins say that ACLs are much more powerful than ordinary Unix read/write control. There are cases when you need ACLs and 777 does not cut it.
                  Several dozen universities in Germany (that's tens of thousands of workstations and thin clients) used Sun Solaris with completely standard 777. In case one needs runtime checks, there are SELinux and AppArmor. I have no idea who uses ACLs, but if the feature exists it is needed, maybe by some rare huge research institutes with complex group-access collisions.

                  Originally posted by kebabbert View Post
                  Regarding ext3, it does not really protect your data well.
                  It does: with full journaling, and in the case of ext4 with barriers on, it protects your data at least at the logical level, down to the physical layer. For the physical layer there is SMART, paired with backups.
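
                  Concretely, the options in question are data=journal (journal file contents as well as metadata) and barrier=1; a minimal sketch, with a placeholder device and mount point:

                  import subprocess

                  # data=journal journals file contents as well as metadata; barrier=1
                  # keeps write barriers on so the journal reaches the platter in order.
                  # Both are documented ext3/ext4 mount options; the device and mount
                  # point are placeholders.
                  subprocess.run(
                      ["mount", "-t", "ext4",
                       "-o", "data=journal,barrier=1",
                       "/dev/sdb1", "/mnt/data"],
                      check=True,
                  )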

                  Originally posted by kebabbert View Post
                  SMART does not help.
                  I wonder why they would have built it in, then? Self-Monitoring, Analysis and Reporting Technology. I only need Reallocated Sector Count and Spin Retry Count to cover it all.
                  Of course, the FS "could" additionally CRC the data (sketched below), but look what happens when you add an encryption system on top. The encryption system on Windows: fail. The internet is full of users for whom even a minimal system failure led to inconsistency and complete data loss through the inability to decrypt the data back.
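
                  That do-it-yourself CRC layer is easy to bolt on with a checksum manifest; a minimal sketch using zlib.crc32, with a placeholder data directory:

                  import json
                  import zlib
                  from pathlib import Path

                  ROOT = Path("/data")              # directory to protect (placeholder)
                  MANIFEST = Path("manifest.json")

                  def crc_file(path):
                      crc = 0
                      with open(path, "rb") as f:
                          for chunk in iter(lambda: f.read(1 << 20), b""):
                              crc = zlib.crc32(chunk, crc)
                      return crc

                  # First run: record a CRC32 for every file.
                  manifest = {str(p): crc_file(p) for p in ROOT.rglob("*") if p.is_file()}
                  MANIFEST.write_text(json.dumps(manifest))

                  # Later runs: a changed CRC on a file nobody touched is silent
                  # corruption, which neither plain ext3 nor NTFS notices on its own.
                  for name, old in json.loads(MANIFEST.read_text()).items():
                      if crc_file(Path(name)) != old:
                          print("silent corruption:", name)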

                  Originally posted by kebabbert View Post
                  ext3 and NTFS are equally bad (or good) at protecting your data:
                  56% of data loss is due to system & hardware problems (Ontrack). Data loss is painful and all too common. Why?


                  "Dr. Prabhakaran found that ALL the file systems [NTFS, ext3, ReiserFS, JFS and XFS] shared
                  . . . ad hoc failure handling and a great deal of illogical inconsistency in failure policy . . . such inconsistency leads to substantially different detection and recovery strategies under similar fault scenarios, resulting in unpredictable and often undesirable fault-handling strategies."
                  Nice, real-life cases vs. random word-spouting from a "professor". He should open a bug report if he finds something. Failure policy is not the file system's job: a failure policy is multi-step logic for dealing with failures at many levels. The FS is only a small part of it and deals only with file consistency at mount time or after failures. It CAN be part of a failure policy, but only if YOU build one. The whole heap of text is useless: set up a script that checks SMART on the drives before mounting (initramfs, init) and you have a failure policy (sketched below). Anyway, if ext detects an unclean shutdown, it replays the journal or rechecks the FS. If it detects serious errors, it remounts the filesystem read-only. How is this "inconsistent"?
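
                  That pre-mount check fits in a few lines; a sketch in Python rather than a bash script, with placeholder device names (smartctl comes from smartmontools):

                  import subprocess
                  import sys

                  DEVICES = ["/dev/sda", "/dev/sdb"]  # placeholders

                  # The two attributes named above: a non-zero raw value means the
                  # drive has already remapped sectors or struggled to spin up.
                  WATCHED = ("Reallocated_Sector_Ct", "Spin_Retry_Count")

                  for dev in DEVICES:
                      out = subprocess.run(["smartctl", "-A", dev],
                                           capture_output=True, text=True).stdout
                      for line in out.splitlines():
                          fields = line.split()
                          # smartctl -A attribute rows end with the raw value.
                          if len(fields) >= 10 and fields[1] in WATCHED and fields[9] != "0":
                              print(f"{dev}: {fields[1]} raw={fields[9]} - refusing to mount")
                              sys.exit(1)

                  print("all drives clean, safe to mount")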



                  • Originally posted by kebabbert View Post
                    And besides, hardware RAID also has lots of flaws and is not safe either. Probably you knew that as well:
                    Yes, that is specific to the RAID itself and how it is hooked up. Mirror desyncs are not uncommon on cheap garbage chipsets (Dawicontrol and similar).
                    It is not relevant to the FS, and it does not make ext any less secure.
                    But if you happen to trust NTFS more than ext, oh my - well, it's your personal choice )



                    • "Regarding ext3, it does not really protect your data well."
                      Originally posted by crazycheese View Post
                      It does: with full journaling, and in the case of ext4 with barriers on, it protects your data at least at the logical level, down to the physical layer. For the physical layer there is SMART, paired with backups.
                      Maybe you did not read the PhD thesis, but I did. There is actually a lot of research on silent data corruption on ext3 and also on hardware RAID.

                      For instance, the physics centre CERN did a study on this and found that many of their hardware RAID Linux storage servers showed silently corrupted data:
                      Many people reacted with disbelief to my recent series on data corruption (see How data gets lost, 50 ways to lose your data and How Microsoft puts your data at risk), claiming it had never happened to them. Really?


                      What CERN did was this: they wrote a special bit pattern to their HW RAID storage servers, again and again. After three weeks they checked the entire disks and found that the bit pattern was no longer correct. Some 1s had become 0s, and vice versa. The Linux servers did not even know it and reported no errors. This is called silent corruption: neither the hardware nor the OS is aware that some bits have been flipped at random. They believe everything is correct, but it is not.

                      "SMART does not help."
                      Originally posted by crazycheese View Post
                      I wonder why they would have built it in, then? Self-Monitoring, Analysis and Reporting Technology. I only need Reallocated Sector Count and Spin Retry Count to cover it all.
                      Of course, the FS "could" additionally CRC the data, but look what happens when you add an encryption system on top.
                      Let me ask you two questions:
                      A) Have you heard of ECC RAM? What is it for? Why do servers use ECC RAM? Do you know why?

                      B) Have you ever read the specification sheet of a standard enterprise SAS disk? For instance, a Cheetah 15000rpm disk:

                      Read the part about "nonrecoverable read errors".
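
                      The figure hinted at is the nonrecoverable read error rate. Typical datasheet values (assumed here, not quoted from that exact sheet) are 1 error per 10^16 bits read for an enterprise SAS drive like the Cheetah, versus 1 per 10^14 for desktop SATA; a quick back-of-the-envelope:

                      import math

                      def p_ure(bytes_read, bit_error_rate):
                          """Probability of at least one unrecoverable error in a read of this size."""
                          bits = bytes_read * 8
                          # log1p/expm1 keep the tiny per-bit rate numerically exact.
                          return -math.expm1(bits * math.log1p(-bit_error_rate))

                      TB = 1e12
                      print(f"10 TB read, enterprise SAS (1e-16): {p_ure(10 * TB, 1e-16):.2%}")  # ~0.8%
                      print(f"10 TB read, desktop SATA (1e-14): {p_ure(10 * TB, 1e-14):.2%}")   # ~55%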

