Samsung 980 NVMe SSD Linux Performance

  • #31
    Originally posted by stormcrow View Post
    I should point out that Bad Things Happen when Linux encounters full storage devices.
    This is certainly not unique to Linux. Windows will begin to act really strangely and eventually BSOD when the C: drive fills. Even on the latest, greatest Windows Server editions. In fact, I'm not aware of *any* operating system that gracefully handles the OS disk filling to 100%. Are you? At least Linux supports live-growing the root "/" filesystem. Growing the C: drive in Windows is virtually impossible, requiring a complicated offline procedure where you boot from removable media and run a bunch of archaic command-prompt commands.
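
    For example, a minimal sketch of growing an ext4 root online, assuming the root partition is /dev/sda2 with free space directly after it (LVM or XFS setups would use lvextend or xfs_growfs instead):

        sudo growpart /dev/sda 2     # grow the partition in place (growpart ships with cloud-utils / cloud-guest-utils)
        sudo resize2fs /dev/sda2     # grow the mounted ext4 filesystem online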

    Originally posted by stormcrow View Post
    Testing is showing SSD cell charge may not be as stable as MFGs claim.
    I agree, but to be fair, the specification is only 1 year of retention without power for consumer SSDs, and it's a mere three months for enterprise SSDs! As in, if you keep it in your desk drawer as a backup volume and you don't plug it in within that timeframe, all your data could be lost, and that is considered "in-spec" and therefore not a problem! It seems like a really bad idea to use SSDs for backup or archival purposes, or for systems that remain powered off for long periods of time. They simply aren't designed for that purpose. I have read that temperature also plays a role, with colder storage temps being more conducive to long-term integrity. Maybe start storing your offline SSDs in ziplock bags in the freezer??
    Last edited by torsionbar28; 28 March 2021, 01:51 PM.

    Comment


    • #32
      Originally posted by torsionbar28 View Post
      Take a look at the ZoL (ZFS on Linux) project. This is really the big selling point of ZFS: the ability to actively identify and correct silent corruption, aka "bit flips". Personally I tend to use older hardware purchased second-hand on eBay and other places, and like you, I have data such as family photos that I intend to keep for many years.

      Regular filesystems cannot detect or correct bit flips. Hence the name "silent" corruption. RAID arrays can detect a change, as the data no longer matches the parity, but they cannot correct it, as they don't know which one is wrong. ZFS can not only determine if a bit flipped, it can also correct it, and is therefore effectively immune to silent corruption. It also has a clever snapshot system and a send/receive feature that makes exporting the snapshots to external media really easy, for backup purposes.

      It does have some drawbacks however. It generally performs slower than regular filesystems due to the overhead of all these safety features. And it's strongly recommended to use a system with ECC memory (e.g. Xeon, EPYC, Opteron platforms) to avoid bit flips that occur in RAM while the data is cached or in flight. Fortunately, this kind of enterprise-grade hardware can be had inexpensively on the second-hand market. It doesn't need a ton of horsepower, so a DDR3-era system works perfectly fine. I recently bought a Supermicro motherboard with Opteron CPU, cooler, and 64 GB of registered ECC memory for $225 on eBay, a pretty good value for this purpose.
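
      A minimal sketch of the snapshot and send/receive workflow just described (the pool name tank, the dataset photos, and the external pool backup are only placeholders):

          zfs snapshot tank/photos@2021-03-28
          zfs send tank/photos@2021-03-28 | zfs receive backup/photos
          # later, send only what changed since the previous snapshot
          zfs send -i tank/photos@2021-03-28 tank/photos@2021-04-28 | zfs receive backup/photos
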
      The question is what mechanism routinely checks the files, even if they have not been accessed for years.

      Comment


      • #33
        Originally posted by Royi View Post
        The question is what mechanism routinely checks the files, even if they have not been accessed for years.
        With ZFS the action is called a "scrub" where it actively traverses the entire filesystem, checking integrity and performing repairs if needed. You schedule the scrub event via cron. The good news with ZFS is that it only needs to evaluate actual data. So if you have a 20 TB volume with only 1 TB of files on it, the scrub only needs to check/repair the 1 TB of files.
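
        A minimal sketch of such a cron entry, assuming the pool is named tank (the name and schedule are only placeholders):

            # scrub the pool every Sunday at 03:00
            0 3 * * 0  /sbin/zpool scrub tank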


        Comment


        • #34
          Is there any NAS (by ASUSTOR / Synology / QNAP) which has such features as well?
          I think they use BTRFS, or maybe one of them uses ZFS?

          Comment


          • #35
            Originally posted by Royi View Post
            Is there any NAS (by ASUSTOR / Synology / QNAP) which has such features as well?
            I think they use BTRFS, or maybe one of them uses ZFS?
            The only consumer NAS solution I'm aware of that has ZFS is TrueNAS (formerly FreeNAS). Their "Mini" series looks like a competitor to the smaller Synology/QNAP products.

            Comment


            • #36
              If Samsung is using system RAM instead of onboard DRAM, doesn't the speed of that RAM affect the drive's performance? Someone might be using this with DDR3-1333, and another person with DDR4-3600, etc.
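
              For what it's worth, the DRAM-less 980 borrows that RAM through the NVMe Host Memory Buffer feature, and you can check how much host memory the drive was actually granted with nvme-cli (assuming the drive is /dev/nvme0):

                  sudo nvme get-feature /dev/nvme0 -f 0x0d -H   # feature 0x0d = Host Memory Buffer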

              Comment


              • #37
                Originally posted by Royi View Post

                So, could you share more about your backup routine?
                I have files I want to be sure are well backed up (family photos / videos).
                I'm wondering which mechanism could validate them over time (that the HDD itself didn't miss a bit flip, etc.).
                You may want to look into borgbackup: it compresses, deduplicates, and encrypts the data, and stores checksums alongside it, so you would be able to detect bit flips (and potentially repair them).
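
                A minimal sketch of that workflow (the repository path and archive name are only placeholders):

                    borg init --encryption=repokey /mnt/backup/photos-repo        # create the repository once
                    borg create --stats /mnt/backup/photos-repo::photos-{now} ~/Photos
                    borg check /mnt/backup/photos-repo                            # verify the whole repository's checksums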

                Comment


                • #38
                  I have no idea what M-DISCs are.
                  Also, burning one is like putting it in a drawer and never knowing if it will work in 10 years.

                  I want something that once in a while automatically checks the correctness of the data and fixes what's needed based on some redundancy.

                  Comment


                  • #39
                    Originally posted by Mez' View Post
                    I have 4 different (SATA 2.5") SSDs of 4 different brands, which I use even for storage now, and 1 M.2 PCIe NVMe SSD in my new laptop.
                    The oldest is around 5-6 years old. None of them have failed (yet, touching wood).
                    It greatly depends on what you do with said SSDs.
                    If you are mostly doing media consumption and/or office work, your SSD will last forever.
                    If you are doing anything write-intensive (e.g. heavy development work, CAD/CAM, database development, etc.) and/or using them in any type of write-amplifying RAID (e.g. RAID 5/6/50/60) setup, you'll be killing these "desktop" SSDs in no time.

                    E.g. I just killed two out of 8 Samsung 850 EVOs after less than two years of active duty on a VM server. (The server was used as a staging server, hence it used cheap SATA SSDs.)
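
                    If you want to see how close a drive is to that point, smartctl shows the wear counters (the device name is a placeholder; attribute names vary by vendor):

                        sudo smartctl -A /dev/sda    # on Samsung SATA SSDs, watch Wear_Leveling_Count and Total_LBAs_Written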

                    - Gilboa
                    oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
                    oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
                    oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
                    Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.

                    Comment


                    • #40
                      Originally posted by Royi View Post

                      The question is what mechanism routinely checks the files, even if they have not been accessed for years.
                      With btrfs RAID it is also called a scrub and can be scheduled however the user wishes.
                      I have a NAS system running BTRFS for the purpose of keeping family photos and videos. But for true redundancy I also do a backup to an external HDD once in a while.
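
                      A minimal sketch, assuming the filesystem is mounted at /mnt/nas (the mount point is only a placeholder):

                          sudo btrfs scrub start /mnt/nas     # repairs from the good RAID copy where possible
                          sudo btrfs scrub status /mnt/nas    # progress and error counts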

                      Comment
