NVMe HDD Demoed At Open Compute Project Summit


  • #61
    Originally posted by evil_core View Post
    You can write some data to an HDD, put it into a drawer for 5-10 years
    It's literally true. I had 5x 1 TB HDDs. Initialized them in 2010. When I took them out of service in 2020 (after one final scrub, of course), not a single unrecoverable sector in any of them. And most of the data they held was transferred from an earlier volume, so many of the bits had been written literally 10 years prior. Even after I accidentally knocked one off the table onto a wood floor, it still completed the self-tests without error.

    Contrast this to earlier this year, when I turned on a PC of a work colleague who left a few years prior. The main filesystem contents of the SSD were fine, but until it occurred to me to stop and run fstrim, badblocks was reporting loads of unrecoverable errors. After running fstrim, no more bad blocks. So, that means the failed blocks probably hadn't been written since the factory. I forget the manufacture date of the drive, but it was definitely made < 4 years prior. Micron-branded, so not a junk consumer model (Crucial is their consumer brand, while they use Micron branding to sell into professional & enterprise markets). I think it was one of the first TLC drives.
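
    For reference, that check boils down to roughly the following sequence (device and mount point names are placeholders, not the actual drive):

        # Read-only surface scan of the whole device (-s progress, -v verbose)
        badblocks -sv /dev/sdX

        # Tell the SSD which blocks the filesystem no longer uses
        fstrim -v /mnt/that-ssd

        # Scan again; blocks never written since the factory should no
        # longer be reported as unreadable once they have been trimmed
        badblocks -sv /dev/sdX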

    Originally posted by evil_core View Post
    HDD should be immediately replaced if there are more than one).
    Somewhere, I read that it's not abnormal to get a couple of bad sectors during RAID initialization with drives > 10 TB.

    Originally posted by evil_core View Post
    I'm not a person who hates SSDs. I even have many 2 TB MLC drives and Optanes, and I know how to use them (but I also know about their limitations and relatively short data retention)
    I was shocked to see that the power-off data retention spec of my Intel Data Center NVMe drive was only 3 months. I know that's highly conservative, but it's also an MLC drive.

    I like how you used to be able to get real specs on Intel SSDs. I guess that ended a few years ago, perhaps when Intel's marketing organization got fully into the disinformation business and stopped acting like a true engineering company.

    Originally posted by evil_core View Post
    But I'm totally against QLC, PLC, or other shit that's a thousand times (or even a million) less reliable than MLC but costs 50% less (both in retail and production costs). So IMHO it's stupid to buy it.
    It's getting really hard to find MLC. Even Samsung moved their Pro line to TLC. Meanwhile, just about all consumer drives that aren't performance-oriented are now QLC.
    Last edited by coder; 13 November 2021, 05:54 PM.

    Comment


    • #62
      Originally posted by sdack View Post
      MLC, TLC and QLC are used for consumer SSDs. SLC is used for enterprise SSDs.
      This is not true. As I said, you can no longer get MLC in consumer drives, and enterprise drives are segmented into read-oriented, mixed-workload, and write-oriented. Only the horrendously expensive write-oriented drives still use pseudo-SLC or pseudo-MLC. I don't know for sure what the mixed-workload drives use, but the read-oriented drives are TLC and conceivably even some QLC. For organizations buying enough of these drives, getting the $/bit down is really important.

      Originally posted by sdack View Post
      The data retention of all four is however more than 10 years initially
      I wonder where you read such fiction!

      Even if it was true of early SLC drives, the cells in SSDs have been getting ever smaller. They use better designs, but not enough to completely offset the increase in density.

      Comment


      • #63
        Originally posted by sdack View Post
        Trying to find something tangible for you, here a link, which explains it graphically: https://sbebbb0f7ab6c96f4.jimcontent...ity%20Note.pdf

        You can find more when you search for it. Not sure about this one, which seems to indicate maximum retention times up to 10,000 years *lol* (see figure 3): https://www.macronix.com/Lists/Appli...ND%20Flash.pdf
        Did you check the dates on those? They're describing old SLC and early MLC memory products.

        The second one doesn't even apply to SSDs - they make low-capacity memory chips for use in automotive and industrial equipment. Their highest-capacity chips are only 8 Gbit.

        What's interesting about the first one is that it shows just how much robustness has been lost, as cell sizes have shrunk. Take a close look at the data they present for those 3 memory products, from newest to oldest. Writing their 15 nm MLC cell a mere 3k times drops its retention down to 1 year @ 40 C. In pseudo-SLC mode, this same cell size can withstand 30k writes before reaching the same point. At 24 nm SLC, it takes 100k iterations!

        This is related to the reason you can often still read a 10 year old USB stick or camera CF card, but don't dare count on doing that with the newest SD cards!

        Comment


        • #64
          Originally posted by evil_core View Post
          It seems you are mixing data retention with some metric based on TBW or MTBF.
          The Anandtech article stating 10 years for new SSDs was clearly about MLC (and even more for SLC). MLC cells could be written at least 10,000x.
          But then TLC happened, whose cells could be written 3,000x each, and it was called crap by many at the time.

          But nobody expected QLC (500 P/E cycles) and PLC (50 P/E cycles) back then.
          Their cells' lifespan is considerably lower, and the data retention of QLC is ~2 months. After 6 months you should still be able to recover 99% of the data, but some bit rot is unavoidable.

          QLC/PLC (and even TLC) keeps its data by rewriting the oldest data in the background. If you ever wondered why your idling SSD runs hot, now you know the answer.
          You are focusing on the negatives and turning it into a phobia. NAND technology, being much simpler in construction than mechanical storage, is far more predictable. And because HDDs are far less predictable, one can stay ignorant about them and happily blame failures on bad luck or whatever you tell yourself.

          You want to start by realizing that NAND has a long lifespan, but that it degrades exponentially. That is what the graphs show you and what you should take away from them (they are on a logarithmic scale, in case you missed it). Factors such as temperature and number of writes bring the lifespan down fast. So instead of panicking and feeding your phobia, you need to focus on how this works both ways and start using this knowledge to increase the lifespan.

          You need to cool your SSDs and reduce unnecessary writes. This increases their lifespan significantly. But when you install an SSD without any cooling right next to your CPU or GPU, then install Windows onto it with swapping enabled and use the speed to thrash it with endless writes, you are certainly doing it wrong. And I am sure many users will do exactly that, happily feeding your phobia with horror stories.

          Unless you want to change the way you think about SSDs / NAND technology, nobody can help you. I recently removed two HDDs from a system, one being 3 years younger than the other and not holding much data. Yet they failed only a couple of months apart. No idea why the younger drive lasted as short as it did, but I sure do not feel better just because I can be ignorant and tell myself it was merely bad luck or a "Monday morning" drive or whatever. I prefer to use what I know about SSDs to my advantage, keep USB drives and memory cards in a cool place, and reduce unnecessary writes by using appropriate filesystems, mount options, and kernel options. But crying a river because something does not allow me to be quite as ignorant as before has never been my thing.
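
          For illustration, "reducing unnecessary writes via mount and kernel options" could look roughly like this on a typical systemd-based Linux install (a sketch only; the exact options are not from the post, and the UUID is a placeholder):

              # /etc/fstab: mount with noatime so plain reads don't cause metadata writes
              UUID=xxxx-xxxx  /  ext4  defaults,noatime  0  1

              # Make the kernel far less eager to swap onto the SSD
              sysctl vm.swappiness=10

              # Trim unused blocks once a week instead of on every delete
              systemctl enable --now fstrim.timer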
          Last edited by sdack; 14 November 2021, 10:39 AM.

          Comment


          • #65
            Originally posted by sdack View Post
            You are focusing on the negatives and turning it into a phobia. NAND technology, being much simpler in construction than mechanical storage, is far more predictable. And because HDDs are far less predictable, one can stay ignorant about them and happily blame failures on bad luck or whatever you tell yourself.
            No, I don't have a phobia; I'm just looking at this realistically.
            I've got a few 256 GB TLC SSDs, and even more MLC and Optane drives, which I use in different setups.
            But I would never rely on SSDs only (even in RAID). At the very least, backups must be on HDDs (in redundant RAIDs) or tapes.

            I know how HDDs fail. Even failed ones can usually be disassembled and the data recovered (usually the problem is with the bearings).
            Usually you cannot recover data from failed SSDs (even with professional equipment you've got less than a 50% chance).
            It doesn't change the fact that you should have backups and use RAID whenever you can (but sometimes you deal with people who don't, and who falsely believe that data on SSDs is safer... I had to deal with guys who refused to keep their data on company SMB shares, or who moved mails off IMAP, because they trusted their SSDs more than the 'cloud' ;-)

            And as I've said earlier, you can usually put an HDD on the shelf for 30-40 years, and it will still probably work, and you will be able to read 100% of the data. If you do that with RAID 1, you can be sure (excluding some external accidents, but you can split the mirrors across different locations) that you will be able to read that data. Try doing that with SSDs (especially QLC).

            New high-capacity USB pen drives and memory cards are uber shit currently, especially the bottom of the line. They get hot when connected (idling), and they lose data after a few weeks (I see ongoing file corruption, and the kids wonder what happened to their movies. I check dmesg and see how 'reliable' this new NAND is compared to the old units ;-)

            Comment


            • #66
              Originally posted by evil_core View Post
              And as I've said earlier, you can usually put an HDD on the shelf for 30-40 years, and it will still probably work, ...
              My experience is different. Leaving an HDD in cold storage can cause the drive motor not to start, and the actuator can get stuck in the park position. They also tend to leak a fine oil over time, and the seal turns brittle as well. Just storing one in a position different from the one it was originally mounted in is bad (horizontal vs. vertical). Their sensitivity to shock, together with their size and mass, means you cannot let anyone with butterfingers near them. I do not even trust tapes to hold data for more than 10-20 years. Anything that is important gets duplicated onto several media and held at different locations to satisfy ISO 9000. I am not saying one cannot use HDDs for backup, but there are better options, from tape to DVD, BD, and M-DISC, in my opinion.
              Last edited by sdack; 14 November 2021, 05:43 PM.

              Comment


              • #67
                Originally posted by sdack View Post
                I recently removed two HDDs from a system, one being 3 years younger than the other and not holding much data. Yet they failed only a couple of months apart. No idea why the younger drive lasted as short as it did, but I sure do not feel better just because I can be ignorant and tell myself it was merely bad luck or a "Monday morning" drive or whatever. I prefer to use what I know about SSDs to my advantage,
                We're not proposing to use HDDs for desktops. I haven't done that in many years.

                I can't speak for evil_core, but all I'm saying is that HDDs still win in cloud, fileserver, and NAS use cases, where RAID is typically used and GB/$ or cold storage are priorities.

                Comment


                • #68
                  Originally posted by evil_core View Post
                  as I've said earlier, you can usually put an HDD on the shelf for 30-40 years, and it will still probably work, and you will be able to read 100% of the data.
                  Uh... I'm pretty sure no one else is saying that! HDDs are not an archival storage medium. The closest thing we've got to that is optical. TBH, I wouldn't trust an HDD to retain data, sitting on a shelf, longer than the warranty period.

                  Comment


                  • #69
                    Originally posted by coder View Post
                    We're not proposing to use HDDs for desktops. I haven't done that in many years.

                    I can't speak for evil_core, but all I'm saying is that HDDs still win in cloud, fileserver, and NAS use cases, where RAID is typically used and GB/$ or cold storage are priorities.
                    Why?
                    You can put both SSDs and HDDs in a desktop.

                    Modern filesystems allow you to mix them:
                    - ZFS lets you store metadata, small blocks, and L2ARC on SSD, while regular data stays on HDDs (see the sketch after this list)
                    - BCacheFS (WIP) is a tiered filesystem, allowing fancy read/write caching and moving data between tiers (according to usage or other criteria, etc.)
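
                    For example, the ZFS layout from the first bullet could be set up roughly like this (pool and device names are placeholders):

                        # Bulk data on HDDs in a RAID-Z2 vdev
                        zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

                        # Mirrored SSD "special" vdev for metadata
                        zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

                        # Also send blocks up to 32K to the SSDs instead of the HDDs
                        zfs set special_small_blocks=32K tank

                        # Optional SSD read cache (L2ARC)
                        zpool add tank cache /dev/nvme2n1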

                    Comment


                    • #70
                      Originally posted by evil_core View Post
                      You can put both SSDs and HDDs in a desktop.
                      What I do is use a separate fileserver, with HDDs in RAID-6, for backups and cold storage of bulk media that I don't use frequently. That way, I don't have the noise or idle power overhead of HDDs, when I don't need them.
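
                      For a concrete, purely illustrative example with Linux md (the post does not say which RAID implementation is actually used), such a RAID-6 array could be created as follows; device names and mount point are placeholders:

                          # Six-disk RAID-6 array on the HDDs
                          mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

                          # Filesystem for backups / cold storage
                          mkfs.ext4 /dev/md0
                          mount /dev/md0 /srv/backup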

                      At work, we still use HDDs for most of our server-based storage. Whether it's VM servers, build archives, our homedir fileserver, or the video surveillance system, HDDs provide sufficient reliability in a RAID, and the best GB/$. Most of our desktops/workstations contain only SSDs.

                      Comment
