NVMe HDD Demoed At Open Compute Project Summit


  • sdack
    replied
    Originally posted by evil_core View Post
    And as I've said earlier, you can usually put an HDD on the shelf for 30-40 years, and it will probably still work, ...
    My experience is different. Leaving an HDD in cold storage can cause the drive motor not to start, and the actuator can get stuck in the park position. The drives also tend to leak a fine oil over time, and the seals turn brittle. Even storing one in a different position from the one it was originally mounted in (horizontal vs. vertical) is bad. Their sensitivity to shock, combined with their size and mass, means you cannot let anyone with butterfingers near them. I do not trust even tapes to hold data for more than 10-20 years. Anything important gets duplicated onto several media and held at different locations to satisfy ISO 9000. I am not saying one cannot use HDDs for backup, but in my opinion there are better options, such as tape, DVD, BD and M-DISC.
    Last edited by sdack; 14 November 2021, 05:43 PM.

  • evil_core
    replied
    Originally posted by sdack View Post
    You are focusing on the negatives and turning them into a phobia. NAND technology, being much simpler in construction than mechanical storage, is also far more predictable. And because HDDs are far less predictable, one can stay ignorant about them and happily blame failures on bad luck or whatever one tells oneself.
    No, I don't have a phobia; I'm looking at this realistically.
    I've got a few 256GB TLC SSDs, and even more MLC and Optane drives, which I use in different setups.
    But I would never rely on SSDs alone (even in RAID). Backups, at the very least, must be on HDDs (in redundant RAIDs) or on tape.

    I know how HDDs fail. Even from failed ones you can usually disassemble the drive and recover the data (the problem is usually the bearings).
    With failed SSDs you usually cannot recover the data (even with professional equipment you have less than a 50% chance).
    It doesn't change the fact that you should have backups and use RAID whenever you can. But sometimes you deal with people who don't, and who falsely believe their data is safer on SSDs. I had to deal with guys who refused to keep their data on the company SMB shares, or who moved mails off IMAP, because they trusted their SSDs more than the 'cloud' ;-)

    And as I've said earlier, you can usually put an HDD on the shelf for 30-40 years, and it will probably still work and you will be able to read 100% of the data. If you do that with RAID1, you can be sure (excluding external accidents, though you can keep the split mirrors in different locations) that you will be able to read that data. Try doing that with SSDs (especially QLC).

    New high-capacity USB pen drives and memory cards are utter shit at the moment, especially the bottom of the line. They get hot while merely connected (idling) and lose data after a few weeks. I watch the ongoing file corruption while the kids wonder what happened to their movies; I check dmesg and see just how "reliable" this new NAND is compared to the old units ;-)
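    If you want to run the same kind of check, here is a minimal sketch of what I grep the kernel log for. The error patterns and the plain `dmesg` call are assumptions about your setup (reading the log often needs root), not an exhaustive diagnostic:

```python
# Minimal sketch: scan the kernel log for block-device I/O errors.
# Assumes `dmesg` is available and readable (often requires root).
import subprocess

# Substrings that commonly appear when a flash device starts corrupting data.
# Illustrative patterns only, not a complete list.
ERROR_PATTERNS = ("I/O error", "critical medium error", "blk_update_request")

def scan_dmesg():
    out = subprocess.run(["dmesg"], capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines()
            if any(p in line for p in ERROR_PATTERNS)]

if __name__ == "__main__":
    for line in scan_dmesg():
        print(line)
```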

  • sdack
    replied
    Originally posted by evil_core View Post
    It seems you are mixing up data retention with some metric based on TBW or MTBF.
    The AnandTech article stating 10 years for new SSDs was clearly about MLC (and even longer for SLC). MLC cells could be written at least 10,000 times.
    But then TLC happened, where each cell could be written 3,000 times, and many called it crap at the time.

    But nobody expected QLC (500 P/E cycles) and PLC (50 P/E cycles) back then.
    Their cell lifespan is considerably lower, and the data retention of QLC is only ~2 months. After 6 months you should still be able to recover 99% of the data, but some bit rot is unavoidable.

    QLC/PLC (and even TLC) keeps its data by rewriting the oldest data in the background. If you have ever wondered why your idling SSD is hot, now you know the answer.
    You are focusing on the negatives and turning them into a phobia. NAND technology, being much simpler in construction than mechanical storage, is also far more predictable. And because HDDs are far less predictable, one can stay ignorant about them and happily blame failures on bad luck or whatever one tells oneself.

    You want to start by realizing that NAND has a long lifespan, but that it degrades exponentially. This is what the graphs show you and what you should take away from them (they are on a logarithmic scale, in case you missed it). Factors such as temperature and the number of writes bring the lifespan down fast. So instead of panicking and feeding your phobia, you need to recognize that this works both ways and use this knowledge to increase the lifespan.
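    To put "degrades exponentially" into something you can play with, here is a toy model in Python. The Arrhenius-style temperature term is the standard way retention loss is accelerated in testing, but every constant below is an illustrative placeholder, not a datasheet value:

```python
# Toy model: NAND retention falls off with P/E cycles and temperature.
# All constants are illustrative placeholders, NOT datasheet values.
import math

def retention_years(pe_cycles, temp_c,
                    base_years=10.0,      # assumed retention of a fresh cell
                    rated_cycles=3000.0,  # assumed rated endurance (TLC-ish)
                    ea_ev=1.1,            # assumed activation energy (eV)
                    ref_temp_c=40.0):
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    # Wear term: retention shrinks roughly exponentially with cycling.
    wear = math.exp(-3.0 * pe_cycles / rated_cycles)
    # Arrhenius term: hotter storage ages the cell faster.
    t, t_ref = temp_c + 273.15, ref_temp_c + 273.15
    accel = math.exp(ea_ev / k_b * (1.0 / t - 1.0 / t_ref))
    return base_years * wear * accel

# A fresh cell at 40 C vs. a heavily cycled cell stored warm:
print(retention_years(0, 40))      # ~10 years
print(retention_years(3000, 55))   # weeks, not years
```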

    You need to cool your SSDs and reduce unnecessary writes. This increases their lifespan significantly. But if you install an SSD without any cooling near your CPU or GPU, then install Windows onto it with swapping enabled and use the speed to thrash it with endless writes, you are certainly doing it wrong. And I am sure many users will do exactly that, happily feeding your phobia with horror stories.

    Unless you want to change the way you think about SSDs and NAND technology, nobody can help you. I recently removed two HDDs from a system, one being 3 years younger than the other and not holding much data. Yet they failed only a couple of months apart. I have no idea why the younger drive lasted as short a time as it did, and I certainly do not feel happy telling myself it was merely bad luck or a "Monday morning" drive. I prefer to use the knowledge we have about SSDs to my advantage: keep USB drives and memory cards in a cool place, and reduce unnecessary writes by using appropriate filesystems, mount options and kernel options. Crying a river because something does not let me stay quite as ignorant as before has never been my thing.
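    To make "reduce unnecessary writes" concrete, here is a small sketch that flags filesystems mounted without noatime (which stops every read from triggering a metadata write) and prints the current swappiness. The option names are real Linux ones; whether they suit your workload is your call:

```python
# Sketch: report mounts lacking `noatime` and show the current swappiness.
# Linux-only; reads /proc, changes nothing.
def check_write_reducers():
    with open("/proc/mounts") as f:
        for line in f:
            dev, mnt, fstype, opts = line.split()[:4]
            if dev.startswith("/dev/") and "noatime" not in opts.split(","):
                print(f"{mnt} ({fstype}): consider noatime to cut metadata writes")
    with open("/proc/sys/vm/swappiness") as f:
        print("vm.swappiness =", f.read().strip())

if __name__ == "__main__":
    check_write_reducers()
```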
    Last edited by sdack; 14 November 2021, 10:39 AM.

  • coder
    replied
    Originally posted by sdack View Post
    Trying to find something tangible for you, here is a link which explains it graphically: https://sbebbb0f7ab6c96f4.jimcontent...ity%20Note.pdf

    You can find more when you search for it. I am not sure about this one, which seems to indicate maximum retention times of up to 10,000 years *lol* (see figure 3): https://www.macronix.com/Lists/Appli...ND%20Flash.pdf
    Did you check the dates on those? They're describing old SLC and early MLC memory products.

    The second one doesn't even apply to SSDs: Macronix makes low-capacity memory chips for automotive and industrial equipment, and their highest-capacity chips are only 8 Gbit.

    What's interesting about the first one is that it shows just how much robustness has been lost as cell sizes have shrunk. Take a close look at the data they present for those 3 memory products, from newest to oldest. Writing their 15 nm MLC cell a mere 3k times drops its retention down to 1 year @ 40 C. In pseudo-SLC mode, the same cell can withstand 30k writes before reaching that point. And at 24 nm, SLC takes 100k iterations!
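    To make the trend concrete, here are those three data points side by side; the cycle counts are the ones from the note, only the percentage arithmetic is mine:

```python
# The three data points from the application note linked above:
# P/E cycles after which retention drops to 1 year at 40 C.
endurance_cycles = {
    "24 nm SLC":        100_000,
    "15 nm pseudo-SLC":  30_000,
    "15 nm MLC":          3_000,
}

for name, cycles in endurance_cycles.items():
    share = cycles / endurance_cycles["24 nm SLC"]
    print(f"{name:17} {cycles:>7} cycles  ({share:.0%} of the 24 nm SLC figure)")
```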

    This is related to the reason you can often still read a 10-year-old USB stick or camera CF card, but shouldn't dare count on doing that with the newest SD cards!

  • coder
    replied
    Originally posted by sdack View Post
    MLC, TLC and QLC are used in consumer SSDs; SLC is used in enterprise SSDs.
    This is not true. As I said, you can no longer get MLC in consumer drives, and enterprise drives are segmented into read-oriented, mixed-workload, and write-oriented. Only the horrendously expensive write-oriented drives still use pseudo-SLC or pseudo-MLC. I don't know for sure what the mixed-workload drives use, but the read-oriented drives are TLC and conceivably even some QLC. For organizations buying these drives in volume, getting the $/bit down really matters.

    Originally posted by sdack View Post
    However, the data retention of all four is initially more than 10 years
    I wonder where you read such fiction!

    Even if that were true of early SLC drives, the cells in SSDs have been getting ever smaller. Newer designs are better, but not by enough to completely offset the increase in density.

  • coder
    replied
    Originally posted by evil_core View Post
    You can write some data to an HDD, put it in a drawer for 5-10 years
    It's literally true. I had 5x 1 TB HDDs, initialized in 2010. When I took them out of service in 2020 (after one final scrub, of course), there was not a single unrecoverable sector on any of them. And most of the data they held had been transferred from an earlier volume, so many of the bits had been written literally 10 years prior. Even after I accidentally knocked one off the table onto a wood floor, it still completed its self-tests without error.

    Contrast this with earlier this year, when I turned on the PC of a work colleague who had left a few years prior. The main filesystem contents of the SSD were fine, but until it occurred to me to stop and run fstrim, badblocks was reporting loads of unrecoverable errors. After running fstrim: no more bad blocks. That means the failed blocks probably hadn't been written since the factory. I forget the drive's manufacture date, but it was definitely made less than 4 years prior. Micron-branded, so not a junk consumer model (Crucial is their consumer brand; the Micron branding sells into professional and enterprise markets). I think it was one of the first TLC drives.
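    For reference, the check-then-trim sequence looked roughly like this. `badblocks` in its default mode is a read-only scan and `fstrim -v` is the standard trim call; the device and mount point names below are placeholders you must adapt, and these need root:

```python
# Sketch of the diagnosis sequence described above.
# DEVICE and MOUNTPOINT are hypothetical; adjust before running (needs root).
import subprocess

DEVICE = "/dev/sdX"        # placeholder: the suspect SSD
MOUNTPOINT = "/mnt/ssd"    # placeholder: where its filesystem is mounted

# 1. Read-only surface scan; unreadable blocks get listed.
subprocess.run(["badblocks", "-sv", DEVICE], check=True)

# 2. Tell the SSD which blocks are actually unused.
subprocess.run(["fstrim", "-v", MOUNTPOINT], check=True)

# 3. Re-scan: blocks never written since the factory now read back
#    as trimmed (typically zeroes) instead of throwing errors.
subprocess.run(["badblocks", "-sv", DEVICE], check=True)
```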

    Originally posted by evil_core View Post
    HDD should be immediately replaced if there is more than one).
    Somewhere, I read that it's not abnormal to get a couple during RAID initialization with drives > 10 TB.

    Originally posted by evil_core View Post
    I'm not a person who hates SSDs. I even own many 2TB MLC drives and Optanes, and know how to use them (but I also know about their limitations and relatively short data retention)
    I was shocked to see that the power-off data retention spec of my Intel data-center NVMe drive was only 3 months. I know that's highly conservative, but it's also an MLC drive.

    I like how you used to be able to get real specs on Intel SSDs. I guess that ended a few years ago, perhaps when Intel's marketing organization got fully into the disinformation business and stopped acting like a true engineering company.

    Originally posted by evil_core View Post
    But I'm totally against QLC, PLC and other shit that's a thousand times (or even a million times) less reliable than MLC but costs only 50% less (both in retail and in production cost). So IMHO it's stupid to buy it.
    It's getting really hard to find MLC. Even Samsung moved their Pro line to TLC. Meanwhile, just about all consumer drives that aren't performance-oriented are now QLC.
    Last edited by coder; 13 November 2021, 05:54 PM.

  • evil_core
    replied
    Originally posted by sdack View Post
    I wrote "more than 10 years", meaning, it is not flat 10 years, but it is a minimum of 10 years. I have not seen any chip manufacturer specify a maximum yet, but because it degenerates with the write cycles would such a number have more of a theoretical than practical use. The chip manufacturers may also not be sure about it, because the NAND technology keeps advancing and nobody will be sitting around for 10+ years with each new iteration of the technology just to find out how long the new maximum is. (Of course, they will not literally wait 10+ years, but have measuring methods to get an estimate ...)

    Most drive manufacturers do not care to give you warranties beyond 5 years (including Seagate and their HDDs) and they will only name a minimum with regards to the chips they use, the 5-year warranty, and the drive's capacity. I.e. a 1TB SSD drive with 5-year warranty and 300TB writes means it will hold data for 5 years when one does not exceed 300 write cycles. My oldest SSD, an Intel X25-M, lasted 12 years until it reported an interface error this year.
    It seems you are mixing up data retention with some metric based on TBW or MTBF.
    The AnandTech article stating 10 years for new SSDs was clearly about MLC (and even longer for SLC). MLC cells could be written at least 10,000 times.
    But then TLC happened, where each cell could be written 3,000 times, and many called it crap at the time.

    But nobody expected QLC (500 P/E cycles) and PLC (50 P/E cycles) back then.
    Their cell lifespan is considerably lower, and the data retention of QLC is only ~2 months. After 6 months you should still be able to recover 99% of the data, but some bit rot is unavoidable.

    QLC/PLC (and even TLC) keeps its data by rewriting the oldest data in the background. If you have ever wondered why your idling SSD is hot, now you know the answer.
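    Conceptually, that background refresh is just the controller rewriting whatever data has sat in place the longest. Here is a deliberately simplified sketch of the idea; real firmware works on flash blocks with wear levelling and ECC feedback, none of which is modelled here:

```python
# Conceptual sketch of age-based data refresh, as SSD firmware might do it.
# A "block" here is just an id with a write timestamp; real controllers
# are far more complex.
import heapq, time

REFRESH_AFTER = 30 * 24 * 3600  # assumed threshold: rewrite data older than 30 days

class RefreshQueue:
    def __init__(self):
        self._heap = []  # (written_at, block_id), oldest first

    def record_write(self, block_id):
        heapq.heappush(self._heap, (time.time(), block_id))

    def refresh_due(self, now=None):
        """Yield block ids whose data has aged past the threshold."""
        if now is None:
            now = time.time()
        while self._heap and now - self._heap[0][0] > REFRESH_AFTER:
            _, block_id = heapq.heappop(self._heap)
            yield block_id  # firmware would read, ECC-correct and rewrite here
            heapq.heappush(self._heap, (now, block_id))  # fresh timestamp

# Demo: data written "60 days ago" comes up for refresh.
q = RefreshQueue()
q.record_write("block-0")
print(list(q.refresh_due(now=time.time() + 60 * 24 * 3600)))  # ['block-0']
```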

  • sdack
    replied
    Originally posted by evil_core View Post
    It doesn't make sense that data retention is 10 years initially for all NAND types.
    I wrote "more than 10 years", meaning, it is not flat 10 years, but it is a minimum of 10 years. I have not seen any chip manufacturer specify a maximum yet, but because it degenerates with the write cycles would such a number have more of a theoretical than practical use. The chip manufacturers may also not be sure about it, because the NAND technology keeps advancing and nobody will be sitting around for 10+ years with each new iteration of the technology just to find out how long the new maximum is. (Of course, they will not literally wait 10+ years, but have measuring methods to get an estimate ...)

    Most drive manufacturers do not care to give you warranties beyond 5 years (including Seagate and their HDDs), and they will only name a minimum derived from the chips they use, the 5-year warranty, and the drive's capacity. I.e. a 1TB SSD with a 5-year warranty and 300TB of writes means it will hold data for 5 years as long as one does not exceed 300 write cycles. My oldest SSD, an Intel X25-M, lasted 12 years until it reported an interface error this year.
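    The arithmetic behind that example is worth spelling out; the 300TB and 5-year figures are the ones above, the rest is plain division:

```python
# Worked example: what a 5-year / 300 TBW warranty on a 1 TB drive implies.
capacity_tb = 1.0
tbw = 300.0           # total terabytes written covered by the warranty
warranty_years = 5.0

full_drive_writes = tbw / capacity_tb       # = 300 write cycles
tb_per_day = tbw / (warranty_years * 365)   # sustained write budget

print(f"{full_drive_writes:.0f} full-drive write cycles")
print(f"{tb_per_day * 1000:.0f} GB/day sustained for {warranty_years:.0f} years")
```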

    Trying to find something tangible for you, here is a link which explains it graphically: https://sbebbb0f7ab6c96f4.jimcontent...ity%20Note.pdf

    You can find more when you search for it. I am not sure about this one, which seems to indicate maximum retention times of up to 10,000 years *lol* (see figure 3): https://www.macronix.com/Lists/Appli...ND%20Flash.pdf
    Last edited by sdack; 13 November 2021, 04:50 PM.

  • evil_core
    replied
    Originally posted by sdack View Post
    You say you do not hate SSDs, but you conveniently brush SLC under the rug and only talk about MLC and QLC and how they supposedly make SSDs inferior.
    Are you sure that MLC is still used in consumer drives today?
    And are you even more sure that any company still makes SLC SSDs for the enterprise?

    The last MLC consumer drive was the Samsung 970 Pro; the 980 Pro is TLC.
    And I guess that soon (within the next few years) it will be hard to buy a new TLC consumer SSD with a warranty.
    Originally posted by sdack View Post
    MLC, TLC and QLC are used in consumer SSDs; SLC is used in enterprise SSDs. However, the data retention of all four is initially more than 10 years; they differ in how quickly the retention time degrades with the number of writes. SLC stores only a single bit per cell, while MLC, TLC and QLC exploit the cell further and store 2 (MLC), 3 (TLC) or 4 bits (QLC) by distinguishing up to 16 (QLC) different voltage levels in a single cell. The lowered data retention is not an accident; it is a deliberate trade-off between endurance and density, made to meet different demands.
    It doesn't make sense that data retention is initially 10 years for all NAND types.
    QLC (16 voltage levels per cell) has two disadvantages (see the sketch below):
    - a higher voltage is needed to store the data
    - much smaller voltage differences between adjacent states (which automatically and drastically shortens data retention)
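    Putting the margin argument into numbers, assuming the voltage window is divided evenly between states (real cells are more nuanced than this):

```python
# How the voltage window gets sliced up as you add bits per cell.
# Treats the window as evenly divided; real cells are more nuanced.
for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC"), (5, "PLC")]:
    levels = 2 ** bits
    # Relative gap between adjacent states, as a fraction of the full window.
    margin = 1 / (levels - 1)
    print(f"{name}: {levels:>2} levels, margin = {margin:.1%} of the voltage window")
```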

    I only wonder how Optane's data retention compares (vs. SLC, MLC and TLC; QLC/PLC is an irrelevant joke to me).

  • sdack
    replied
    Originally posted by evil_core View Post
    ... With SSDs, data retention is like 2 years for MLC drives, or even worse than two months for QLC drives. ... I'm not a person who hates SSDs. ...
    You say you do not hate SSDs, but you conveniently brush SLC under the rug and only talk about MLC and QLC and how they supposedly make SSDs inferior.

    MLC, TLC and QLC are used in consumer SSDs; SLC is used in enterprise SSDs. However, the data retention of all four is initially more than 10 years; they differ in how quickly the retention time degrades with the number of writes. SLC stores only a single bit per cell, while MLC, TLC and QLC exploit the cell further and store 2 (MLC), 3 (TLC) or 4 bits (QLC) by distinguishing up to 16 (QLC) different voltage levels in a single cell. The lowered data retention is not an accident; it is a deliberate trade-off between endurance and density, made to meet different demands.
    Last edited by sdack; 13 November 2021, 01:17 PM.
