NVMe HDD Demoed At Open Compute Project Summit


  • evil_core
    replied
    Originally posted by sdack View Post
    The dumbass is the one who falls for the FUD.

    HDDs contain bad sectors when they come fresh out of the factory. HDDs reserve tracks and withhold their true capacity in order to replace bad sectors automatically and so still offer the advertised capacity. This is known as bad sector management and has been around since the very early days of HDDs. Consumer HDDs reserve fewer tracks while enterprise HDDs reserve more, thereby addressing the different reliability demands of customers.

    HDDs fail; they are not infallible but are expected to fail. What is and is not corrupted data depends on the error detection algorithm and can, for instance, also lead to a false positive, meaning the data is correct but gets reported by the drive as wrong. Errors can occur on the very first day of operation. To increase the reliability of storage systems we use RAID arrays, which automatically rebuild data and allow bad HDDs to be replaced without data loss.

    SSDs behave the same. The early SSDs suffered from technical issues, but also from the inexperience of manufacturers, who assumed the I/O patterns of HDDs would apply to SSDs; when people started thrashing their SSDs (probably because of the new freedom of higher speeds), it resulted in failures. The industry has since adjusted, started using more reliable technology, increased the reserve of spare sectors for SSDs, and resolved the issue.

    SSDs are as safe as HDDs. Of course, it is certainly easier to claim an MTBF of 3 million hours for a slow HDD than for a fast SSD, considering that an HDD sees far fewer operations in that time, but some SSD manufacturers now offer similar MTBF rates. SSDs still fail; they are not infallible, just like HDDs are not, and those who need reliable storage will continue to use RAID arrays and not trust in a drive's technology alone.

    HDDs are, however, slower than SSDs; they use more power, they are noisier and more sensitive to shock, and their time is coming to an end. This is the way of technology.
    Yes, SSDs use less power, it's true. They are also more shock-proof, and much quicker (especially for random access).
    HDD noise doesn't bother me (I even like it, and you can dampen it easily; it's much quieter than fans or a water-cooling pump).

    About reliability I totally disagree with you. You can write data to an HDD, put it in a drawer for 5-10 years and expect the data not to be corrupted when you take it out. With SSDs, data retention is something like 2 years for MLC drives, and can be worse than two months for QLC drives.
    I know that you should use backups in any case, and use RAID for availability. And in the case of SSDs you should back them up to more reliable media, like tape or HDDs (or, even better, an array of them).

    About bad sectors: they were usually caused by crappy PSUs (or by a write interrupted by a mid-operation shutdown), and were not physical bad blocks.
    Such bad blocks were usually in an intermediate state, between 1 and 0. You could even fix most of them with MHDD under DOS, using "remove delays".
    I'm not sure about a bigger sector reserve for enterprise drives (I really doubt it, because an HDD should be replaced immediately once more than one sector has been reallocated).
    What I do know about enterprise drives is TLER. It forces the drive to return a URE after a few read attempts on a bad sector, so that the RAID controller reads the good copy (or reconstructs it) from another member. Consumer drives keep retrying a bad sector for 30 s to 1.5 min, because they assume there is no other copy (no RAID) to fall back on. (A rough sketch of checking and capping this timeout follows at the end of this post.)

    I'm not a person who hates SSDs. I even own many 2TB MLC drives and Optanes, and I know their uses (but I also know their limitations and relatively short data retention).
    But for bulk storage, backups and long-term storage, a RAID of HDDs is a must (at least for backups, you cannot trust an SSD).
    But I'm totally against QLC, PLC or other shit that's a thousand times (or even a million) less reliable than MLC but costs only 50% less (both in retail and in production costs). So, IMHO, it's stupid to buy it.
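
    Roughly illustrating the TLER point above: on Linux you can query and cap a drive's error-recovery timeout through smartctl's SCT Error Recovery Control, assuming smartmontools is installed and the drive supports it. This is only a sketch; /dev/sdX is a placeholder and the values are in tenths of a second.

    #!/usr/bin/env python3
    # Sketch: query and cap a drive's error-recovery timeout (TLER/ERC)
    # via smartctl's SCT Error Recovery Control. Values are tenths of a
    # second, so 70 means 7 s. /dev/sdX is a placeholder device name.
    import subprocess

    DEV = "/dev/sdX"  # placeholder, point this at a real drive

    # Show the current read/write recovery timeouts, if supported.
    subprocess.run(["smartctl", "-l", "scterc", DEV], check=False)

    # Cap both timeouts at 7 s so a RAID layer gets the error quickly and
    # can rebuild the sector from redundancy instead of waiting 30 s or more.
    subprocess.run(["smartctl", "-l", "scterc,70,70", DEV], check=False)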



  • sdack
    replied
    Originally posted by evil_core View Post
    You've got it wrong. There are many dumbasses/NANDzists who believe that rotational HDDs should die (not knowing that while they are slower, they are much more reliable/predictable and waaay cheaper). QLC/PLC should be called a thing of the past, not HDDs.
    The dumbass is the one who falls for the FUD.

    HDDs contain bad sectors when they come fresh out of the factory. HDDs reserve tracks and withhold their true capacity in order to replace bad sectors automatically and so still offer the advertised capacity. This is known as bad sector management and has been around since the very early days of HDDs. Consumer HDDs reserve fewer tracks while enterprise HDDs reserve more, thereby addressing the different reliability demands of customers.

    HDDs fail; they are not infallible but are expected to fail. What is and is not corrupted data depends on the error detection algorithm and can, for instance, also lead to a false positive, meaning the data is correct but gets reported by the drive as wrong. Errors can occur on the very first day of operation. To increase the reliability of storage systems we use RAID arrays, which automatically rebuild data and allow bad HDDs to be replaced without data loss.

    SSDs behave the same. The early SSDs suffered from technical issues, but also from the inexperience of manufacturers, who assumed the I/O patterns of HDDs would apply to SSDs; when people started thrashing their SSDs (probably because of the new freedom of higher speeds), it resulted in failures. The industry has since adjusted, started using more reliable technology, increased the reserve of spare sectors for SSDs, and resolved the issue.

    SSDs are as safe as HDDs. Of course, it is certainly easier to claim an MTBF of 3 million hours for a slow HDD than for a fast SSD, considering that an HDD sees far fewer operations in that time, but some SSD manufacturers now offer similar MTBF rates. SSDs still fail; they are not infallible, just like HDDs are not, and those who need reliable storage will continue to use RAID arrays and not trust in a drive's technology alone. (A rough conversion from an MTBF figure to an annualized failure rate is sketched at the end of this post.)

    HDDs are, however, slower than SSDs; they use more power, they are noisier and more sensitive to shock, and their time is coming to an end. This is the way of technology.
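
    For a rough sense of what such an MTBF figure means in practice, here is a back-of-the-envelope conversion to an annualized failure rate, assuming a constant failure rate (a simple exponential model, not any vendor's formula):

    #!/usr/bin/env python3
    # Back-of-the-envelope: turn an MTBF claim into an annualized failure
    # rate (AFR), assuming a constant failure rate (exponential model).
    import math

    HOURS_PER_YEAR = 8766        # average year length in hours
    mtbf_hours = 3_000_000       # the 3-million-hour figure quoted above

    afr = 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)
    print(f"MTBF {mtbf_hours:,} h -> AFR ~ {afr * 100:.2f}% per drive-year")
    # Prints roughly 0.29% per drive-year; observed fleet failure rates are
    # typically higher than what a spec-sheet MTBF alone would suggest.
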
    Last edited by sdack; 13 November 2021, 11:41 AM.



  • coder
    replied
    Originally posted by evil_core View Post
    You've got it wrong. There are many dumbasses/NANDzists who believe that rotational HDDs should die (not knowing that while they are slower, they are much more reliable/predictable and waaay cheaper). QLC/PLC should be called a thing of the past, not HDDs.
    Eh, they all have a place in the world.

    I am not looking forward to PLC (5 bits per cell), however. Not only is it a ways down the curve of diminishing returns, but it is also yet slower to write and has yet worse data retention.

    I do wish we could get SLC or MLC drives with the latest 3D cell technology. Or that Optane were closer to living up to its original promises.



  • evil_core
    replied
    Originally posted by Markopolo View Post
    I’m very confused by the people hating on the concept of NVMe HDDs like it somehow takes something away from other NVMe drives…
    You've got it wrong. There are many dumbasses/NANDzists who believe that rotational HDDs should die (not knowing that while they are slower, they are much more reliable/predictable and waaay cheaper). QLC/PLC should be called a thing of the past, not HDDs.



  • Ironmask
    replied
    Originally posted by LinAGKar View Post

    Completely unrelated, but that made me think of JavaScript vs Rust.
    Or Python. Or C macros. Or C++ exceptions. Or C# destructors.



  • coder
    replied
    Originally posted by sdack View Post
    HDDs have long initialization times, for example. This then has to be taken into account by the hardware, the protocol, and the driver.
    It would be interesting to see an example of where in the NVMe stack anything needs to handle this. At some level, yes. However, it's possible that the block which needs to deal with device initialization is so high up that it's common to NVMe and other storage protocols. It'd be nice if someone familiar with NVMe or that part of the kernel could speak knowledgeably about that.

    As for the protocol, it already handled things like drive arrays over a network connection. So, there was probably already a fair amount of thought put into high-latency and fault-prone connectivity.

    Originally posted by sdack View Post
    HDDs are also bulky devices, requiring long cables, while SSDs can be connected over shorter lanes. This, too, has to be taken into account and affects signalling.
    One nice thing about NVMe is that it builds on PCIe, and PCIe already has support for cabling. There have been external PCIe switches used for clustering, though I think they never caught on. Did you know that Thunderbolt has had support for embedded PCIe x4 connectivity for a couple of revisions already?

    Originally posted by sdack View Post
    The point is, the more diverse the hardware you try to connect to an interface, the more you need to account for that diversity, and so it begins to water the interface down.
    PCIe has done a remarkable job of accommodating a wide variety of device types, over the years, including providing forward & backward compatibility.

    Originally posted by sdack View Post
    To then drag HDDs, for which there is already a dedicated interface (SATA/SAS), onto NVMe, an interface designed specifically to exploit the unique properties of SSDs (very low latency, high transfer rates),
    That part isn't obvious to me. Sure, NVMe allows for things like namespaces and much deeper queues, but it's not as if HDDs didn't already have command queuing. And yes, NVMe was motivated by getting the SATA controller out of the way, but I think that makes no less sense for HDDs.
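
    To make the queuing comparison concrete, here is a small sketch reading the queue parameters Linux exposes in sysfs; the device names are placeholders, and the per-device queue_depth attribute only exists for SCSI/SATA devices (SATA NCQ tops out at 32 tags, while NVMe allows many queues with far deeper limits).

    #!/usr/bin/env python3
    # Sketch: peek at block-device queue parameters exposed via sysfs.
    from pathlib import Path

    def read_attr(path: str) -> str:
        p = Path(path)
        return p.read_text().strip() if p.exists() else "n/a"

    for dev in ("sda", "nvme0n1"):  # placeholders, adjust to your system
        nr = read_attr(f"/sys/block/{dev}/queue/nr_requests")
        qd = read_attr(f"/sys/block/{dev}/device/queue_depth")  # SCSI/SATA only
        print(f"{dev}: block-layer nr_requests={nr}, device queue_depth={qd}")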



  • coder
    replied
    Originally posted by sdack View Post
    I will stop belittling you once you let the comments of others stand on their own. But as long as you quote comments piece by piece and pull them out of context to create meaningless tangents only to get your word in,
    I quote the way I do so that it's clear what part I'm responding to. If I take something out of context, it's not intentional and it's your right to call that out.

    Originally posted by sdack View Post
    Frankly, you seem to have some inferiority complex when you do this, but I do not mean to judge.
    Feedback noted. Thanks, I guess.

    Originally posted by sdack View Post
    You then have not answered why HDDs need to be on NVMe. Whether one can connect an HDD to NVMe was not the question. I am asking specifically about the necessity.
    I'm not exactly a proponent of the move, but my understanding is that relevant parties want to imbue HDDs with features from NVMe, to unify their software stack, and probably also simplify their hardware. I can understand not wanting to go through ratifying another round of updates to SAS and getting all the necessary vendors to roll out those changes in their hardware/firmware/drivers, when those features are already in NVMe.

    BTW, the NVMe spec is now so large they recently had to break it up.


    Originally posted by sdack View Post
    Or we could try to connect HDDs to the DDR5 interface and see if this makes HDDs any faster.
    The one thing we can probably say is that it's not about speed. The parties driving this probably don't use hybrid HDDs, and 12 Gbps SAS is plenty of bandwidth for mechanical hard drives.
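
    As a quick sanity check on that bandwidth claim, a back-of-the-envelope comparison; the sustained HDD figure is an assumed ballpark for a current high-capacity drive, not a quoted spec.

    #!/usr/bin/env python3
    # Back-of-the-envelope: headroom of a 12 Gbps SAS link over a mechanical
    # drive's sustained transfer rate. The HDD number is an assumed ballpark.
    sas_line_rate_gbps = 12.0
    # SAS 12G uses 8b/10b encoding, so usable payload is about 80% of line rate.
    sas_payload_mb_s = sas_line_rate_gbps * 1e9 * 0.8 / 8 / 1e6  # ~1200 MB/s
    hdd_sustained_mb_s = 270.0                                   # assumed ballpark

    print(f"SAS 12G usable : ~{sas_payload_mb_s:.0f} MB/s")
    print(f"HDD sustained  : ~{hdd_sustained_mb_s:.0f} MB/s "
          f"(~{hdd_sustained_mb_s / sas_payload_mb_s:.0%} of the link)")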



  • sdack
    replied
    Originally posted by Markopolo View Post
    I’m very confused by the people hating on the concept of NVMe HDDs like it somehow takes something away from other NVMe drives…
    It does. HDDs have long initialization times, for example. This then has to be taken into account by the hardware, the protocol, and the driver. HDDs are also bulky devices, requiring long cables, while SSDs can be connected over shorter lanes. This, too, has to be taken into account and affects signalling. Hardware often also has flaws and needs to be corrected in software, in the driver, sometimes to the point where it needs lists of known good/bad devices, workarounds, quirks, etc. The point is, the more diverse the hardware you try to connect to an interface, the more you need to account for that diversity, and so it begins to water the interface down. To then drag HDDs, for which there is already a dedicated interface (SATA/SAS), onto NVMe, an interface designed specifically to exploit the unique properties of SSDs (very low latency, high transfer rates), is asking for trouble, while we already know that HDDs will not suddenly get any faster.



  • sdack
    replied
    Originally posted by coder View Post
    ... why can't you just put it forth and let it stand on its own?
    I will stop belittling you once you let the comments of others stand on their own. But as long as you quote comments piece by piece and pull them out of context to create meaningless tangents only to get your word in, nobody will respect you. Frankly, you seem to have some inferiority complex when you do this, but I do not mean to judge.

    Back to your other question, why the industry isn't in complete collapse ... Who says that anything has to collapse, other than in your wild imagination? Especially in the data storage segment, trust matters the most, and trust is not built in one day. So no, nothing is going to collapse here. However, SSDs are on the rise and present new technical challenges for which a new interface makes sense.

    You then have not answered why HDDs need to be on NVMe. Whether one can connect an HDD to NVMe was not the question. I am asking specifically about the necessity. I am sure that one could also connect BD-/DVD-/CD-/MO-/DAT-drives and the C64/VC20 Datasette to NVMe, but where is the necessity for it? And why abandon SATA/SAS? ... Or we could try to connect HDDs to the DDR5 interface and see if this makes HDDs any faster.



  • Markopolo
    replied
    I’m very confused by the people hating on the concept of NVMe HDDs like it somehow takes something away from other NVMe drives…

