NVMe HDD Demoed At Open Compute Project Summit


  • #51
    Originally posted by sdack View Post
    HDDs, for example, have long initialization times. This then has to be taken into account by the hardware, the protocol, and the driver.
    It would be interesting to see an example of where in the NVMe stack anything needs to handle this. At some level, yes. However, it's possible that the block which needs to deal with device initialization is so high up that it's common to NVMe and other storage protocols. It'd be nice if someone familiar with NVMe or that part of the kernel could speak knowledgeably about that.
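
    For anyone wondering how low in the stack that could sit, here is a rough sketch: NVMe already lets a controller advertise its own worst-case ready timeout in CAP.TO, and the driver just polls CSTS.RDY until that deadline, so a slow-to-spin-up HDD could simply report a larger value. The register offsets come from the spec; the fake register file is my own stand-in for real MMIO access, so treat this as an illustration rather than actual kernel code.

    ```python
    import time

    # Register offsets from the NVMe specification.
    NVME_REG_CAP  = 0x00   # Controller Capabilities (64-bit)
    NVME_REG_CSTS = 0x1C   # Controller Status (32-bit)

    # Hypothetical stand-in for MMIO so the sketch runs on its own;
    # a real driver would read these registers from BAR0 of the PCIe device.
    _fake_regs = {
        NVME_REG_CAP:  20 << 24,  # CAP.TO = 20 -> up to 10 s allowed to become ready
        NVME_REG_CSTS: 0x1,       # CSTS.RDY already set in this toy example
    }

    def read_reg(offset: int) -> int:
        return _fake_regs[offset]

    def wait_controller_ready() -> None:
        """Poll CSTS.RDY until the deadline the controller itself advertised."""
        timeout_s = ((read_reg(NVME_REG_CAP) >> 24) & 0xFF) * 0.5  # CAP.TO is in 500 ms units
        deadline = time.monotonic() + timeout_s
        while not (read_reg(NVME_REG_CSTS) & 0x1):                 # CSTS.RDY bit
            if time.monotonic() > deadline:
                raise TimeoutError("controller did not become ready in time")
            time.sleep(0.01)

    wait_controller_ready()  # a spun-down HDD would simply report a larger CAP.TO
    ```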

    As for the protocol, it already handles things like drive arrays over a network connection. So there has probably already been a fair amount of thought put into high-latency and fault-prone connectivity.

    Originally posted by sdack View Post
    Also HDDs are bulky devices, requiring long cables, while SSDs can be connected over shorter lanes. This, too, has to be taken into account and affects signalling.
    One nice thing about NVMe is that it builds on PCIe, and PCIe already has support for cabling. There have been external PCIe switches used for clustering, though I think they never caught on. And Thunderbolt has had embedded PCIe x4 connectivity for a couple of revisions already.

    Originally posted by sdack View Post
    The point is, the more diverse hardware you try to connect to any interface, the more you need to account for the diversity and so it begins to water it down.
    PCIe has done a remarkable job of accommodating a wide variety of device types over the years, including providing forward and backward compatibility.

    Originally posted by sdack View Post
    To then drag HDDs, for which there is already a dedicated interface (SATA/SAS), over to NVMe, which is an interface designed specifically to exploit the unique properties of SSDs (very low latency, high transfer rates),
    That part isn't obvious to me. Sure, NVMe allows for things like namespaces and much deeper queues. It's not as if HDDs didn't already have command queuing, though. Sure, NVMe was motivated by getting the SATA controller out of the way, but I think that doesn't make any less sense for HDDs.
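
    To put rough numbers on the queueing difference, here is a small sketch of what Linux already exposes: the NCQ depth of a SATA disk (at most 32 outstanding commands) versus the number of hardware queue pairs on an NVMe device. The sysfs paths are the standard SCSI and blk-mq ones; the device names (sda, nvme0n1) are just placeholders.

    ```python
    from pathlib import Path

    def sata_queue_depth(dev: str = "sda") -> int:
        # NCQ depth as reported by the SCSI/ATA layer (at most 32).
        return int(Path(f"/sys/block/{dev}/device/queue_depth").read_text())

    def nvme_hw_queues(dev: str = "nvme0n1") -> int:
        # blk-mq exposes one directory per hardware submission/completion queue pair.
        return len(list(Path(f"/sys/block/{dev}/mq").iterdir()))

    print("SATA NCQ depth:      ", sata_queue_depth())
    print("NVMe hardware queues:", nvme_hw_queues())
    ```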

    Comment


    • #52
      Originally posted by LinAGKar View Post

      Completely unrelated, but that made me think of JavaScript vs Rust.
      Or Python. Or C macros. Or C++ exceptions. Or C# destructors.

      Comment


      • #53
        Originally posted by Markopolo View Post
        I’m very confused by the people hating on the concept of NVMe HDDs like it somehow takes something away from other NVMe drives…
        You've got it wrong. There are many dumbasses/NANDzists who believe rotational HDDs should die, not knowing that while they are slower, they are much more reliable/predictable and waaay cheaper. It is QLC/PLC that should be called a thing of the past, not HDDs.

        Comment


        • #54
          Originally posted by evil_core View Post
          You've got it wrong. There are many dumbasses/NANDzists who believe rotational HDDs should die, not knowing that while they are slower, they are much more reliable/predictable and waaay cheaper. It is QLC/PLC that should be called a thing of the past, not HDDs.
          Eh, they all have a place in the world.

          I am not looking forward to PLC (5 bits per cell), however. Not only is it a long way down the curve of diminishing returns, but it is also slower to write and has even worse data retention.

          I do wish we could get SLC or MLC drives with the latest 3D cell technology. Or that Optane were closer to living up to its original promises.

          Comment


          • #55
            Originally posted by evil_core View Post
            You've got it wrong. There are many dumbasses/NANDzists who believe rotational HDDs should die, not knowing that while they are slower, they are much more reliable/predictable and waaay cheaper. It is QLC/PLC that should be called a thing of the past, not HDDs.
            The dumbass is the one who falls for the FUD.

            HDDs contain bad sectors when they come fresh out of the factory. HDDs reserve tracks and withhold part of their true capacity in order to replace bad sectors automatically and still offer the advertised capacity. This is known as bad sector management and has been around since the very early days of HDDs. Consumer HDDs reserve fewer tracks while enterprise HDDs reserve more, thereby addressing the different reliability demands of customers.
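
            You can even watch that remapping from userspace: SMART attribute 5 (Reallocated_Sector_Ct) counts how many sectors the drive has quietly swapped for spares. A small sketch using smartctl, with /dev/sda as a placeholder device:

            ```python
            import subprocess

            def reallocated_sectors(dev: str = "/dev/sda") -> int:
                """Parse SMART attribute 5 (Reallocated_Sector_Ct) from smartctl's table."""
                out = subprocess.run(["smartctl", "-A", dev],
                                     capture_output=True, text=True, check=True).stdout
                for line in out.splitlines():
                    if "Reallocated_Sector_Ct" in line:
                        return int(line.split()[-1])   # RAW_VALUE is the last column
                raise RuntimeError("attribute not reported by this drive")

            print(reallocated_sectors())  # 0 on a healthy drive; a growing number means spares are in use
            ```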

            HDDs fail; they are not infallible but are expected to fail. What is and is not corrupted data depends on the error detection algorithm and can, for instance, also lead to a false positive, meaning the data is correct but gets reported by the drive as bad. Errors can occur on the very first day of operation. To increase the reliability of storage systems, we use RAID arrays, which automatically rebuild data and allow bad HDDs to be replaced without data loss.
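
            The rebuild a RAID array does is conceptually simple for single parity: the parity block is the XOR of the data blocks, so any one lost block can be recomputed from the survivors. A minimal sketch of the idea, not a real RAID implementation:

            ```python
            from functools import reduce

            def xor_blocks(blocks):
                """XOR equally sized blocks byte by byte."""
                return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

            data = [b"disk0data", b"disk1data", b"disk2data"]   # one block per data disk
            parity = xor_blocks(data)                           # stored on the parity disk

            # Disk 1 dies: its block is recomputed from the parity plus the surviving disks.
            rebuilt = xor_blocks([data[0], data[2], parity])
            assert rebuilt == data[1]
            ```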

            SSDs behave the same. The early SSDs suffered from technical issues, but also from the inexperience of the manufacturers, who assumed the I/O patterns of HDDs would apply to SSDs; when people started thrashing their SSDs (probably because of the new freedom of higher speeds), it resulted in failures. The industry has since adjusted, moved to more reliable technology, increased the reserve of spare sectors for SSDs, and resolved the issue.

            SSDs are as safe as HDDs. Of course, it is certainly easier to claim an MTBF of 3 million hours for a slow HDD than for a fast SSD, considering that an HDD sees far fewer operations in that time, but some SSD manufacturers now offer similar MTBF ratings. SSDs still fail; they are not infallible, just as HDDs are not, and those who need reliable storage will continue to use RAID arrays and not trust a drive's technology alone.
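
            To make the MTBF figure concrete: 3 million hours does not mean a single drive lives that long, it translates to roughly a 0.3% chance of failure per drive-year. A quick back-of-the-envelope calculation:

            ```python
            import math

            mtbf_hours = 3_000_000                 # the "3 million hours" fleet figure
            hours_per_year = 8766                  # average year, including leap days

            # Annualized failure rate under the usual constant-failure-rate assumption.
            afr = 1 - math.exp(-hours_per_year / mtbf_hours)
            print(f"AFR = {afr:.2%}")              # roughly 0.29% chance of failure per drive-year
            ```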

            HDDs are, however, slower than SSDs; they use more power, they are noisier and more sensitive to shock, and their time is coming to an end. This is the way of technology.
            Last edited by sdack; 13 November 2021, 11:41 AM.

            Comment


            • #56
              Originally posted by sdack View Post
              The dumbass is the one who falls for the FUD.

              HDDs contain bad sectors when they come fresh out of the factory. HDDs reserve tracks and withhold part of their true capacity in order to replace bad sectors automatically and still offer the advertised capacity. This is known as bad sector management and has been around since the very early days of HDDs. Consumer HDDs reserve fewer tracks while enterprise HDDs reserve more, thereby addressing the different reliability demands of customers.

              HDDs fail; they are not infallible but are expected to fail. What is and is not corrupted data depends on the error detection algorithm and can, for instance, also lead to a false positive, meaning the data is correct but gets reported by the drive as bad. Errors can occur on the very first day of operation. To increase the reliability of storage systems, we use RAID arrays, which automatically rebuild data and allow bad HDDs to be replaced without data loss.

              SSDs behave the same. The early SSDs suffered from technical issues, but also from the inexperience of the manufacturers, who assumed the I/O patterns of HDDs would apply to SSDs; when people started thrashing their SSDs (probably because of the new freedom of higher speeds), it resulted in failures. The industry has since adjusted, moved to more reliable technology, increased the reserve of spare sectors for SSDs, and resolved the issue.

              SSDs are as safe as HDDs. Of course, it is certainly easier to claim an MTBF of 3 million hours for a slow HDD than for a fast SSD, considering that an HDD sees far fewer operations in that time, but some SSD manufacturers now offer similar MTBF ratings. SSDs still fail; they are not infallible, just as HDDs are not, and those who need reliable storage will continue to use RAID arrays and not trust a drive's technology alone.

              HDDs are, however, slower than SSDs; they use more power, they are noisier and more sensitive to shock, and their time is coming to an end. This is the way of technology.
              Yes, SSDs use less power, it's true. They are also more shock-proof, and much quicker (especially for random access).
              HDD noise doesn't bother me (I even like it, and you can dampen it easily; it's much quieter than fans or a water-cooling pump).

              About reliability I totally disagree with you. You can write some data to an HDD, put it in a drawer for 5-10 years, and expect the data not to be corrupted when you take it out. With SSDs, data retention is more like 2 years for MLC drives, or even worse than two months for QLC drives.
              I know that you should use backups in any case, and RAID for availability. And in the case of SSDs you should back them up to more reliable media, like tape or HDDs (or, even better, an array of them).

              About bad sectors: they were usually caused by crappy PSUs (or a write interrupted by a shutdown in the middle), and were not physical bad blocks.
              Bad blocks were usually an intermediate state, between 1 and 0. You could even fix most of them with MHDD under DOS, by using "remove delays".
              I'm not sure about a bigger reserve of sectors for enterprise drives (I really doubt that, because an HDD should be immediately replaced if there is more than one bad sector).
              What I do know about enterprise drives is TLER. It forces the drive to return a URE after a few attempts to read a bad sector, so that the RAID controller reads a good copy (or reconstructs it) from the other drives. Consumer drives try to read a bad sector for 30 s to 1.5 min, because they assume there is no other copy (no RAID) to read from.
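
              On drives that support it you can inspect or change that timeout yourself via SCT ERC, which is what smartctl's scterc option exposes; RAID users typically cap it around 7 seconds. A sketch, assuming /dev/sda and a drive that actually implements SCT ERC:

              ```python
              import subprocess

              def set_erc(dev: str = "/dev/sda", read_ds: int = 70, write_ds: int = 70) -> None:
                  """Cap error recovery at 7.0 s (smartctl takes the values in tenths of a second)."""
                  subprocess.run(["smartctl", "-l", f"scterc,{read_ds},{write_ds}", dev], check=True)

              def show_erc(dev: str = "/dev/sda") -> None:
                  subprocess.run(["smartctl", "-l", "scterc", dev], check=True)

              set_erc()    # makes the drive behave TLER-like until the next power cycle
              show_erc()
              ```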

              I'm not a person who hates SSDs. I even own many 2 TB MLC drives and Optanes, and know what to use them for (but I also know about their limitations and relatively short data retention).
              But for bulk storage, backups and long-term storage, a RAID of HDDs is a must (at least for backups, you cannot trust an SSD).
              But I'm totally against QLC, PLC and other shit that is a thousand times (or even a million) less reliable than MLC but costs 50% less (both in retail and in production). So, IMHO, it is stupid to buy it.

              Comment


              • #57
                Originally posted by evil_core View Post
                ... With SSDs, data retention is more like 2 years for MLC drives, or even worse than two months for QLC drives. ... I'm not a person who hates SSDs. ...
                You say you do not hate SSDs, but you conveniently brush SLC under the rug and only talk about MLC and QLC, and how they would make SSDs inferior.

                MLC, TLC and QLC are used for consumer SSDs. SLC is used for enterprise SSDs. The data retention of all four is, however, more than 10 years initially; they differ in how quickly the retention time degrades with the number of writes. SLC stores only a single bit per cell, while MLC, TLC and QLC store 2 (MLC), 3 (TLC) and 4 bits (QLC) in a single cell by distinguishing up to 16 (QLC) different voltage levels. The lowered data retention is not an accident; it is a deliberate trade-off between endurance and density, made to meet different demands.
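
                The arithmetic behind those numbers is just powers of two: n bits per cell require 2^n distinguishable charge levels, so the margin between neighbouring levels shrinks accordingly. A quick illustration:

                ```python
                for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
                    levels = 2 ** bits                 # distinguishable charge states per cell
                    margin = 1 / (levels - 1)          # relative spacing between neighbouring states
                    print(f"{name}: {bits} bit(s) -> {levels:2d} levels, relative margin {margin:.2f}")
                ```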
                Last edited by sdack; 13 November 2021, 01:17 PM.

                Comment


                • #58
                  Originally posted by sdack View Post
                  You say you do not hate SSDs, but you conveniently brush SLC under the rug and only talk about MLC and QLC, and how they would make SSDs inferior.
                  Are you sure that MLC is used in consumer drives today?
                  Are you even more sure that any company still makes SLC SSDs for enterprise?

                  The last MLC consumer drive was the Samsung 970 Pro; the 980 Pro is TLC.
                  And I guess that soon (in the next few years) it will be hard to buy a new TLC consumer SSD with warranty.
                  Originally posted by sdack View Post
                  MLC, TLC and QLC are used for consumer SSDs. SLC is used for enterprise SSDs. The data retention of all four is, however, more than 10 years initially; they differ in how quickly the retention time degrades with the number of writes. SLC stores only a single bit per cell, while MLC, TLC and QLC store 2 (MLC), 3 (TLC) and 4 bits (QLC) in a single cell by distinguishing up to 16 (QLC) different voltage levels. The lowered data retention is not an accident; it is a deliberate trade-off between endurance and density, made to meet different demands.
                  It doesn't make sense that data retention is 10 years initially for all NAND types.
                  QLC (16 voltage levels per cell) has two disadvantages:
                  - a higher voltage is needed to store data
                  - there is less voltage difference between states (which automatically shortens data retention drastically)

                  I only wonder how Optane's data retention compares (vs SLC, MLC and TLC; QLC/PLC is an unimportant joke to me).

                  Comment


                  • #59
                    Originally posted by evil_core View Post
                    It doesn't make sense that data retention is 10 years initially for all NAND types.
                    I wrote "more than 10 years", meaning it is not a flat 10 years but a minimum of 10 years. I have not seen any chip manufacturer specify a maximum yet, but because retention degrades with write cycles, such a number would be of more theoretical than practical use. The chip manufacturers may also not be sure about it, because NAND technology keeps advancing and nobody will sit around for 10+ years with each new iteration of the technology just to find out what the new maximum is. (Of course, they will not literally wait 10+ years, but have measuring methods to get an estimate ...)

                    Most drive manufacturers do not care to give you warranties beyond 5 years (including Seagate and their HDDs), and they will only name a minimum with regard to the chips they use, the 5-year warranty, and the drive's capacity. I.e. a 1 TB SSD with a 5-year warranty and 300 TB of writes means it will hold data for 5 years as long as one does not exceed 300 full write cycles. My oldest SSD, an Intel X25-M, lasted 12 years until it reported an interface error this year.
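
                    Worked through, that warranty figure means 300 full drive writes, or about 0.16 drive writes per day over the 5 years. The arithmetic, using the example drive above:

                    ```python
                    capacity_tb = 1
                    tbw = 300                                   # terabytes written covered by the warranty
                    warranty_years = 5

                    drive_writes = tbw / capacity_tb            # full-drive write cycles: 300
                    dwpd = drive_writes / (warranty_years * 365)
                    print(f"{drive_writes:.0f} drive writes, or {dwpd:.2f} drive writes per day (DWPD)")
                    ```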

                    Trying to find something tangible for you, here is a link which explains it graphically: https://sbebbb0f7ab6c96f4.jimcontent...ity%20Note.pdf

                    You can find more if you search for it. I am not sure about this one, which seems to indicate maximum retention times of up to 10,000 years *lol* (see figure 3): https://www.macronix.com/Lists/Appli...ND%20Flash.pdf
                    Last edited by sdack; 13 November 2021, 04:50 PM.

                    Comment


                    • #60
                      Originally posted by sdack View Post
                      I wrote "more than 10 years", meaning it is not a flat 10 years but a minimum of 10 years. I have not seen any chip manufacturer specify a maximum yet, but because retention degrades with write cycles, such a number would be of more theoretical than practical use. The chip manufacturers may also not be sure about it, because NAND technology keeps advancing and nobody will sit around for 10+ years with each new iteration of the technology just to find out what the new maximum is. (Of course, they will not literally wait 10+ years, but have measuring methods to get an estimate ...)

                      Most drive manufacturers do not care to give you warranties beyond 5 years (including Seagate and their HDDs), and they will only name a minimum with regard to the chips they use, the 5-year warranty, and the drive's capacity. I.e. a 1 TB SSD with a 5-year warranty and 300 TB of writes means it will hold data for 5 years as long as one does not exceed 300 full write cycles. My oldest SSD, an Intel X25-M, lasted 12 years until it reported an interface error this year.
                      It seems you are mixing data retention with some metric based on TBW or MTBF.
                      The AnandTech articles stating 10 years for new SSDs were clearly about MLC (and even more for SLC). MLC cells could be written at least 10,000 times.
                      But then TLC happened, which could be written about 3,000 times per cell, and it was called crap by many at the time.

                      But nobody expected QLC (500 P/E cycles) and PLC (50 P/E cycles) back then.
                      Their cells' lifespan is considerably lower, and the data retention of QLC is ~2 months. After 6 months you should still be able to recover 99% of the data, but some bit rot is unavoidable.

                      QLC/PLC (and even TLC) keeps its data by rewriting the oldest data in the background. If you ever wondered why your idling SSD is hot, now you know the answer.
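
                      That background refresh is conceptually just a scrubber walking the blocks and rewriting anything old or error-prone before the charge drifts too far. A purely toy model of the idea, not any vendor's firmware:

                      ```python
                      import time

                      REFRESH_AGE_S = 60 * 24 * 3600      # rewrite anything older than ~2 months
                      ECC_LIMIT = 40                      # rewrite early if correctable errors pile up

                      now = time.time()
                      blocks = [                          # toy blocks: time of last write and correctable-error count
                          {"written": now - 70 * 24 * 3600, "ecc": 3},
                          {"written": now - 5 * 24 * 3600, "ecc": 55},
                          {"written": now - 1 * 24 * 3600, "ecc": 0},
                      ]

                      def scrub(blocks):
                          for b in blocks:
                              if time.time() - b["written"] > REFRESH_AGE_S or b["ecc"] > ECC_LIMIT:
                                  b["written"], b["ecc"] = time.time(), 0   # "rewrite" the block to fresh cells
                                  print("refreshed a block")

                      scrub(blocks)   # a real controller runs something like this whenever the drive is idle
                      ```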

                      Comment
