
NVMe HDD Demoed At Open Compute Project Summit


  • #41
    Originally posted by coder View Post
    When did x86 server CPUs get a different socket? I'm pretty sure it was the late 2000s.
    In effect, home was always different from enterprise even before 2000; servers were a different architecture altogether (gross oversimplification, granted).



    • #42
      Originally posted by uxmkt View Post
      In effect, home was always different from enterprise even before 2000; servers were a different architecture altogether (gross oversimplification, granted).
      Interesting, but note where I said "x86 servers". I am specifically talking about x86 servers and desktops being able to share components. It's not relevant to my point if Sun UltraSPARC or some VAX shit used bus cards that were incompatible with PCs.



      • #43
        Originally posted by sdack View Post
        We did split them up for a good reason. Why else did you think we did this?
        Really? And what's this "we" business? Were you personally involved in the NVMe standard, or why do you seem to be speaking on their behalf?

        Fair questions, I suppose. I believe NVMe was initially devised as a way to attain higher speeds and to simplify both the devices and the host hardware. While HDDs don't require more speed than SATA 3.0 or 12 Gbps SAS provide, the benefits of simplifying the host, plus support for newer NVMe features without having to port them to SATA/SAS, seem reason enough.

        The reason it wasn't done sooner is that servers now have more PCIe lanes/capacity than before, PCIe switches are cheaper (I think), and the NVMe spec has had many years' worth of features added to it.

        Originally posted by sdack View Post
        HDDs are separate physical devices with inherently long connections and the need for a separate power supply (5 V/12 V). SSDs do not have these requirements and, as already discussed, will only move closer to the CPUs in the future. SSDs present an opportunity to lower the power draw and to enable faster signaling over a dedicated bus. Why drag HDDs into this development?
        None of these are reasons not to add NVMe support to HDDs, though. Since PCIe is packet-switched, there are none of the issues with one device tying up the bus that we had in earlier interconnect standards.

        Originally posted by sdack View Post
        By the way, since you have linked to Seagate's roadmaps, have a look at Nimbus. They are selling 100 TB SSDs in a 3.5" form factor with a 5-year warranty, just like Seagate's current enterprise drives. So much for roadmaps.
        Interesting. Two issues, though.
        1. They don't specify power-off data retention. It's not going to be anywhere near 5 years, making this unsuitable for cold or nearline storage.
        2. Do you have any idea how much they cost? I do, because they conveniently posted up a price list! https://nimbusdata.com/products/exadrive/pricing/
        $40k buys a lot of HDD capacity! (Rough math at the end of this post.)

        What puzzles me most about your apparent position is this: if HDDs didn't still have compelling advantages, why wouldn't the industry be in complete collapse? Obviously, it's not. Cloud providers and hyperscalers continue consuming HDDs at a record pace. These guys aren't dumb.

        Also, why do you belittle anyone who disagrees with you? If you think you have a good case, why can't you just put it forth and let it stand on its own?
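
        Rough math on that, with an assumed HDD street price (the ~$20/TB figure is my guess for 2022-era enterprise drives, not a quoted number):

        Code:
        # Rough sketch: raw capacity ~$40k buys in HDDs vs. one 100 TB ExaDrive.
        budget_usd = 40_000
        hdd_usd_per_tb = 20   # assumed enterprise HDD street price
        exadrive_tb = 100     # capacity from Nimbus' product page

        hdd_tb = budget_usd / hdd_usd_per_tb
        print(f"${budget_usd:,} ~= {hdd_tb:,.0f} TB of HDD, "
              f"or {hdd_tb / exadrive_tb:.0f}x one ExaDrive")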



        • #44
          Originally posted by Ironmask View Post
          I watched a pretty interesting talk from an angry sysadmin arguing that consumer PCs should not be servers. He had a couple of good arguments, but I think his best one was that consumer RAM tries to hide faults from you until it outright fails for seemingly no reason, whereas more server-specific hardware will report even the most minor issue so it can be replaced ASAP. Not sure what that talk was named, but I wish I could find it.
          Proper ECC RAM support will log any correctable errors. If there's an uncorrectable error, you get a log entry plus a segfault (or maybe SIGBUS, I forget). I always try to use ECC RAM (and boards + CPUs that support it), even in desktops*. I have better things to do than chase problems caused by bad RAM. (A small sketch of checking the error counters on Linux is below.)

          * It's a shame that laptops with ECC support seem exceedingly rare.
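
          For the curious, here's a minimal sketch of reading those counters on Linux. It assumes the kernel's EDAC driver is loaded and exposes the usual /sys/devices/system/edac hierarchy (paths can vary by kernel):

          Code:
          # Minimal sketch: read corrected/uncorrected ECC error counts
          # from Linux EDAC sysfs. Assumes an EDAC driver is loaded.
          from pathlib import Path

          def ecc_error_counts():
              counts = {}
              for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc[0-9]*")):
                  ce = int((mc / "ce_count").read_text())  # corrected (logged, no harm done)
                  ue = int((mc / "ue_count").read_text())  # uncorrected (the scary ones)
                  counts[mc.name] = (ce, ue)
              return counts

          if __name__ == "__main__":
              for mc, (ce, ue) in ecc_error_counts().items():
                  print(f"{mc}: {ce} corrected, {ue} uncorrected")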



          • #45
            Originally posted by sdack View Post
            The added features of server memory technology come with extra costs, and in order to cut costs and offer affordable PCs, these features are simply left out of consumer PCs.
            Registered DIMMs tend to be a little more expensive and add a little latency. It's the usual sort of tradeoff you see for more scalable technologies.

            Unfortunately, most consumer platforms don't support RDIMMs. To my knowledge, only Intel's workstation CPUs tend to offer both UDIMM and RDIMM support.

            BTW, I once bought an RDIMM before I knew the difference. I was able to eBay it quite easily; I think Opterons required them, luckily for me.
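
            If you're not sure what kind of DIMMs a machine has, you can check from Linux before buying more. A sketch parsing dmidecode output (needs root, and BIOS output formats vary, so treat it as a heuristic):

            Code:
            # Sketch: spot registered (buffered) DIMMs by parsing
            # `dmidecode --type memory`. Run as root; output format
            # varies by BIOS, so this is only a heuristic.
            import subprocess

            out = subprocess.run(
                ["dmidecode", "--type", "memory"],
                capture_output=True, text=True, check=True,
            ).stdout

            for line in out.splitlines():
                line = line.strip()
                if line.startswith("Type Detail:"):
                    kind = "RDIMM" if "Registered" in line else "likely UDIMM"
                    print(f"{line} -> {kind}")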



            • #46
              Originally posted by MadeUpName View Post
              Some people need more high-speed storage than one or two NVMe drives on the mobo can deliver. Modern NVMe drives are in the 7 GB/s range, whereas SATA 3 is 600 MB/s. If we get a better connector, we can have spinning hard drives with large on-board caches and larger, faster SSDs than the current NVMe form factor provides for.
              The tradeoff is that U.2 chews up more real estate on motherboards, its cables are bulky and block airflow in your case, and it's quite likely to remain more expensive.

              So, if NVMe completely replaces SATA, I think there would be some real downsides to that. However, I can see your point that it enables bigger drives and HDDs with Optane/NAND cache.
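
              Your headline figures roughly check out, by the way. A back-of-the-envelope sketch (line rates and encodings are from the SATA/PCIe specs; real drives land a bit lower due to protocol overhead):

              Code:
              # SATA 3: 6 Gb/s line rate with 8b/10b encoding.
              sata3_gbps = 6e9 * (8 / 10) / 8 / 1e9             # ~0.60 GB/s
              # NVMe on PCIe 4.0 x4: 16 GT/s per lane with 128b/130b encoding.
              pcie4_x4_gbps = 16e9 * 4 * (128 / 130) / 8 / 1e9  # ~7.88 GB/s

              print(f"SATA 3 ceiling:      {sata3_gbps:.2f} GB/s")
              print(f"PCIe 4.0 x4 ceiling: {pcie4_x4_gbps:.2f} GB/s")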



              • #47
                I’m very confused by the people hating on the concept of NVMe HDDs, like it somehow takes something away from other NVMe drives…



                • #48
                  Originally posted by coder View Post
                  ... why can't you just put it forth and let it stand on its own?
                  I'll stop belittling you once you let the comments of others stand on their own. But as long as you quote comments piece by piece and pull them out of context to create meaningless tangents just to get your word in, nobody will respect you. Frankly, you seem to have some inferiority complex when you do this, but I do not mean to judge.

                  Back to your other question, why the industry isn't in complete collapse ... Who says that anything has to collapse, other than in your wild imagination? Trust matters most in the data storage segment especially, and trust is not built in one day. So no, nothing is going to collapse here. However, SSDs are on the rise and present new technical challenges for which a new interface makes sense.

                  You then have not answered why HDDs need to be on NVMe. Whether one can connect an HDD to NVMe was not the question; I am asking specifically about the necessity. I am sure one can also connect BD-/DVD-/CD-/MO-/DAT-drives and the C64/VC20 Datasette to NVMe, but where is the necessity for it? And why abandon SATA/SAS? ... Or we could try to connect HDDs to the DDR5 interface and see if this makes HDDs any faster.



                  • #49
                    Originally posted by Markopolo View Post
                    I’m very confused by the people hating on the concept of NVMe HDDs, like it somehow takes something away from other NVMe drives…
                    It does. HDDs have long initialization (spin-up) times, for example, and these have to be accounted for by the hardware, the protocol, and the driver. HDDs are also bulky devices requiring long cables, while SSDs can be connected over shorter lanes; this, too, has to be taken into account and affects signalling. Hardware often has flaws that need to be corrected in software, in the driver, sometimes to the point where it needs lists of known good/bad devices, workarounds, quirks, etc. The point is, the more diverse the hardware you try to connect to an interface, the more you need to account for that diversity, and this begins to water the interface down. Dragging HDDs, for which there is already a dedicated interface (SATA/SAS), onto NVMe, an interface designed specifically to exploit the unique properties of SSDs (very low latency, high transfer rates), is asking for trouble, when we already know that HDDs will not suddenly get any faster.
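
                    To make the quirk-list point concrete, here is a purely hypothetical sketch of the kind of table a driver accumulates. None of these IDs or flags come from a real driver:

                    Code:
                    # Hypothetical sketch of a driver's per-device quirk table;
                    # the IDs and flags are made up for illustration only.
                    QUIRKS = {
                        # (vendor_id, model_prefix): workaround flags
                        (0x1B96, "ExampleSSD"): {"NO_DEEPEST_POWER_STATE"},
                        (0x1234, "ExampleHDD"): {"LONG_SPINUP_TIMEOUT"},
                    }

                    def quirks_for(vendor_id: int, model: str) -> set:
                        for (vid, prefix), flags in QUIRKS.items():
                            if vid == vendor_id and model.startswith(prefix):
                                return flags
                        return set()

                    # A slow-spinning HDD would need a longer ready timeout at probe:
                    assert "LONG_SPINUP_TIMEOUT" in quirks_for(0x1234, "ExampleHDD-16T")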



                    • #50
                      Originally posted by sdack View Post
                      I'll stop belittling you once you let the comments of others stand on their own. But as long as you quote comments piece by piece and pull them out of context to create meaningless tangents just to get your word in,
                      I quote the way I do so that it's clear what part I'm responding to. If I take something out of context, it's not intentional and it's your right to call that out.

                      Originally posted by sdack View Post
                      Frankly, you seem to have some inferiority complex when you do this, but I do not mean to judge.
                      Feedback noted. Thanks, I guess.

                      Originally posted by sdack View Post
                      You then have not answered why HDDs need to be on NVMe. Whether one can connect an HDD to NVMe was not the question; I am asking specifically about the necessity.
                      I'm not exactly a proponent of the move, but my understanding is that the relevant parties want to imbue HDDs with features from NVMe, to unify their software stack, and probably also to simplify their hardware. I can understand not wanting to go through ratifying another round of updates to SAS and getting all the necessary vendors to roll those changes out in their hardware/firmware/drivers when those features are already in NVMe.

                      BTW, the NVMe spec is now so large they recently had to break it up.


                      Originally posted by sdack View Post
                      Or we could try to connect HDDs to the DDR5 interface and see if this makes HDDs any faster.
                      The one thing we can probably say is that it's not about speed. The parties driving this probably don't use hybrid HDDs, and 12 Gbps SAS is plenty of bandwidth for mechanical hard drives.
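
                      A quick back-of-the-envelope check (the ~280 MB/s figure is my assumption for a fast 7200 RPM drive's peak sequential rate, not a measured number):

                      Code:
                      # Why 12 Gbps SAS is not the bottleneck for a mechanical HDD.
                      sas12_mbps = 12e9 * (8 / 10) / 8 / 1e6  # 8b/10b -> ~1200 MB/s usable
                      hdd_mbps = 280                          # assumed peak sequential rate

                      print(f"SAS 12 Gbps usable: ~{sas12_mbps:.0f} MB/s, "
                            f"~{sas12_mbps / hdd_mbps:.1f}x a fast HDD")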

