NVMe HDD Demoed At Open Compute Project Summit


  • #31
    Originally posted by coder
    Given that you can just use a little PS/2 -> USB adapter dongle, that wouldn't be the only reason.
    I'm talking about the environmental aspect. Between buying an extra dongle and choosing a compatible motherboard, the compatible motherboard still wins.



    • #32
      Originally posted by coder
      The main thing that bugs me about this is it's just taking us further down the path of bifurcating consumer and server technologies. Like the SSD ruler form factor and the SXM and OAM module form factors for GPU/compute-accelerators.

      I lament the waning days of being able to scrounge parts from decommissioned servers and slap them in my home rig. That's how I got the 10 Gig Ethernet card and the datacenter-grade SSD that I'm using.

      Okay, for hard drives, I'd only buy new. However, if the enterprise market branches off to using an interface we don't have in PCs or NAS boxes, then we lose the option to get enterprise-class drives for them. I'm currently running WD Gold-series, in my fileserver.
      I agree with you, but this has been the norm for 30+ years.

      In the 90s era of ISA and old-school IDE drives (40-pin cables), SCSI was effectively enterprise/server-only stuff.

      After that, there was SAS and Fibre Channel. Even CPU form factors are different for servers (think EPYC and Xeon).



      • #33
        Am I just undercaffeinated, or are we discussing spinning rust with an NVMe interface?



        • #34
          Originally posted by partcyborg
          I agree with you, but this has been the norm for 30+ years.
          I disagree. What I see happening is that desktops and servers actually grew a little closer, before they started moving apart, again.

          For instance, servers did jump on the PCI 66 MHz/64-bit bandwagon (and EISA, before that). However, PCIe did a lot to unwind that.

          Originally posted by partcyborg
          In the 90s era of ISA and old-school IDE drives (40-pin cables), SCSI was effectively enterprise/server-only stuff.

          After that, there was SAS and Fibre Channel.
          SSDs killed the market for high-speed HDDs. And if you're using 7200 RPM or below, there's not so much benefit to using anything other than SATA.
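
          As a back-of-the-envelope check on that claim (the figures below are illustrative assumptions, not measurements), even an optimistic sequential rate for a 7200 RPM drive sits comfortably under SATA 3's ceiling:

```python
# Illustrative figures (assumed, not measured): SATA 3's usable ceiling
# vs. an optimistic outer-track rate for a modern 7200 RPM HDD.
SATA3_MBPS = 600        # ~600 MB/s usable after encoding overhead
HDD_7200_MBPS = 280     # generous sequential rate for a fast 7200 RPM drive

headroom = SATA3_MBPS / HDD_7200_MBPS
print(f"SATA 3 leaves ~{headroom:.1f}x headroom over a fast 7200 RPM HDD")
```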

          Originally posted by partcyborg
          Even CPU form factors are different for servers (think EPYC and Xeon).
          When did x86 server CPUs get a different socket? I'm pretty sure it was the late 2000s.

          Sure, registered memory has been a thing, for quite a while. However, workstations often support it. Now, servers are moving towards pushing memory pools onto CXL devices.

          Already, you could open up an OCP box and find very little that could be used in a desktop PC. This situation is only set to get worse.



          • #35
            Originally posted by WonkoTheSaneUK
            Am I just undercaffeinated, or are we discussing spinning rust with an NVMe interface?
            Eh, people latch onto the speed mismatch, but can't you just implement it with a single x1 lane? Or maybe use an independent, second lane for management?

            And it's not like it has to be PCIe 4.0 or 5.0. I'm pretty sure you can even run NVMe at PCIe 1.0 speeds.

            I think the main selling point is that NVMe is under active development and now has more features than SAS or SATA.
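
            For scale, here is a rough sketch using the standard per-generation transfer rates and encoding overheads (treat the results as approximations): even a single Gen 1 lane offers roughly HDD-class bandwidth.

```python
# Approximate usable bandwidth of one PCIe lane per generation.
# Gen 1/2 use 8b/10b encoding; Gen 3+ use 128b/130b.
GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}

def lane_mb_per_s(gen: int) -> float:
    eff = 8 / 10 if gen <= 2 else 128 / 130
    return GT_PER_S[gen] * eff * 1000 / 8  # GT/s -> MB/s after encoding

for gen in GT_PER_S:
    print(f"PCIe {gen}.0 x1: ~{lane_mb_per_s(gen):.0f} MB/s per direction")
```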



            • #36
              Originally posted by coder
              If you're talking about discontinuing HDDs in favor of SSDs, we're not there yet. Modern SSDs don't have the data retention span of HDDs. HDDs also have better GB/$ + peak capacity. Why else do you think cloud providers and hyperscalers still buy them?
              Tell us something we do not already know!

              None of your tangents explains why HDDs have to be added to the NVMe spec and cannot stay on SATA and SAS. It will water down the NVMe spec, the drivers, and other parts of the operating systems when two very distinct technologies get tossed together again. We did split them up for a good reason. Why else did you think we did this?

              HDDs are separate physical devices with inherently long connections and the need for a separate power supply (5 V/12 V). SSDs do not have this requirement and, as already discussed, will only move closer to the CPUs in the future. SSDs present an opportunity to lower the power draw and to allow faster signaling over a dedicated bus. Why drag HDDs into this development?

              By the way, since you have linked to Seagate's roadmaps, have a look at Nimbus. They are selling 100TB SSDs in 3.5" form factor with a 5-year warranty just like Seagate's current enterprise drives. So much for roadmaps.
              Last edited by sdack; 11 November 2021, 06:07 PM.



              • #37
                Originally posted by coder
                The main thing that bugs me about this is it's just taking us further down the path of bifurcating consumer and server technologies. Like the SSD ruler form factor and the SXM and OAM module form factors for GPU/compute-accelerators.

                I lament the waning days of being able to scrounge parts from decommissioned servers and slap them in my home rig. That's how I got the 10 Gig Ethernet card and the datacenter-grade SSD that I'm using.

                Okay, for hard drives, I'd only buy new. However, if the enterprise market branches off to using an interface we don't have in PCs or NAS boxes, then we lose the option to get enterprise-class drives for them. I'm currently running WD Gold-series, in my fileserver.
                I watched a pretty interesting talk from an angry sysadmin that consumer PCs should not be servers. He had a couple good arguments but I think his best one was that consumer RAM tries to hide faults from you until it outright fails for seemingly no reason, whereas with more server-specific hardware it'll report even the most minor issue so it can be replaced ASAP. Not sure what that talk was named but I wish I could find it.



                • #38
                  Originally posted by Ironmask
                  I watched a pretty interesting talk from an angry sysadmin that consumer PCs should not be servers. He had a couple good arguments but I think his best one was that consumer RAM tries to hide faults from you until it outright fails for seemingly no reason, whereas with more server-specific hardware it'll report even the most minor issue so it can be replaced ASAP. Not sure what that talk was named but I wish I could find it.
                  Servers can have 16 or more memory modules socketed, while the average PC only sees 2-4 modules. The risk of a memory failure increases with the number of memory modules and is several times higher for servers than for an average PC. This is why server memory technology allows for error detection, correction, and even hot-swapping, so that servers can reach their required uptime targets, while PCs have much lower uptime requirements. These added features come with extra costs, and in order to cut costs and offer affordable PCs, manufacturers simply leave them out.
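
                  The scaling argument can be sketched with simple probability (the 2% per-module annual failure rate below is an assumed figure purely for illustration):

```python
# If each DIMM fails independently with probability p per year, the chance
# that at least one of n DIMMs fails grows quickly with n.
p_module = 0.02  # assumed annual failure probability per DIMM

def p_any_failure(n_modules: int) -> float:
    return 1 - (1 - p_module) ** n_modules

for n in (2, 4, 16, 32):
    print(f"{n:2d} DIMMs: {p_any_failure(n):.1%} chance of at least one failure per year")
```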

                  Having different technologies for seemingly the same things often comes down to the principle of "divide and conquer" (when it is not pointless competition creating interfaces for the exact same thing ...). By creating separate interfaces for different technologies, we can target specific goals and so save cost, lower power consumption, reduce latency, increase speed, simplify protocols, allow longer connections, more connectors, and so on.

                  Dragging HDD technology into it when we just got away from it has less to do with advancing cutting-edge technology and more to do with two old "Silicon Valley" dinosaurs teaming up to retain shareholder value.
                  Last edited by sdack; 11 November 2021, 12:41 PM.



                  • #39
                    Originally posted by Ironmask

                    consumer RAM tries to hide faults from you until it outright fails for seemingly no reason, whereas with more server-specific hardware it'll report even the most minor issue so it can be replaced ASAP
                    Completely unrelated, but that made me think of JavaScript vs Rust.



                    • #40
                      Originally posted by coder
                      Why do you say that? It's cheap, it works, and it's absolutely fine for HDDs and even most SSDs.

                      There's no way NVMe cables are going to be as cheap, and I wouldn't be surprised if they weren't as sturdy or reliable, either.
                      Some people need more high-speed storage than one or two NVMe drives on the mobo can deliver. Modern NVMe drives are in the 7 GB/s range, whereas SATA 3 is 600 MB/s. If we get a better connector, we can have spinning hard drives with large on-board caches and larger, faster SSDs than what the current NVMe form factor provides for.
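
                      Quick arithmetic behind those figures (nominal peak rates as quoted above, not sustained throughput):

```python
# Nominal peaks: a fast PCIe 4.0 x4 NVMe drive vs. SATA 3's ceiling.
NVME_GEN4_MBPS = 7000
SATA3_MBPS = 600

ratio = NVME_GEN4_MBPS / SATA3_MBPS
print(f"A top PCIe 4.0 NVMe drive is ~{ratio:.0f}x SATA 3's ceiling")
```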

