NVMe HDD Demoed At Open Compute Project Summit


  • coder
    replied
    Originally posted by MadeUpName View Post
    Some people need more high-speed storage than one or two NVMe drives on the mobo can deliver. Modern NVMe drives are in the 7 GB/s range, whereas SATA 3 tops out at 600 MB/s. If we get a better connector, we can have spinning hard drives with large on-board caches, and larger, faster SSDs than what the current NVMe form factor provides for.
    The tradeoff is that U.2 chews up more real estate on motherboards, the cables are bulky and block airflow in your case, and they're quite likely going to remain more expensive.

    So, if NVMe completely replaces SATA, I think there would be some real downsides to that. However, I can see your point that it enables bigger drives and HDDs with Optane/NAND cache.


  • coder
    replied
    Originally posted by sdack View Post
    The added features of server memory then come with extra costs, and in order to cut costs and offer affordable PCs, PCs simply lack these features.
    Registered DIMMs tend to be a little more expensive and add a little latency. It's the usual sort of tradeoff you see for more scalable technologies.

    Unfortunately, most consumer platforms don't support RDIMMs. To my knowledge, only Intel's workstation CPUs tend to offer both UDIMM and RDIMM support.

    BTW, I once bought an RDIMM before I knew the difference. I was able to eBay it quite easily. I think Opterons required them, luckily for me.


  • coder
    replied
    Originally posted by Ironmask View Post
    I watched a pretty interesting talk by an angry sysadmin arguing that consumer PCs should not be used as servers. He had a couple of good arguments, but I think his best one was that consumer RAM tries to hide faults from you until it outright fails for seemingly no reason, whereas more server-specific hardware will report even the most minor issue so it can be replaced ASAP. Not sure what that talk was named, but I wish I could find it.
    Proper ECC RAM support will log any correctable errors. If there's an uncorrectable error, you get a log entry and the affected process gets killed (a SIGBUS, on Linux). I always try to use ECC RAM (and boards + CPUs that support it), even in desktops*. I have better things to do than deal with problems caused by bad RAM.

    * It's a shame that laptops with ECC support seem exceedingly rare.
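
    For anyone curious: on Linux, the EDAC subsystem exposes those error counters through sysfs. A minimal sketch for reading them, assuming the EDAC driver for your memory controller is loaded (paths can vary by kernel and platform):

        import glob
        import os

        def read_count(mc_dir, name):
            """Read one EDAC counter file, e.g. ce_count or ue_count."""
            with open(os.path.join(mc_dir, name)) as f:
                return int(f.read().strip())

        # One directory per memory controller: /sys/devices/system/edac/mc/mc0, mc1, ...
        for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc*")):
            print(os.path.basename(mc),
                  "corrected:", read_count(mc, "ce_count"),
                  "uncorrectable:", read_count(mc, "ue_count"))

    A steadily climbing ce_count is exactly the kind of early warning that talk was describing.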


  • coder
    replied
    Originally posted by sdack View Post
    We did split them up for a good reason. Why else did you think we did this?
    Really? And what's this "we" business? Were you personally involved in the NVMe standard, or why do you seem to be speaking on its behalf?

    Fair questions, I suppose. I believe NVMe was initially devised as a way to attain higher speeds and to simplify both the devices and the host's hardware. While HDDs don't require higher speeds than SATA 3.0 or 12 Gbps SAS, the benefits of simplifying the host, plus support for newer NVMe features without having to port them to SATA/SAS, seem reason enough.

    The reason it wasn't done sooner is that servers now have more PCIe lanes/capacity than before, PCIe switches are cheaper (I think), and the NVMe spec has had many years' worth of features added to it.

    Originally posted by sdack View Post
    HDDs are separate physical devices with inherently long connections and the need for a separate power supply (5 V/12 V). SSDs do not have this requirement and, as already discussed, will only move closer to the CPUs in the future. SSDs present an opportunity to lower the power draw and for faster signaling over a dedicated bus. Why drag HDDs into this development?
    None of these are reasons not to add NVMe support to HDDs, though. Since PCIe is packet-switched, it doesn't suffer the kind of bus-contention problems, where one slow device ties up the whole bus, that earlier shared-bus interconnects had.

    Originally posted by sdack View Post
    By the way, since you have linked to Seagate's roadmaps, have a look at Nimbus. They are selling 100TB SSDs in 3.5" form factor with a 5-year warranty just like Seagate's current enterprise drives. So much for roadmaps.
    Interesting. Two issues, though.
    1. They don't specify power-off data retention. It's not going to be anywhere near 5 years, making this unsuitable for cold or nearline storage.
    2. Do you have any idea how much they cost? I do, because they conveniently posted up a price list! https://nimbusdata.com/products/exadrive/pricing/
    $40k buys a lot of HDD capacity!!
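
    For a rough sense of scale (the ~$25/TB enterprise HDD price below is an assumption for illustration, not a quoted figure):

        # Nimbus ExaDrive: 100 TB for ~$40,000 (per their price list)
        ssd_usd, ssd_tb = 40_000, 100
        hdd_usd_per_tb = 25  # assumed enterprise HDD street price per TB

        print(f"SSD: ${ssd_usd / ssd_tb:.0f}/TB")                       # ~$400/TB
        print(f"Same $40k in HDDs: {ssd_usd / hdd_usd_per_tb:.0f} TB")  # ~1600 TB

    So roughly 16x the capacity for the same money, before factoring in power or density.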

    What puzzles me most about your apparent position is this: if HDDs didn't still have compelling advantages, why isn't the industry in complete collapse? Obviously, it's not. Cloud providers and hyperscalers continue consuming HDDs at a record pace. These guys aren't dumb.

    Also, why do you belittle anyone who disagrees with you? If you think you have a good case, why can't you just put it forth and let it stand on its own?


  • coder
    replied
    Originally posted by uxmkt View Post
    In effect, home was always different from enterprise even before 2000; servers were a different architecture altogether (gross oversimplification, granted).
    Interesting, but note where I said "x86 servers". I am specifically talking about x86 servers and desktops being able to share components. It's not relevant to my point if Sun UltraSPARC or some VAX shit used bus cards that were incompatible with PCs.


  • uxmkt
    replied
    Originally posted by coder View Post
    When did x86 server CPUs get a different socket? I'm pretty sure it was the late 2000s.
    In effect, home was always different from enterprise even before 2000; servers were a different architecture altogether (gross oversimplification, granted).


  • MadeUpName
    replied
    Originally posted by coder View Post
    Why do you say that? It's cheap, it works, and it's absolutely fine for HDDs and even most SSDs.

    There's no way NVMe cables are going to be as cheap, and I wouldn't be surprised if they weren't as sturdy or reliable, either.
    Some people need more high-speed storage than one or two NVMe drives on the mobo can deliver. Modern NVMe drives are in the 7 GB/s range, whereas SATA 3 tops out at 600 MB/s. If we get a better connector, we can have spinning hard drives with large on-board caches, and larger, faster SSDs than what the current NVMe form factor provides for.
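
    To put those peak numbers side by side (theoretical interface maximums; real-world speeds vary):

        # Rough, rounded interface peaks
        sata3_mb_s = 600    # SATA 3.0 payload rate after encoding overhead
        nvme4_mb_s = 7000   # typical PCIe 4.0 x4 NVMe sequential read

        print(f"NVMe vs. SATA 3: {nvme4_mb_s / sata3_mb_s:.1f}x faster")
        # Time to read 1 TB end to end at each interface's peak:
        mb_per_tb = 1_000_000
        print(f"SATA 3: {mb_per_tb / sata3_mb_s / 60:.0f} min/TB, "
              f"NVMe: {mb_per_tb / nvme4_mb_s / 60:.1f} min/TB")

    That's roughly 28 minutes per TB over SATA 3 versus about 2.4 minutes over a Gen4 NVMe link.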


  • LinAGKar
    replied
    Originally posted by Ironmask View Post

    consumer RAM tries to hide faults from you until it outright fails for seemingly no reason, whereas more server-specific hardware will report even the most minor issue so it can be replaced ASAP
    Completely unrelated, but that made me think of JavaScript vs Rust.


  • sdack
    replied
    Originally posted by Ironmask View Post
    I watched a pretty interesting talk by an angry sysadmin arguing that consumer PCs should not be used as servers. He had a couple of good arguments, but I think his best one was that consumer RAM tries to hide faults from you until it outright fails for seemingly no reason, whereas more server-specific hardware will report even the most minor issue so it can be replaced ASAP. Not sure what that talk was named, but I wish I could find it.
    Servers can have 16 or more memory modules socketed, while the average PC only has 2-4. The risk of a memory failure increases with the number of modules and is several times higher for a server than for an average PC. This is why server memory technology allows for error detection, correction, and even hot-swapping, so that servers can reach their required uptime targets, while PCs have much lower uptime requirements. The added features of server memory then come with extra costs, and in order to cut costs and offer affordable PCs, PCs simply lack these features.
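
    To put a rough number on that scaling (the per-module failure rate below is made up purely for illustration):

        # If each module independently fails with probability p per year,
        # a machine with n modules sees at least one failure with
        # probability 1 - (1 - p)**n.
        p = 0.02  # assumed 2% annual failure rate per DIMM (illustrative)

        for n in (2, 4, 16, 32):
            print(f"{n:2d} modules -> {1 - (1 - p)**n:.1%} chance of a failure per year")

    With these made-up numbers, a 2-DIMM desktop sees about a 4% annual chance, while a 32-DIMM server is near 48%, which is why servers cannot simply hope for the best.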

    Having different technologies for seemingly the same things often comes down to the principle of "divide and conquer" (when it is not stupid competition creating interfaces for the exact same thing ...). By creating separate interfaces for different technologies, we can target specific goals and so save cost, lower power consumption, reduce latency, increase speed, simplify protocols, allow longer connections and more connectors, and so on.

    Dragging HDD technology into it, when we have only just gotten away from it, has less to do with advancing cutting-edge technology and more to do with two old Silicon Valley dinosaurs teaming up to retain shareholder value.
    Last edited by sdack; 11 November 2021, 12:41 PM.


  • Ironmask
    replied
    Originally posted by coder View Post
    The main thing that bugs me about this is that it's just taking us further down the path of bifurcating consumer and server technologies. Like the SSD ruler form factor, and the SXM and OAM module form factors for GPUs/compute accelerators.

    I lament the waning days of being able to scrounge parts from decommissioned servers and slap them in my home rig. That's how I got the 10 Gig Ethernet card and datacenter-grade SSD that I'm using.

    Okay, for hard drives I'd only buy new. However, if the enterprise market branches off to an interface we don't have in PCs or NAS boxes, then we lose the option of enterprise-class drives for them. I'm currently running WD Gold-series drives in my fileserver.
    I watched a pretty interesting talk by an angry sysadmin arguing that consumer PCs should not be used as servers. He had a couple of good arguments, but I think his best one was that consumer RAM tries to hide faults from you until it outright fails for seemingly no reason, whereas more server-specific hardware will report even the most minor issue so it can be replaced ASAP. Not sure what that talk was named, but I wish I could find it.
