NVMe HDD Demoed At Open Compute Project Summit
-
Originally posted by coder View Post:
The main thing that bugs me about this is that it's just taking us further down the path of bifurcating consumer and server technologies. Like the SSD ruler form factor and the SXM and OAM module form factors for GPUs/compute accelerators.
I lament the waning days of being able to scrounge parts from decommissioned servers and slap them in my home rig. That's how I got the 10 Gig Ethernet card and datacenter-grade SSD that I'm using.
Okay, for hard drives I'd only buy new. However, if the enterprise market branches off to an interface we don't have in PCs or NAS boxes, then we lose the option of getting enterprise-class drives for them. I'm currently running WD Gold-series drives in my fileserver.
In the 90s era of ISA and old-school IDE drives (40-pin cables), SCSI was effectively enterprise/server-only stuff.
After that, there was SAS and Fibre Channel. Even CPU form factors are different for servers (think Epyc and Xeon).
- Likes 2
Comment
-
Originally posted by partcyborg View Post:
I agree with you, but this has been the norm for 30+ years.
For instance, servers did jump on the PCI 66 MHz/64-bit bandwagon (and EISA, before that). However, PCIe did a lot to unwind that.
Originally posted by partcyborg View Post:
In the 90s era of ISA and old-school IDE drives (40-pin cables), SCSI was effectively enterprise/server-only stuff. After that, there was SAS and Fibre Channel.
Originally posted by partcyborg View Post:
Even CPU form factors are different for servers (think Epyc and Xeon).
Sure, registered memory has been a thing for quite a while. However, workstations often support it. Now, servers are moving towards pushing memory pools onto CXL devices.
Already, you could open up an OCP box and find very little that could be used in a desktop PC. This situation is only set to get worse.
- Likes 1
Comment
-
Originally posted by WonkoTheSaneUK View Post:
Am I just undercaffeinated, or are we discussing spinning rust with an NVMe interface?
And it's not like it has to be PCIe 4.0 or 5.0. I'm pretty sure you can even run NVMe at PCIe 1.0 speeds.
I think the main selling point is that NVMe is under active development and now has more features than SAS or SATA.
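To put the PCIe 1.0 remark in numbers, here is a small Python sketch (not from the thread, just the standard per-lane figures: raw transfer rate times encoding efficiency, 8b/10b for Gen 1-2 and 128b/130b for Gen 3+) showing that even a single Gen 1 lane comfortably exceeds what a hard drive can stream sequentially:

```python
# Approximate usable per-lane PCIe throughput by generation.
# Even PCIe 1.0 x1 (~250 MB/s) outruns a modern HDD's ~250 MB/s peak.
GENS = {
    "1.0": (2.5, 8 / 10),     # (GT/s, encoding efficiency)
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def lane_mb_per_s(gen: str) -> float:
    """Usable payload bandwidth of one lane in MB/s (1 GT/s carries 1 Gb/s raw)."""
    gts, eff = GENS[gen]
    return gts * eff * 1000 / 8  # Gb/s -> MB/s

for gen in GENS:
    print(f"PCIe {gen} x1: ~{lane_mb_per_s(gen):.0f} MB/s")
# PCIe 1.0 x1: ~250 MB/s ... PCIe 5.0 x1: ~3938 MB/s
```

So an NVMe HDD could negotiate a single slow lane and still never be bandwidth-limited by the link.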
Comment
-
Originally posted by coder View Post:
If you're talking about discontinuing HDDs in favor of SSDs, we're not there yet. Modern SSDs don't have the data-retention span of HDDs. HDDs also have better GB/$ and peak capacity. Why else do you think cloud providers and hyperscalers still buy them?
None of your tangents explains why HDDs have to be added to the NVMe spec and cannot stay on SATA and SAS. It will water down the NVMe spec, the drivers, and other parts of the operating system when two very distinct technologies get tossed together again. We split them up for a good reason. Why else do you think we did this?
HDDs are separate physical devices with inherently long connections and the need for a separate power supply (5 V/12 V). SSDs do not have this requirement and, as already discussed, will only move closer to the CPUs in the future. SSDs present an opportunity to lower power draw and allow faster signaling over a dedicated bus. Why drag HDDs into this development?
By the way, since you have linked to Seagate's roadmaps, have a look at Nimbus. They are selling 100 TB SSDs in a 3.5" form factor with a 5-year warranty, just like Seagate's current enterprise drives. So much for roadmaps.
Last edited by sdack; 11 November 2021, 06:07 PM.
- Likes 1
Comment
-
Originally posted by Ironmask View Post:
I watched a pretty interesting talk from an angry sysadmin arguing that consumer PCs should not be servers. He had a couple of good arguments, but I think his best one was that consumer RAM tries to hide faults from you until it outright fails for seemingly no reason, whereas server-specific hardware will report even the most minor issue so it can be replaced ASAP. I'm not sure what the talk was named, but I wish I could find it.
Having different technologies for seemingly the same things often comes down to the principle of "divide and conquer" (when it is not pointless competition creating interfaces for the exact same thing ...). By creating separate interfaces for different technologies, we can target specific goals and so save cost, lower power consumption, reduce latency, increase speed, simplify protocols, allow longer connections, have more connectors, and so on.
Dragging HDD technology into it when we have just gotten away from it has less to do with the advancement of cutting-edge technology and more to do with two old "Silicon Valley" dinosaurs teaming up to retain shareholder value.
Last edited by sdack; 11 November 2021, 12:41 PM.
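As a side note to the ECC-reporting point quoted above: on Linux, ECC memory errors surface through the kernel's EDAC sysfs counters, which is exactly the visibility consumer non-ECC RAM lacks. A minimal Python sketch for reading them (the sysfs paths are the standard EDAC layout; the helper name is my own):

```python
from pathlib import Path

def memory_error_counts(mc_root: str = "/sys/devices/system/edac/mc") -> dict:
    """Return {controller: (corrected, uncorrected)} per memory controller."""
    counts = {}
    root = Path(mc_root)
    if not root.is_dir():
        return counts  # no EDAC support / no ECC memory: errors stay invisible
    for mc in sorted(root.glob("mc[0-9]*")):
        ce = int((mc / "ce_count").read_text())  # corrected error count
        ue = int((mc / "ue_count").read_text())  # uncorrected error count
        counts[mc.name] = (ce, ue)
    return counts

if __name__ == "__main__":
    for mc, (ce, ue) in memory_error_counts().items():
        print(f"{mc}: {ce} corrected, {ue} uncorrected")
```

A rising `ce_count` is the early-warning signal: the DIMM still works, but it's time to schedule a replacement.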
Comment
-
Originally posted by coder View Post:
Why do you say that? It's cheap, it works, and it's absolutely fine for HDDs and even most SSDs.
There's no way NVMe cables are going to be as cheap, and I wouldn't be surprised if they weren't as sturdy or reliable, either.
- Likes 1
Comment