NVMe HDD Demoed At Open Compute Project Summit

  • sdack
    replied
    Originally posted by coder
    If you're talking about discontinuing HDDs in favor of SSDs, we're not there yet. Modern SSDs don't have the data retention span of HDDs. HDDs also have better GB/$ + peak capacity. Why else do you think cloud providers and hyperscalers still buy them?
    Tell us something we do not already know!

    None of your tangents explains why HDDs have to be added to the NVMe spec and cannot stay on SATA and SAS. It will water down the NVMe spec, the drivers, and other parts of the operating systems when two very distinct technologies get tossed together again. We did split them up for a good reason. Why else did you think we did this?

    HDDs are separate physical devices with inherently long connections and the need for a separate power supply (5V/12V). SSDs do not have this requirement and, as already discussed, will only move closer to the CPUs in the future. SSDs present an opportunity to lower power draw and to use faster signaling over a dedicated bus. Why drag HDDs into this development?

    By the way, since you have linked to Seagate's roadmaps, have a look at Nimbus. They are selling 100 TB SSDs in a 3.5" form factor with a 5-year warranty, just like Seagate's current enterprise drives. So much for roadmaps.
    Last edited by sdack; 11 November 2021, 06:07 PM.

  • coder
    replied
    Originally posted by WonkoTheSaneUK
    Am I just undercaffeinated, or are we discussing spinning rust with an NVMe interface?
    Eh, people latch onto the speed mismatch, but couldn't you just implement it with a single x1 lane? Or maybe use an independent second lane for management?

    And it's not like it has to be PCIe 4.0 or 5.0. I'm pretty sure you can even run NVMe at PCIe 1.0 speeds.

    I think the main selling point is that NVMe is under active development and now has more features than SAS or SATA.
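
    As a rough sanity check on the bandwidth argument (a back-of-the-envelope sketch using nominal line rates, nothing taken from the demo itself), even a narrow, old-generation PCIe link lands in the same ballpark as, or well beyond, what a hard drive can stream sequentially:

```python
# Back-of-the-envelope: usable bandwidth of a single PCIe lane vs. a fast HDD.
# The line rates and the HDD figure are nominal/approximate, for illustration only.

PCIE_LANES = {
    "PCIe 1.0 x1": (2.5, 8 / 10),     # 2.5 GT/s, 8b/10b encoding
    "PCIe 3.0 x1": (8.0, 128 / 130),  # 8 GT/s, 128b/130b encoding
    "PCIe 4.0 x1": (16.0, 128 / 130),
}

HDD_SEQ_MBPS = 280  # ballpark outer-track sequential rate of a modern 7200 RPM drive

for gen, (gtps, encoding) in PCIE_LANES.items():
    # Convert the transfer rate to payload MB/s, ignoring packet/protocol overhead.
    usable_mbps = gtps * encoding * 1000 / 8
    print(f"{gen}: ~{usable_mbps:.0f} MB/s usable, "
          f"~{usable_mbps / HDD_SEQ_MBPS:.1f}x a {HDD_SEQ_MBPS} MB/s HDD")
```

    Even a gen-1 x1 link roughly matches a fast HDD's sequential rate, and anything newer leaves plenty of headroom, including for a separate management lane.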

  • coder
    replied
    Originally posted by partcyborg
    I agree with you, but this has been the norm for 30+ years.
    I disagree. What I see is that desktops and servers actually grew a little closer before they started moving apart again.

    For instance, servers did jump on the PCI 66 MHz/64-bit bandwagon (and EISA, before that). However, PCIe did a lot to unwind that.

    Originally posted by partcyborg
    In the '90s era of ISA and old-school IDE drives (40-pin cables), SCSI was effectively enterprise/server-only stuff.

    After that, there were SAS and Fibre Channel.
    SSDs killed the market for high-speed HDDs. And if you're using 7200 RPM or below, there's not so much benefit to using anything other than SATA.

    Originally posted by partcyborg
    Even CPU form factors are different for servers (think EPYC and Xeon).
    When did x86 server CPUs get a different socket? I'm pretty sure it was the late 2000s.

    Sure, registered memory has been a thing for quite a while, but workstations often support it as well. Now servers are moving towards pushing memory pools onto CXL devices.

    Already, you can open up an OCP box and find very little that could be used in a desktop PC. That situation is only set to get worse.

  • WonkoTheSaneUK
    replied
    Am I just undercaffeinated, or are we discussing spinning rust with an NVMe interface?

  • partcyborg
    replied
    Originally posted by coder
    The main thing that bugs me about this is it's just taking us further down the path of bifurcating consumer and server technologies. Like the SSD ruler form factor and the SXM and OAM module form factors for GPU/compute-accelerators.

    I lament the waning days of being able to scrounge parts from decommissioned servers and slap them in my home rig. That's how I got the 10 Gig Ethernet card and the datacenter-grade SSD that I'm using.

    Okay, for hard drives I'd only buy new. However, if the enterprise market branches off to using an interface we don't have in PCs or NAS boxes, then we lose the option of getting enterprise-class drives for them. I'm currently running WD Gold-series drives in my fileserver.
    I agree with you, but this has been the norm for 30+ years.

    In the '90s era of ISA and old-school IDE drives (40-pin cables), SCSI was effectively enterprise/server-only stuff.

    After that, there were SAS and Fibre Channel. Even CPU form factors are different for servers (think EPYC and Xeon).

  • billyswong
    replied
    Originally posted by coder
    Given that you can just use a little PS/2 -> USB adapter dongle, that wouldn't be the only reason.
    I am considering the environmental aspect. Between buying an extra dongle and choosing a compatible motherboard, the compatible motherboard still wins.

  • coder
    replied
    The main thing that bugs me about this is it's just taking us further down the path of bifurcating consumer and server technologies. Like the SSD ruler form factor and the SXM and OAM module form factors for GPU/compute-accelerators.

    I lament the waning days of being able to scrounge parts from decommissioned servers and slap them in my home rig. That's how I got the 10 Gig Ethernet card and the datacenter-grade SSD that I'm using.

    Okay, for hard drives I'd only buy new. However, if the enterprise market branches off to using an interface we don't have in PCs or NAS boxes, then we lose the option of getting enterprise-class drives for them. I'm currently running WD Gold-series drives in my fileserver.

  • coder
    replied
    Originally posted by billyswong
    Because somebody such as me still has perfectly functional PS/2 keyboards. They work, and we aren't throwing them away any time soon.
    Given that you can just use a little PS/2 -> USB adapter dongle, that wouldn't be the only reason.

  • coder
    replied
    Originally posted by sdack
    Seagate should just produce better SSDs instead of trying to push old technologies that draw too much power and needed to die yesterday.
    If you're talking about discontinuing HDDs in favor of SSDs, we're not there yet. Modern SSDs don't have the data retention span of HDDs. HDDs also have better GB/$ + peak capacity. Why else do you think cloud providers and hyperscalers still buy them?

    According to technology roadmaps from a couple of the big HDD manufacturers, HDDs are set to stay ahead of SSDs on capacity and price per GB for a while.
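
    To put the GB/$ point in concrete terms, here is a tiny sketch; the capacities and prices below are made-up placeholders for illustration, not figures from this thread or any vendor:

```python
# Illustrative $/TB comparison. Capacities and prices are assumed placeholders,
# not quotes from the thread or from any manufacturer.

drives = {
    "20 TB enterprise HDD":    {"capacity_tb": 20.0,  "price_usd": 500.0},
    "15.36 TB enterprise SSD": {"capacity_tb": 15.36, "price_usd": 1800.0},
}

for name, d in drives.items():
    print(f"{name}: ~${d['price_usd'] / d['capacity_tb']:.0f} per TB")
```

    With placeholder numbers like these, the HDD comes out several times cheaper per terabyte; the roadmaps are essentially about how long that gap persists.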
    Last edited by coder; 10 November 2021, 10:47 PM.

  • coder
    replied
    Originally posted by Ironmask
    Most motherboards don't have PS/2, but I still see some high-end modern ones come with it, which is kind of a shock. Maybe it's a government-required compatibility thing for military hardware? They're still using MS-DOS of all things, so it wouldn't surprise me.
    A lot of PS/2 KVM switches are still in use. I was using one that supported VGA + PS/2 or USB until I finally upgraded to HDMI/USB about 6 months ago. (Of course, I didn't use it to switch video for my main PCs/monitor; I only used the VGA for machines that mostly ran headless.)

    It does occur to me that there are fewer security exploits via PS/2 than USB. Remember the "BadUSB" exploit from a few years back? Plus, using a PS/2 keyboard & mouse would enable certain security-conscious agencies & businesses to put epoxy in all their PCs' USB ports.
    Last edited by coder; 10 November 2021, 10:40 PM.
