Linux 6.13 Rolling Out NVMe 2.1 Support & NVMe Rotational Media

  • elvis
    replied
    Originally posted by davidbepo View Post
    Further info on this: the biggest HDD I could find is 24TB in 3.5", while the biggest SSD is 61TB in 2.5" U.2, so the difference is actually bigger than I anticipated.
    Seagate has a 32TB drive available now:
    Seagate’s Exos M 3+ hard drive, boasting breakthrough 3TB per platter density, delivers extraordinary storage capacity and power efficiency. Engineered on proven technology, it’s crafted to power AI and data-intensive applications in cutting-edge cloud and enterprise environments.


    WD has announced 40TB drives coming soon:
    (linked: MarketBeat's coverage of Western Digital's Q3 2023 earnings announcement on 10/30/2023, with the press release and conference call transcript)


    These are SMR drives, but again the application here is mostly "nearline storage" - i.e. long-term file storage, S3 objects, backup data, etc., with faster access / lower recall latency than tape. SMR, like spindle generally, is constantly called out as a dead-end technology, but the reality is that there's massive commercial demand for it because of its price-to-capacity ratio.

    Yes, flash media is beginning to exceed spindle in terms of density and maximum size per drive, but the price of those huge flash devices is often an order of magnitude or two larger. Hyperscaler problems are multi-dimensional: how many petabytes can you squeeze into a given space for a given dollar cost and a given wattage draw?

    Flash is absolutely gaining ground here, but spindle still exists precisely because it's cheaper with respect to these specific problems. When those ratios flip, I have no idea. Every time someone announces the upper bounds of what spindle can achieve, some research group somewhere proves them wrong with another bump in density. Who knows where that will end.
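
    To put rough numbers on that multi-dimensional trade-off, here is a small Python sketch of the petabytes-per-rack, dollars-per-petabyte and watts-per-petabyte calculation. Every figure in it (drive prices, power draw, bays per rack) is a hypothetical placeholder, not a vendor quote - the point is the shape of the comparison, not the result:

        # All figures below are illustrative placeholders -- substitute real vendor numbers.
        drives = {
            # capacity (TB), unit price (USD), typical power draw (W)
            "nearline_smr_hdd": {"tb": 32, "usd": 600, "watts": 9},
            "qlc_flash_u2_ssd": {"tb": 61, "usd": 8000, "watts": 16},
        }

        RACK_BAYS = 1000  # hypothetical: ~10 dense JBOD shelves of ~100 bays each per rack

        for name, d in drives.items():
            pb_per_rack = d["tb"] * RACK_BAYS / 1000        # petabytes per rack
            usd_per_pb = d["usd"] * 1000 / d["tb"]          # dollars per petabyte
            watts_per_pb = d["watts"] * 1000 / d["tb"]      # watts per petabyte stored
            print(f"{name}: {pb_per_rack:.0f} PB/rack, ${usd_per_pb:,.0f}/PB, {watts_per_pb:.0f} W/PB")

    Change any one of those placeholder inputs and the "winner" can flip, which is exactly the point: the ratios, not any single spec, decide the purchase.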

    Originally posted by davidbepo View Post
    Anyway, I maintain that the HDD is a dying technology, but I guess it has long enough left that this may be worth it, if only for hyperscalers.
    Everything is "dying tech" in the big picture. Spindles will endure as long as they meet all of the price/size/performance/power/density/cost ratios required by their customers. And "customers" look a lot different today, with the explosion of stuff in data centers. "Dying" is always relative, and often non-linear (vinyl records are seeing a resurgence - sometimes old things come back for strange reasons).

    I've spent close to 30 years in this career, and I've learned several times over never to count a particular technology out. Even when you think you've seen the last of it, someone somewhere will come out of the woodwork with enough commercial demand to see things revived back into production life. See languages like COBOL and FORTRAN, architectures like mainframes and IBM POWER, or technologies like InfiniBand and RDMA. Every time someone announces the death of these, I get another contract keeping them alive for another decade for someone, somewhere, often with lots of expensive commercial support from modern vendors.



  • davidbepo
    replied
    Originally posted by elvis View Post
    Tiered and hierarchical storage are still MASSIVE markets. I'm working for multiple customers who have single-digit petabytes of flash, double-digit petabytes of spindle, who-knows-how-much tape, and very large management tools that shuffle that data back and forth for end users.

    In the same way that tape has been "dying" for decades, spindle continues to offer a great cost point for middle-tier storage at huge volumes - and that's before you get into very large JBOD storage systems (including "zero watt" object-storage systems where individual disks can spin up and down on demand).

    And in terms of commercial volume - these are the biggest hyperscalers. AWS, Azure and GCP are all still buying spindle at enormous scale (and tape too!).

    Looking at consumer devices as a metric for what technology is "dying" is the wrong way to go about things in late 2024. Commercial viability is now very much in the hands of the huge players.
    Interesting. Do note I said dying, not dead. I know tape still exists at the ultra-high-capacity level, but HDDs will eventually get squeezed from the bottom by SSDs and from the top by tape, since both are inherently simpler and cheaper technologies.
    I don't know where the HDD/tape crossover sits, but the SSD/HDD one is around 1TB, and it's only going to rise as flash gets cheaper.
    Also important to note on HDDs dying: while far more expensive, the biggest SSDs already have more capacity than the biggest HDDs, so it is PURELY a price thing - SSDs offer both higher capacity and far higher capacity density (2.5" vs 3.5").
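
    As a worked example of that crossover claim (the $/TB figures, the HDD floor price and the drive dimensions below are assumptions chosen purely for illustration, not market data), a few lines of Python show both the price crossover and the 2.5" vs 3.5" capacity-density gap:

        # Assumed prices for illustration only.
        HDD_FLOOR_USD  = 40   # fixed cost of motor/heads/platters, paid even for a tiny drive
        HDD_USD_PER_TB = 15
        SSD_USD_PER_TB = 55

        # Capacity at which total drive prices are equal: floor + hdd_rate*C == ssd_rate*C
        crossover_tb = HDD_FLOOR_USD / (SSD_USD_PER_TB - HDD_USD_PER_TB)
        print(f"price crossover ~ {crossover_tb:.1f} TB (below this, the SSD is cheaper outright)")

        # Volumetric capacity density, using nominal drive envelopes (15 mm U.2 vs 3.5" HDD).
        def litres(w_mm, d_mm, h_mm):
            return w_mm * d_mm * h_mm / 1e6

        ssd_tb_per_litre = 61 / litres(70, 100, 15)    # 61 TB 2.5" U.2 SSD
        hdd_tb_per_litre = 32 / litres(102, 147, 26)   # 32 TB 3.5" HDD
        print(f"SSD: {ssd_tb_per_litre:.0f} TB/L vs HDD: {hdd_tb_per_litre:.0f} TB/L")

    With those made-up inputs the crossover lands near 1TB, matching the estimate above; real prices will move it around, but the structure of the calculation stays the same.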

    Further info on this: the biggest HDD I could find is 24TB in 3.5", while the biggest SSD is 61TB in 2.5" U.2, so the difference is actually bigger than I anticipated.

    Anyway, I maintain that the HDD is a dying technology, but I guess it has long enough left that this may be worth it, if only for hyperscalers.
    Last edited by davidbepo; 26 November 2024, 05:45 PM.



  • elvis
    replied
    Originally posted by davidbepo View Post
    As for the second point: introducing a compatibility-breaking change to a dying technology isn't exactly the best idea.
    Tiered and hierarchical storage are still MASSIVE markets. I'm working for multiple customers who have single-digit petabytes of flash, double-digit petabytes of spindle, who-knows-how-much tape, and very large management tools that shuffle that data back and forth for end users.

    In the same way that tape has been "dying" for decades, spindle continues to offer a great cost point for middle-tier storage at huge volumes - and that's before you get into very large JBOD storage systems (including "zero watt" object-storage systems where individual disks can spin up and down on demand).

    And in terms of commercial volume - these are the biggest hyperscalers. AWS, Azure and GCP are all still buying spindle at enormous scale (and tape too!).

    Looking at consumer devices as a metric for what technology is "dying" is the wrong way to go about things in late 2024. Commercial viability is now very much in the hands of the huge players.
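
    For anyone curious what "shuffling data back and forth" between tiers looks like in the simplest possible terms, here is a toy Python sketch of an age-based demotion policy. The mount points, age windows and policy are all made up for illustration - real HSM/tiering products are vastly more sophisticated:

        import os, shutil, time

        # Hypothetical tier mount points and age windows -- placeholders, not a product config.
        TIERS = [
            ("/mnt/flash",   7 * 86400),     # data touched within a week stays on flash
            ("/mnt/spindle", 365 * 86400),   # up to a year old: nearline spindle
            ("/mnt/archive", None),          # older still: staging towards tape / object store
        ]

        def target_tier(age_seconds):
            # The first tier whose age window still covers the file wins.
            for path, max_age in TIERS:
                if max_age is None or age_seconds < max_age:
                    return path
            return TIERS[-1][0]

        def sweep(tier_path):
            # Push anything that has aged out of this tier down to where it now belongs.
            for root, _dirs, files in os.walk(tier_path):
                for name in files:
                    src = os.path.join(root, name)
                    age = time.time() - os.stat(src).st_atime
                    dest_tier = target_tier(age)
                    if dest_tier != tier_path:
                        shutil.move(src, os.path.join(dest_tier, name))

        sweep("/mnt/flash")
        sweep("/mnt/spindle")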



  • billyswong
    replied
    Originally posted by davidbepo View Post

    I know about U.2, but that's a 2.5" form factor, and while such HDDs exist, they don't come in capacities where SSDs aren't simply better in every respect.
    I don't think there is any form-factor restriction on U.2 drives? HDD vendors could certainly manufacture a 3.5" drive with a U.2 connector - or any other size, for that matter. The device plug specification only imposes a minimum on physical size, not a maximum.



  • mobadboy
    replied
    Originally posted by Joe2021 View Post

    There are many features in CPUs and various subsystems that you could easily claim are only relevant to "multi-tenant hosts". But you not being interested in those features doesn't mean nobody is. In fact, many features primarily targeted at "multi-tenant hosts" are in common use today.

    Actually, I am tired of arguing with people who insist that a feature set they have at most a superficial understanding of is not desirable for others. If you are happy with your software stack, fine - nobody is trying to take that from you. But what is your incentive for arguing against the requests of others?
    Welcome to Moronix.

    "If it isn't useful for my dumb-ass, outdated, useless workflow, it shouldn't exist."

    This attitude exists in most places, but Reddit's r/linux is much, much better.

    Phoronix is where the morons who get banned from the various subreddits go to shitpost.



  • Joe2021
    replied
    Originally posted by ahrs View Post

    What can they do that LVM or subvolumes, etc., can't? The main benefit, as far as I can see, is increased security from the hardware enforcing isolated boundaries, which I guess is useful for multi-tenant hosts where you might not necessarily trust the host kernel to enforce a security boundary correctly in software.
    There are many features in CPUs and various subsystems that you could easily claim are only relevant to "multi-tenant hosts". But you not being interested in those features doesn't mean nobody is. In fact, many features primarily targeted at "multi-tenant hosts" are in common use today.

    Actually, I am tired of arguing with people who insist that a feature set they have at most a superficial understanding of is not desirable for others. If you are happy with your software stack, fine - nobody is trying to take that from you. But what is your incentive for arguing against the requests of others?



  • Espionage724
    replied
    Originally posted by dlq84 View Post

    Why not? NVMe supports vastly more command queues and is both faster and more efficient over Fibre Channel, just to name a couple of reasons.
    Everyone wants to take a page from systemd and just eat up all the duties.



  • ahrs
    replied
    Originally posted by Joe2021 View Post

    Yes, I do want more, and you are missing something. I'd suggest investigating namespaces to find out why they were introduced and what they can actually do.
    What can they do that LVM or subvolumes, etc., can't? The main benefit, as far as I can see, is increased security from the hardware enforcing isolated boundaries, which I guess is useful for multi-tenant hosts where you might not necessarily trust the host kernel to enforce a security boundary correctly in software.
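
    To make the contrast concrete: with NVMe namespaces the carve-up lives in the drive's own controller, not in the kernel's block layer the way LVM does it. Below is a rough Python/subprocess sketch around nvme-cli's create-ns/attach-ns commands; the device path, block count and namespace ID are placeholders, and this assumes a drive that actually supports namespace management (check `nvme id-ctrl` output first):

        import subprocess

        def nvme(*args):
            # Thin wrapper around the nvme-cli binary; needs root and a namespace-management-capable drive.
            return subprocess.run(["nvme", *args], check=True,
                                  capture_output=True, text=True).stdout

        DEV = "/dev/nvme0"   # placeholder controller device
        BLOCKS = 2 ** 21     # namespace size/capacity, counted in blocks of LBA format 0

        # The controller itself owns this boundary -- it is not a kernel-side construct like an LV.
        nvme("create-ns", DEV, f"--nsze={BLOCKS}", f"--ncap={BLOCKS}", "--flbas=0")

        # Attach the new namespace to controller 0 so it appears as /dev/nvme0nX.
        # (namespace-id=2 is a placeholder; create-ns reports the ID it actually allocated.)
        nvme("attach-ns", DEV, "--namespace-id=2", "--controllers=0")

        print(nvme("list-ns", DEV))

    Because the device enforces the boundary, a namespace can be attached to another controller or handed straight to a tenant/VM without the host's volume manager being involved, which is the property being discussed above.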



  • Joe2021
    replied
    Originally posted by ahrs View Post
    You want more? Most of my installs consist of exactly one partition on legacy BIOS/MBR [...]
    Is there something I'm missing? Are namespaces for VMs or containers? You might as well run some complicated LVM setup instead and do it in software.
    Yes, I do want more, and you are missing something. I'd suggest investigating namespaces to find out why they were introduced and what they can actually do.



  • davidbepo
    replied
    Originally posted by elvis View Post
    The post directly after yours linked to an excellent YouTube video demonstrating that this has lots of benefits for multi-disk applications.

    When you're rolling out massive drive arrays for S3 storage, ZFS nearline arrays, Ceph clusters, etc., then NVMe has a lot of benefits over legacy SATA/SAS, both in simplifying the connection points and in removing some upper limits on how many devices can be attached without needing more controllers.

    Similarly, hanging lots of rotational disks off a single 6Gbit controller with port multipliers is absolutely a bottleneck. Again, see the video for how simplified PCIe switching and NVMe result in simpler hardware and higher speeds.

    The video also talks about what it looks like when every bit of compute and IO is on the same fabric. For future workloads, having multiple classes of storage, network, GPUs and the like all on the same PCIe/NVMe fabric simplifies a lot of the problems we have in high-end clustering.

    Lots of applications for this beyond what a single drive looks like in 2024. NVMe looked pretty silly even for flash when it first arrived, because we couldn't hit those speeds back then. But it was clearly a necessary change as things moved forward. Limiting things to today's technology is not how the industry works.


    When has that ever stopped the progression of disk technology (or any technology)? We've had things like SCSI, IDE, SATA, and SAS over the years. Physical connector changes are a natural part of technological evolution.
    OK, so there are some valid edge cases - fair enough.

    As for the second point: introducing a compatibility-breaking change to a dying technology isn't exactly the best idea.
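
    On the compatibility point, it's worth noting how small the change looks from userspace once the kernel's NVMe driver knows a device is rotational: it's surfaced through the same block-queue attribute SATA/SAS disks already use. A minimal Python check (illustrative only - whether a given NVMe HDD actually reports itself as rotational depends on the drive and on kernel support like the Linux 6.13 work discussed in the article):

        from pathlib import Path

        # Report whether the kernel flags each NVMe namespace block device as rotational.
        for dev in sorted(Path("/sys/block").glob("nvme*")):
            rotational = dev / "queue" / "rotational"
            if rotational.exists():
                kind = "rotational (spindle)" if rotational.read_text().strip() == "1" else "non-rotational (flash)"
                print(f"{dev.name}: {kind}")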

