Linux 6.13 Rolling Out NVMe 2.1 Support & NVMe Rotational Media

  • kobblestown
    Senior Member
    • Apr 2011
    • 198

    #21
    Originally posted by davidbepo View Post
    (1) NVMe requires a new physical connector and form factor over SATA/SAS. Also, (2) wake me up when an HDD comes close to even touching SATA limits; this is just nonsense.
    (1) I don't think it necessarily does. The SATA connector should be good for PCIe 3.0 x1. However, this will probably not fly in the consumer space because it may create confusion, although it should be possible to auto-detect and switch between the two. In any case, this is not for the consumer space anyway.

    (2) There are dual-actuator drives that get very close to the SATA limit. Sure, SAS 12G can handle those, but in the long run NVMe will probably be cheaper. And having to deal with a single technology will make it easier to mix and match different storage technologies over the same backplane/fabric.
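
    Both points can be sanity-checked with some napkin math on line rates and encoding overhead. A minimal sketch follows; the line rates and encodings are the published interface specs, but the ~500 MB/s dual-actuator figure is just an illustrative assumption, not a number from this thread:

    ```python
    # Usable payload bandwidth after line-code overhead, in MB/s.
    def usable_mb_s(line_rate_gbps, payload_bits, coded_bits):
        return line_rate_gbps * 1e9 * payload_bits / coded_bits / 8 / 1e6

    sata3 = usable_mb_s(6.0, 8, 10)        # SATA III: 6 Gb/s with 8b/10b encoding
    pcie3_x1 = usable_mb_s(8.0, 128, 130)  # PCIe 3.0: 8 GT/s per lane, 128b/130b

    print(f"SATA III:     ~{sata3:.0f} MB/s")     # ~600 MB/s
    print(f"PCIe 3.0 x1:  ~{pcie3_x1:.0f} MB/s")  # ~985 MB/s

    # An assumed dual-actuator HDD sustaining ~500 MB/s sequential sits at
    # roughly 83% of the SATA III ceiling, but well inside one PCIe 3.0 lane.
    print(f"500 MB/s drive uses {500 / sata3:.0%} of SATA III")
    ```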

    Comment

    • Ademms
      Junior Member
      • Nov 2024
      • 1

      #22
      I'm eager to see the performance improvements and compatibility enhancements that this kernel version brings.
      Have you tested any specific workloads or benchmarks to measure the impact of these changes?

      Comment

      • davidbepo
        Senior Member
        • Nov 2014
        • 936

        #23
        Originally posted by billyswong View Post

        There is a U.2 port/connector for NVMe devices that aren't plugged into motherboards directly. The only issue is the lack of drives using that port in the consumer market.
        I know about U.2, but that's a 2.5" form factor, and while such HDDs exist, they don't come in capacities where SSDs aren't just better in every aspect.

        Comment

        • davidbepo
          Senior Member
          • Nov 2014
          • 936

          #24
          Originally posted by kobblestown View Post

          (1) I don't think it necessarily does. The SATA connector should be good for PCIe 3.0 x1. However, this will probably not fly in the consumer space because it may create confusion, although it should be possible to auto-detect and switch between the two. In any case, this is not for the consumer space anyway.

          (2) There are dual-actuator drives that get very close to the SATA limit. Sure, SAS 12G can handle those, but in the long run NVMe will probably be cheaper. And having to deal with a single technology will make it easier to mix and match different storage technologies over the same backplane/fabric.
          OK, fair enough.

          Comment

          • davidbepo
            Senior Member
            • Nov 2014
            • 936

            #25
            Originally posted by elvis View Post
            The post directly after yours linked to an excellent YouTube video demonstrating that this has lots of benefits for multi-disk applications.

            When you're rolling out massive drive arrays for S3 storage, ZFS nearline arrays, Ceph clusters, etc., NVMe has a lot of benefits over legacy SATA/SAS, both in simplifying the connection points and in removing some upper limits on how many devices can be attached without needing more controllers.

            Similarly, hanging lots of rotational disks off a single 6 Gbit controller with port multipliers is absolutely a bottleneck. Again, see the video for how simplified PCIe switching and NVMe result in simpler hardware and higher speeds.

            The video also talks about what it looks like when every bit of compute and I/O is on the same fabric. For future workloads, having multiple classes of storage, network, GPUs, and the like all on the same PCIe/NVMe fabric simplifies a lot of problems we have in high-end clustering.

            There are lots of applications for this beyond what a single drive looks like in 2024. NVMe looked pretty silly even for flash when it first arrived, because we couldn't hit those speeds back then. But it was clear that it was a necessary change as things moved forward. Limiting things to today's technology is not how the industry works.


            When has that ever stopped the progression of disk technology (or any technology)? We've had things like SCSI, IDE, SATA, and SAS over the years. Physical connector changes are a natural part of technological evolution.
            OK, so there are some valid edge cases, fair enough.

            As for the second point, introducing a compatibility-breaking change to a dying technology isn't exactly the best idea.
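
            The port-multiplier bottleneck mentioned above is easy to quantify. A rough sketch, assuming an illustrative ~280 MB/s sequential rate per HDD (not a figure from the thread):

            ```python
            # Per-drive throughput when N drives share one SATA III host port
            # via a port multiplier, vs. each drive having its own link (as
            # with NVMe, where every device gets its own PCIe lanes).
            SATA_LINK_MB_S = 600   # usable SATA III bandwidth, encoding deducted
            HDD_SEQ_MB_S = 280     # assumed sequential rate of a modern 3.5" HDD

            for drives in (1, 2, 4, 8):
                share = min(HDD_SEQ_MB_S, SATA_LINK_MB_S / drives)
                print(f"{drives} drive(s) on one port: {share:.0f} MB/s each "
                      f"({share / HDD_SEQ_MB_S:.0%} of the drive's capability)")
            ```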

            Comment

            • Joe2021
              Phoronix Member
              • May 2021
              • 105

              #26
              Originally posted by ahrs View Post
              You want more? Most of my installs consist of exactly one partition on legacy BIOS/MBR [...]
              Is there something I'm missing? Are namespaces for VMs or containers? You might as well run some complicated LVM setup instead and do it in software.
              Yes, I do want more, and you are missing something. I'd suggest investigating namespaces to find out why they were introduced and what they can actually do.

              Comment

              • ahrs
                Senior Member
                • Apr 2021
                • 557

                #27
                Originally posted by Joe2021 View Post

                Yes, I do want more, and you are missing something. I'd suggest investigating namespaces to find out why they were introduced and what they can actually do.
                What can they do that LVM or subvolumes, etc., can't? The main benefit, as far as I can see, is increased security from the hardware enforcing separate isolation boundaries, which I guess is useful for multi-tenant hosts where you might not necessarily trust the host kernel to enforce a security boundary correctly in software.
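
                One concrete difference is where the carving happens: namespaces are created and isolated by the controller itself, and each one surfaces in Linux as its own block device with a hardware-assigned NSID. A minimal sketch of how they show up in sysfs (paths assume the kernel's standard nvme sysfs layout; the nsid attribute may be absent on very old kernels):

                ```python
                # List each NVMe controller and the namespaces it exposes,
                # straight from sysfs. Unlike LVM volumes, these are objects
                # on the device side: the controller hands each namespace its
                # own NSID and Linux gives each one its own block device.
                from pathlib import Path

                for ctrl in sorted(Path("/sys/class/nvme").iterdir()):
                    model = (ctrl / "model").read_text().strip()
                    print(f"{ctrl.name}: {model}")
                    for ns in sorted(ctrl.glob(f"{ctrl.name}n[0-9]*")):
                        nsid = (ns / "nsid").read_text().strip()
                        print(f"  NSID {nsid} -> /dev/{ns.name}")
                ```

                nvme-cli reports the same information with `nvme list-ns /dev/nvme0`, and on controllers that support namespace management, `nvme create-ns` and `nvme attach-ns` do the carving in hardware.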

                Comment

                • Espionage724
                  Senior Member
                  • Sep 2024
                  • 326

                  #28
                  Originally posted by dlq84 View Post

                  Why not? NVMe supports vastly more command queues and is both faster and more efficient over Fibre Channel, just to name a couple of reasons.
                  Everyone wants to take a page from systemd and just eat up all duties.
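
                  The queue-count difference is visible from userspace: blk-mq exposes one directory per hardware queue under each block device. A sketch, assuming a blk-mq kernel and the usual /sys/block layout:

                  ```python
                  # Count blk-mq hardware queues per block device. A SATA disk
                  # typically shows a single hardware queue; an NVMe drive
                  # shows one per CPU, up to what the controller advertises.
                  from pathlib import Path

                  for dev in sorted(Path("/sys/block").iterdir()):
                      mq = dev / "mq"
                      if mq.is_dir():
                          queues = [d for d in mq.iterdir() if d.is_dir()]
                          print(f"{dev.name}: {len(queues)} hardware queue(s)")
                  ```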

                  Comment

                  • Joe2021
                    Phoronix Member
                    • May 2021
                    • 105

                    #29
                    Originally posted by ahrs View Post

                    What can they do that LVM or subvolumes, etc., can't? The main benefit, as far as I can see, is increased security from the hardware enforcing separate isolation boundaries, which I guess is useful for multi-tenant hosts where you might not necessarily trust the host kernel to enforce a security boundary correctly in software.
                    There are many features in CPUs and various subsystems that you could just as easily claim are only relevant for "multi-tenant hosts". But you not being interested in those features doesn't mean nobody is. In fact, many features primarily targeted at "multi-tenant hosts" are in common use today.

                    Actually, I am tired of arguing with people who insist that a feature set they have at most a superficial understanding of is not desirable for others. If you are happy with your software stack, fine; nobody is trying to take that from you. But what is your incentive to argue against the requests of others?

                    Comment

                    • mobadboy
                      Senior Member
                      • Jul 2024
                      • 161

                      #30
                      Originally posted by Joe2021 View Post

                      There are many features in CPUs and various subsystems that you could just as easily claim are only relevant for "multi-tenant hosts". But you not being interested in those features doesn't mean nobody is. In fact, many features primarily targeted at "multi-tenant hosts" are in common use today.

                      Actually, I am tired of arguing with people who insist that a feature set they have at most a superficial understanding of is not desirable for others. If you are happy with your software stack, fine; nobody is trying to take that from you. But what is your incentive to argue against the requests of others?
                      welcome to moronix

                      "if it isn't useful for my dumb-ass outdated useless workflow, it shouldn't exist"

                      this attitude exists most places, but Reddit's r/linux is much, much, MUCH better

                      phoronix is for shitposting by the morons who get banned from the various subreddits

                      Comment
