Ubuntu Linux Working On Installer Support For NVMe-over-TCP


  • #11
    Shuttleworth's interview from the Ubuntu Summit was pretty good, I thought: he's direct about the fact that desktop Linux is a "labor of love" rather than profit-generating on its own, and that they have been trying to build enterprise revenue in part to be able to fund the desktop work.



    • #12
      Originally posted by kylew77 View Post
      In defense of Danny3, there is no one really working on Desktop Linux, Ubuntu and Red Hat do what pleases the Enterprise most. When you have people paying for support contracts the money talks. So while I UNDERSTAND why Ubuntu is doing this, as someone who runs Linux/Unix on the DESKTOP, it seems like it is a waste of resources.
      Even for desktop use, if it allowed me to have a single root partition shared by multiple computers on the network, I could be interested.



      • #13
        Originally posted by kylew77 View Post
        In defense of Danny3, there is no one really working on Desktop Linux, Ubuntu and Red Hat do what pleases the Enterprise most. When you have people paying for support contracts the money talks. So while I UNDERSTAND why Ubuntu is doing this, as someone who runs Linux/Unix on the DESKTOP, it seems like it is a waste of resources.
        To be fair, no one has really worked on Desktop anything from the perspective you lay out. Windows was always an interface designed to squeeze some small amount of productivity out of corporate drones. Both Windows and macOS fight you to the death on almost any attempt at configuring things to look and work better. Some users claim they "LOVE" macOS, but that's probably just the people who live in fear of configuring anything themselves. ChromeOS is just an advertising platform and basically spyware, and Windows is joining it as an advertising platform.

        Whatever work is put into GNU/Linux desktops, at least it's for the right reason most of the time. Someone thinks of a way to do things better and tries to implement it. They probably get it wrong as often as not, but at least their motives are fairly pure.



        • #14
          Originally posted by kylew77 View Post
          In defense of Danny3, there is no one really working on Desktop Linux, Ubuntu and Red Hat do what pleases the Enterprise most. When you have people paying for support contracts the money talks. So while I UNDERSTAND why Ubuntu is doing this, as someone who runs Linux/Unix on the DESKTOP, it seems like it is a waste of resources.
          Thank you, and that's exactly how I feel!



          • #15
            Originally posted by unwind-protect View Post
            How is this different from iSCSI?
            Faster, and many more IOPS.



            • #16
              Installer support means it can be kickstarted/automated (or whatever Ubuntu's equivalent is) so that a native mount can happen from init. It's not really about the desktop; it's about bare-metal datacenter machines or VMs backed by an expensive SAN. Ubuntu has a footprint in the world that actually uses direct storage over TCP.
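
              Concretely, what the installer has to automate is fairly small: load the nvme-tcp module, record the target's connection parameters, and replay them early at boot. With nvme-cli, those parameters live in /etc/nvme/discovery.conf, which `nvme connect-all` reads. A sketch of such an entry follows; the address is a made-up example, and 4420 is the conventional NVMe/TCP data port:

```
# /etc/nvme/discovery.conf -- entries replayed by `nvme connect-all`
# transport     target address      port
--transport=tcp --traddr=192.0.2.10 --trsvcid=4420
```

              Once connected, the remote namespaces show up as ordinary /dev/nvmeXnY block devices, which is what lets an installer partition and mount them like local disks.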



              • #17
                Originally posted by Danny3 View Post
                Yet another thing that I don't care about!
                Glad that I don't use Ubuntu as I would probably be upset to see how it wastes resources / time again.
                Troll



                • #18
                  Originally posted by unwind-protect View Post
                  How is this different from iSCSI?
                  They operate in a similar way (a block device exported over the network), but NVMe/TCP carries the native NVMe command set and its deep, parallel queues end to end, so it's much faster for NVMe drives, whereas (i)SCSI wraps everything in the SCSI command set, which was designed in the era of spinning disks.
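
                  Much of the difference comes down to queueing: SCSI (and thus iSCSI) presents a single command queue per LUN with a modest depth, while NVMe allows many deep queues per controller. A rough sketch of the in-flight-command ceilings, where the iSCSI depth is a typical negotiated value (an assumption, not a spec limit) and the NVMe numbers are the spec maximums:

```python
# In-flight command ceilings: typical iSCSI vs. NVMe spec maximums.
iscsi_queues = 1           # one command queue per LUN
iscsi_depth = 256          # a typical negotiated queue depth (assumed)
nvme_queues = 64 * 1024    # NVMe allows up to ~64K I/O queues...
nvme_depth = 64 * 1024     # ...each up to ~64K entries deep

print(iscsi_queues * iscsi_depth)   # 256
print(nvme_queues * nvme_depth)     # 4294967296
```

                  Real targets expose far fewer queues than the spec maximum, but the gap in parallelism is still orders of magnitude.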



                  • #19
                    If only there were some ubiquitous, inexpensive, high-bandwidth desk-area network fabric that could link several NVMe drives to one or more nearby host PCs, and a good chip that basically connected NVMe drives to such a very-local LAN/SAN fabric and also bridged it to NVMe-over-TCP and Ethernet for access over longer distances.

                    These little M.2 NVMe drives are pretty good (though apparently not so scalable with respect to performance and thermal management), but they utterly fail to live up to the promise of scalable storage in a typical mid-range consumer desktop: you're not getting more than 2-4 of them even installed, due to the lack of free slots / PCIe lanes, and there's no good way to connect more than a handful of them externally without reducing your throughput to well under 20% of what the drive is capable of.

                    Think about it: a 40 Gbit/s Ethernet TCP link is only roughly enough to connect ONE moderately high-performance NVMe drive to a LAN, and without a line-rate bridge chip you'd basically need a whole server just to house the drive: something to provide PCIe M.2 access, a TCP and I/O stack running BSD/Linux, then two 40 Gb/s NICs, a switch... and that's for ONE drive. And the latency would suck.

                    A RAID10 of, say, 8 of them would be insanely (LAN) bandwidth-limited (ideally you'd want 320 Gb/s+) and hugely expensive without a better fabric that just extends PCIe-like function and bandwidth "at scale" (i.e. beyond a single USB4 / Thunderbolt link) and bridges directly to the drives' PCIe interfaces.

                    It doesn't seem unreasonable to want a cheap, easy ~200+ Gb/s expansion fabric for connecting things in the same desktop environment, but USB is far from fulfilling that role in the next two desktop generations when you can barely get more than 1-2 (not even full-performance) USB-C ports on a typical host motherboard.

                    Consumers really have it rough these days: we get these really cool technologies "out there" in server land (powerful GPUs, NPUs, server-class CPUs, power / chassis / motherboard management that actually works, SANs, 100 Gb+ NICs, useful numbers of PCIe lanes / slots / NVMe interfaces, ECC, more than 256 GB of RAM easily added, ...), but on SMB desktops we're basically disabled in every way, so we can't even meaningfully use more than a couple / few NVMe drives / DIMMs / GPUs / PCIe x16 slots / Gb/s of NIC / ...
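
                    The bandwidth arithmetic above is easy to sanity-check. A quick sketch, where 5 GB/s is an assumed mid-range PCIe 4.0 x4 sequential read rate rather than any particular drive's number:

```python
# Network line rate needed to keep NVMe drives saturated (assumed figures).
def link_gbps(seq_read_gb_per_s: float) -> float:
    """Gb/s of network needed to match a drive's sequential read rate."""
    return seq_read_gb_per_s * 8.0  # gigabytes/s -> gigabits/s

one_drive = link_gbps(5.0)          # 40 Gb/s: one drive fills a 40GbE link
eight_striped = 8 * link_gbps(5.0)  # 320 Gb/s for an 8-drive stripe

print(one_drive)      # 40.0
print(eight_striped)  # 320.0
```

                    So a single mid-range drive already saturates 40GbE, and any striped array blows well past anything a consumer NIC can carry, which is the poster's point.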
