What Excites Me The Most About The Linux 4.12 Kernel


  • #31
    Originally posted by grok View Post
    Thanks for the answers, although NVMe external storage would require Thunderbolt. That's where it gets more hairy I think, it's rather different from USB.
    Thunderbolt is now a USB-C alternate mode, just like the DisplayPort and HDMI modes, so in theory a USB-C port can support it (if the device with the host port has wired up the DisplayPort and PCIe lanes to support it, anyway). https://en.wikipedia.org/wiki/USB-C#...specifications

    Admittedly, you are 100% justified: the marketing for the whole thing is a major clusterfuck.

    And even there you can read posts about HDMI adapters not working properly. I'm willing to attribute it to growing pains. It's hard to reason about, though (is it a straight USB-C-to-HDMI adapter that uses the USB-C alternate HDMI mode? Is it a DisplayPort-to-HDMI adapter that uses the USB-C alternate DisplayPort mode? Which one do you need? Can you get super 4K 60Hz HDR coffee-drip mode?).
    Also cables: to use the alternate modes you usually need better/active/more expensive USB-C cables. Same port, different cables. Plenty of fun.



    • #32
      Originally posted by flux242 View Post
      Ya all have big fat servers with stacks of HDDs, right? Because BFQ doesn't bring any throughput improvements for SSDs.
      Ummm... the main reason for BFQ is keeping the user interface and other active processes from freezing under massive I/O load, not increasing throughput.



      • #33
        Originally posted by starshipeleven View Post
        Ummm... the main reason for BFQ is keeping the user interface and other active processes from freezing under massive I/O load, not increasing throughput.
        Without increasing the throughput, the overall time spent in the queue stays the same. Reordering I/O requests to serve the fastest ones first decreases the overall waiting a bit, of course, but tbh buying an SSD that can transfer at 1.5 GiB/s instead of 500 MiB/s is a much more viable option. So if your hobby is compiling the darn kernel in a loop 24/7 in the background, go for faster disks.
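
The reordering trade-off described here can be sketched with a toy queue. This is an illustration of the general point, not a model of any real scheduler; the service times are made-up numbers.

```python
# Toy model: reordering requests does not change the total time to drain
# the queue (throughput is fixed), only how the waiting is distributed.
# Service times below are hypothetical, in milliseconds.

def wait_times(service_times):
    """Waiting time of each request when served in the given order."""
    out, elapsed = [], 0
    for t in service_times:
        out.append(elapsed)       # this request waits for everything before it
        elapsed += t
    return out

reqs = [40, 5, 5, 5, 5]           # one big request queued ahead of four small ones

fifo = wait_times(reqs)           # served in arrival order
sjf = wait_times(sorted(reqs))    # "fastest first" reordering

total_drain = sum(reqs)           # 60 ms either way: total drain time unchanged
mean_fifo = sum(fifo) / len(fifo)   # 38.0 ms average wait
mean_sjf = sum(sjf) / len(sjf)      # 10.0 ms average wait
```

Same 60 ms to empty the queue in both orders, but the average wait drops from 38 ms to 10 ms when the small requests go first — which is the "decreases the overall waiting a bit" part of the argument.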



        • #34
          Originally posted by flux242 View Post
          Without increasing the throughput, the overall time spent in the queue stays the same.
          What part of "the main reason for BFQ is keeping the user interface and other active processes from freezing under massive I/O load, not increasing throughput" do you fail to understand?

          Some processes get shafted, some other processes get prioritized, because on a PC the user-facing programs should always be prioritized so they don't lag.
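
The prioritization idea can be sketched as proportional bandwidth sharing. To be clear, this is a hypothetical illustration of weight-based sharing in general, not BFQ's actual algorithm; the device bandwidth, the weights, and the process names are all assumed.

```python
# Hypothetical sketch of proportional bandwidth sharing (the general idea
# behind weight-based I/O schedulers, NOT BFQ's actual algorithm).
# DISK_BW and the weights are assumed numbers for illustration.

DISK_BW = 500.0  # MB/s the device can sustain (assumed)

def share(weights):
    """Split the device bandwidth in proportion to per-process weights."""
    total = sum(weights.values())
    return {name: DISK_BW * w / total for name, w in weights.items()}

alloc = share({"desktop-app": 10, "bulk-copy": 1})
# The interactive task gets ~454.5 MB/s and the background copy ~45.5 MB/s:
# the copy still makes progress, but it can no longer starve the desktop.
```

That is the "shafted vs prioritized" split in miniature: the background job loses bandwidth it would have taken under a fair FIFO, and the user-facing program gains it.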

          tbh buying an SSD that can transfer at 1.5 GiB/s instead of 500 MiB/s is a much more viable option.
          Which is expensive, not supported everywhere (SATA 3 tops out at 600 MB/s, and eMMC won't get near that for a while), and it's still going to be pointless whenever you work with or transfer files to crap media like USB drives or external HDDs.

          And it's solving a software issue with hardware, so it's more like procrastinating: a few years down the line, when programs fully use the 1.5 GB/s of bandwidth, you'll still get the same freezes.

          So if your hobby is compiling the darn kernel in a loop 24/7 in the background, go for faster disks
          That's mostly CPU load; I can do that fine already (actually, recompiling a LEDE firmware from source with make -j10 takes a few hours on my Ivy Bridge Xeon) without causing particular issues, and I'm using HDDs with btrfs on the workstation where I compile it.

          EDIT: here's an example of I/O load: http://www.hecticgeek.com/2012/11/bf...kloads-ubuntu/

          As you can see, throughput is no better, but program startup happens 5-10 times faster while a data transfer is underway.
          Last edited by starshipeleven; 07-02-2017, 08:31 AM.



          • #35
            Originally posted by starshipeleven View Post
            What part of "the main reason for BFQ is keeping the user interface and other active processes from freezing under massive I/O load, not increasing throughput" do you fail to understand?

            Some processes get shafted, some other processes get prioritized, because on a PC the user-facing programs should always be prioritized so they don't lag.

            Which is expensive, not supported everywhere (SATA 3 tops out at 600 MB/s, and eMMC won't get near that for a while), and it's still going to be pointless whenever you work with or transfer files to crap media like USB drives or external HDDs.

            And it's solving a software issue with hardware, so it's more like procrastinating: a few years down the line, when programs fully use the 1.5 GB/s of bandwidth, you'll still get the same freezes.

            That's mostly CPU load; I can do that fine already (actually, recompiling a LEDE firmware from source with make -j10 takes a few hours on my Ivy Bridge Xeon) without causing particular issues, and I'm using HDDs with btrfs on the workstation where I compile it.

            EDIT: here's an example of I/O load: http://www.hecticgeek.com/2012/11/bf...kloads-ubuntu/

            As you can see, throughput is no better, but program startup happens 5-10 times faster while a data transfer is underway.
            I absolutely agree with the part I bolded. So true.

            EDIT: The problem is not so much how much total bandwidth a device has; it's how much bandwidth is being used vs how much needs to be available. If you only need 15 MB/s and that much is available, it'll be fine. But if 500 MB/s needs to be available and only 15 is, you can plainly see how that would cause latency.
            Last edited by duby229; 07-02-2017, 08:44 PM.
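
The 15 MB/s vs 500 MB/s point above reduces to simple arithmetic: a backlog (and therefore latency) only appears once demand exceeds what the device has left to give. The 10-second duration is an assumed figure; the bandwidth numbers are the ones from the post.

```python
# Back-of-the-envelope version of the point above: data queues up (and latency
# grows) only when demanded bandwidth exceeds available bandwidth.
# Numbers are illustrative, not measurements.

def backlog_mb(seconds, demand_mb_s, available_mb_s):
    """MB of data left queued after sustaining `demand` against `available`."""
    return max(0.0, (demand_mb_s - available_mb_s) * seconds)

light = backlog_mb(10, 15, 500)    # 0.0 -> demand fits, no queueing, no lag
heavy = backlog_mb(10, 500, 15)    # 4850.0 MB piled up after 10 seconds
```

In the first case every request is serviced as it arrives; in the second, nearly five gigabytes of I/O are stuck waiting after ten seconds, which is exactly the latency the post describes.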

