Microsoft Joins Open Invention Network With Its 60,000+ Patents

  • #81
    Originally posted by Weasel View Post
    Doesn't changing the "buffer size" affect all caching on the system, and not just one disk? Or what do you use to change the kernel's buffer size for a specific disk?
    It was one of the kernel "vm" (virtual memory) parameters; it's been over a year since I looked into it. It's a global setting, unfortunately not per disk. My original message about this on the thread was that it'd be nice if the kernel had a per-disk variant so that performance didn't need to be given up. So while it works great as a fix for the issue I was having, my internal writes lose throughput.
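
    (For anyone reading along, this is roughly the kind of adjustment I mean, assuming the vm.dirty_* sysctls are the parameters in question; the byte values below are purely illustrative, not a recommendation.)

        # Cap how much dirty (not-yet-written) data the kernel will buffer, system-wide
        sudo sysctl -w vm.dirty_background_bytes=16777216   # start background writeback at ~16MB
        sudo sysctl -w vm.dirty_bytes=50331648              # throttle writers once ~48MB is dirty

        # Put the same keys in a file under /etc/sysctl.d/ to make them persistent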

    Comment


    • #82
      Originally posted by caligula View Post
      Sounds like an issue with your applications then.
      Uhh ok dude... What part of "I have experienced this problem with various USB sticks (possibly only USB 2.0 ones or the USB 2.0 port), various machines with differing hardware (systems that cost thousands to build, so hardly what I'd call cheap), various distros/DEs (definitely recall KDE and Gnome, Arch and Ubuntu), the problem doesn't exist on Windows, and the problem is fixed when reducing the kernel I/O buffer via the relevant vm parameter" don't you get?

      I've used Dolphin and Nautilus to do the file transfers. I recall Gnome being fast with the progress bar, then grinding to a halt for hours around the 90%+ mark. KDE would complete the transfer progress visually, but the USB would refuse to eject for hours as it was still copying. The only CLI tool I've used frequently is dd; I rarely transfer files to external media via the CLI unless I'm unable to boot into a DE because something in my system broke.

      I use FAT32 for my boot partition too. I'm not talking about read performance; I'm talking specifically about external USB stick media, more than likely only over USB 2.0, and noticeable with large file transfers (a single file of 1GB+).

      Ejecting via command line has the same problem with declaring the media still busy for hours. If I force remove it, the data is corrupted. Not sure what you're trying to say here: you have to wait a long-ass time for the writes to finish when experiencing this, unless you wrote the data on Windows or with the kernel adjustments. It's great that you're rarely if ever experiencing that and can claim it's due to hardware quality or a software issue, but as stated there are widespread reports online about it. If it works fast on Windows, with kernel adjustments, or with a Linux filesystem, then something else about it is wrong; it can clearly perform writes fine in other situations, just not by default.

      Ok? So no crappy desktop. I've found it to be a problem on low and high end systems, and on my phones too: I've had a low end $100 one and a high end $1,000+ one using internal flash, not an SD card, and still had MTP issues (but I can only recall those with KDE Dolphin, so it might be a KIO issue there).

      Comment


      • #83
        Originally posted by polarathene View Post

        Uhh ok dude... What part of "I have experienced this problem with various USB sticks (possibly only USB 2.0 ones or the USB 2.0 port), various machines with differing hardware (systems that cost thousands to build, so hardly what I'd call cheap), various distros/DEs (definitely recall KDE and Gnome, Arch and Ubuntu), the problem doesn't exist on Windows, and the problem is fixed when reducing the kernel I/O buffer via the relevant vm parameter" don't you get?
        You don't seem to realize what Linux is doing. When you copy data to USB flash, Windows starts it immediately. Linux doesn't start any actual data transfer at all. It uses huge ass buffers on modern machines. The Linux FS buffer can be bigger than the total size of your USB media. Thus all the data fits in RAM.

        I've used Dolphin and Nautilus to do the file transfers. I recall Gnome being fast with the progress bar, then grinding to a halt for hours around the 90%+ mark. KDE would complete the transfer progress visually, but the USB would refuse to eject for hours as it was still copying.
        Again, stop using that shitty class 2 flash. I don't know how else to spell this out for you: Linux using huge buffers does not decrease filesystem performance, it actually improves it, because you can do all the IOPS in RAM and only write one huge sequential chunk to the actual disk. That's the best way to write if you need to write fast and minimize block-level wear. The only problem is that this write is maximally delayed. If you write 4000 MB of data at a 1 MB/s write speed, it will take ~4000 seconds = ~70 minutes. If you used Windows, the write might start 5-10 minutes earlier, so the eject operation would take only 60-65 minutes. Pure win.

        If you want to know how I avoided the stated problems: I spent some time studying various USB stick brands and their performance. I also benchmarked a few USB3 SD card readers. There was enormous variation between brands. For example, the reader in my $1500 notebook is connected via PCIe, but it's still slower than the $5 USB3 dongle from Transcend: http://www.deal-dx.com/deal-dx/viewi...ack-128gb.html

        Now I have fast sticks and fast readers. I also use dedicated USB hosts. If I mount the 64 GB DSLR SD card (Transcend USB3 SD dongle) and move some random stuff around, the write speed will be around 75-90 MB/s. I can do this all via GUI without any issues. Writing a 4000 MB image takes 45-60 seconds.

        Take a look at these results and note how write speed varies between 13 and 92 MB/s even when using quality brands: https://www.cameramemoryspeed.com/re...d-reader-rdf5/

        Ejecting via command line has the same problem with declaring the media still busy for hours.
        It's basic math. Writing n bytes of data on a media with a write throughput of s B/s takes n/s seconds.

        Comment


        • #84
          Originally posted by caligula View Post
          You don't seem to realize what Linux is doing. When you copy data to USB flash, Windows starts it immediately. Linux doesn't start any actual data transfer at all. It uses huge ass buffers on modern machines. The Linux FS buffer can be bigger than the total size of your USB media. Thus all the data fits in RAM.
          Uhh...? I talked about that buffer several times in prior posts; even in the quoted text I state that reducing it will fix the issue. That quoted text was in response to you claiming it was just a cheap/shit quality USB stick at fault, or application specific. The default buffer size, as mentioned in prior posts, is half of RAM iirc (there are a few other params that affect the behaviour beyond that).

          Copying data to the buffer is fast. The problems I cited were that the transfer speed from then on went to a snail's pace, and how the DEs' (Gnome and KDE) transfer progress UI / notifications behaved.
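
          (If anyone wants to check what their own system is set to, the relevant knobs can be read back like this. I'm assuming the vm.dirty_* sysctls are the ones at play; on most distros the defaults are expressed as a percentage of RAM rather than a byte count.)

              # Current writeback thresholds (percent-of-RAM form and byte form; only one pair is active)
              sysctl vm.dirty_ratio vm.dirty_background_ratio
              sysctl vm.dirty_bytes vm.dirty_background_bytes

              # How long dirty data may sit in RAM before the kernel starts flushing it
              sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs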


          Originally posted by caligula View Post
          Again, stop using that shitty class 2 flash. I don't know how else to spell this out for you: Linux using huge buffers does not decrease filesystem performance, it actually improves it, because you can do all the IOPS in RAM and only write one huge sequential chunk to the actual disk. That's the best way to write if you need to write fast and minimize block-level wear. The only problem is that this write is maximally delayed. If you write 4000 MB of data at a 1 MB/s write speed, it will take ~4000 seconds = ~70 minutes. If you used Windows, the write might start 5-10 minutes earlier, so the eject operation would take only 60-65 minutes. Pure win.
          And I don't know how to spell this out to you... The issue I described could take 4 hours or more to transfer 1GB of data before I could eject the media without data corruption from the transfer failing to complete; it'd happen in less than a minute or so with the buffer adjustment or via Windows. Why the write behaved this way is beyond me. I completely understand what you're trying to say about how the buffer is meant to be beneficial, but it most definitely was not beneficial with the entire 1GB held in memory. Smaller chunks, like 15MB, then flushing those? Great!

          Maybe it is "shitty class 2" flash, but it's not a great default experience, and many Windows users I know would have an issue with something like that. It'd be one more thing to put them off using Linux, as it isn't something they'd want to spend time looking into how to resolve (beyond buying better quality media, which isn't usually a valid option at the moment you're trying to copy something).

          Originally posted by caligula View Post
          If you want to know how I avoided the stated problems: I spent some time studying various USB stick brands and their performance. I also benchmarked a few USB3 SD card readers. There was enormous variation between brands. For example, the reader in my $1500 notebook is connected via PCIe, but it's still slower than the $5 USB3 dongle from Transcend: http://www.deal-dx.com/deal-dx/viewi...ack-128gb.html

          Now I have fast sticks and fast readers. I also use dedicated USB hosts. If I mount the 64 GB DSLR SD card (Transcend USB3 SD dongle) and move some random stuff around, the write speed will be around 75-90 MB/s. I can do this all via GUI without any issues. Writing a 4000 MB image takes 45-60 seconds.

          Take a look at these results and note how write speed varies between 13 and 92 MB/s even when using quality brands: https://www.cameramemoryspeed.com/re...d-reader-rdf5/


          It's basic math. Writing n bytes of data on a media with a write throughput of s B/s takes n/s seconds.
          Mate, that's great and all, but I don't think it's fair to expect everyone to invest that kind of time looking into hardware for every purchase. Many users (at least in Windows and macOS land) will decide "I need X" and go to a store and buy it. For the more expensive items, sure, they might justify some more time towards their investment, but USB media? Far less likely; a storage enthusiast or someone who needs fast media, such as SD cards for photography, might be the kind to bother with that. I've had enough fatigue / time-sink from doing this with other areas of hardware, especially when Linux compatibility is a concern; the last such item would have been a WiFi dongle (802.11ac, went with an MT76 chipset since that'll have mainline support for USB adapters in 4.19).

          I can avoid the problem just fine by using a different FS or by adjusting the vm kernel params to reduce the buffer size. I can't always get top quality USB media as it's not always mine, e.g. someone wants me to copy over some data to their USB device. I let them enjoy their preferred OS and don't bother them about Linux, but they know I'm a fan of it, and for the people who have had me transfer to external media while this issue was happening, it just makes Linux look like shite from their view (perhaps more than they already perceived it) and me look a bit embarrassed / stupid, because their OS handles it fine and I didn't know why or how to resolve it on mine at the time.

          Comment


          • #85
            polarathene I never had that problem using XFCE. Maybe stop using shitty DEs, or force a flush from the command line before ejecting: use the "sync" command in a terminal. There's no reason to decrease the "buffer size".

            If you have SysRq enabled you can even do Alt+SysRq+S to sync, which is harmless; just be careful not to press something else by mistake.
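
            (For reference, the same emergency sync can be triggered without the keyboard shortcut, assuming the SysRq interface is enabled on your kernel; unlike the plain "sync" command, this variant doesn't block until the flush is done.)

                # Check whether (some) SysRq functions are allowed; non-zero means enabled
                cat /proc/sys/kernel/sysrq

                # Trigger the same emergency sync as Alt+SysRq+S
                echo s | sudo tee /proc/sysrq-trigger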

            Comment


            • #86
              I have had a similar experience to polarathene's.

              In KDE the progress bar / copy operation will be reported as finished while the actual data are still being transferred.

              Most likely for KDE (any DE maybe?), once the data have been copied to the buffer on the way to the USB stick, as far as the DE is concerned the data "are out of their hands", so you get the situation where the DE reports the data as copied while the data are still being transferred.

              However, I can't say that I have problems with the speed. Although the reporting might be messed up, the actual speed of transfer, the actual time it takes for the data to be on the stick (irrespective of what the progress bar shows), is, I would say, fast / non-problematic most of the time. I have had more problems with Windows copying slowly to USB sticks.

              Comment


              • #87
                Originally posted by Platon View Post
                Most likely for KDE (any DE maybe?), once the data have been copied to the buffer on the way to the USB stick, as far as the DE is concerned the data "are out of their hands", so you get the situation where the DE reports the data as copied while the data are still being transferred.
                Exactly.

                However, I can't say that I have problems with the speed. Although the reporting might be messed up, the actual speed of transfer, the actual time it takes for the data to be on the stick (irrespective of what the progress bar shows), is, I would say, fast / non-problematic most of the time. I have had more problems with Windows copying slowly to USB sticks.
                This is the result of having decent flash media. The way Linux does it might confuse people using GUI desktops, but in a terminal, the 'sync' command won't stop blocking before everything has been written to the disk. I think that if the DE doesn't show this actual progress, something is broken. I've tested Gnome, LXDE (PCManFM), and XFCE (Thunar) and they all seem to signal when the write has finished. For example, if you eject the drive, it might stay visible until the write finishes, then fully disappear; the only way to remount it is to disconnect the drive and plug it in again. That's logical.
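
                (A quick way to see that blocking behaviour from a terminal; the file name and mount point below are hypothetical.)

                    # cp returns as soon as the data is sitting in the page cache
                    cp big.iso /run/media/$USER/USBSTICK/

                    # sync only returns once everything has actually reached the device
                    time sync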

                How Linux works here isn't some esoteric special case for USB flash. For example, if you have an array of SMR hard drives, you can easily hear that the drives continue to operate long after a file transfer has finished according to the GUI. I don't find this problematic. Then again, I'm fully aware that flashing SD cards will be slow with those $5 cheap class 2 cards.

                Comment


                • #88
                  You guys ought to use disk monitoring; I personally love gkrellm for this, and for observing network activity as well. Then you can easily see when Linux is "still writing" to your flash drive during a sync. And it's compact: it holds 10 minutes of history in its graph, unlike most shitty monitors, which only show the last 10 seconds.
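
                  (If you'd rather stay in a terminal, a rough equivalent is to watch the kernel's own counters; both values fall back towards zero once the stick has really finished writing.)

                      # Dirty = data still waiting in RAM, Writeback = data currently being flushed
                      watch -n1 "grep -E '^(Dirty|Writeback):' /proc/meminfo"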

                  Comment


                  • #89
                    Originally posted by Mike Frett View Post

                    Interesting you say that since there actually are companies trying to find ways to patent Water and Air.
                    I never know whether to laugh or cry when I hear about what these kinds of companies are trying to do.

                    Comment


                    • #90
                      Originally posted by polarathene View Post
                      I've seen it on both Gnome and KDE. Depending on file size it wasn't always instant, but it'd copy faster than it should, often reach around 90% or so and be stuck there for a really long time on Gnome, and on KDE it'd claim the transfer as finished (but trying to eject would refuse due to being busy).
                      I know it's an old post, but it looks like the confusion persisted. If eject/umount says "target is busy", it actually means that some application still has a file from that device open. Now, what can be even more confusing is that a directory is also treated as a file, so if an application simply has its current directory set somewhere inside the stick's mount point, it will prevent unmounting that device.

                      So what probably happened in your case is that you opened the file manager, copied the file(s), then left it open showing the contents of the removable drive. This was enough to prevent eject/umount from working, and you can find out which application is keeping the filesystem busy with the command `lsof <mount point>` or `lsof <device>`. Something similar happens on Windows as well, but there they handle the common case of leaving Explorer showing the contents of the removable device better, by automatically killing it. I'm pretty sure Plasma's device notifier applet also does this with Dolphin lately.

                      Once all such applications are closed or have changed to another current directory, the eject/umount command will no longer complain; it will just trigger writing the data from the buffers to disk and wait for that to complete.
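
                      (As a concrete example of that diagnosis; the mount point and device name below are hypothetical.)

                          # Which processes still hold files (or a current directory) under the mount point?
                          lsof /run/media/$USER/USBSTICK
                          lsof /dev/sdb1    # or query by device node instead

                          # Once nothing is listed, eject/umount only has to wait for the buffered writes to finish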

                      Comment
