Linspire Is Back From The Dead In 2018


  • #41
    Originally posted by oiaohm View Post
    RSS and QoS settings in the i21x ethernet controller need to be adjusted.
    QoS doesn't do a thing if the network is otherwise idle.
    Originally posted by oiaohm View Post
    This is exactly the effect you see when the QoS and RSS settings in the network card are wrong for the load you are attempting to run. Something interesting: run SSH at the same time as the network file system and watch it collapse further, then go change the controller's settings and notice the problem disappear.
    If the accumulated bandwidth is a fraction of what is available, then there is a lot amiss, default Linux is borked, and Windows rightfully walks all over it (assuming that's true).
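    For the record, if RSS or interrupt coalescing on the i21x really were off, that is easy to inspect and adjust from userspace. A minimal sketch with ethtool, assuming a hypothetical interface name enp3s0 (substitute your own):

        # show the NIC's RSS queue (channel) configuration
        ethtool -l enp3s0
        # spread receive work over 4 queues, hardware permitting
        sudo ethtool -L enp3s0 combined 4
        # show and relax interrupt coalescing, which can add latency on light loads
        ethtool -c enp3s0
        sudo ethtool -C enp3s0 rx-usecs 50

    Note that settings changed with ethtool -L/-C do not persist across reboots unless they are reapplied.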
    Originally posted by oiaohm View Post
    Simple things like a background service checking for updates, on default i21x network card settings, for Windows on Linux results in kicking the heck out of a different protocol's traffic patterns.
    Nope. The network is idle.
    Originally posted by oiaohm View Post
    If you still don't get what I mean, try describing how mounting a network drive works in the Gnome3 desktop, and which software components are involved.
    So you are talking about the gvfs backend.



    The smbget issue. That's a reason to mount directly and avoid it.


    Samba documentation does not recommend using what gvfs uses.
    That smbget detail is the first useful piece of information. Only:

    - I already mount directly using this fstab entry:
    '//192.168.0.2/public /media/WDCloud cifs user=none,password=none,rw,users,uid=1000,vers=3,noauto 0 0'
    - speed went from 20 MB/s to 60 MB/s. I already mentioned that I tinkered, and I tried every protocol the network drive supports (NFS, AUFS, CIFS) and many mount options (a command-line variant is sketched after this list).
    - that's still half of the speed you get in Windows.
    - other PCs with some Realtek NICs running Linux show the same issue.
    - the same PC with the on-board I217-V shows the same issue.
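    For reference, the same share can also be mounted ad hoc to try other option combinations without touching fstab. A rough sketch, where guest stands in for the user=none,password=none pair above and the rsize/wsize values are only illustrative:

        # unmount the fstab-based mount first if it is active
        sudo umount /media/WDCloud
        # remount directly with an explicit SMB dialect and large read/write sizes
        sudo mount -t cifs //192.168.0.2/public /media/WDCloud \
            -o guest,uid=1000,vers=3.0,rsize=1048576,wsize=1048576

    Whatever actually gets negotiated can be checked afterwards with 'mount | grep cifs', which is where the output further down comes from.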

    Originally posted by oiaohm View Post
    You do have Smb4K, which does work.

    You have 3 main methods of mounting a cifs share under Linux. cifs in the Linux kernel and smbclient both automatically set the correct block size and are both recommended by the Samba project documentation. smbget, which the gvfs smb backend uses, requires you to set the blocksize manually and is only recommended by the Samba documentation for testing. Smb4K from the KDE world in fact works as per the Samba documentation.
    As you see, I use cifs with a generous 1MB transaction size.

    //192.168.0.2/public on /media/WDCloud type cifs (rw,nosuid,nodev,relatime,vers=3,cache=strict,username=none,domain=,uid=1000,forceuid,gid=1000,forcegid,addr=192.168.0.2,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
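    To take gvfs and smbget out of the picture entirely, smbclient can serve as a baseline for what the Samba client code itself delivers. A sketch, with bigfile.bin as a hypothetical large file on the share:

        # connect as guest (-N: no password prompt) and pull one file, discarding it locally
        smbclient //192.168.0.2/public -N -c 'get bigfile.bin /dev/null'

    smbclient prints the average transfer rate once the get finishes, which makes it easy to compare against the gvfs and kernel-cifs numbers.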

    Originally posted by oiaohm View Post
    Add in the network card you have and you have two levels of trouble. I was thinking of cifs in the Linux kernel or smbclient performing badly; that would be your network card.

    gvfs smb performing badly can come down to not setting a blocksize and the fact that it uses a testing interface, smbget, instead of one of the interfaces meant for production. Being a testing interface, smbget lacks automatic block size adjustment, because in testing you want to throw random block sizes at servers and see if anything breaks.

    The reality is that the number of parts that affect the performance of smb/cifs network share mounting is insanely small, and they are all part of the Samba project. It just does not help that gvfs uses smbget, which was developed for testing, not for performance.

    Yes, using smbget directly you see the same performance levels as what gvfs serves up; all the other layers of crap there do not make a rat's difference to performance.

    This is another problem: a project like Samba builds interfaces for testing and for production, then a graphical developer decides to use the interface meant for testing and wonders why they have performance issues.

    I love how it becomes "the stack is complex, that is why there is a performance problem" when that is absolutely not the case. If you were using a proper interface you would not have a performance problem, or you would be passing the right values into smbget, which gvfs does not do without manual end-user work.
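    If anyone wants to test that claim directly, smbget can be given an explicit block size. A hedged sketch, assuming the --guest and --blocksize options of your Samba build match the ones I know of (option names have shifted between releases), with bigfile.bin again a hypothetical file:

        # fetch one file anonymously with a 1 MiB block size instead of the small default
        smbget --guest --blocksize=1048576 smb://192.168.0.2/public/bigfile.bin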
    Might be partially because (I assume) we both don't have English as a first language, but I can't follow the point you are trying to make.
    Fact is that well-supported hardware with popular Linux desktops sucks at really simple things like transferring files over a network share. The first part is the smbget issue (totally non-transparent, by the way), and even with that solved the performance is not where it should be.

    And finally, to put everything you said about this being some obscure low-level issue in context: copying files via command-line cp runs at around 100 MB/s.
    That's benchmarks vs. every-day desktop use. (And no, this really is faster and not some creative maths.)
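    For anyone wanting to reproduce that number, the crude measurement is just this, assuming a hypothetical large file on the kernel-cifs mount:

        # time a plain copy off the share
        time cp /media/WDCloud/bigfile.bin /tmp/
        # or stream it to /dev/null to see the raw read rate (status=progress needs a recent coreutils)
        dd if=/media/WDCloud/bigfile.bin of=/dev/null bs=1M status=progress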
