Transmission 2.80 Offers Up Various Changes


  • c117152
    replied
    Originally posted by Rexilion View Post
    I did not expect that. Did you run multiple instances with separate users?
    It was a while back, but from what I remember there was a group of users called trans1, trans2... trans12, so probably yes.

    Originally posted by Rexilion View Post
    Code:
    gebruiker@Delta:~$ lsof -i -a -p $(pgrep transmission)
    COMMAND     PID      USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
    transmiss 28001 gebruiker   17u  IPv4 3945341      0t0  TCP *:56070 (LISTEN)
    transmiss 28001 gebruiker   18u  IPv4 3945342      0t0  TCP *:50111 (LISTEN)
    transmiss 28001 gebruiker   19u  IPv4 3945344      0t0  UDP *:50111
    The 56070 port is for Transmission's HTTP viewer. Or are you implying that other ports are used for connecting to other clients? Perhaps you need to tweak the TCP stack (yes, that would not be optimal).
    Each connected peer establishes another socket on another port, so under real-world usage it builds up. On my personal home machine I've seen 3 active torrents reach well over 150 connections. On paper the hardware should be fine; in reality, even mid-end switches won't handle something like this, let alone the low-end ones.

    Originally posted by Rexilion View Post
    So I guess that HTTP and FTP are used so much because there are layer 2 switches for them... Makes sense.
    I suppose. I never bothered with the whys.

    Originally posted by Rexilion View Post
    Did you try kernel throttling (tc)? Throttling the number of peers will not reduce congestion, only overhead (I think).
    The L2 hardware was the second QoS step; the throttling was there first. But that was the signal guy's job, so I didn't go into too much detail there...

    Originally posted by Rexilion View Post
    If "peer in p2p means a separate LAN IP", why not assign multiple IPs to a single NIC and try that? Maybe that is a solution. I'm genuinely surprised that setup is not working...
    tap1, tap2, tap3... Yeah, I think that was done as well. Not sure what went wrong there, though. It was a while back...

    Originally posted by Rexilion View Post
    I guess that Samba and FTP servers are more developed in this regard. You just found out the hard way :/
    It was fun. Besides, it was an important learning experience. I was always a big proponent of the *nix "small programs working well together" philosophy, but seeing it break apart made me realize something wasn't quite working right there.
    Eventually I learned about Plan 9 and how this sort of issue was addressed from the bottom up in the kernel, the protocols and even userland. It's the reason I'm fine with systemd. Sure, it's not *nix-like the way sysvinit is, but *nix isn't all a bed of roses either, so maybe an out-of-the-box solution is just what it takes.

    Anyhow, I'm sure one day someone will build an enterprise solution on top of either BitTorrent or some other p2p protocol. Git vs. CVS is sort of the same idea, so it's not too unlikely...



  • Rexilion
    replied
    Originally posted by c117152 View Post
    Sadly, it seems the protocol itself doesn't lend itself to multiple instances running on the same machine. We've tried different loads and setups, including forgoing all firewalls and security for the sake of testing. But the clients just misbehaved and in rare cases even segfaulted. This wasn't just Transmission but other implementations as well.
    I did not expect that. Did you run multiple instances with separate users?

    Originally posted by c117152 View Post
    At first, there were forwarding issues, since each instance needs to open a whole lot of ports and just doesn't scale past 2-5 clients, depending on the number of active torrents. Gigabit cards and switches (expensive at the time) were already being unboxed, since the overhead had started to build up across the LAN. This was already in the works, so it didn't raise a red flag.
    Code:
    gebruiker@Delta:~$ lsof -i -a -p $(pgrep transmission)
    COMMAND     PID      USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
    transmiss 28001 gebruiker   17u  IPv4 3945341      0t0  TCP *:56070 (LISTEN)
    transmiss 28001 gebruiker   18u  IPv4 3945342      0t0  TCP *:50111 (LISTEN)
    transmiss 28001 gebruiker   19u  IPv4 3945344      0t0  UDP *:50111
    The 56070 port is for Transmission's HTTP viewer. Or are you implying that other ports are used for connecting to other clients? Perhaps you need to tweak the TCP stack (yes, that would not be optimal).

    Originally posted by c117152 View Post
    Then we had some initial success with layer 2 switches to favor HTTP and FTP per machine and establish quotas. But much of the network wasn't L2, so the cost was unacceptable.
    So I guess that HTTP and FTP are used so much because there are layer 2 switches for them... Makes sense.

    Originally posted by c117152 View Post
    Since we're not an open network, we decided to try to manage each client's machine separately instead. This is already something most companies wouldn't do. Nevertheless, we tried down-regulating and limiting individual clients on each machine; say, 10 peers per torrent, 50 overall connections, and other figures. But it either bottlenecked the network to the point that nothing was downloading and you couldn't even surf the web, or it killed the torrenting specifically.
    Did you try kernel throttling (tc)? Throttling the number of peers will not reduce congestion, only overhead (I think).

    Originally posted by c117152 View Post
    I even clearly remember the one setup that seemed to work, only the CIFS turned out unusable... Just weirdness all around. And mind you, we weren't just doing the usual amateurish protocol analyses. One of the guys was an ex-signal-processing dev and he ran all sorts of strange voodoo trying to figure out what was wrong. He's actually the one who said that "peer in p2p means a separate LAN IP". When I suggested a single daemon on a dedicated machine for the LAN-WAN boundary with FTP-like user privileges, the decision was against it, since the advantage over FTP was not applicable for such a small firm. Obviously, I ignored the "decision" and went ahead and tried to set it up, only to discover that while many FTP daemons had their own optional user management, the BitTorrent daemons did not.
    If "peer in p2p means a separate LAN IP", why not assign multiple IPs to a single NIC and try that? Maybe that is a solution. I'm genuinely surprised that setup is not working...
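    A minimal sketch of that multiple-IPs-on-one-NIC idea, assuming an eth0 interface, made-up 192.168.1.x addresses, and one transmission-daemon per address (untested; needs root):

```shell
# Hypothetical addresses and interface name; adapt to the actual LAN.
ip addr add 192.168.1.101/24 dev eth0
ip addr add 192.168.1.102/24 dev eth0

# One daemon per address, each with its own config directory. In each
# directory's settings.json, set "bind-address-ipv4" to the matching IP
# and give each instance a distinct "peer-port" so they don't collide.
transmission-daemon -g /etc/transmission/trans1
transmission-daemon -g /etc/transmission/trans2
```

    Whether the clients still misbehave under this layout is exactly the open question from the post above.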

    Originally posted by c117152 View Post
    TL;DR: Been there, failed that. Like Samba and the various FTP servers (ProFTPD comes to mind), virtual users are necessary for more complex networks. It's the same pros/cons, and it's just as necessary. I suspect there is still a way to pull off a couple of daemons on the same physical machine, but I'm not convinced it will scale with firewalling and security without some real hardware costs. Still, I'd like to see a working setup...
    I guess that Samba and FTP servers are more developed in this regard. You just found out the hard way :/



  • deanjo
    replied
    Originally posted by Kivada View Post
    It's still too simple; I'll stick to Vuze even though it's written in Java... It's still the only torrent app that doesn't completely oversimplify things.
    You might want to try KTorrent. It's pretty heavy on features.



  • Kivada
    replied
    It's still too simple; I'll stick to Vuze even though it's written in Java... It's still the only torrent app that doesn't completely oversimplify things.



  • c117152
    replied
    Originally posted by Rexilion View Post
    Why not use a separate daemon for each user? Unix has user separation already; best not to reimplement this in the daemon.
    Sadly, it seems the protocol itself doesn't lend itself to multiple instances running on the same machine. We've tried different loads and setups, including forgoing all firewalls and security for the sake of testing. But the clients just misbehaved and in rare cases even segfaulted. This wasn't just Transmission but other implementations as well.

    At first, there were forwarding issues, since each instance needs to open a whole lot of ports and just doesn't scale past 2-5 clients, depending on the number of active torrents. Gigabit cards and switches (expensive at the time) were already being unboxed, since the overhead had started to build up across the LAN. This was already in the works, so it didn't raise a red flag.

    Then we had some initial success with layer 2 switches to favor HTTP and FTP per machine and establish quotas. But much of the network wasn't L2, so the cost was unacceptable.

    Since we're not an open network, we decided to try to manage each client's machine separately instead. This is already something most companies wouldn't do. Nevertheless, we tried down-regulating and limiting individual clients on each machine; say, 10 peers per torrent, 50 overall connections, and other figures. But it either bottlenecked the network to the point that nothing was downloading and you couldn't even surf the web, or it killed the torrenting specifically.

    I even clearly remember the one setup that seemed to work, only the CIFS turned out unusable... Just weirdness all around. And mind you, we weren't just doing the usual amateurish protocol analyses. One of the guys was an ex-signal-processing dev and he ran all sorts of strange voodoo trying to figure out what was wrong. He's actually the one who said that "peer in p2p means a separate LAN IP". When I suggested a single daemon on a dedicated machine for the LAN-WAN boundary with FTP-like user privileges, the decision was against it, since the advantage over FTP was not applicable for such a small firm. Obviously, I ignored the "decision" and went ahead and tried to set it up, only to discover that while many FTP daemons had their own optional user management, the BitTorrent daemons did not.

    TL;DR: Been there, failed that. Like Samba and the various FTP servers (ProFTPD comes to mind), virtual users are necessary for more complex networks. It's the same pros/cons, and it's just as necessary. I suspect there is still a way to pull off a couple of daemons on the same physical machine, but I'm not convinced it will scale with firewalling and security without some real hardware costs. Still, I'd like to see a working setup...



  • Rexilion
    replied
    Originally posted by c117152 View Post
    I want per-user downloads that can't be stopped by another unprivileged user. (NOT DONE)
    Why not use a separate daemon for each user? Unix has user separation already; best not to reimplement this in the daemon.

    Originally posted by c117152 View Post
    I want per-user hidden downloads so one user won't be able to see what the other is downloading if unprivileged to do so. (NOT DONE)
    Same argument as above.

    Originally posted by c117152 View Post
    I want per user storage and bandwidth quotas. (NOT DONE)
    Linux has per-user quota support (I've never used it, though). I think you can also do per-user bandwidth throttling using tc and iptables (mark the user's packets).
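    A rough sketch of that tc + iptables combination; the interface, UID and rate are made-up, and this only shapes the user's outbound traffic (inbound would need ingress policing or an ifb device):

```shell
# Mark outbound packets belonging to UID 1001 (the owner match only
# works in the OUTPUT chain, i.e. locally generated traffic).
iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j MARK --set-mark 10

# Shape marked traffic to 1 Mbit/s: an HTB class plus an fw filter that
# steers mark-10 packets into it. Unmarked traffic falls through to 1:30.
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10
```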

    Originally posted by c117152 View Post
    It really all comes down to Transmission not fitting in with the multi-user environment. The way the torrent protocol works, you only want one client per machine, likely even per IP.
    I definitely would not want the same web browser process being used by several different users.

    Originally posted by c117152 View Post
    I suspect it's the real reason why companies had trouble adopting torrents as a distribution model for files. They just can't set it up like they would an FTP server with multiple users.
    I disagree, with the reasons mentioned above.

    Originally posted by c117152 View Post
    I personally blame the distributions and not the devs on this one. If they packaged it as a daemon by default, so that when someone runs "sudo apt-get install transmission" he gets a working daemon and a transmission group he only needs to add himself to, the devs would have started hearing more about the multi-user concerns.
    It's not how it's packaged, it's how people use it.
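    For what it's worth, the Debian/Ubuntu transmission-daemon package ships roughly this setup (the group there is called debian-transmission rather than transmission; the username below is hypothetical):

```shell
sudo apt-get install transmission-daemon       # installs and starts the daemon
sudo usermod -aG debian-transmission alice     # let a user manage its downloads
```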



  • stqn
    replied
    I've never had a problem with Transmission's GUI (GTK) freezing, even on a low-power Atom computer. However, the new version has "Remove the most frequent thread locks in libtransmission" in the release notes; maybe that will help.



  • Antartica
    replied
    For multi-user, use multiple CLI instances with nohup...

    Originally posted by c117152 View Post
    I want per-user downloads that can't be stopped by another unprivileged user. (NOT DONE)
    I want per-user hidden downloads so one user won't be able to see what the other is downloading if unprivileged to do so. (NOT DONE)
    I want per user storage and bandwidth quotas. (NOT DONE)
    I share many of your concerns, but I'm a happy user of Transmission (v1.52). It's just a matter of using an individual transmission-cli for every download and launching them with nohup. For example (with an upload limit of 20 kB/s; $port holds an unused, unprivileged port for this download):

    Code:
    cd /path/to/new/download/dir
    nohup transmission-cli --port $port -u 20 -w "/path/to/new/download/dir" --finish /home/user/bin/touch-finished.sh /path/to/file.torrent >> nohup.out 2>&1

    After hacking together some scripts, it can be very convenient (this describes my current setup):
    1. Have a "watched" directory where .torrent files are saved for automatic download (or a file with the suffix .magnet containing the magnet link; Firefox can be configured to call a script that does this when clicking a magnet link)
    2. A script in crontab runs every 5 minutes, scans that directory and starts new torrents with transmission-cli and nohup
    3. Everything is kept tidy; there is a download directory with a subdirectory for every torrent, and inside that subdirectory are the nohup.out with the download stats and the torrent contents
    4. A utility script lists the current download status (mixes in a ps and tails the nohup.out of every active download)
    5. Another utility script stops seeding a given torrent

    Benefits:
    - You can stop a download by killing the corresponding transmission-cli process
    - You can pause downloads by sending SIGSTOP and resume them with SIGCONT
    - The scripts have some "intelligence" as to when to stop seeding (for example, after seeding 2 times the data size, or after 3 days)
    - The crontab script can restart a torrent if it is not "running" and hasn't finished downloading (this restarts downloads after a computer reboot)

    So if you are a power user, transmission-cli is sufficient to solve those problems (although it may be non-optimal for magnets, as it may degrade the performance of PEX since every torrent is "independent").
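    Step 2 of that setup might look something like this hypothetical scanner; the author's actual scripts aren't shown, and the paths, port range and 20 kB/s limit are assumptions carried over from the earlier example:

```shell
#!/bin/sh
# Hypothetical cron job, run every 5 minutes: for each new .torrent in
# the watch directory, make a per-torrent subdirectory and launch a
# detached transmission-cli in it.
WATCH=/home/user/watched
ROOT=/home/user/downloads
PORT=51500                         # start of an unused, unprivileged range

for t in "$WATCH"/*.torrent; do
    [ -e "$t" ] || break                     # glob matched nothing
    name=$(basename "$t" .torrent)
    mkdir -p "$ROOT/$name"
    mv "$t" "$ROOT/$name/"
    ( cd "$ROOT/$name" && nohup transmission-cli --port "$PORT" -u 20 \
        -w "$ROOT/$name" "$name.torrent" >> nohup.out 2>&1 & )
    PORT=$((PORT + 1))
done
```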



  • c117152
    replied
    Originally posted by mark45 View Post
    To me, Transmission sucks because the GUI main loop often blocks on the torrent I/O, which makes it randomly non-responsive while the down/upload speed drops. It's the only BitTorrent client I have this problem with (and have for years), so I try to touch the GUI as rarely as possible.
    The daemon uses (naturally non-blocking) RPC sockets to communicate with a variety of GUIs: Qt, GTK... Python, C, Pascal...
    This is the one I currently use:

    It's in C and uses GTK.

    I used to use this one which is more feature complete:

    It's possibly better for most use cases, but it's in Pascal, so I can't play around with it as much as I'd like.

    Most of these GUIs are more feature-rich than the default one, and the ones I've mentioned are also mostly less buggy. You can even use different GUIs at the same time, from the same machine or another.
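    That same RPC interface is what transmission-remote speaks, so you can also drive the daemon from a shell with no GUI at all (localhost:9091 is the default RPC address; the torrent ID below is hypothetical):

```shell
transmission-remote localhost:9091 -l                  # list all torrents
transmission-remote localhost:9091 -a file.torrent     # add a torrent
transmission-remote localhost:9091 -t 1 -S             # stop torrent #1
transmission-remote localhost:9091 -t 1 -s             # start it again
```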



  • mark45
    replied
    To me, Transmission sucks because the GUI main loop often blocks on the torrent I/O, which makes it randomly non-responsive while the down/upload speed drops. It's the only BitTorrent client I have this problem with (and have for years), so I try to touch the GUI as rarely as possible.

