Transmission 2.80 Offers Up Various Changes


  • #11
    Originally posted by Kivada View Post
    It's still too simple; I'll stick to Vuze even though it's written in Java... It's still the only torrent app that doesn't completely oversimplify things.
    You might want to try ktorrent. It's pretty heavy on the features.



    • #12
      Originally posted by c117152 View Post
      Sadly, it seems the protocol itself doesn't lend itself to multiple instances running on the same machine. We've tried different loads and setups, including forgoing all firewalls and security for the sake of testing. But the clients just misbehaved and in rare cases even segfaulted. This wasn't just Transmission but other implementations as well.
      I did not expect that. Did you run multiple instances with separate users?
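      For the record, running several instances is usually done by giving each one its own config dir and ports. A minimal sketch of what I'd expect to work (the usernames, paths, and port numbers are just placeholders):
      Code:
      # one daemon per user, each with its own config dir, RPC port, and peer port
      sudo -u trans1 transmission-daemon -g /home/trans1/.config/transmission -p 9091 -P 51413
      sudo -u trans2 transmission-daemon -g /home/trans2/.config/transmission -p 9092 -P 51414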

      Originally posted by c117152 View Post
      At first, there were forwarding issues, since each instance needs to open a whole lot of ports and just doesn't scale beyond 2-5 clients, depending on the number of active torrents. Some gigabit (expensive at the time) cards and switches had already been unboxed, since the overhead started to build up across the LAN. This was already in the works, so it didn't raise a red flag.
      Code:
      gebruiker@Delta:~$ lsof -i -a -p $(pgrep transmission)
      COMMAND     PID      USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
      transmiss 28001 gebruiker   17u  IPv4 3945341      0t0  TCP *:56070 (LISTEN)
      transmiss 28001 gebruiker   18u  IPv4 3945342      0t0  TCP *:50111 (LISTEN)
      transmiss 28001 gebruiker   19u  IPv4 3945344      0t0  UDP *:50111
      Port 56070 is for Transmission's HTTP viewer. Or are you implying that the other ports are used for connecting to other clients? Perhaps you need to tweak the TCP stack (yes, that would not be optimal).
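      If it really is the TCP stack, the usual knobs would be something like this (a sketch only; the values are arbitrary and whether they help here is an assumption):
      Code:
      # widen the ephemeral port range and time out closed sockets faster
      sysctl -w net.ipv4.ip_local_port_range="15000 61000"
      sysctl -w net.ipv4.tcp_fin_timeout=30
      # raise the conntrack ceiling if a stateful firewall sits in the path
      sysctl -w net.netfilter.nf_conntrack_max=131072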

      Originally posted by c117152 View Post
      Then, we had some initial success with layer 2 switches to favor HTTP and FTP per machine and establish quotas. But much of the network wasn't L2, so the cost was unacceptable.
      So I guess HTTP and FTP are used so much because there are layer 2 switches that understand them... Makes sense.

      Originally posted by c117152 View Post
      Since we're not an open network, we decided to try and manage each client's machine separately instead. This is already something most companies wouldn't do. Nevertheless, we tried throttling down and limiting individual clients on each machine; say, 10 peers per torrent, 50 connections overall, and other such figures. But it either bottlenecked the network to the point that nothing was downloading and you couldn't even surf the web, or it killed the torrenting specifically.
      Did you try kernel throttling (tc)? Throttling the number of peers will not reduce congestion, only overhead (I think).
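      Something along these lines, for example (just a sketch; eth0, the rates, and matching on the 50111 peer port from your lsof output are assumptions):
      Code:
      # HTB root qdisc; unclassified traffic falls through to 1:30
      tc qdisc add dev eth0 root handle 1: htb default 30
      tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit
      # cap the torrent class at 10mbit, allow bursts up to 20mbit
      tc class add dev eth0 parent 1:1 classid 1:10 htb rate 10mbit ceil 20mbit
      tc class add dev eth0 parent 1:1 classid 1:30 htb rate 90mbit
      # steer traffic sourced from the peer port into the capped class
      tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 50111 0xffff flowid 1:10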

      Originally posted by c117152 View Post
      I even clearly remember the one setup that seemed to work, only CIFS turned out unusable... Just weirdness all around. And mind you, we weren't just doing the usual amateurish protocol analyses. One of the guys was an ex signal-processing dev and he ran all sorts of strange voodoo trying to figure out what was wrong. He's actually the one who said that "peer in p2p means a separate LAN IP". When I suggested a single daemon on a dedicated machine for the LAN-WAN boundary with FTP-like user privileges, the decision was against it, since the advantage over FTP was not applicable for such a small firm. Obviously, I ignored the "decision" and went ahead to try and set it up, only to discover that while many FTP daemons had their own optional user management, the BitTorrent daemons did not.
      If "peer in p2p means a seperate lan ip", why not assign multiple ip's to a single NIC and try that? Maybe that is a solution. I'm genuinely surprised that setup is not working...

      Originally posted by c117152 View Post
      TL;DR: Been there, failed that. Like Samba and the various FTP servers (ProFTPD comes to mind), virtual users are necessary for more complex networks. It's the same pros/cons, and it's just as necessary. I suspect there is a way to still pull off a couple of daemons on the same physical machine, but I'm not convinced it will scale with firewalling and security without some real hardware costs. Still, I'd like to see a working setup...
      I guess that Samba and FTP servers are more developed in this regard. You just found out the hard way :/
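      For comparison, a virtual user in ProFTPD is just a few lines (a sketch; the paths and names are placeholders):
      Code:
      # create a user that exists only in proftpd's own passwd file
      ftpasswd --passwd --file=/etc/proftpd/ftpd.passwd \
               --name=alice --uid=2001 --gid=2001 --home=/srv/ftp/alice --shell=/bin/false
      # then in proftpd.conf, authenticate against that file:
      #   AuthUserFile /etc/proftpd/ftpd.passwd
      #   RequireValidShell off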



      • #13
        Originally posted by Rexilion View Post
        I did not expect that. Did you run multiple instances with separate users?
        It was a while back, but from what I remember there was a group of users called trans1, trans2... trans12, so probably yes.

        Originally posted by Rexilion View Post
        Code:
        gebruiker@Delta:~$ lsof -i -a -p $(pgrep transmission)
        COMMAND     PID      USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
        transmiss 28001 gebruiker   17u  IPv4 3945341      0t0  TCP *:56070 (LISTEN)
        transmiss 28001 gebruiker   18u  IPv4 3945342      0t0  TCP *:50111 (LISTEN)
        transmiss 28001 gebruiker   19u  IPv4 3945344      0t0  UDP *:50111
        Port 56070 is for Transmission's HTTP viewer. Or are you implying that the other ports are used for connecting to other clients? Perhaps you need to tweak the TCP stack (yes, that would not be optimal).
        Each connected peer establishes another socket on another port, so under real-world usage it builds up. On my personal home machine I've seen 3 active torrents reach well over 150 connections. Now, on paper the hardware should be fine; in reality, even mid-end switches won't handle something like this, let alone the low-end ones.
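        Easy enough to check for yourself: count the established sockets while a few torrents are active (assuming the daemon is the only transmission process running):
        Code:
        # count established peer connections belonging to transmission
        lsof -i TCP -a -p $(pgrep transmission) | grep -c ESTABLISHED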

        Originally posted by Rexilion View Post
        So I guess HTTP and FTP are used so much because there are layer 2 switches that understand them... Makes sense.
        I suppose. I never bothered with the whys.

        Originally posted by Rexilion View Post
        Did you try kernel throttling (tc)? Throttling the number of peers will not reduce congestion, only overhead (I think).
        The L2 hardware was the second QoS step. The throttling was there first. But that was the signal guy's job, so I didn't bother going into too much detail there...

        Originally posted by Rexilion View Post
        If "peer in p2p means a seperate lan ip", why not assign multiple ip's to a single NIC and try that? Maybe that is a solution. I'm genuinely surprised that setup is not working...
        tap1, tap2, tap3... Yeah. I think that was done as well. Not sure what went wrong there, though. It was a while back...
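        If anyone wants to retry it, the taps would be created roughly like this (names and addresses are placeholders; whether the clients then behave is exactly the open question):
        Code:
        # persistent tap devices, one per client, each with its own LAN IP
        ip tuntap add dev tap1 mode tap
        ip addr add 192.168.1.201/24 dev tap1
        ip link set tap1 up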

        Originally posted by Rexilion View Post
        I guess that Samba and FTP servers are more developed in this regard. You just found out the hard way :/
        It was fun. Besides, it was an important learning experience. I was always a big proponent of the *nix "small programs working well together" philosophy. But seeing it break apart made me realize something wasn't quite working right there.
        Eventually I learned about Plan 9 and how this sort of issue was addressed from the bottom up in the kernel, the protocols, and even userland. It's the reason I'm fine with systemd. Sure, it's not *nix-like the way sysvinit is, but *nix isn't all a bed of roses either, so maybe an out-of-the-box solution is just what it takes.

        Anyhow, I'm sure one day someone will build an enterprise solution on top of either BitTorrent or some other p2p protocol. Git vs. CVS is sort of the same idea, so it's not too unlikely...
