OpenSSH 6.5 Rolls In New Features

  • OpenSSH 6.5 Rolls In New Features

    Phoronix: OpenSSH 6.5 Rolls In New Features

    There's a major new release out today of OpenSSH...


  • #2
    i'm curious how performance compares to accelerated aes. i was surprised that an i3 without aes acceleration could still do gigabit with scp. then i tried rsync, and it was like 25 megabytes/sec
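
    a rough way to compare, as a sketch (the host "server" and the file "big.img" are just placeholders here, and the chacha20 cipher needs openssh 6.5 on both ends):

      # raw encrypted-channel throughput, no disk involved on the receiving end
      dd if=/dev/zero bs=1M count=2000 | ssh server 'cat > /dev/null'

      # scp with an explicit cipher: the new chacha20 one vs plain aes128-ctr
      scp -c chacha20-poly1305@openssh.com big.img server:/tmp/
      scp -c aes128-ctr big.img server:/tmp/

      # same file over rsync/ssh for comparison
      rsync -av -e ssh big.img server:/tmp/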

    • #3
      It's funny how just reading the word 'elliptic' raises one's suspicions these days.

      • #4
        Originally posted by mercutio View Post
        i'm curious how performance compares to accelerated aes. i was surprised that an i3 without aes acceleration could still do gigabit with scp. then i tried rsync, and it was like 25 megabytes/sec
        Well, AES was designed to be really lightweight and fast (it is even supported by some smart cards and similar embedded, very small-scale hardware).

        When doing SCP with small files that fit inside Linux's disk cache, you're basically doing memory-to-memory copies, and the network speed is your main bottleneck (AES is light enough not to be a major bottleneck, and disk speed doesn't come into play).

        On the other hand, rsync has complex binary diff and error-checking functionality built in. That means the files get read from disk several times (for the diff), and rsync waits for the write before reading one last time (for error checking). Disk speed is now the limiting factor.

        Most modern architectures can handle AES at decent speed, though AES hardware acceleration becomes significant at server-like loads (lots of concurrent encrypted connections).
        If you still have some very old hardware around, you might want to experiment with Blowfish. In my experience it used to be a bit faster on Pentium 3s back then.
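
        A quick way to get a feel for what a given box can push, as a rough sketch (the -evp form goes through the accelerated code path when AES-NI is present; blowfish-cbc may or may not still be enabled in your ssh build, and "oldbox" is just a placeholder):

          openssl speed aes-128-cbc          # plain C implementation
          openssl speed -evp aes-128-cbc     # EVP path, uses AES-NI if the CPU has it
          scp -c blowfish-cbc somefile oldbox:/tmp/   # old-school cipher for a Pentium 3-era machine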

        • #5
          Originally posted by DrYak View Post
          Well, AES was designed to be really lightweight and fast (it is even supported by some smart cards and similar embedded, very small-scale hardware).

          When doing SCP with small files that fit inside Linux's disk cache, you're basically doing memory-to-memory copies, and the network speed is your main bottleneck (AES is light enough not to be a major bottleneck, and disk speed doesn't come into play).

          On the other hand, rsync has complex binary diff and error-checking functionality built in. That means the files get read from disk several times (for the diff), and rsync waits for the write before reading one last time (for error checking). Disk speed is now the limiting factor.

          Most modern architectures can handle AES at decent speed, though AES hardware acceleration becomes significant at server-like loads (lots of concurrent encrypted connections).
          If you still have some very old hardware around, you might want to experiment with Blowfish. In my experience it used to be a bit faster on Pentium 3s back then.
          for some reason rsync seems slow even if the destination files don't exist at all though. i just realised i may be using something other than aes already, i dunno what scp is using (i have both dropbear client and ssh client), but i'm using dropbear on my server, and it seems dropbear added support earlier than the linux openssh support.

          disks can easily saturate gigabit ethernet. now i want to try scp over infiniband between newer cpus. it seems arch doesn't have openssh 6.5 yet though
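
          one way to see which cipher actually got negotiated, as a sketch (the exact debug wording differs between openssh versions, and dropbear's client prints its own messages):

            # the "kex:" debug lines show the cipher and MAC agreed on in each direction
            ssh -v server exit 2>&1 | grep 'kex:'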

          • #6
            Originally posted by mercutio View Post
            for some reason rsync seems slow even if the destination files don't exist at all though.
            Yup, it reads back the segments of data it has written and checksums them (MD5 or MD4, I can't remember exactly).
            If there's a write error, SCP won't notice. It relies only on SSH/SSL to guarantee the integrity of the encrypted channel.
            If there's a write error, rsync's checksumming DOES detect it. Rsync checksums and verifies everything (but that means the data has to make a round trip to the disk, slowing the process down).
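
            If the delta/verification passes are the suspect, a couple of knobs worth playing with, as a sketch (src/, dst/ and "server" are placeholders; -W mostly matters when the destination file already exists):

              rsync -avW src/ server:dst/    # --whole-file: skip the delta algorithm entirely
              rsync -avc src/ server:dst/    # --checksum: checksum every file on both ends before deciding (even slower)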

            Originally posted by mercutio View Post
            disks can easily saturate gigabit ethernet.
            Someone has a really big parallel RAID...

            Originally posted by mercutio View Post
            now i want to try scp over infiniband between newer cpus
            Someone definitely has too much bandwidth at his disposal...

            • #7
              Originally posted by DrYak View Post
              Yup, it reads back the segments of data it has written and checksums them (MD5 or MD4, I can't remember exactly).
              If there's a write error, SCP won't notice. It relies only on SSH/SSL to guarantee the integrity of the encrypted channel.
              If there's a write error, rsync's checksumming DOES detect it. Rsync checksums and verifies everything (but that means the data has to make a round trip to the disk, slowing the process down).

              Someone has a really big parallel RAID...

              Someone definitely has too much bandwidth at his disposal...
              haha i have ssd raid, and a single hard-disk. the single hard-disk can easily saturate gigabit. the ssd raid can't quite keep up with ddr infiniband. ddr infiniband is actually surprisingly cheap for back-to-back computers at short distance - about $100 USD or so. an example of such a card is:


              then you need a cx4 cable. like:


              even if it does a read afterwards it should be fast. i'm using zfs, so there's some checksumming of data anyway. the thing is, rsync is 100% cpu bound on the slower computer. (at the time it had 3 hard-disk raid10 md with one missing disk, which can still do 100mb/sec read -- it has 3 hard-disks now and is up to about 450mb/sec, but i'm still sure it was over 100mb/sec by a reasonable amount)

              rsync is a handy tool, but i would like to see the cpu usage come down somehow. also on large datasets it tends to take ages to get started.
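
              one thing that sometimes shaves cpu off the transfer, as a sketch (paths are placeholders, and whether it helps depends on whether rsync itself or the ssh transport is what's pegged at 100%):

                # cheaper cipher and no ssh-level compression on the transport
                rsync -av -e "ssh -c aes128-ctr -o Compression=no" src/ server:dst/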
