The 2019 Laptop Performance Cost To Linux Full-Disk Encryption



    Phoronix: The 2019 Laptop Performance Cost To Linux Full-Disk Encryption

    I certainly recommend that everyone uses full-disk encryption for their production systems, especially for laptops you may be bringing with you. In over a decade of using Linux full-disk encryption on my main systems, the overhead cost to doing so has fortunately improved with time thanks to new CPU instruction set extensions, optimizations within the Linux kernel, and faster SSD storage making the performance penalty even less noticeable. As it's been a while since my last look at the Linux storage encryption overhead, here are some fresh results using a Dell XPS laptop running Ubuntu with/without LUKS full-disk encryption.


  • #2
    It's good to see that the impact on normal daily use is minimal, as is also my experience, except that the initial lag when opening (bigger) files is often noticeably longer. SQLite is always an exception in these benchmarks, in the sense that it's very sensitive to the setup it's running on. I wonder why that is.



    • #3
      I have to disagree with the statement that the power consumption is not significantly affected. Tasks can take a lot longer to complete, so even if the encryption doesn't make the system draw more current (milliamperes) from the battery, the total energy consumed by the task still increases a lot if the running time does.
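
      To make this reasoning concrete: energy is power integrated over time, so a longer runtime raises the total battery drain even when the instantaneous draw barely changes. A minimal sketch, using made-up illustrative wattages and runtimes (not measurements from the article):

```python
# Energy = average power draw x runtime. All numbers below are
# hypothetical, purely to illustrate the reasoning in this post.
def task_energy_wh(avg_power_w: float, runtime_s: float) -> float:
    """Total energy (watt-hours) a task consumes over its runtime."""
    return avg_power_w * runtime_s / 3600.0

plain = task_energy_wh(15.0, 100.0)      # unencrypted: 15 W for 100 s
encrypted = task_energy_wh(15.5, 130.0)  # near-identical draw, 30% longer

# The longer runtime, not the slightly higher instantaneous draw,
# dominates the extra energy cost of the encrypted run.
```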



      • #4
        Hm, those results seem a bit weird. If using AES-XTS, and with AES-NI being present and enabled, there shouldn't be much of a performance impact IMO.



        • #5
          I would very much like to see an article on the performance of SSDs with full-disk encryption enabled over time without TRIM being enabled (this is the default configuration). The reason being that with encryption enabled, you can't enable TRIM without giving up some of the security benefits of FDE.

          I suppose it's up to the user to decide whether the information leaked is of any importance to them when deciding whether or not to enable TRIM on their disks with FDE.

          Some SSDs (solid-state drives) implement the TRIM command, which is used to inform the disk (block device) that sectors are no longer in use. If you...





          • #6
            Originally posted by antnythr View Post
            I would very much like to see an article on the performance of SSDs with full-disk encryption enabled over time without TRIM being enabled (this is the default configuration). The reason being that with encryption enabled, you can't enable TRIM without giving up some of the security benefits of FDE.

            I suppose it's up to the user to decide whether the information leaked is of any importance to them when deciding whether or not to enable TRIM on their disks with FDE.

            Some SSDs (solid-state drives) implement the TRIM command, which is used to inform the disk (block device) that sectors are no longer in use. If you...


            I would like to know that too, since Ubuntu and Fedora* also enable TRIM by default when the system is encrypted by the installer.
            * Fedora doesn't enable periodic TRIM by default; you need to enable the fstrim.timer service yourself.



            • #7
              Debian unfortunately still hasn't implemented full encryption (without an unencrypted boot partition) in the installer. And doing it manually is a major pain, especially if you partition things in advance (I prefer that, to avoid adding swap and to use custom offsets, which helps some SSDs) and then try to make the installer unlock the encrypted partitions and install onto them.

              See https://bugs.debian.org/cgi-bin/bugr...cgi?bug=814798

              Unencrypted boot + encrypted rest using the Debian installer's partitioner is easy to set up, though.
              Last edited by shmerl; 14 March 2019, 01:52 PM.



              • #8
                What about using OPAL?
                What drawbacks does it have?



                • #9
                  Originally posted by lucasbekker View Post
                  I have to disagree with the statement that the power consumption is not significantly affected. Tasks can take a lot longer to complete, so even if the encryption doesn't make the system draw more current (milliamperes) from the battery, the total energy consumed by the task still increases a lot if the running time does.
                  I was prepared to disagree, as the load could be borne by the disk controller, but after thinking a bit more about it, you are right, since it's extra CPU time that's being used.

                  However, regarding performance, I expect it to not make any difference if you have enough RAM to cache whatever you're working on in encrypted form. This can especially be seen on the second "Timed Linux Kernel Compilation" graph, where only one part of the first test is more costly.

                  The SQLite test likely comes down to its write barriers (syncing to disk after each transaction). I wonder how it would compare if eatmydata (from libeatmydata) were used to disable fsync. It might also be worth increasing the commit interval, disabling journaling, and toying with different filesystems.

                  All of this comes at the expense of data integrity, of course.
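
                  A quick way to see how much per-commit syncing dominates this kind of workload is a sketch with Python's bundled sqlite3 module. PRAGMA synchronous = OFF drops the per-commit fsync, much like eatmydata does for arbitrary programs, and carries the same durability caveat:

```python
import os
import sqlite3
import tempfile
import time

def timed_inserts(synchronous: str, n: int = 200) -> float:
    """Time n single-row transactions under a given synchronous mode."""
    path = os.path.join(tempfile.mkdtemp(), "bench.db")
    con = sqlite3.connect(path)
    con.execute(f"PRAGMA synchronous = {synchronous}")
    con.execute("CREATE TABLE t (v INTEGER)")
    start = time.perf_counter()
    for i in range(n):  # one commit per insert: the worst case for fsync
        con.execute("INSERT INTO t VALUES (?)", (i,))
        con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

full = timed_inserts("FULL")  # fsync on every commit (durable)
off = timed_inserts("OFF")    # no per-commit fsync (fast, unsafe on power loss)
```

                  On a disk where fsync is expensive (and especially behind dm-crypt), the FULL run is typically many times slower than the OFF run, which is exactly the kind of gap these SQLite benchmarks expose.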

                  I wonder whether in the future we could use some homomorphic encryption scheme to perform operations on the encrypted data and decrypt it only when needed, or maybe have hardware-backed encrypted memory that could share its key with the disk's.

                  Side note: I recently installed Ubuntu on a friend's computer, and it looks like the installer doesn't offer any LUKS-on-LVM options, so we had to do it manually. I wonder what the performance penalty of this is, but I would expect it to be minimal compared to plain LUKS.

                  shmerl ah yes, we also tried to encrypt /boot, but stopped short of it, due to the installer spitting errors (or was it GRUB after the fact?) and a lack of time (IIRC).
                  I did so on my Arch system, leaving only the Secure Boot-signed kernel image in the EFI partition. At this point, my biggest concern lies with the initramfs. Any tips on signing that one?
                  Last edited by M@yeulC; 14 March 2019, 01:54 PM.



                  • #10
                    Originally posted by sandy8925 View Post
                    Hm, those results seem a bit weird. If using AES-XTS, and with AES-NI being present and enabled, there shouldn't be much of a performance impact IMO.
                    Roughly what I expected, based on practical experience with encrypted disks on laptops (Windows and Linux). A few different variables affect practical throughput. One of the biggest is the drive interface. In my experience, SATA drives generally suffer considerably less performance impact because the maximum throughput of that bus is usually lower than the throughput of the CPU's hardware encrypt/decrypt support. Spinning rust is generally even less impacted than SSDs, again because the CPU can run the instructions faster than the drive can transfer data. The bottleneck there becomes the number of encryption threads the CPU can handle with the acceleration. I'm not sure about the most current Intel CPUs, but traditionally they could only handle one encryption thread at a time, while Ryzen can handle two at once.

                    This changes with NVMe drives because they can handle greater throughput, apparently higher than what the CPU can process. The CPU becomes the bottleneck it generally wasn't on SATA-only systems, so you see a slowdown on a system using NVMe drives even with hardware-based de/encryption support.
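
                    The bottleneck argument above can be sketched as a toy min() model: effective encrypted throughput is capped by whichever is slower, the drive link or the CPU's AES processing rate. The throughput figures below are rough assumed round numbers, not benchmark results:

```python
# Toy model with assumed round numbers (MB/s), not measured values.
def effective_throughput(link_mbps: int, cpu_aes_mbps: int) -> int:
    """Encrypted I/O is capped by the slower of the bus and the CPU."""
    return min(link_mbps, cpu_aes_mbps)

CPU_AES = 2000                              # assumed AES-NI processing rate
hdd = effective_throughput(150, CPU_AES)    # spinning rust: drive is the cap
sata = effective_throughput(550, CPU_AES)   # SATA SSD: link is still the cap
nvme = effective_throughput(3500, CPU_AES)  # NVMe: the CPU becomes the cap
```

                    With SATA and spinning rust, min() returns the drive-side limit, so encryption is invisible in throughput tests; with NVMe, it returns the CPU-side limit, which is the visible slowdown.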

