Cloudflare Improving Linux Disk Encryption Performance - Doubling The Throughput


  • #11
    The thing I was hoping to see in the Cloudflare blog post, but which they seem to have skipped, was the effect of reducing, but not necessarily eliminating, the number of async queuing layers. The dm-crypt queuing code was written when the Crypto API was still synchronous; the Crypto API has since been made asynchronous by adding its own internal queuing.

    What amount of benefit would be realized by just removing the queuing in the dm-crypt code, instead of removing it in both layers?
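
    For readers who haven't poked at the kernel Crypto API, here is a rough sketch (kernel-module style C, not code from the Cloudflare patches) of the knob being discussed above: a caller can mask out CRYPTO_ALG_ASYNC to insist on a synchronous cipher, or drop the mask to also accept asynchronous implementations and wait on their completion. The function name, key and buffer below are placeholders for illustration only.

```c
/* Illustrative sketch only -- not dm-crypt or Cloudflare code. */
#include <crypto/skcipher.h>
#include <linux/scatterlist.h>
#include <linux/err.h>

static int example_encrypt(void *buf, unsigned int len,
                           const u8 *key, unsigned int keylen)
{
        /* Passing CRYPTO_ALG_ASYNC as the mask rejects async drivers, so this
         * returns a synchronous xts(aes) transform (no internal queueing).
         * Use (0, 0) instead to also accept async/off-CPU implementations. */
        struct crypto_skcipher *tfm =
                crypto_alloc_skcipher("xts(aes)", 0, CRYPTO_ALG_ASYNC);
        struct skcipher_request *req;
        struct scatterlist sg;
        DECLARE_CRYPTO_WAIT(wait);
        u8 iv[16] = { 0 };
        int err;

        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_skcipher_setkey(tfm, key, keylen);
        if (err)
                goto out_free_tfm;

        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!req) {
                err = -ENOMEM;
                goto out_free_tfm;
        }

        /* With an async transform, encrypt() may return -EINPROGRESS/-EBUSY
         * and finish later via the callback; crypto_wait_req() hides that.
         * With a sync transform it simply runs in the caller's context. */
        skcipher_request_set_callback(req,
                        CRYPTO_TFM_REQ_MAY_SLEEP | CRYPTO_TFM_REQ_MAY_BACKLOG,
                        crypto_req_done, &wait);
        sg_init_one(&sg, buf, len);
        skcipher_request_set_crypt(req, &sg, &sg, len, iv);
        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

        skcipher_request_free(req);
out_free_tfm:
        crypto_free_skcipher(tfm);
        return err;
}
```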



    • #12
      Originally posted by swagg_boi View Post
      Maybe this is a silly question... Why disk encryption on a server? I thought this was a protection for "data at rest", e.g. a powered off laptop, unmounted disk, etc
      Because the moment someone malicious pulls a drive out of the server, or walks off with the whole server, any data it contains becomes "data at rest". Server disk encryption protects that data against theft of a drive or of the whole machine, against hardware getting lost during a move, and after the server has been decommissioned and disposed of. For all of these reasons it has become mandated by law in certain industries.



      • #13
        Originally posted by Veerappan View Post
        The thing I was hoping to see in the Cloudflare blog post, but which they seem to have skipped, was the effect of reducing, but not necessarily eliminating, the number of async queuing layers. The dm-crypt queuing code was written when the Crypto API was still synchronous; the Crypto API has since been made asynchronous by adding its own internal queuing.

        What amount of benefit would be realized by just removing the queuing in the dm-crypt code, instead of removing it in both layers?
        If you read through, there's a point in the process where that's what they did. There was a small improvement, but nothing like what you would expect if it were limited by crypto overhead.



        • #14
          Originally posted by willmore View Post

          If you read through, there's a point in the process where that's what they did. There was a small improvement, but nothing like what you would expect if it were limited by crypto overhead.
          Huh, thought I had gone through the whole thing. Must've unconsciously skimmed a bit.

          Edit: I think we just read it differently. From what I got out of it, they modified both the dm-crypt AND Crypto API code to remove queuing and make everything synchronous, although they left a threading/queuing toggle in their dm-crypt modifications. But it seems that their modified code prefers synchronous cipher implementations whenever they are available.

          I was wondering about the opposite case: leave the Crypto API async, but remove the queuing from dm-crypt. I didn't see any numbers for that configuration.
          Last edited by Veerappan; 25 March 2020, 04:10 PM.
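
          To make the "threading/queuing toggle" a bit more concrete, here is a generic sketch (kernel-style C, not dm-crypt's actual code) of the pattern: depending on a flag, a request is either processed inline in the submitting context or bounced to a dedicated workqueue. All names here (my_ctx, my_request, process_request) are made up for illustration.

```c
/* Generic "inline vs. queued" toggle sketch -- not dm-crypt code. */
#include <linux/workqueue.h>
#include <linux/types.h>

struct my_ctx {
        struct workqueue_struct *wq;    /* allocated elsewhere with alloc_workqueue() */
        bool no_workqueue;              /* the toggle: true = process inline */
};

struct my_request {
        struct work_struct work;
        struct my_ctx *ctx;
        /* ... payload (e.g. the bio and crypto state) ... */
};

static void process_request(struct my_request *rq)
{
        /* the actual, possibly CPU-heavy, work (e.g. encryption) goes here */
}

static void my_work_fn(struct work_struct *work)
{
        struct my_request *rq = container_of(work, struct my_request, work);

        process_request(rq);
}

static void submit_request(struct my_ctx *ctx, struct my_request *rq)
{
        rq->ctx = ctx;

        if (ctx->no_workqueue) {
                /* inline: no extra context switch or queueing latency,
                 * but the submitting context pays the full cost */
                process_request(rq);
                return;
        }

        /* queued: defer to a worker thread, which keeps the submitting
         * context short at the price of a wakeup and a context switch */
        INIT_WORK(&rq->work, my_work_fn);
        queue_work(ctx->wq, &rq->work);
}
```

          (If I'm not mistaken, mainline dm-crypt eventually exposed exactly this kind of toggle as the no_read_workqueue / no_write_workqueue table options.)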



          • #15
            Originally posted by anarki2 View Post
            "Doubling The Throughput" sounds fishy to say the least, coz that's only possible if FDE at least halves throughput, which I seriously doubt.
            Depends on what type of drive you are using:

            Originally posted by Cloudflare
            For the purpose of this post we will use the fastest disks available out there - that is no disks.



            • #16
              Originally posted by archsway View Post

              Depends on what type of drive you are using:
               In which case it's exactly what I thought: they didn't double throughput, they halved overhead instead, which is a completely different thing. In any case, it must be a single-digit percentage increase in real-world throughput, since the performance hit is already so small.
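
               A crude back-of-the-envelope model of why both readings can be true (hypothetical numbers, and assuming the device I/O and the crypto work are fully serialized rather than overlapped):

$$ T_{\mathrm{eff}} = \frac{1}{1/T_{\mathrm{disk}} + 1/T_{\mathrm{crypto}}} $$

               With, say, a 3 GB/s NVMe device and a 2 GB/s crypto stack, that gives about 1.2 GB/s; doubling the crypto stack to 4 GB/s only lifts it to roughly 1.7 GB/s, a ~40% gain rather than 2x. The faster the crypto stack already is relative to the device, the smaller the real-world gain; the more the crypto stack is the bottleneck, the closer it gets to the full 2x. On Cloudflare's "no disks" ramdisk setup, T_disk is effectively unbounded, so T_eff ≈ T_crypto and halving the crypto-stack cost really does double the measured number.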



              • #17
                Originally posted by anarki2 View Post

                 In which case it's exactly what I thought: they didn't double throughput, they halved overhead instead, which is a completely different thing. In any case, it must be a single-digit percentage increase in real-world throughput, since the performance hit is already so small.
                So, are you saying they should not have done it?



                • #18
                  Originally posted by caligula View Post

                  So, are you saying they should not have done it?
                   ? I said nothing close to that. I said this is a clickbait article, nothing more, nothing less.



                  • #19
                     That sounds nice, hopefully it will be mainlined soon™
                     My 3800X caps out at around 2.1 GB/s, so PCIe 4.0 would be kind of pointless ^^



                    • #20
                      Originally posted by willmore View Post
                       Anyone got a way to email the author? I'm curious how their changes affect ARM. I don't believe they have the same FPU context issues for their crypto instructions. Then again, the ARM crypto modules may be labeled differently.
                       Couldn't find an email; your best bet is asking in the Disqus comment section on the Cloudflare blog post, or on his Twitter.
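
                       On the "FPU context issues" in the quoted question: a rough sketch (not code from the Cloudflare patches) of the bracketing involved. On x86, kernel code has to wrap AES-NI/AVX use in kernel_fpu_begin()/kernel_fpu_end() so user FPU state is saved and preemption is disabled; arm64 has the analogous kernel_neon_begin()/kernel_neon_end(), plus may_use_simd() to check whether NEON is usable in the current context. The cipher routines below are empty placeholders for illustration.

```c
/* Illustrative sketch of SIMD bracketing in kernel crypto code -- the two
 * *_cipher_block() routines are empty placeholders, not real functions. */

static void simd_cipher_block(void *dst, const void *src)   { /* placeholder */ }
static void scalar_cipher_block(void *dst, const void *src) { /* placeholder */ }

#ifdef CONFIG_X86
#include <asm/fpu/api.h>

static void encrypt_block_x86(void *dst, const void *src)
{
        kernel_fpu_begin();             /* save user FPU/SIMD state, disable preemption */
        simd_cipher_block(dst, src);    /* stand-in for an AES-NI routine */
        kernel_fpu_end();
}
#endif

#ifdef CONFIG_ARM64
#include <asm/neon.h>
#include <asm/simd.h>

static void encrypt_block_arm64(void *dst, const void *src)
{
        if (may_use_simd()) {
                kernel_neon_begin();
                simd_cipher_block(dst, src);    /* stand-in for an ARMv8-CE routine */
                kernel_neon_end();
        } else {
                scalar_cipher_block(dst, src);  /* scalar fallback when NEON is off-limits */
        }
}
#endif
```

                       Whether the relative cost of that save/restore is smaller on ARM than on x86 is, as far as I can tell, exactly the open question in the quote.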

