Fedora 26 Planning To Enable TRIM/Discard On Encrypted Disks

  • #11
    Originally posted by aht0 View Post
Has anyone had issues on Linux using SSDs and TRIM? I mean "personally had issues"?
All of my drives have issues. My Samsung 840 Evo is blacklisted in the Linux kernel following a firmware upgrade to "improve" performance, and my old OCZ Agility 2 handles discard support very poorly.

    Double checked that my Samsung 840 Evo is still a problem:
    Code:
    dmesg | grep -i trim -C1
    [    1.379438] ata8.00: supports DRM functions and may not be fully accessible
    [    1.379963] ata8.00: disabling queued TRIM support
    [    1.379965] ata8.00: ATA-9: Samsung SSD 840 EVO 250GB, EXT0DB6Q, max UDMA/133
    --
    [    1.380218] ata8.00: supports DRM functions and may not be fully accessible
    [    1.380662] ata8.00: disabling queued TRIM support
    [    1.380668] ata8.00: configured for UDMA/133
    --
    [   52.359735] ata8.00: supports DRM functions and may not be fully accessible
    [   52.360116] ata8.00: disabling queued TRIM support
    [   52.360260] ata8.00: supports DRM functions and may not be fully accessible
    [   52.360642] ata8.00: disabling queued TRIM support
    [   52.360644] ata8.00: configured for UDMA/133
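
    For what it's worth, you can also check what TRIM support the drive itself advertises with hdparm (the device node below is just an example); the blacklist only disables the queued variant, plain TRIM keeps working:
    Code:
    # show the drive's advertised TRIM capabilities; look for the
    # "Data Set Management TRIM supported" line in the output
    hdparm -I /dev/sda | grep -i trim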

    Comment


    • #12
      Originally posted by aht0 View Post
      Has anyone had issues on Linux using SSDs and TRIM? I mean "personally had issues"?
      Yeah I have, it gave me a rash on my inner thigh once. I thought it was something else at first, but a stack trace revealed SSD TRIM to be the root cause.

      Comment


      • #13
        Originally posted by 2bluesc View Post
        All of my drives have issues. My Samsung 840 Evo is blacklisted in the Linux kernel following a firmware upgrade to "improve" performance...
      I have the same drive and TRIM works fine. The only problem is that, after a firmware update, the drive advertised support for queued TRIM while it actually only supported non-queued TRIM. It's only blacklisted in the kernel for NCQ (queued) TRIM.
      Most drives out there only support non-queued TRIM, which blocks other I/O commands while the TRIM command is being issued. That blocking of I/O causes bad performance, which is why the 'discard' mount option should be avoided on most drives; instead you should set fstrim to run weekly (for example via a systemd timer, sketched below), which is what most distros do by default.
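
      A minimal sketch of the weekly-timer approach, assuming a systemd-based distro that ships util-linux's fstrim.timer:
      Code:
      # enable the weekly fstrim timer instead of mounting with 'discard'
      systemctl enable --now fstrim.timer
      # check when it last ran and when it will run next
      systemctl list-timers fstrim.timer
      # or trim all supported mounted filesystems once, by hand
      fstrim -av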

        Comment


        • #14
          Originally posted by 2bluesc View Post

          All of my drives have issues. My Samsung 840 Evo is blacklisted in the Linux kernel following a firmware upgrade to "improve" performance, and my old OCZ Agility 2 handles discard support very poorly.
          This is exactly why I haven't made the leap to SSDs yet. I'm still 100% on platter drives. Just like any new technology, compatibility, features, and interoperability are going to be all over the board at first, until the industry settles down and standardizes some more. After all, there was a time when an Ethernet card from one vendor was compatible ONLY with an Ethernet switch from the same vendor.

          Comment


          • #15
            Originally posted by torsionbar28 View Post

            This is exactly why I haven't made the leap to SSDs yet. I'm still 100% on platter drives. Just like any new technology, compatibility, features, and interoperability are going to be all over the board at first, until the industry settles down and standardizes some more. After all, there was a time when an Ethernet card from one vendor was compatible ONLY with an Ethernet switch from the same vendor.
            Forgive my asking, but how is a SSD with non-working trim worse than a HDD?

            Comment


            • #16
              Originally posted by bug77 View Post

              Forgive my asking, but how is a SSD with non-working trim worse than a HDD?
              It's a great question; there are a number of reasons. One is write durability. Most consumer grade SSDs are rated for 120 TB (or less) of lifetime writes, and come with short 1 or 2 year warranties. A quality enterprise SATA HDD like a Western Digital RE4 is rated for 550 TB per year, with a 5 year warranty. No comparison there; the HDD is far superior to the consumer SSD in this metric. I run a lot of VMs on my workstation for testing and development, and I've measured my average daily writes at around 80 GB, which will wear out a consumer SSD in no time. Plus I tend to get 5 years or more of use out of my hardware before upgrading - I'm not one of those upgrade junkies who's always buying new stuff for the sake of buying new stuff.
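
              If anyone wants to check their own write load, here's a rough sketch using smartmontools (the device node is an example, and the attribute name and units vary by vendor; Samsung SATA drives report Total_LBAs_Written in 512-byte sectors):
              Code:
              # dump the SMART attributes for the drive
              smartctl -A /dev/sda
              # for drives that expose Total_LBAs_Written in 512-byte sectors,
              # lifetime writes in TB is roughly: Total_LBAs_Written * 512 / 10^12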

              Yes, there are enterprise SSDs out there that provide enhanced durability and longer warranties, but they cost an arm and a leg. For personal use, they aren't practical. Enterprise HDDs, however, are cheap and affordable. Right now, I can get a 2 TB enterprise HDD for $129. A 2 TB enterprise SSD is nearly $2000. The price/performance ratio just isn't there yet for me to make the leap.

              Another reason is the firmware, and the associated bugs and features. HDD firmware is mature and stable; I haven't had a need to update firmware on an HDD in many, many years. A lot of consumer SSDs have immature firmware, with the vendor releasing multiple updates per year. Performing a firmware update is a risk, and as 2bluesc mentioned, a newer release can break compatibility or have unintended consequences.

              To be clear, all my platter drives are either 10K VelociRaptors or WD/HGST enterprise-class drives. I don't have any cheapo consumer-grade HDDs. Once the enterprise-grade SSDs come down in price, I'll give them a serious look.

              One final consideration is the drive interface. Right now, 2.5" consumer drives all use SATA, or if you have a very new mobo, you'll have an M.2 slot that can do NVMe. The newest SSDs are so fast, it's silly to put them on a SATA bus. But you're limited to one, or maybe two, NVMe drive interfaces. It's a transition time right now. I'm waiting for when NVMe becomes the standard and you can hook 6 or 8 of these NVMe drives up to a regular motherboard. Hopefully by then the enterprise SSD pricing will have come way down, and that'll be the sweet spot for me to buy in.
              Last edited by torsionbar28; 21 January 2017, 03:59 PM.

              Comment


              • #17
                Originally posted by torsionbar28 View Post

                It's a great question; there are a number of reasons. One is write durability. Most consumer grade SSDs are rated for 120 TB (or less) of lifetime writes, and come with short 1 or 2 year warranties...
                I hear you about durability. In fact, I try to call out SSD reviews that don't talk about durability, especially since the number of P/E cycles goes down with each new manufacturing process. And so I keep most of my stuff on HDDs as well (I bought my 3TB drive about 4 years ago, and the 500GB one I can't remember when). But that didn't stop me from buying SSDs for OSes and games. FWIW, the MTBF of SSDs is already higher than that of HDDs, but I'm not sure how useful that metric is to begin with, so I don't think it's a big deal.

                Also, I'd like to argue that SATA is not really the bottleneck you think it is. Random 4K writes are about the same whether you use SATA or NVMe. NVMe has a distinct advantage in sequential operations, but on a home computer you don't do enough of those to matter. And if you do enough of them on an NVMe drive, it will heat up and throttle. IMHO, most of the NVMe advantage is only on paper as far as typical home usage is concerned.
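
                If you want to check that on your own drives, here's a quick fio sketch for random 4K writes (the file path and sizes are just examples, not a tuned benchmark):
                Code:
                # random 4K writes against a scratch file, bypassing the page cache
                fio --name=rand4k --filename=/tmp/fio-test --size=1G \
                    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
                    --direct=1 --runtime=60 --time_based --group_reporting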

                On another note, instead of running that many VMs, maybe you could try Docker images (on an SSD) and point the storage folders to storage you're more comfortable with?

                Comment


                • #18
                  Originally posted by molletts View Post
                  Hmm, my gut feeling is that TRIM shouldn't be used for encrypted volumes - it will leak information about the content of the volume. Ideally, it shouldn't be possible for an attacker to determine where the actual data is on the volume - unused space should be filled with something indistinguishable from encrypted data (such as other data, encrypted using a randomly-generated key).
                  Ideally, disks should work fast, and that leaked information is not useful to an attacker.

                  Comment


                  • #19
                    Originally posted by mgmartin View Post
                    The general information I've seen is that enabling TRIM on dm-crypt devices is still a major security concern: see the Arch wiki pages here and here. I'd want to verify these concerns are all addressed before I'd trust this and expose possible leaks.
                    Those concerns are imaginary.

                    Comment


                    • #20
                      Originally posted by illwieckz View Post
                      Enabling TRIM on an encrypted device creates holes in the device, revealing where the sensitive data is.
                      No, TRIM only reveals where non-sensitive (erased) data is located. Everything else still looks random.
                      Originally posted by illwieckz View Post
                      Also, TRIM must be avoided if you hide an encrypted volume inside a visible encrypted volume: the visible one will destroy the hidden one at TRIM time.
                      If TRIM would destroy it, then normal use of the filesystem without TRIM would also destroy it.
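
                      For anyone who does want what Fedora is proposing on an existing install, here's a rough sketch of passing discards through dm-crypt manually (device and mapping names are examples; weigh the security concerns discussed above first):
                      Code:
                      # one-off: open the LUKS volume with discard passthrough enabled
                      cryptsetup open --allow-discards /dev/sda2 cryptroot

                      # persistent: add the 'discard' option to the volume's line in /etc/crypttab,
                      # e.g.  cryptroot  UUID=...  none  luks,discard

                      # verify that discard requests can reach the underlying device
                      lsblk --discard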

                      Comment
