Solidigm P44 Pro Linux Performance


  • #11
    Originally posted by stormcrow View Post
    For use cases like this you want to be using a RAM drive, not solid state storage whether it's enterprise grade or not if a standard hard drive isn't fast enough.
    SSD works fine. And SSDs have better endurance than HDDs.

    Black Magic Design provides a Speed Test tool that you can use to determine whether your SSD is fast enough for various formats and resolutions.



    I think it's included in their "Desktop Video" download package, which is available for Mac, Windows, or Linux:



    Here's the result that Storage Review got for the P44, on Windows 11:


    https://www.storagereview.com/review...ssd-review-2tb
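    If you just want a quick ballpark on Linux without the Blackmagic tool, a crude sequential-write check can be sketched in Python (buffered I/O plus fsync, so treat the number as a rough floor rather than a proper benchmark; the file name is arbitrary):

```python
import os
import time

def seq_write_speed(path, size_mb=64, block_mb=4):
    """Time a buffered sequential write plus fsync; returns rough MB/s."""
    block = b"\0" * (block_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the drive
    elapsed = time.monotonic() - start
    os.remove(path)  # clean up the test file
    return size_mb / elapsed

print(f"~{seq_write_speed('speedtest.bin'):.0f} MB/s sequential write")
```

    Note that writing zeros can flatter drives or filesystems that compress, so a dedicated tool is still the better measurement.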
    Last edited by coder; 26 December 2022, 04:02 PM.



    • #12
      Depends on the manufacturer of the SSD, but generally "enterprise" SSDs are identical to consumer SSDs NAND-wise and controller-wise. The main differences:
      - "Enterprise" SSDs have a capacitor-buffered power supply for the DRAM cache, so in case of a power failure the DRAM cache is not lost but written out to the NAND
      - The firmware is "unlocked", meaning you can configure the block size presented to the OS and the size of the SLC area (normal TLC cells used as SLC). Some even allow "zoned namespaces", where the SLC/TLC/MLC areas can be configured and are host/OS-visible. This is a pure firmware feature.

      Other than that, "enterprise" SSDs are pretty much similar to the consumer ones.

      The main reason you can't use consumer SSDs in a server is the missing capacitor to flush the DRAM cache during a power failure, as you don't want to lose data in that case.



      • #13
        Originally posted by stormcrow View Post
        At this point I'm less concerned about performance and benchmark results than I am about the real world endurance of the cells.

        Clarification: Some SSDs don't refresh the cell charge when they're supposed to be holding data leading to charge loss over longer periods of time (thanks to parasitic drains and the like - no electrical system is 100% efficient or isolated). That and the occasional firmware and controller bugs can lead to data loss. Performance doesn't mean a hell of a lot if you lose data.
        This.
        I also recommend using tmpfs where suitable, to save the SSD some write cycles. MLC seems to be a good compromise between data integrity and affordability (SLC rules but is freaking expensive).
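        On Linux, /dev/shm is usually already a tmpfs mount, so a minimal sketch of the idea is just to point scratch files there (falling back to the normal temp dir where it doesn't exist):

```python
# tmpfs is RAM-backed, so scratch files placed there never wear the SSD.
import os
import tempfile

def tmpfs_scratch_dir() -> str:
    """Prefer a tmpfs mount for scratch space, if one is available."""
    return "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

with tempfile.NamedTemporaryFile(dir=tmpfs_scratch_dir()) as tmp:
    tmp.write(b"intermediate data that never needs to hit the SSD")
    print("scratch file:", tmp.name)
```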

        I've seen flash media (sticks, mostly) that died without any warning or apparent cause: 4 or 8 GB sticks that suddenly went read-only after 80 MiB written (probably the spare sectors were used up and the controller decided to go read-only?). Media that was sold to me as "new" but already had data on it ("chipdisc" or disk-on-module from China... yeah, I know, but it was a tempting offer - like a 4 GB MLC IDE DoM for 20 euros, when these DoMs are usually unreasonably expensive).

        Of course, best practice is to have (rotating) backups, but then, people might back up data these days on flash media, leave it for 10 years, and then have bits flipped. But after all, optical media might deteriorate (though Verbatim/Taiyo Yuden media should be very robust) or get scratched, and one day the media is still fine (these M-DISCs) but there's no reading device, data formats change... whatnot. HDDs also aren't entirely safe, tapes, ... floppies ... well... and paper or carving in stone. Probably the most robust thing, but slow and low capacity. Ah, it's a pain with archival and backups sometimes.


        For SSDs/flash, get MLC or SLC if you can afford it. They are all fast. (Okay, unless there's a strange FW bug like in these Samsungs here.)
        Stop TCPA, stupid software patents and corrupt politicians!



        • #14
          I was impressed by the SK hynix P41 Platinum / Solidigm P44 Pro and was looking to get the 2TB version.
          There was a problem getting one for a reasonable price and shipping.
          Fortunately, the SN850X 2TB is "just" $169 today on Amazon.
          At current pricing, it's a clear winner.



          • #15
            Originally posted by Adarion View Post
            MLC seems to be a good compromise between data integrity and affordability (SLC rules but is freaking expensive).
            When was the last time you shopped for an MLC drive? Five years ago, maybe? By the time M.2 NVMe drives were introduced, the industry was already well into the TLC transition.

            The good news is that TLC has gotten better than when it was first introduced, but now we have QLC to worry about.

            Originally posted by Adarion View Post
            people might back up data these days on flash media and leave it for 10 years and then have bits flipped.
            Or perhaps as little as 1 year (or less)!

            Originally posted by Adarion View Post
            But after all, optical media might deteriorate (though Verbatim/Taiyo Yuden media should be very robust)
            So far, all the CDs and DVDs I burned 15-20 years ago are still good, but I also tried to use quality media and the lowest burn speed possible.

            You also have to make sure the media is pristine before burning. Any dust or fingerprints will cast a shadow.

            There's a way to get checksum stats from some optical drives, but the last time I did it was using a freeware program that relied on a specific feature of Lite-On drives. Plextor had their own app for doing error scans using their drives.

            Originally posted by Adarion View Post
            HDDs also aren't entirely safe, tapes, ... floppies ... well... and paper or carving in stone. Probably the most robust thing, but slow and low capacity. Ah, it's a pain with archival and backups sometimes.
            The worst backup is the one you don't do. But yeah, use media with some longevity. HDDs can reasonably retain data for 3-5 years just sitting in a drawer. Use archival-quality optical media for anything you really don't want to lose.



            • #16
              This is digital data we're talking about.

              The best backup is not the one on the most indestructible single medium, but the one kept on redundant media.

              If your data is on a single medium, it's not even a backup; it's just your storage.

              I've never lost a file since I started placing all of it on one big drive that's used routinely and rsyncing it to another removable big drive.
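              For the curious, the mirror-plus-verify idea can be sketched in Python (rsync -a does the mirroring far more efficiently in practice; the helper names here are made up for illustration):

```python
# Minimal sketch of mirror-and-verify between two drives. Unlike rsync,
# mirror() naively recopies everything rather than transferring only deltas.
import hashlib
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    shutil.copytree(src, dst, dirs_exist_ok=True)

def verify(src: Path, dst: Path) -> list:
    """Checksum both copies so silent bit rot on either side is caught."""
    bad = []
    for f in src.rglob("*"):
        if f.is_file():
            twin = dst / f.relative_to(src)
            if (not twin.is_file()
                    or hashlib.sha256(f.read_bytes()).digest()
                    != hashlib.sha256(twin.read_bytes()).digest()):
                bad.append(f)
    return bad
```

              Verifying with checksums matters because a mirror faithfully copies corrupted bytes too; comparing the two copies is what tells you something rotted.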



              • #17
                I replied to this once, but it got withheld (due to too many links, I'm sure) and I guess the forum mods are on holiday. So, I'll try again and reply in pieces.

                Originally posted by Spacefish View Post
                Depends on manufacturer of the SSD, but generally the "Enterprise" SSDs are just identical to the consumer SSDs NAND-wise + controllerwise. The main difference:
                Some data center-grade drives support far higher endurance. For example, take the TLC-based Solidigm D7-P5620, which is designed for "mixed" workloads (roughly equal reads vs. writes). It has an endurance rating of 65.4 PBW[1]. Now, compare that with something like a TLC-based Samsung 990 Pro - it's rated at only 1.2 PBW[2]. As they're both TLC, a lot of the difference is due to over-provisioning. Most consumer drives will rate even worse, due to the use of QLC.

                References:
                1. https://www.solidigm.com/products/da...l#configurator
                2. https://www.samsung.com/us/computing...2t0b-am/#specs
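                Endurance ratings like these are easier to compare as drive writes per day (DWPD) over the warranty period. A quick sketch (the capacities and the 5-year warranty are my assumptions for illustration, not figures from the spec sheets):

```python
# Convert an endurance rating in petabytes written (PBW) to DWPD.
def dwpd(pbw: float, capacity_tb: float, warranty_years: float = 5.0) -> float:
    total_writes_tb = pbw * 1000          # 1 PB = 1000 TB
    days = warranty_years * 365
    return total_writes_tb / (capacity_tb * days)

# Capacities assumed: a 12.8 TB D7-P5620 variant and the 2 TB 990 Pro.
print(f"D7-P5620 (65.4 PBW): {dwpd(65.4, 12.8):.1f} DWPD")
print(f"990 Pro  (1.2 PBW):  {dwpd(1.2, 2.0):.2f} DWPD")
```

                The gap of roughly an order of magnitude in DWPD is where the over-provisioning shows up.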



                • #18
                  (continued)

                  Another big difference is that server SSDs typically use a massive amount of idle power. At 5 W, the D7-P5620 idles hotter than a 7200 RPM hard drive, and its 20 W of active power is about 2.5x what a HDD would draw. Again, the top Samsung consumer drive uses far less: 0.055 W at idle; 5.5 W to 8.5 W active.
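                  To put idle power in perspective, here's the annual energy draw for those figures (simple arithmetic, nothing vendor-specific):

```python
# Annual energy cost of idle power in an always-on machine.
def kwh_per_year(watts: float) -> float:
    return watts * 24 * 365 / 1000

print(f"D7-P5620 idle (5 W):    {kwh_per_year(5.0):.1f} kWh/yr")
print(f"990 Pro idle (0.055 W): {kwh_per_year(0.055):.2f} kWh/yr")
```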

                  Why the difference? One reason is tail latency. Data center drives are optimized for QoS, which means they remain highly responsive in all circumstances. The initial login tests from StorageReview's respective reviews show the Samsung 990 Pro peaks at 71 kIOPS with a latency of 441 microseconds[3]. In contrast, the D7-P5520 peaks at 485 kIOPS with a latency of only 198 microseconds[4]. I'd include the graphs inline, but StorageReview seems to block that - do check out the links!

                  References:



                  • #19
                    (continued)

                    Next, let's look at sustained writes. Modern SSDs buffer writes at lower density, then go back and rewrite the data at a higher number of bits per cell. That's why sustained write speed drops off a cliff:

                    We see that the 990 Pro quickly drops to about 1.4 GB/s. Even the best among them can't sustain more than about 1.8 GB/s. In contrast, the D7 P5620 claims a sustained sequential write speed of 3.7 GB/s, for the entire drive span[1].
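                    The cliff is easy to model: writes run at the fast pSLC-cache rate until the cache fills, then at the slower direct-to-TLC rate. A toy sketch (the cache size and the fast rate are illustrative assumptions, not measured values; 1.4 GB/s is the post-cache figure above):

```python
# Toy model of the pSLC-cache write cliff: average speed over a whole write.
def avg_write_speed(total_gb, cache_gb=100, fast_gbps=6.0, slow_gbps=1.4):
    cached = min(total_gb, cache_gb)       # portion absorbed by the cache
    direct = total_gb - cached             # portion written straight to TLC
    seconds = cached / fast_gbps + direct / slow_gbps
    return total_gb / seconds

print(f"short burst:  {avg_write_speed(50):.1f} GB/s")
print(f"full 1000 GB: {avg_write_speed(1000):.1f} GB/s")
```

                    Once the write is much larger than the cache, the average converges on the slow rate, which is exactly the behavior the drive-fill graphs show.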

                    A further difference is in features like error correction (i.e. support for T10-PI metadata).



                    • #20
                      (continued)

                      Originally posted by Spacefish View Post
                      Other than that, the "enterprise" ssds are pretty much similar to the consumer ones.
                      Even looking at Solidigm's read-oriented D3-S4520 SATA drive, it idles at 1.4 W and supports 36.5 PBW[5], both of which suggest there will be quantifiable differences from a consumer drive - similar to what we've seen above, if smaller in magnitude.

                      References

                      (end)

