
ADATA XPG SX6000: Benchmarking A ~$50 USD 128GB NVMe SSD On Linux


    Phoronix: ADATA XPG SX6000: Benchmarking A ~$50 USD 128GB NVMe SSD On Linux

    While solid-state drives have generally been quite reliable in recent years (even with all the benchmarking I put them through, fewer than a handful out of dozens have failed), whenever there's a bargain on NVMe SSDs it's hard to resist. NVMe SSD speeds have generally been great, and while they're not a key focus on Phoronix (and thus I generally don't receive review samples of them), I upgrade some of the server-room test systems when I find a deal. The latest is an ADATA XPG SX6000 NVMe SSD I managed to pick up for $49.99 USD.

    http://www.phoronix.com/vr.php?view=25865

  • #2
    I think it would be beneficial if you could extend the NVMe tests to include temperature monitoring (nvme smart-log /dev/nvmeX | grep Temperature), as these lower-end chips have real trouble with overheating under real-world conditions, especially when placed on the board right next to the graphics card. I've noticed this with a few drives I've owned over the years (Intel 600p, Samsung 960, MP500); after buying EKWB heatsinks my problems just disappeared. It takes about 30-60 s of sequential writes to start throttling, and it's even more noticeable in a RAID configuration. I often hit this when copying VM images from main storage (a SATA SSD) to NVMe for faster operation.

    Unfortunately it sometimes comes down to limitations of the NVMe controller as well, but a number of chips have real trouble moving around big chunks of data (such as VM images or raw video), even though that may be the very reason people buy NVMe SSDs in the first place.

    I'm not sure how much of this happens in a typical open-bench scenario with an integrated GPU (or an idle dedicated GPU), but it might be worth looking into.
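The temperature-polling idea above can be sketched with a small shell helper. This is a minimal sketch, assuming nvme-cli's smart-log output contains a line of the form "temperature : 79 C" (as in the dd example later in this post); the device path /dev/nvme0 is an assumption for your system.

```shell
#!/bin/sh
# get_temp reads nvme smart-log text on stdin and prints the integer
# Celsius value from its "temperature" line.
get_temp() {
    grep -i '^temperature' | grep -o '[0-9][0-9]*' | head -n 1
}

# In a real run you would poll the drive while a write test executes,
# e.g. (root required):
#   while sleep 1; do nvme smart-log /dev/nvme0 | get_temp; done
# Here we just demonstrate the parsing against a captured sample line:
printf 'temperature                         : 79 C\n' | get_temp   # prints 79
```
Logging that value once per second alongside the benchmark makes the throttling point easy to spot in the results.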

    Edit: Case in point (a 960 Evo, positioned right next to a GTX 1080 on the board, minimal airflow)
    Code:
    tpruzina@tomas ~ % dd if=/dev/zero of=/home/tomas/test bs=4k count=3276800 status=progress
    13114826752 bytes (13 GB, 12 GiB) copied, 10 s, 1.3 GB/s
    3276800+0 records in
    3276800+0 records out
    13421772800 bytes (13 GB, 12 GiB) copied, 10.2814 s, 1.3 GB/s
    tpruzina@tomas ~ % dd if=/dev/zero of=/home/tomas/test bs=4k count=104857600 status=progress
    45357633536 bytes (45 GB, 42 GiB) copied, 216 s, 210 MB/s^C
    11152693+0 records in
    11152693+0 records out
    45681430528 bytes (46 GB, 43 GiB) copied, 216.863 s, 211 MB/s
    
    tpruzina tomas # nvme smart-log /dev/nvme0 | grep temp
    temperature                         : 79 C
    Edit 2: Actually, the result above seems a bit too bad; I probably ran out of clean blocks at some point since I don't use discard (online TRIM).
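Since the edit above mentions not using the discard mount option, a common alternative is periodic TRIM, which returns clean blocks to the controller in batches. A maintenance sketch, assuming a systemd-based distribution with util-linux installed:

```shell
# One-off TRIM of all mounted filesystems that support discard
# (requires root; -v prints how much was trimmed per mount point):
fstrim -av

# Or enable the weekly fstrim timer shipped with util-linux:
systemctl enable --now fstrim.timer
```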
    Last edited by tpruzina; 01-15-2018, 01:02 PM.



    • #3
      Originally posted by tpruzina View Post
      Think it would be beneficial if you could extend NVMe tests specifically to include temperature monitoring (nvme smart-log /dev/nvmeX | grep Temperature)...
      PTS does have NVMe temperature reporting; I just forgot to turn it on prior to starting this test.
      Michael Larabel
      http://www.michaellarabel.com/



      • #4
        "[ I ]...have had less than a handful fail out of dozens..."

        That seems to me to be a thoroughly miserable failure rate. A "...handful..." out of "...dozens..."? I hope you don't consider what you just wrote to be a glowing endorsement of SSDs. I'm absolutely CERTAIN the SSD people don't.



        • #5
          Originally posted by danmcgrew View Post
          "[ I ]...have had less than a handful fail out of dozens..."

          That seems to me to be a thoroughly miserable failure rate. A "...handful..." out of "...dozens..."? I hope you don't consider what you just wrote to be a glowing endorsement of SSDs. I'm absolutely CERTAIN the SSD people don't.
          x2, that's partly why I'm still using spinning hard drives in a number of machines (enterprise-grade WD RE series). Consumer SSDs just don't last when subjected to even moderately heavy I/O workloads; I've had several whose lifespans were measured in mere months. They overheat and throttle, and they die an early death. Unfortunately, enterprise-grade SSDs are still quite expensive.
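As a rough way to gauge how hard a drive has already been worn, the NVMe SMART/health log exposes a percentage_used endurance estimate that nvme-cli prints. A minimal parsing sketch (field name as printed by nvme-cli's smart-log; the device path is an assumption):

```shell
#!/bin/sh
# wear_pct pulls the "percentage_used" wear estimate out of nvme smart-log
# output; 100% means the drive has consumed its rated write endurance.
wear_pct() {
    grep -i 'percentage_used' | grep -o '[0-9][0-9]*' | head -n 1
}

# Real use (root required): nvme smart-log /dev/nvme0 | wear_pct
# Demonstration against a captured sample line:
printf 'percentage_used                     : 3%%\n' | wear_pct   # prints 3
```
Tracking that number over time would show whether a consumer drive really is burning through its endurance in months.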



          • #6
            Too bad no Intel 600p was included. It was one of the first drives to dispel the myth that NVMe implies fast. I wonder how the two compare.



            • #7
              Originally posted by coder View Post
              Too bad no Intel 600p was shown. It was one of the first to dispel the myth that NVMe -> fast. I wonder how they compare.
              I only have one 600p SSD, a drive I bought myself, and it's busy in another rack-mounted system, so it's not easy to pull for testing.
              Michael Larabel
              http://www.michaellarabel.com/

