Samsung 870 EVO Linux Performance Benchmarks

Written by Michael Larabel in Storage on 29 January 2021.

For those continuing to rely on SATA 3.0 storage, last week Samsung introduced the 870 EVO as the latest solid-state drive in its very successful EVO line-up. For those curious about its Linux performance or wanting to run side-by-side benchmarks against the data in this article, here is a review of the Samsung 870 EVO 500GB SSD.

The Samsung 870 EVO is the successor to the very popular 860 EVO SATA 3.0 SSD. The 870 EVO series makes use of the company's MKX "Metis" controller, LPDDR4 DRAM, and 128-layer TLC flash memory. The initial 870 EVO line-up includes the 250GB model at $39 USD, the 500GB model at $69, 1TB at $129, and the flagship 2TB model at $249. All of these drives are rated for 560 MB/s sequential reads, 530 MB/s sequential writes, 98k IOPS random reads, and 88k IOPS random writes.
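For readers wanting to see how close their own drive comes to those rated sequential figures, a quick sketch along these lines can help; the fio invocation shown in the comments is only an illustration (the target path is an assumption, not the article's actual test configuration):

```shell
#!/bin/sh
# Rough sanity check of the rated sequential speeds. An example fio
# run against a mount point on the drive under test (path assumed):
#   fio --name=seqread --rw=read --bs=1M --size=1G --direct=1 \
#       --ioengine=libaio --filename=/mnt/test/fio.tmp

# Helper: convert a byte count plus elapsed seconds into whole MB/s,
# handy for comparing raw dd/fio numbers against the rated figures.
mbps() {
    bytes=$1
    secs=$2
    echo $(( bytes / secs / 1000000 ))
}

mbps 1120000000 2   # 1.12GB in 2s -> 560, matching the rated read speed
```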

Samsung backs the 870 EVO line-up with a five-year warranty and an endurance rating of 150TB written (TBW) per 250GB of capacity. As Samsung still doesn't supply us with storage for Linux benchmarking, the unit tested today is a Samsung 870 EVO 500GB model purchased at retail, given we are always curious about new storage products and always in need of additional storage for the dozens of different benchmarking systems.

Compared to today's high-end NVMe SSDs, it's hard to get very excited about new consumer SATA 3.0 SSDs, but this round of benchmarking includes results from several consumer Serial ATA drives on hand as well as a Corsair Force MP600 2TB drive to show the premium performance possible today in the NVMe space. The SATA drives tested were the new Samsung 870 EVO 500GB along with the 870 QVO 1TB, 860 EVO 500GB, and 850 EVO 250GB.

Samsung 870 EVO

A range of storage benchmarks were conducted via the Phoronix Test Suite on an AMD Zen 3 test bed running Ubuntu 20.10 with the Linux 5.11 kernel. The EXT4 file-system was used for all testing, and MQ-DEADLINE was the default I/O scheduler on all of the SATA 3.0 drives.
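For those replicating this setup, the active I/O scheduler can be checked per drive through sysfs; a minimal sketch (the `sd*` glob and `/dev/sda` device name are assumptions for typical SATA drives):

```shell
#!/bin/sh
# List the active I/O scheduler for each SATA/SCSI block device.
# The sysfs file shows every available scheduler with the active one
# in brackets, e.g. "none [mq-deadline] kyber".
for f in /sys/block/sd*/queue/scheduler; do
    [ -e "$f" ] || continue
    printf '%s: %s\n' "$f" "$(cat "$f")"
done

# Extract just the active (bracketed) scheduler name from that format.
parse_sched() {
    printf '%s\n' "$1" | sed -n 's/.*\[\([^]]*\)\].*/\1/p'
}

# Switching a drive to mq-deadline (requires root; device name assumed):
#   echo mq-deadline > /sys/block/sda/queue/scheduler
```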
