Samsung 870 EVO Linux Performance Benchmarks


  • Samsung 870 EVO Linux Performance Benchmarks

    Phoronix: Samsung 870 EVO Linux Performance Benchmarks

    For those continuing to rely on SATA 3.0 storage, last week Samsung introduced the 870 EVO as the latest solid-state drive in their very successful EVO line-up. For those curious about the Linux performance of the Samsung 870 EVO, or those wanting to run their own side-by-side benchmarks against the data in this article, here is a review of the Samsung 870 EVO 500GB SSD.


  • #2
    Basically the same as 860 EVO, but cheaper. At $250 MSRP, a 2TB variant could replace my last mechanical drive.



    • #3
      Originally posted by bug77 View Post
      Basically the same as 860 EVO, but cheaper. At $250 MSRP, a 2TB variant could replace my last mechanical drive.
      I have six 4TB WD spinners in a RAID 6 sitting in my server that I need to upgrade to SSD, but we aren't quite there price-wise yet. We are getting closer. I got a good deal on a TB of NVMe over the holidays that I am using as a cache for the spinners, and that helped quite a bit. Maybe by the end of the year the hardware shortages will have sorted themselves out and the prices will move downward some more.

      I am surprised they haven't come up with a good solution going forward for people that want to stick a bunch of SSDs in a computer chassis, each capable of more than 600MB/s. Basically something to fill the void that SATA 4 would have filled. NVMe ports and cables like SATA has would be boss. Then again, a 16TB M.2 NVMe for a reasonable price would be pretty great also. :-)
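For reference, the usable capacity of the RAID 6 array mentioned above is simple arithmetic: two drives' worth of space always goes to parity. A minimal sketch (function name is mine):

```python
def raid6_usable_tb(drives: int, size_tb: float) -> float:
    """RAID 6 stores two parity blocks per stripe, so usable capacity
    is (n - 2) drives' worth, and any two drives can fail."""
    if drives < 4:
        raise ValueError("RAID 6 needs at least 4 drives")
    return (drives - 2) * size_tb

# The array described above: six 4 TB spinners.
print(raid6_usable_tb(6, 4.0))  # 16.0 TB usable
```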



      • #4
        I'd keep the spinning rust for long lived static data/files. New research into how SSDs, regardless of interface, store data shows many drives never refresh the bin charge holding information unless it's been changed. The implication is that over time static data may become difficult or impossible to retrieve because the charge drops below controller detection levels. The time before this begins to occur varies between drive manufacturers. Since the bin is marked as in use, TRIM won't fix the problem. Only a forced rewrite of the data with external utilities will refresh the charge. I won't be moving my backups off ZFS RAID spinning rust because of this oversight by SSD mfgs unless/until this problem is verifiably resolved by the major brands.

        Source is Steve Gibson of GRC (Gibson Research, grc.com and the Security Now podcast) in his explanations of what's going on in the development of free (as in beer) utilities in the lead up to the release of SpinRite 6.1.
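The "forced rewrite" idea can be sketched crudely at the file level. This is not SpinRite's method (real tools work on the raw device, below the filesystem); it only illustrates the principle that writing identical bytes back still forces the controller to reprogram the cells, at the cost of some P/E cycles. The function name is mine:

```python
import os

def rewrite_in_place(path: str, chunk: int = 1 << 20) -> int:
    """Crude refresh: read each chunk of a file and write the identical
    bytes back. From the SSD's point of view every block is rewritten,
    which forces the controller to reprogram the cells (usually to a
    freshly charged location). Returns the number of bytes rewritten."""
    total = 0
    with open(path, "r+b") as f:
        while True:
            pos = f.tell()
            data = f.read(chunk)
            if not data:
                break
            f.seek(pos)       # go back and overwrite what we just read
            f.write(data)
            total += len(data)
        f.flush()
        os.fsync(f.fileno())  # make sure it actually reaches the drive
    return total
```

Note this only refreshes one file's blocks; filesystem metadata and anything outside the file are untouched, which is why the dedicated utilities operate on the whole device.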



        • #5
          Originally posted by stormcrow View Post
          I'd keep the spinning rust for long lived static data/files. New research into how SSDs, regardless of interface, store data shows many drives never refresh the bin charge holding information unless it's been changed. The implication is that over time static data may become difficult or impossible to retrieve because the charge drops below controller detection levels. The time before this begins to occur varies between drive manufacturers. Since the bin is marked as in use, TRIM won't fix the problem. Only a forced rewrite of the data with external utilities will refresh the charge. I won't be moving my backups off ZFS RAID spinning rust because of this oversight by SSD mfgs unless/until this problem is verifiably resolved by the major brands.

          Source is Steve Gibson of GRC (Gibson Research, grc.com and the Security Now podcast) in his explanations of what's going on in the development of free (as in beer) utilities in the lead up to the release of SpinRite 6.1.
          If I got an SSD I would do a daily rsync to a hard drive... I've always feared SSDs would fail quicker and more drastically than hard drives...
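A daily rsync job like the one described is a one-liner from cron; here is a minimal sketch of building the invocation (paths and helper names are hypothetical, the rsync flags are standard):

```python
import subprocess

def rsync_cmd(src: str, dst: str, dry_run: bool = False) -> list[str]:
    """Build an rsync invocation for a one-way daily backup:
    -a        archive mode (preserve permissions, times, symlinks)
    --delete  mirror deletions so dst stays an exact copy of src"""
    cmd = ["rsync", "-a", "--delete"]
    if dry_run:
        cmd.append("--dry-run")
    # Trailing slash on src: copy the *contents* of src into dst.
    cmd += [src.rstrip("/") + "/", dst]
    return cmd

def run_backup(src: str, dst: str) -> None:
    subprocess.run(rsync_cmd(src, dst), check=True)

# e.g. invoked once a day from cron:
# run_backup("/mnt/ssd/data", "/mnt/spinner/backup")
```

`--delete` cuts both ways: an accidental deletion on the SSD is mirrored to the backup on the next run, so it protects against drive failure, not against mistakes.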



          • #6
            Explain SATA and the 2.5" form factor to me, please. I long since moved to M.2.
            (I'm not saying M.2 is superior, though.)

            But:
            Form factor: more expensive.
            Added controller, on both sides: more expensive.
            Interface: more cumbersome, less bandwidth.

            Besides the obvious premium manufacturers like to add, why isn't SATA dead yet for standard system drive usage?
            What am I missing (obviously something)?



            • #7
              Originally posted by milkylainen View Post
              why isn't SATA dead yet for standard system drive usage?
              What am I missing (obviously something)?
              My PC doesn't have an M.2 NVMe slot. I don't have any M.2 NVMe adapters (internal or external). So, I bought a 2.5" SATA SSD which I can use internally and externally via my USB3 adapter.
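On Linux you can tell at a glance which bus each drive sits on from the kernel's device names, which is handy when checking what a machine like the one above actually has. A small sketch (the name prefixes are standard kernel conventions; the helper names are mine):

```python
import os

def classify_block_device(name: str) -> str:
    """Rough classification from the kernel block device name:
    nvme*   -> NVMe namespace (PCIe)
    sd*     -> SCSI disk node, which covers SATA *and* USB bridges
    mmcblk* -> eMMC / SD card"""
    if name.startswith("nvme"):
        return "nvme"
    if name.startswith("sd"):
        return "sata/usb (sd)"
    if name.startswith("mmcblk"):
        return "mmc"
    return "other"

def list_disks(sys_block: str = "/sys/block") -> dict[str, str]:
    """Map every block device the kernel exposes to a rough bus type."""
    return {n: classify_block_device(n) for n in sorted(os.listdir(sys_block))}
```

Note that a 2.5" SATA SSD behind a USB3 adapter also shows up as `sd*`, which matches the internal/external usage described in the post.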



              • #8
                Originally posted by milkylainen View Post
                Explain SATA and the 2.5" form factor to me, please. I long since moved to M.2.
                (I'm not saying M.2 is superior, though.)

                But:
                Form factor: more expensive.
                Added controller, on both sides: more expensive.
                Interface: more cumbersome, less bandwidth.

                Besides the obvious premium manufacturers like to add, why isn't SATA dead yet for standard system drive usage?
                What am I missing (obviously something)?
                Backwards compatibility.

                The form factor is barely more expensive - they both use a short PCB, and SATA 2.5" just adds a box around it. You could argue that M.2 is actually more expensive, since it requires a higher-quality PCB for signal integrity and smaller components due to the size constraints.

                There are controllers on both sides with NVMe as well - the PCIe root complex and the controller on the SSD itself, same as with SATA. You could argue that on the Intel platform a chipset with SATA controllers is required, but in their case the lanes can be reconfigured for something else (HSIO lanes can be configured for either SATA or PCIe). On the AMD side, SATA is on the CPU itself.

                The interface has both upsides and downsides. PCIe is limited in the number of available lanes; on the consumer market that usually means one x4 link from the CPU, with the rest multiplexed through the chipset. For SATA the standard is six ports, and adding more devices is relatively easy because add-in SATA controllers usually need only an x1 PCIe lane. Because of this, the total capacity reachable via SATA is greater than via NVMe. What's more, you can easily buy an 8TB SATA SSD, making the total capacity even higher.
                NVMe has the speed and latency advantage over AHCI, but it requires special support from the BIOS/UEFI. Many older motherboards have problems booting from NVMe drives, for example. New hardware rarely has those problems, though.

                NVMe has been the standard system drive even in prebuilts like Dell Optiplex for some time now, but they still have SATA for expansion due to only having one M.2 slot.

                To sum it up - they both have their uses.
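The bandwidth gap described above follows directly from the published line rates and encoding overheads (6 Gb/s with 8b/10b for SATA 3.x, 8 GT/s with 128b/130b for PCIe 3.0); a quick back-of-the-envelope check:

```python
def sata3_mb_s() -> float:
    # 6 Gb/s line rate, 8b/10b encoding -> only 8/10 of the bits are data
    return 6e9 * (8 / 10) / 8 / 1e6  # bits -> bytes -> MB/s

def pcie3_mb_s(lanes: int) -> float:
    # 8 GT/s per lane, 128b/130b encoding -> 128/130 of the bits are data
    return 8e9 * (128 / 130) / 8 / 1e6 * lanes

print(sata3_mb_s())   # 600.0 MB/s - the familiar SATA 3 ceiling
print(pcie3_mb_s(4))  # ~3938 MB/s for a typical x4 NVMe link
```

These are link-level maxima; real drives lose a bit more to protocol overhead, which is why SATA SSDs advertise ~550 MB/s rather than 600.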



                • #9
                  Originally posted by milkylainen View Post
                  Explain SATA and the 2.5" form factor to me, please. I long since moved to M.2.
                  (I'm not saying M.2 is superior, though.)

                  But:
                  Form factor: more expensive.
                  Added controller, on both sides: more expensive.
                  Interface: more cumbersome, less bandwidth.

                  Besides the obvious premium manufacturers like to add, why isn't SATA dead yet for standard system drive usage?
                  What am I missing (obviously something)?
                  I've yet to see consumer NASes that have NVMe bays instead of SATA, mainly because they use low-power Atom CPUs which don't have nearly the PCIe bandwidth required.
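The lane-budget problem is easy to see with some rough arithmetic. The lane counts below are illustrative, not the specs of any particular SoC:

```python
def nvme_bays_supported(total_lanes: int, lanes_per_bay: int = 4) -> int:
    """How many full-speed x4 NVMe bays fit in a SoC's PCIe lane budget?
    (Ignores lanes already spent on NICs, USB controllers, etc.)"""
    return total_lanes // lanes_per_bay

# Low-power NAS SoCs often expose on the order of 8-16 PCIe lanes,
# and some of those are already committed to networking.
for lanes in (8, 16):
    print(lanes, "lanes ->", nvme_bays_supported(lanes), "x4 NVMe bays")
```

A six-port SATA controller, by contrast, typically hangs off a single x1 or x2 link, which is why six SATA bays are cheap on the same silicon.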



                  • #10
                    Originally posted by MadeUpName View Post
                    I am surprised they haven't come up with a good solution going forward for people that want to stick a bunch of SSDs in a computer chassis, each capable of more than 600MB/s. Basically something to fill the void that SATA 4 would have filled. NVMe ports and cables like SATA has would be boss. Then again, a 16TB M.2 NVMe for a reasonable price would be pretty great also. :-)
                    I believe SATA Express (a cabled port with two PCIe lanes that is backwards compatible with SATA) was intended to be the intermediate standard between SATA and M.2, but there doesn't seem to be enough demand for something that is faster than SATA and allows more storage capacity than M.2, so motherboard makers aren't encouraged to include SATA Express ports and storage manufacturers aren't encouraged to make SATA Express drives. There's also U.2, but that has only been adopted by the enterprise market.

