Testing The First PCIe Gen 5.0 NVMe SSD On Linux Has Been Disappointing

    Phoronix: Testing The First PCIe Gen 5.0 NVMe SSD On Linux Has Been Disappointing

    This past week saw the first two consumer PCIe 5.0 NVMe solid-state drives reach retail: the Gigabyte AORUS Gen5 10000 and the Inland TD510. I've been testing the Inland TD510 2TB Gen 5 NVMe SSD over the past few days. While it can hit speeds approaching 10,000 MB/s for simple sequential reads and writes, it quickly fell behind popular PCIe Gen 4.0 NVMe SSDs in more complex workloads. My testing thus far of this first consumer Gen5 NVMe SSD has left me far from impressed.


  • #2
    Originally posted by phoronix View Post
    While the sequential read and write performance was looking great, the random read and write performance with FIO IO_uring was disappointing. Random reads were just slightly faster than a WD_BLACK SN850 while random writes were slower than the tested PCIe Gen4 SSDs.
    Same issue now as with the SN850 when you first reviewed that: you're still running FIO as a single job, which makes this more a benchmark of how many I/O requests a single CPU core can push than a benchmark of the SSD. Try splitting the queue depth across multiple jobs (numjobs) and things should improve.
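    A minimal sketch of what I mean, purely illustrative (the device path is a placeholder, and this assumes fio built with io_uring support; same total queue depth of 64 in both runs):

        # One job owns the whole QD: bottlenecked on a single core
        fio --name=qd64-1job --filename=/dev/nvme1n1 --direct=1 \
            --ioengine=io_uring --rw=randread --bs=4k \
            --iodepth=64 --numjobs=1 --runtime=60 --time_based

        # Same total QD split across 4 jobs, so 4 cores submit I/O
        fio --name=qd16-4jobs --filename=/dev/nvme1n1 --direct=1 \
            --ioengine=io_uring --rw=randread --bs=4k \
            --iodepth=16 --numjobs=4 --group_reporting \
            --runtime=60 --time_based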
    Last edited by thulle; 05 March 2023, 03:03 PM.

    • #3
      Before you run your benchmarks on SSDs, try to fill them up with random data first. A Samsung SSD I bought a few years back showed read speeds of 3 GB/s out of the box, but once filled with random data, the same test showed ~700 MB/s. It seems most modern SSD controllers employ compression, and erased flash (0xFFs everywhere) is highly compressible, making it easy for the controller to saturate the PCIe interface.
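      A rough way to do that fill with fio, just as a sketch (the device path is a placeholder, and this overwrites the entire drive; --refill_buffers keeps the written data incompressible):

          # Sequentially overwrite the whole device once with random data
          fio --name=precondition --filename=/dev/nvme1n1 --direct=1 \
              --ioengine=io_uring --rw=write --bs=1M --refill_buffers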

      • #4
        Originally posted by mlau View Post
        A samsung ssd I bought a few years back showed read speeds of 3GB/s out of the box, but when filled with random data next, the same test showed ~700MB/s. It seems most modern ssd controllers employ compression and erased flash (0xff's everywhere) is highly compressible, making it easy for the controller to saturate the pci interface.
        That's not compression, that's the drive using unused/trimmed space as SLC cache. Fill it up and there's less space to use as cache.

        edit: wait, read speed? That sounds like the drive was busy shuffling data around internally from an earlier benchmark, due to being full.
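        If you want to rule that out between runs, TRIMming the whole (unmounted) device hands every block back to the controller so it can rebuild the SLC cache, roughly like this (placeholder device path, and it erases everything on the drive):

            # Discard every block so the controller sees the drive as empty
            blkdiscard /dev/nvme1n1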

        • #5
          Michael

          Typo on page 1 "Microsoft Wineows" should be "Microsoft Windows".

          • #6
            Is there any difference if you set the drive to its native 4K LBA format instead of 512B?
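            For anyone wanting to try, a sketch with nvme-cli (placeholder device path; the right --lbaf index varies per drive, and formatting wipes all data):

                # List supported LBA formats and which one is in use
                nvme id-ns -H /dev/nvme1n1 | grep "LBA Format"

                # Switch to the 4K format -- assuming here it's index 1
                nvme format /dev/nvme1n1 --lbaf=1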

            • #7
              Originally posted by JEBjames View Post
              Michael

              Typo on page 1 "Microsoft Wineows" should be "Microsoft Windows".
              Maybe he meant the upgraded version Non-Microsoft Wine-ows.

              • #8
                Does Linux have an equivalent to DirectStorage 1.1?

                • #9
                  Originally posted by NeoMorpheus View Post
                  Does Linux have an equivalent to DirectStorage 1.1?
                  Not yet: https://www.reddit.com/r/pcgaming/co...eb2x&context=3

                  • #10
                    I use a SATA SSD, which is plenty fast, and I don't quite understand people's obsession with ever newer PCIe Gen X SSDs. How often do you copy gigabytes of data? What do you need this speed for?
