Samsung 950 PRO M.2 NVM Express SSD


  • phoronix
    started a topic Samsung 950 PRO M.2 NVM Express SSD


    Phoronix: Samsung 950 PRO M.2 NVM Express SSD

    The latest piece of hardware I've been playing around with at Phoronix is Samsung's V-NAND SSD 950 PRO M.2 NVM Express SSD. Assuming you are running a modern Linux distribution, this M.2 PCI-E NVMe SSD can offer blazing fast performance.

    http://www.phoronix.com/vr.php?view=23240

  • torsionbar28
    replied
    Originally posted by LeJimster View Post
    Lastly, I would like to know what is going on with the U.2 spec Intel brought forward as a replacement for SATA. Everyone seems to be ignoring it, which is fine if they want to boycott Intel... But we need something to replace SATA and SATA Express wasn't the answer, if not U.2... What?
    U.2 isn't intended as a SATA replacement; it's intended as a SAS replacement. It implements NVMe over the SAS physical connector, with the addition of a lot more pins. This lets OEMs build enclosures housing large numbers of these drives for use in the datacenter. U.2, like SAS, is also dual-ported for redundancy.

    The consumer counterpart is M.2 NVMe, which uses the same 4x PCIe lanes as U.2, but M.2 is not dual-ported.



  • drSeehas
    replied
    Originally posted by LeJimster View Post
    ... Lastly, I would like to know what is going on with the U.2 spec Intel brought forward as a replacement for SATA. Everyone seems to be ignoring it, which is fine if they want to boycott Intel... But we need something to replace SATA and SATA Express wasn't the answer, if not U.2... What?
    U.2 is mainly used in the enterprise market, not the consumer market.
    There are U.2 OEM drives from Samsung too:
    http://www.samsung.com/semiconductor...d/MZQLV960HCHP
    http://www.samsung.com/semiconductor...d/MZQLV480HCGR
    There is also a 1.92 TB version, but I haven't seen it yet.



  • dweigert
    replied
    The best use case for something like this is not your boot/root drive, but internal DB usage, especially if your databases contain BLOBs. In addition, with the advent of using these drives over OpenFabrics in a cluster, combined with data deduplication, you may be able to dispense with EMC or HDS arrays entirely for some applications (Ceph, for instance). I'd love to see cards that you can plug several of these devices into, using a PCI-e 3.x x16 slot to do really fast I/O.
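    For rough sizing of such a card, the usable bandwidth of the slot can be estimated from the per-lane rate. A back-of-the-envelope sketch (PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding; protocol overhead is ignored here):

```python
# Back-of-the-envelope PCIe 3.0 bandwidth estimate (link-layer only;
# real-world throughput is a bit lower due to TLP/protocol overhead).
GT_PER_S = 8e9        # 8 GT/s per lane (PCIe 3.0)
ENCODING = 128 / 130  # 128b/130b line-encoding efficiency

def pcie3_bandwidth_gbs(lanes):
    """Usable bandwidth in GB/s for a PCIe 3.0 link of the given width."""
    return GT_PER_S * ENCODING * lanes / 8 / 1e9  # bits -> bytes -> GB

print(f"x4:  {pcie3_bandwidth_gbs(4):.2f} GB/s")   # one M.2/U.2 drive
print(f"x16: {pcie3_bandwidth_gbs(16):.2f} GB/s")  # room for ~4 such drives
```

    An x16 slot has the raw bandwidth for roughly four x4 NVMe devices, which is why multi-M.2 carrier cards need either a PLX-style switch or motherboard bifurcation support.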



  • pal666
    replied
    Originally posted by bug77 View Post
    The original assertion was that, while the 950Pro looks much better on paper, it doesn't look so hot in the real world/benchmarks.
    I was talking about the benchmark from this article. I agree that not all the specs are sufficiently higher; I just don't understand how exactly that translates into these benchmark results. Maybe they become CPU-bound in the kernel, or maybe there is something wrong with AIO that is preventing it from reaching the maximum queue depth... And this benchmark has a sequential read test, which also isn't sufficiently faster with NVMe.
    Last edited by pal666; 06-01-2016, 05:57 AM.
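    A queue-depth comparison like the one being debated here can be reproduced with an fio job file along these lines (a sketch, not the article's actual configuration; the device path and runtime are placeholders to adjust):

```ini
; Sketch of an fio job file: 4K random-read IOPS at QD1 vs QD32.
[global]
ioengine=libaio        ; Linux AIO, as discussed above
direct=1               ; bypass the page cache
rw=randread
bs=4k
filename=/dev/nvme0n1  ; placeholder - point at your own device
runtime=60
time_based

[qd1]
iodepth=1              ; latency-bound: what desktop workloads mostly see

[qd32]
stonewall              ; start only after the qd1 job finishes
iodepth=32             ; parallelism-bound: where the spec-sheet IOPS come from
```

    If the QD32 job doesn't get anywhere near the rated IOPS, the bottleneck is somewhere in the stack (CPU, AIO submission) rather than the drive.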



  • bug77
    replied
    Originally posted by pal666 View Post
    the only random-read test in the article used AIO, which would be pointless with QD1
    The original assertion was that, while the 950 Pro looks much better on paper, it doesn't look so hot in real-world benchmarks. My assertion was that the specs are based on use cases you're more likely to find on servers than on the desktop (large sequential access, high queue depth), hence the difference.

    And about that QD1: with SATA you get 30-40 MB/s. With NVMe you can get 50 MB/s or a bit more. A mechanical HDD won't break the 1 MB/s barrier. That's why you can feel the difference between an HDD and an SSD, but not between a SATA SSD and an NVMe SSD. Of course NVMe is faster (and I expect it to grow even faster in time), but you won't be able to tell outside of benchmarks.
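    Those QD1 throughput figures translate directly into the per-I/O latency you actually feel. A quick sanity-check calculation, assuming 4 KiB random reads and the rough MB/s numbers quoted above:

```python
# Average per-I/O service time implied by QD1 throughput (4 KiB random reads).
BS = 4096  # bytes per I/O

def qd1_latency_us(throughput_mb_s):
    """Average microseconds per 4 KiB read at queue depth 1."""
    iops = throughput_mb_s * 1e6 / BS
    return 1e6 / iops

for name, mb_s in [("HDD", 1), ("SATA SSD", 35), ("NVMe SSD", 50)]:
    print(f"{name:9s} ~{mb_s:2d} MB/s -> ~{qd1_latency_us(mb_s):7.1f} us per read")
```

    The jump from roughly 4 ms per read (HDD) down to around 100 us (any SSD) is the difference you feel; the further step from ~117 us (SATA) to ~82 us (NVMe) is not.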



  • pal666
    replied
    Originally posted by bug77 View Post
    Yeah, because anything other than QD1 matters on the desktop. /s
    The only random-read test in the article used AIO, which would be pointless with QD1.



  • bug77
    replied
    Originally posted by pal666 View Post
    850 EVO / MZ-75E250BW: 97,000 IOPS @ QD32
    950 PRO / MZ-V5P256BW: 270,000 IOPS (4 threads) @ QD32
    Yeah, because anything other than QD1 matters on the desktop. /s



  • DonQ
    replied
    Originally posted by Med_ View Post
    I have one of those and it is great. There is one drawback if you use it as your boot drive (it is the only storage device in my computer, so I had little choice): GRUB cannot boot from it, as it does not recognize NVMe drives. I lost half a day and part of my sanity trying to debug the issue. In the end I used systemd-boot and it worked fine.
    I have an SM951 that I am using as a boot drive. I already had Debian running on another SSD, so I copied my Debian install to the SM951, installed GRUB on it, and all was well. The Ubuntu 14.04 installer would not see the drive, but Ubuntu 16.04 saw it with no problem.



  • edwaleni
    replied
    I run mine on an Addonics PCIe x4 to M.2 riser card. No heatsinks needed once I got it up off the planar. As far as the U.2 format goes, it's really a work in progress. The problem is that the trace distance from the planar or riser card to the actual NVMe SSD is at times too long and suffers from signal loss, which shows up as slower results in benchmarks. IMHO, the use of mini-SAS connectors to extend the M.2 slot to the U.2 connector is a huge fubar, and will probably be replaced by something more rational.

    We benched the Intel 750 NVMe cards, and while they perform pretty well, they still don't have enough caching for sustained 4K writes, especially under heavy database activity. But we are still talking orders of magnitude more performance than an AHCI SSD in a typical 2-socket Xeon platform.

    People have bellyached about poor NVMe performance in certain systems because in their particular rig the PCIe lane budget was already consumed by other devices, or the card forced their graphics card into x8 due to arbitration.

