Ceph Cluster Hits 1 TiB/s Using AMD EPYC Genoa + NVMe Drives
While the latest PCIe Gen5 NVMe SSDs feel fast at 11~12k MB/s sequential reads and writes, a Ceph storage cluster has just broken the 1 TiB/s threshold.
Mark Nelson wrote on the Ceph blog Friday about how Clyso managed to deliver a 1 TiB/s storage cluster. The cluster is comprised of 68 Dell PowerEdge R6615 servers, each with an AMD EPYC 9454P "Genoa" processor, 192GB of DDR5 memory, dual 100GbE networking, and ten Dell enterprise NVMe drives. Each Dell PowerEdge server was running Ubuntu 20.04 LTS and using Ceph from the upstream Debian packages.
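For a sense of scale, a quick back-of-envelope calculation (assuming the 68-node, 10-drives-per-node figures above and ideal, even load distribution) shows what each server and drive must sustain to hit 1 TiB/s, and how that compares to each node's dual 100GbE links:

```python
# Back-of-envelope throughput math for the cluster described above.
# Assumes perfectly even load distribution across nodes and drives.
TIB = 1024**4  # bytes per TiB
GIB = 1024**3  # bytes per GiB

nodes = 68
drives_per_node = 10
cluster_bytes_per_s = 1 * TIB  # the 1 TiB/s aggregate figure

per_node = cluster_bytes_per_s / nodes / GIB
per_drive = cluster_bytes_per_s / (nodes * drives_per_node) / GIB

# Dual 100GbE = 200 Gb/s of raw link capacity per node (ignoring
# protocol overhead and replication traffic).
nic_cap = 200e9 / 8 / GIB

print(f"per node:  {per_node:.1f} GiB/s")   # ~15.1 GiB/s
print(f"per drive: {per_drive:.2f} GiB/s")  # ~1.51 GiB/s
print(f"NIC cap:   {nic_cap:.1f} GiB/s")    # ~23.3 GiB/s
```

So each server needs to push roughly 15 GiB/s, comfortably within a single drive's capability times ten and below the raw dual-100GbE ceiling, though real-world replication and protocol overhead eat into that margin.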
For those interested in how the scalability challenges and other issues along the way were overcome, the full write-up over on Ceph.io is an interesting read about this interesting -- and very speedy -- Ceph cluster. This is believed to be the first time a Ceph cluster has achieved 1 TiB/s, making it the fastest single-cluster Ceph result to date.