Benchmarking The First RISC-V Cloud Server: Scaleway EM-RV1 Performance

  • Benchmarking The First RISC-V Cloud Server: Scaleway EM-RV1 Performance

    Phoronix: Benchmarking The First RISC-V Cloud Server: Scaleway EM-RV1 Performance

    Scaleway, by way of their Scaleway Labs group, recently launched the Elastic Metal RV1 (EM-RV1) as the world's first RISC-V servers available in the cloud. These RISC-V cloud servers are built around the T-Head TH1520 SoC and are an interesting way to explore the RISC-V architecture, or otherwise make use of RISC-V for CI/CD deployments and other testing purposes. In this article are some benchmarks showing the RISC-V EM-RV1 performance against Intel and AMD x86_64 Linux systems.


  • #2
    Unless the entire shebang has a power consumption of 5W, I cannot understand who would be interested in it.

    But then at 5W you can have a Snapdragon 8 Gen 3 (and much faster Nuvia-based Qualcomm ARM cores are incoming), which will destroy it.

    • #3
      The more "advancements" I see with RISC-V, the less I'm convinced it'll be taking over ARM any time soon, let alone x86-64. It might be fine for microcontrollers, embedded devices, or as a medium for PCIe devices, but for any real workload, 2W per core doesn't matter if there are CPUs out there with similar per-core power consumption that are several orders of magnitude faster.

      • #4
        I'm curious why they went with eMMC storage instead of more common server hardware.

        • #5
          Originally posted by S.Pam View Post
          I'm curious why they went with eMMC storage instead of more common server hardware.
          I'd guess it's to keep hardware costs down while also maintaining a minimal footprint that allows very dense deployment.
          Michael Larabel
          https://www.michaellarabel.com/

          • #6
            Originally posted by S.Pam View Post
            I'm curious why they went with eMMC storage instead of more common server hardware.
            I'm guessing they racked up a bunch of what are essentially SBCs, maybe multiple SBCs per PCB. At least that's what an image search suggests. I can't answer why they don't make blades that group the storage into more common server hardware, use network boot to simplify each node, and build in a fully populated switch. If they did that and each node was Pi-sized or smaller, they could take advantage of the low power density and high rack depth to make some very dense blades indeed.

            • #7
              Please excuse my nitpicking:
              Shouldn't compression speed relative to price for cloud instances have a unit like GB/$, since both compression throughput and cloud-instance price are per unit of time?
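
              For what it's worth, the units do work out that way: dividing a throughput (MB/s) by an hourly price (EUR/h) cancels the time dimension and leaves data per euro. A minimal sketch with made-up numbers (neither the throughput nor the price below is taken from the article):

              Code:
              #include <stdio.h>

              int main(void)
              {
                  /* Illustrative values only -- not measurements from the article. */
                  double throughput_mb_s = 50.0;   /* compression throughput, MB/s */
                  double price_eur_hour  = 0.023;  /* instance price, EUR per hour */

                  /* (MB/s) / (EUR/h) = MB*h/(s*EUR); multiplying by 3600 s/h leaves MB/EUR. */
                  double mb_per_eur = throughput_mb_s * 3600.0 / price_eur_hour;
                  printf("~%.0f GB compressed per EUR\n", mb_per_eur / 1000.0);
                  return 0;
              }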

              • #8
                Originally posted by schmidtbag View Post
                The more "advancements" I see with RISC-V, the less I'm convinced it'll be taking over ARM any time soon, let alone x86-64. It might be fine for microcontrollers, embedded devices, or as a medium for PCIe devices, but for any real workload, 2W per core doesn't matter if there are CPUs out there with similar per-core power consumption that are several orders of magnitude faster.
                Yup. There's a reason RISC-V has already found success in microcontrollers. We are a long way from usable laptop/desktop performance. IBM Power11+ seems like a better contender to give us something open and performant for those use cases in the near to mid term. I hope like hell that both succeed.

                • #9
                  It would be better to compare to their START-2-M servers, which are offered for the same 16.99 EUR/month.

                  [Attached image: 2024-05-14T183740Z-scaleway.png]
                  But those would also be 3x-4x faster. As such, the price for the RISC-V machine needs to come down accordingly.
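
                  Rough back-of-the-envelope on that, assuming the 3x-4x figure and the identical 16.99 EUR/month price hold (a sketch, not Scaleway's pricing logic):

                  Code:
                  #include <stdio.h>

                  int main(void)
                  {
                      /* Both plans are listed at the same monthly price. */
                      const double price_eur_month = 16.99;

                      /* If the x86 box is N times faster at the same price, the EM-RV1
                         would need roughly price/N to match its performance per euro. */
                      for (int factor = 3; factor <= 4; ++factor)
                          printf("%dx faster competitor -> break-even EM-RV1 price: %.2f EUR/month\n",
                                 factor, price_eur_month / factor);
                      return 0;
                  }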

                  • #10
                    I'm not entirely sure if it's the case here or not, but I think you're supposed to use specific compiler flags (-mcpu=thead-c906 maybe? see here: https://gcc.gnu.org/onlinedocs/gcc-1...V-Options.html ) for RVV 0.7.1 SoCs like the TH1520.

                    I also suspect you'd want to use GCC 14, and even 14.1, since I remember seeing specific fixes to the T-Head vector instructions there as well.

                    Disclaimer: not my specialty.
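
                    For anyone curious, the rough idea would look like the sketch below: a trivial loop the auto-vectorizer can pick up, with the -mcpu value the GCC docs list for T-Head cores passed on the compile line. Whether that flag (or GCC 14's XTheadVector support) is the right fit for the image Scaleway ships is exactly the part I have not verified.

                    Code:
                    /* saxpy.c -- trivial auto-vectorizable loop.
                     *
                     * Hypothetical compile line (flag choice unverified; see the GCC
                     * RISC-V options page linked above):
                     *   gcc-14 -O3 -mcpu=thead-c906 -c saxpy.c
                     */
                    #include <stddef.h>

                    void saxpy(float a, const float *x, float *y, size_t n)
                    {
                        for (size_t i = 0; i < n; ++i)
                            y[i] += a * x[i];
                    }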
