
Five-Way Linux OS Comparison On Amazon's ARM Graviton CPU


  • Five-Way Linux OS Comparison On Amazon's ARM Graviton CPU

    Phoronix: Five-Way Linux OS Comparison On Amazon's ARM Graviton CPU

    Last month Amazon rolled out their "Graviton" ARM processors in the Elastic Compute Cloud. Those first-generation Graviton ARMv8 processors are based on ARM Cortex-A72 cores and designed to offer better pricing than traditional x86_64 EC2 instances. However, our initial testing of the Amazon Graviton EC2 "A1" instances didn't reveal significant performance-per-dollar benefits for these new instances. In this second round of Graviton CPU benchmarking, we are seeing which of five leading ARM Linux distributions is the fastest.

    http://www.phoronix.com/vr.php?view=27263

  • #2
    Hello Michael and Phoronix members,

    For deep learning benchmarks on ARM devices, you may be interested in the ncnn open-source project.
    ncnn is a high-performance neural network inference framework optimized for mobile platforms.
    It is heavily hand-optimized and widely used, and it makes extensive use of NEON assembly instructions.

    https://github.com/Tencent/ncnn

    Many companies and individual developers use caffe/mxnet/pytorch/tensorflow/... for training and use ncnn for deployment on ARM devices.

    https://github.com/Tencent/ncnn/wiki...th-ncnn-inside

    A benchmarking tool is bundled, and results for some CNN models are posted:

    https://github.com/Tencent/ncnn/tree/master/benchmark


    Best wishes



    • #3
      Michael, it seems like a nice test case.



      • #4
        Originally posted by nihui View Post
        Hello Michael and Phoronix members,

        For deep learning benchmarks on ARM devices, you may be interested in the ncnn open-source project.
        ncnn is a high-performance neural network inference framework optimized for mobile platforms.
        It is heavily hand-optimized and widely used, and it makes extensive use of NEON assembly instructions.

        https://github.com/Tencent/ncnn

        Many companies and individual developers use caffe/mxnet/pytorch/tensorflow/... for training and use ncnn for deployment on ARM devices.

        https://github.com/Tencent/ncnn/wiki...th-ncnn-inside

        A benchmarking tool is bundled, and results for some CNN models are posted:

        https://github.com/Tencent/ncnn/tree/master/benchmark


        Best wishes
        Not being familiar with ncnn, do you happen to have a reference to a complete benchmark script for it?
        Michael Larabel
        http://www.michaellarabel.com/



        • #5
          Originally posted by Michael View Post

          Not being familiar with ncnn, do you happen to have a reference to a complete benchmark script for it?
          https://github.com/Tencent/ncnn/tree/master/benchmark provides the instructions. Basically, you need to build the benchncnn.cpp file and then run the executable; it enumerates the current directory for all .param files to determine which benchmarks to run.

          The output looks well structured, so it's definitely possible to put that into PTS.
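          If it helps, here's a rough sketch of how that output could be parsed for PTS, assuming benchncnn prints one timing line per model in the form "model  min = X  max = Y  avg = Z" (the sample lines and field positions below are assumptions; the exact format should be verified against the benchmark README):

```shell
# Hypothetical benchncnn output; verify the real format against
# https://github.com/Tencent/ncnn/tree/master/benchmark
sample='          squeezenet  min =   23.52  max =   24.50  avg =   23.94
           mobilenet  min =   36.78  max =   38.10  avg =   37.42'

# Print "model,avg_ms" for each timing line, splitting on whitespace
printf '%s\n' "$sample" | awk '/min =/ { print $1 "," $10 }'
```

          A wrapper along these lines could pull the per-model averages out of the tool's stdout and feed them into a result parser.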



          • #6
            Originally posted by Michael View Post

            Not being familiar with ncnn, do you happen to have a reference to a complete benchmark script for it?
            Hello Michael,

            You can follow the how-to-build wiki for building the ncnn library:
            https://github.com/Tencent/ncnn/wiki/how-to-build

            As for the benchmark tool, that's the benchncnn executable; it will print the inference time for each CNN model:
            https://github.com/Tencent/ncnn/blob...mark/README.md

            Best wishes
            Last edited by nihui; 12-23-2018, 10:20 PM.

