Intel MKL-DNN Deep Neural Network Library Benchmarks On Xeon & EPYC


  • Intel MKL-DNN Deep Neural Network Library Benchmarks On Xeon & EPYC

    Phoronix: Intel MKL-DNN Deep Neural Network Library Benchmarks On Xeon & EPYC

    This week Intel released MKL-DNN 1.1 as their open-source deep learning library. They also rebranded the software project as the "Deep Neural Network Library" (DNNL) though its focus remains the same. I ran some initial benchmarks on MKL-DNN/DNNL 1.1 on AMD EPYC and Intel Xeon hardware for reference...


  • #2
    Oh, more biased benchmarks, because Intel was destroyed by AMD in neutral tests?

    Comment


    • #3
      Originally posted by Volta View Post
      Oh, more biased benchmarks, because Intel was destroyed by AMD in neutral tests?
      The article explains quite clearly that MKL-DNN / DNNL is an Intel library.
      Michael Larabel
      https://www.michaellarabel.com/

      Comment


      • #4
        I've seen a patched version of MKL that performed pretty well on AMD.

        Its results probably won't match these benchmarks exactly, but I can attest that much better numbers are reproducible with whatever that patch does!

        Comment


        • #5
          I mean, MKL is an Intel library, so... heck. But the performance difference is ridiculous; it looks like two totally different libraries running.

          Comment


          • #6
            Originally posted by Rigaldo View Post
            I've seen a patched version of MKL that performed pretty well on AMD.

            Its results probably won't match these benchmarks exactly, but I can attest that much better numbers are reproducible with whatever that patch does!
            I wonder what the chances are of those patches ever making it into Intel's repo...

            Anyway, it'd be interesting to compare something like TensorFlow, compiled according to Intel's and AMD's recommendations, on their respective CPUs. Most people aren't using MKL-DNN directly - if they use it at all (because GPUs and the growing number of AI ASICs are really the way to do this stuff), they're using it as a backend for another framework.

            Comment


            • #7
              Originally posted by Michael View Post

              The article explains quite clearly that MKL-DNN / DNNL is an Intel library.
              This test has serious issues.
              You can't justify using it as an AMD-vs-Intel comparison by stating it's an "Intel library".
              Anything that produces something like a 40x performance difference between contemporary performance equals (yes, more or less) is either seriously flawed or extremely biased.

              And no, one more shiny instruction set does not account for the difference. So I vote for the former.

              Comment


              • #8
                I am not an experienced Linux user, but I don't get the flags used.
                First, if it uses OpenMP, why the need for the thread lib?
                Also, accuracy isn't that important in DNNs, so why not use the -Ofast flag, which is basically -O3 with relaxed FP precision? (A quick sketch of the difference is below.)

                Also, why not use AVX2 for compilation?

                Really strange choices.
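
                To illustrate the -Ofast point, a minimal toy example of my own (not from the article's build): -Ofast is essentially -O3 plus -ffast-math, which lets the compiler reassociate and vectorize floating-point reductions at the cost of bit-exact results.

                // sum.cpp -- compare: g++ -O3 sum.cpp  vs  g++ -Ofast sum.cpp
                // With -O3 the float reduction is evaluated in source order;
                // with -Ofast (-ffast-math) the compiler may reassociate it
                // into vector partial sums, so the rounding (and the printed
                // value) can differ slightly. For DNN inference that drift
                // is usually harmless.
                #include <cstdio>

                int main() {
                    float sum = 0.0f;
                    for (int i = 0; i < 1000000; ++i)
                        sum += 1e-7f;           // rounding error accumulates
                    std::printf("%.9f\n", sum); // result depends on the flags
                    return 0;
                }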

                Regarding the library:
                While this is an Intel library, it is open source, unlike the classic MKL library.
                Moreover, while MKL is known for discriminating against non-Intel CPUs, this library doesn't, as it chooses its code path based only on CPU features.

                Hopefully this policy will be applied to MKL as well.
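
                A rough sketch of what feature-based dispatch looks like (this is not DNNL's actual dispatcher, and the kernel names are made up; it just uses GCC/Clang's __builtin_cpu_supports): the path is picked by what CPUID reports, not by the vendor string, so an AMD CPU with AVX2 gets the AVX2 path.

                // Illustrative only -- not DNNL code.
                #include <cstdio>

                void gemm_avx512() { std::puts("AVX-512 kernel"); }
                void gemm_avx2()   { std::puts("AVX2 kernel"); }
                void gemm_sse41()  { std::puts("SSE4.1 kernel"); }

                int main() {
                    __builtin_cpu_init(); // populate the CPUID feature flags
                    if (__builtin_cpu_supports("avx512f"))
                        gemm_avx512();
                    else if (__builtin_cpu_supports("avx2"))
                        gemm_avx2();
                    else if (__builtin_cpu_supports("sse4.1"))
                        gemm_sse41();
                    return 0;
                }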

                Comment


                • #9
                  Originally posted by milkylainen View Post
                  Anything that produces something like a 40x performance difference between contemporary performance equals (yes, more or less) is either seriously flawed or extremely biased.

                  And no, one more shiny instruction set does not account for the difference. So I vote for the former.
                  Assuming it wasn't intentionally rigged to make AMD look bad, my guess is it's just using instructions (probably AVX-512, at that) for which AMD has no equivalent. Then, AMD has to fall back on some scalar code path included for the sake of compatibility.

                  If you look specifically at which tests are extremely Intel-biased, they're:
                  • deconvolution
                  • u8s8f32 (meaning: f32 += unsigned 8-bit * signed 8-bit?)
                  Lacking a key instruction used in the optimized deconvolution code path could break AMD in those benchmarks, and getting good performance on the u8s8f32 tests (see the scalar sketch below) surely depends on having the right instructions for it.
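
                  If my reading of u8s8f32 is right, the scalar reference would look something like this (the names are mine, not DNNL's): unsigned 8-bit activations times signed 8-bit weights, accumulated in int32, converted to f32 at the end. AVX-512 VNNI's vpdpbusd does that u8*s8 dot-product step in hardware; a CPU without it has to emulate it with several slower instructions.

                  // Hypothetical scalar reference for u8s8f32:
                  // dst(f32) = sum of src(u8) * wei(s8), accumulated in int32.
                  #include <cstdint>
                  #include <cstdio>

                  float dot_u8s8f32(const uint8_t *src, const int8_t *wei, int k) {
                      int32_t acc = 0;                  // 32-bit accumulator
                      for (int i = 0; i < k; ++i)
                          acc += int32_t(src[i]) * int32_t(wei[i]);
                      return float(acc);                // downconvert once at the end
                  }

                  int main() {
                      uint8_t src[4] = {1, 2, 3, 4};
                      int8_t  wei[4] = {-1, 1, -1, 1};
                      std::printf("%f\n", dot_u8s8f32(src, wei, 4)); // prints 2.000000
                      return 0;
                  }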

                  But, the bottom line is that this benchmark really doesn't tell us how the CPUs compare, unless you happen to be running a workload that's dependent on this specific library. So, I would suggest such strongly-biased tests not be included in PTS.

                  Comment


                  • #10
                    Originally posted by coder View Post
                    Assuming it wasn't intentionally rigged to make AMD look bad, my guess is it's just using instructions (probably AVX-512, at that) for which AMD has no equivalent. Then, AMD has to fall back on some scalar code path included for the sake of compatibility.

                    If you look specifically at which tests are extremely Intel-biased, they're:
                    • deconvolution
                    • u8s8f32 (meaning: f32 += unsigned 8-bit * signed 8-bit?)
                    Lacking a key instruction used in the optimized deconvolution code path could break AMD in those benchmarks, and getting good performance on the u8s8f32 tests surely depends on having the right instructions for it.
                    MKL-DNN chooses the same code path for an Intel or AMD CPU, given that they have the same features.
                    Since the -msse4.1 flag was used in the test, the generated code was using features which both CPUs have.

                    Comment
