GCC Lands AVX-512 Fully-Masked Vectorization


  • GCC Lands AVX-512 Fully-Masked Vectorization

    Phoronix: GCC Lands AVX-512 Fully-Masked Vectorization

    Stemming from looking at the generated x264 video encode binary and some performance inefficiencies, SUSE engineers have worked out AVX-512 fully masked vectorization support for the GCC 14 development code...
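For context on what the feature does: fully-masked vectorization lets the compiler handle a loop whose trip count is not a multiple of the vector width by executing the final partial iteration under an AVX-512 predicate mask, instead of falling back to a scalar epilogue loop. A minimal sketch of the kind of plain-C loop this applies to (the masking is done entirely by the compiler, e.g. GCC 14+ at `-O3` with an AVX-512-capable target such as `-march=x86-64-v4`; the function name is illustrative):

```c
#include <stddef.h>

/* A straightforward reduction whose trip count need not be a multiple
 * of the vector width. With fully-masked vectorization, the trailing
 * n % 16 floats can be handled by one masked AVX-512 iteration rather
 * than a separate scalar tail loop. */
float sum_floats(const float *a, size_t n)
{
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}
```

Compiling with `-fopt-info-vec` reports whether GCC vectorized the loop.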

  • #2
    Dear Mr. Intel,

    I am sorry to point it out so directly, but your policy regarding AVX512 is no longer comprehensible. Solve this issue asap. You can't even blame AMD or someone else for the situation you are in - AVX512 is YOUR child and YOU had all the opportunities to create compelling products with it. I do not care for the reasons you failed in this regard. Just fix it. Just deliver.

    Sincerely,

    Mr. Customer.



    • #3
      Originally posted by Joe2021 View Post
      Dear Mr. Intel,

      I am sorry to point it out so directly, but your policy regarding AVX512 is no longer comprehensible. Solve this issue asap. You can't even blame AMD or someone else for the situation you are in - AVX512 is YOUR child and YOU had all the opportunities to create compelling products with it. I do not care for the reasons you failed in this regard. Just fix it. Just deliver.

      Sincerely,

      Mr. Customer.
      Devil's advocate here:
      AVX512 was very poorly received by the Linux community, primarily because Linus reamed them out for making it. Perhaps Intel was like "fine, then I'm not doing any more work".
      Makes me wonder too if dropping support for it on desktop platforms (I think it was Alder Lake?) was a way to test how much people were going to care if it went away.



      • #4
        Originally posted by schmidtbag View Post
        AVX512 was very poorly received by the Linux community, primarily because Linus reamed them out for making it.
        But that has nothing to do with its success, else C++ or Nvidia would also have taken another route.

        Them dropping AVX512 has more to do with their shortcomings in process nodes and their inability to develop efficient hardware.



        • #5
          Originally posted by Joe2021 View Post
          Dear Mr. Intel,

          I am sorry to point it out so directly, but your policy regarding AVX512 is no longer comprehensible. Solve this issue asap. You can't even blame AMD or someone else for the situation you are in - AVX512 is YOUR child and YOU had all the opportunities to create compelling products with it. I do not care for the reasons you failed in this regard. Just fix it. Just deliver.

          Sincerely,

          Mr. Customer.
          Working great on my Rocket Lake...



          • #6
            Originally posted by schmidtbag View Post
            Devil's advocate here:
            AVX512 was very poorly received by the Linux community, primarily because Linus reamed them out for making it. Perhaps Intel was like "fine, then I'm not doing any more work".
            Makes me wonder too if dropping support for it on desktop platforms (I think it was Alder Lake?) was a way to test how much people were going to care if it went away.
            I did find the disparity in the community's reaction pretty silly.

            Intel introduces AVX-512 in 2017: Boo! Hiss! Stop with the magic function garbage!

            AMD adds AVX-512 in 2022: Yay! Amaze balls!

            Yes AMD's first implementation was better than Intel's first implementation, but it damn sure better be half a decade after their competitor did it and on a TSMC 5nm node vs Intel 14nm.



            • #7
              Intel's lacklustre efforts to remedy the power consumption problem probably have more than a little to do with them betting on the E-cores. If those didn't exist, then the P-cores would be better for it, IMHO.



              • #8
                Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post
                Intel introduces AVX-512 in 2017: Boo! Hiss! Stop with the magic function garbage!
                You left out one important fact: by using AVX-512, their whole CPU was throttled and everything ran slower, which rendered their implementation useless for mixed workloads.
                TSMC 5nm node vs Intel 14nm.
                The problem was not node specific, it was just a bad implementation.



                • #9
                  Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post
                  Yes AMD's first implementation was better than Intel's first implementation, but it damn sure better be half a decade after their competitor did it and on a TSMC 5nm node vs Intel 14nm.
                  We don't know if it's a node thing though?



                  • #10
                    I would say that it is not so much AVX-512 itself, but how Intel fragmented it. Look at how the subsets are supported (from Wikipedia).

                    [image: AVX-512 subset support matrix, from Wikipedia]

                    The clock issue with Intel implementations also did not help.
                    Last edited by SofS; 19 June 2023, 11:52 AM.
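The fragmentation also shows up in feature-detection code: each AVX-512 subset has its own CPUID bit, so a program has to probe every extension it uses individually. A small sketch using GCC's `__builtin_cpu_supports` (the grouping into a "common core" here is my own illustration, not an official set):

```c
#include <stdbool.h>

/* Illustrative check: AVX-512 is not one feature but many subsets,
 * each with its own CPUID bit. Having avx512f alone guarantees
 * nothing about the others, so every subset used must be probed. */
bool has_avx512_common_core(void)
{
    /* F/CD/VL/BW/DQ is roughly the overlap between recent Intel
     * server parts and Zen 4; the grouping is just for illustration. */
    return __builtin_cpu_supports("avx512f")
        && __builtin_cpu_supports("avx512cd")
        && __builtin_cpu_supports("avx512vl")
        && __builtin_cpu_supports("avx512bw")
        && __builtin_cpu_supports("avx512dq");
}
```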

