LLVM Developers Looking At Phasing Out Intel MMX Support


  • LLVM Developers Looking At Phasing Out Intel MMX Support

    Phoronix: LLVM Developers Looking At Phasing Out Intel MMX Support

    Upstream developers are looking at phasing out Intel MMX, which was popular in the late '90s but has long since been succeeded by the SSE and AVX instruction set extensions...


  • #2
    Or just leave it as it is? Best effort? Call it deprecated?
    Is it that much of a maintenance burden?

    Also, what about the front-end issue? If MMX is redundant next to >= SSE2 in terms of functionality, what does typical CPU instruction issue look like?
    Could you technically issue integer MMX and integer (or float, for that matter) SSE2 instructions together to maximize front-end issue capability?
    I.e., I guess you can only issue so many SSE instructions per clock?

    I know. Highly unlikely scenario, also very messy.



    • #3
      We're talking 32-bit x86. A nickel buys you a better CPU nowadays. That said, I'm no fan of creating landfill. But arguably, does it make sense environmentally to keep providing power to such infrastructure? It consumes power, takes up space and delivers little in return.

      If it weren't for the PIII, the Core revolution might never have happened for Intel. Who knows, maybe AMD would have gotten their act together instead of taking their long nap?

      It was fun to show how a 1.4 GHz PIII could run circles around a NetBurst 2.4 GHz P4. But that was "then". I would consider neither for anything "new", and I think the idea of preserving an existing infrastructure based on either (or earlier, as mentioned) is a mistake.

      At the same time, just remember: the idea that (reasonably contemporary) Linux supports really, really old hardware is not as true as it used to be.



      • #4
        Originally posted by cjcox View Post
        We're talking 32-bit x86. A nickel buys you a better CPU nowadays. That said, I'm no fan of creating landfill. But arguably, does it make sense environmentally to keep providing power to such infrastructure? It consumes power, takes up space and delivers little in return.

        If it weren't for the PIII, the Core revolution might never have happened for Intel. Who knows, maybe AMD would have gotten their act together instead of taking their long nap?

        It was fun to show how a 1.4 GHz PIII could run circles around a NetBurst 2.4 GHz P4. But that was "then". I would consider neither for anything "new", and I think the idea of preserving an existing infrastructure based on either (or earlier, as mentioned) is a mistake.

        At the same time, just remember: the idea that (reasonably contemporary) Linux supports really, really old hardware is not as true as it used to be.
        You realize every CPU since then has MMX, right? Otherwise, already-compiled binaries depending on it would crash.

        This removes the ability to compile new code with MMX intrinsics using LLVM. You can still compile it with inline asm and, yes, execute it on your current CPU.



        • #5
          Originally posted by Weasel View Post
          You realize every CPU since then has MMX, right? Otherwise, already-compiled binaries depending on it would crash.

          This removes the ability to compile new code with MMX intrinsics using LLVM. You can still compile it with inline asm and, yes, execute it on your current CPU.
          ...though I vaguely remember reading that Intel plans to (maybe has by now) switch MMX support over to microcode-level emulation, as any legitimate use of it would have been written for much slower hardware.



          • #6
            Originally posted by cjcox View Post
            We're talking 32-bit x86. A nickel buys you a better CPU nowadays. That said, I'm no fan of creating landfill. But arguably, does it make sense environmentally to keep providing power to such infrastructure? It consumes power, takes up space and delivers little in return.

            If it weren't for the PIII, the Core revolution might never have happened for Intel. Who knows, maybe AMD would have gotten their act together instead of taking their long nap?

            It was fun to show how a 1.4 GHz PIII could run circles around a NetBurst 2.4 GHz P4. But that was "then". I would consider neither for anything "new", and I think the idea of preserving an existing infrastructure based on either (or earlier, as mentioned) is a mistake.

            At the same time, just remember: the idea that (reasonably contemporary) Linux supports really, really old hardware is not as true as it used to be.
            I don't think the question is about landfills.
            There are other time-frame and product scenarios besides the wear-and-tear-public-Joe ones.
            There is a lot of military, embedded and other stuff that can be all sorts of mishmash.
            Just because you see an end for your typical usage does not mean everybody and everything else does.



            • #7
              Originally posted by milkylainen View Post
              There is a lot of military, embedded and other stuff that can be all sorts of mishmash.
              Just because you see an end for your typical usage does not mean everybody and everything else does.
              To be fair, 99% of embedded and industrial users are on their own toolchain (some fork of whatever compiler, either LLVM or something else), provided by the hardware vendor in an SDK that will never be updated, so whatever upstream does is mostly irrelevant.



              • #8
                Originally posted by starshipeleven View Post
                To be fair, 99% of embedded and industrial users are on their own toolchain (some fork of whatever compiler, either LLVM or something else), provided by the hardware vendor in an SDK that will never be updated, so whatever upstream does is mostly irrelevant.
                Definitely so. I just think it is a false assumption that the need does not exist (coming from the desktop).
                I have seen projects that reuse "proven" hardware, but the software was so buggy they decided to redo things.
                Military, though. And a very rare situation. But yes, you're right.



                • #9
                  Originally posted by ssokolow View Post

                  ...though I vaguely remember reading that Intel plans to (maybe has by now) switch MMX support over to microcode-level emulation, as any legitimate use of it would have been written for much slower hardware.
                  If it isn't so already?
                  If so, that would invalidate my entire theory about using the different decode paths to maximize CPU front-end utilization.



                  • #10
                    I'd kill for Intel or AMD to release a pure 64-bit CPU built from the ground up for UEFI systems. Drop the legacy code. Reduce the clutter of microcode and obsolete technologies. Itanium might have been successful if they had implemented a basic x86 translation layer. These days microcode space is cheap.
