GCC Developers Look At Dropping i386 Support

  • GCC Developers Look At Dropping i386 Support

    Phoronix: GCC Developers Look At Dropping i386 Support

    Now that the Linux kernel has dropped support for old Intel 386 CPUs, GCC developers are also considering the removal of i386 support from their compiler...


  • #2
    Overall, all those recent changes are probably good if it means developers spend more of their time actually coding new stuff instead of taking extra coffee breaks, since they'll now have more time on their hands ^^ Actually, even more coffee breaks would be fine if that's when they think up the best way to code afterwards, so it's kind of a win-win.

    btw

    "If you use an almost 30 years old architecture, why would you need the latest-and-greatest compiler technology? Seriously..." The way I see it, from 1991 to 2012 only 21 years have passed, so if that 30-year claim were true, why in the world would anyone have developed a system in 1991 for a platform that was already 10 years old in the first place?

    Still, was it really all that much work?



    • #3
      Just throw it away...



      • #4
        Dropping 386 is completely fine.

        This quote though:
        If you use an almost 30 years old architecture, why would you need the latest-and-greatest compiler technology? Seriously...
        That's _exactly_ why you need the best compiler tech!

        The slower your cpu, the better your compiler needs to be to get the same speed.



        • #5
          Originally posted by curaga View Post
          That's _exactly_ why you need the best compiler tech!

          The slower your cpu, the better your compiler needs to be to get the same speed.
          So you are actually saying that today's work on a compiler will benefit compiling and running on a CPU that was released 30 years ago? I find that really hard to believe.

          Please note: I'm not an expert in *any* of this. Just finding it unbelievable mkay...



          • #6
            Originally posted by ryszardzonk View Post
            way i see it is that from 1991 till 2012 only 21 years have passed
            1985 was the year the 80386 was released. So it's 27 years.



            • #7
              Originally posted by Rexilion View Post
              So you are actually saying that today's work on a compiler will benefit compiling and running on a CPU that was released 30 years ago? I find that really hard to believe.

              Please note: I'm not an expert in *any* of this. Just finding it unbelievable mkay...
              Yes, that's what I'm saying.

              Only part of the improvements go towards new extensions like vectorizing. Part is the good old, "how do we make this specific combination of instructions faster", via better inlining, better ordering of instructions, or a number of other ways.



              • #8
                Originally posted by Rexilion View Post
                So you are actually saying that today's work on a compiler will benefit compiling and running on a CPU that was released 30 years ago? I find that really hard to believe.

                Please note: I'm not an expert in *any* of this. Just finding it unbelievable mkay...
                It depends. A ("normal") compiler basically reads your source code with the "front end" and does all sorts of semantic analysis on it. Then it transforms the source code into some intermediate language, where the "middle end" (which is kind of a nonsensical term) does its work. Ideally this form is the same for all programming languages supported by the compiler. The compiler then does its magic on the intermediate form in order to optimize it. Only after that does it take the optimized intermediate form and produce machine code with its "back end".

                The main optimizations on the intermediate code should be agnostic to the programming language and to the architecture, but I guess in practice a compiler will also do optimizations for the architecture, and probably even based on the programming language's specification, if there is something to optimize (for example, for Bulldozer you could maybe use the knowledge that it has more integer units than floating-point units).



                • #9
                  Originally posted by ChrisXY View Post
                  The main optimizations on the intermediate code should be agnostic to the programming language and to the architecture, but I guess in practice a compiler will also do optimizations for the architecture, and probably even based on the programming language's specification, if there is something to optimize (for example, for Bulldozer you could maybe use the knowledge that it has more integer units than floating-point units).
                  Originally posted by curaga View Post
                  Only part of the improvements go towards new extensions like vectorizing. Part is the good old, "how do we make this specific combination of instructions faster", via better inlining, better ordering of instructions, or a number of other ways.
                  I get it. So work on this 'intermediate step' and the 'language interpretation step' will also benefit old CPUs, as long as a 'CPU-specific step' is provided.

                  Didn't know compilers were this modular.
                  Last edited by Rexilion; 13 December 2012, 01:40 PM. Reason: Grammar...



                  • #10
                    Originally posted by curaga View Post
                    Only part of the improvements go towards new extensions like vectorizing. Part is the good old, "how do we make this specific combination of instructions faster", via better inlining, better ordering of instructions, or a number of other ways.
                    Yes, but all of those still depend on the target CPU - the answers will depend on what assumptions you can make about cache sizes, the size and number of execution pipelines, etc. Even without using modern extensions, optimisations for a modern CPU aren't necessarily optimal for an ancient 386...

