Thread: GCC Developers Look At Dropping i386 Support

  1. #1
    Join Date
    Jan 2007
    Posts
    14,353

    Default GCC Developers Look At Dropping i386 Support

    Phoronix: GCC Developers Look At Dropping i386 Support

    Now that the Linux kernel has dropped support for old Intel 386 CPUs, GCC developers are also considering the removal of i386 support from their compiler...

    http://www.phoronix.com/vr.php?view=MTI1MDU

  2. #2
    Join Date
    Mar 2011
    Posts
    90

    Default

Overall, all those recent changes are probably good if it means developers actually spend more time coding new stuff instead of on coffee breaks, since they will now have more time on their hands ^^ Actually, even more coffee breaks are good if during that time they think of the best way to code afterwards, so it's kind of a win-win.

    btw

    "If you use an almost 30 years old architecture, why would you need the latest-and-greatest compiler technology? Seriously..." way i see it is that from 1991 till 2012 only 21 years have passed, so giving that 30 years statement is true than why in the world would any one want to develop a system in 1991 for the 10 years old platform any way in the first place

    Still, was it really all that much work?

  3. #3
    Join Date
    Jun 2010
    Posts
    145

    Default

    Just throw it away...

  4. #4
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    4,991

    Default

    Dropping 386 is completely fine.

    This quote though:
    If you use an almost 30 years old architecture, why would you need the latest-and-greatest compiler technology? Seriously...
    That's _exactly_ why you need the best compiler tech!

    The slower your CPU, the better your compiler needs to be to get the same speed.

  5. #5
    Join Date
    Dec 2012
    Posts
    457

    Default

    Quote Originally Posted by curaga View Post
    That's _exactly_ why you need the best compiler tech!

    The slower your CPU, the better your compiler needs to be to get the same speed.
    So you are actually saying that today's work on a compiler will benefit compiling for, and running on, a CPU that was released 30 years ago? I find that really hard to believe.

    Please note: I'm not an expert in *any* of this. Just finding it unbelievable mkay...

  6. #6
    Join Date
    Sep 2008
    Location
    Vilnius, Lithuania
    Posts
    2,521

    Default

    Quote Originally Posted by ryszardzonk View Post
    way i see it is that from 1991 till 2012 only 21 years have passed
    1985 was the year the 80386 was released. So it's 27 years.

  7. #7
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    4,991

    Default

    Quote Originally Posted by Rexilion View Post
    So you are actually saying that today's work on a compiler will benefit compiling for, and running on, a CPU that was released 30 years ago? I find that really hard to believe.

    Please note: I'm not an expert in *any* of this. Just finding it unbelievable mkay...
    Yes, that's what I'm saying.

    Only part of the improvements go towards new extensions like vectorizing. Part is the good old, "how do we make this specific combination of instructions faster", via better inlining, better ordering of instructions, or a number of other ways.
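    To make the point concrete, here is a toy sketch (nothing like GCC's real internals, just an illustration): inlining expressed as a target-independent rewrite on a tiny tuple-based expression "IR". Replacing a call node with the callee's body removes call overhead on any CPU, a 386 included.

    ```python
    # Toy expression IR: nested tuples, e.g. ("add", ("call", "sq", 3), 1).
    # FUNCS maps a function name to its body, with "param" as a placeholder
    # for the argument. All names here are made up for the example.
    FUNCS = {"sq": ("mul", "param", "param")}  # sq(x) = x * x

    def substitute(body, arg):
        """Replace every "param" placeholder in the callee's body with arg."""
        if body == "param":
            return arg
        if not isinstance(body, tuple):
            return body
        return (body[0],) + tuple(substitute(e, arg) for e in body[1:])

    def inline(expr):
        """Rewrite call nodes into the callee's body, recursively."""
        if not isinstance(expr, tuple):
            return expr
        if expr[0] == "call" and expr[1] in FUNCS:
            return substitute(FUNCS[expr[1]], inline(expr[2]))
        return (expr[0],) + tuple(inline(e) for e in expr[1:])

    print(inline(("add", ("call", "sq", 3), 1)))
    # ("add", ("mul", 3, 3), 1) -- the call node is gone
    ```

    Notice that nothing in the rewrite asks what the target CPU is; that is why work on passes like this benefits every supported architecture.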

  8. #8
    Join Date
    Jun 2010
    Location
    ฿ 16LDJ6Hrd1oN3nCoFL7BypHSEYL84ca1JR
    Posts
    1,020

    Default

    Quote Originally Posted by Rexilion View Post
    So you are actually saying that today's work on a compiler will benefit compiling for, and running on, a CPU that was released 30 years ago? I find that really hard to believe.

    Please note: I'm not an expert in *any* of this. Just finding it unbelievable mkay...
    It depends. A ("normal") compiler basically reads your source code with the "front end" and does all sorts of semantic analysis on it. Then it transforms the source code to some intermediate language where the "middle end" (which is kind of a nonsensical term) is going to do its work. Ideally this form is the same for all programming languages supported by the compiler. The compiler then does magic on the intermediate form in order to optimize it. Only after having done that will it take the optimized intermediate form and produce machine code with its "back end".

    The main optimizations on the intermediate code should be agnostic to the programming language and to the architecture, but I guess in practice a compiler will do optimizations for the architecture and probably even ones based on the programming language's specification, if there is something to optimize (for example, for Bulldozer you could maybe use the knowledge that it has more integer units than floating point units).
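    A toy sketch of one such architecture-agnostic middle-end pass, assuming the same made-up tuple IR as above: constant folding. Nothing in it knows about the target CPU, so the win carries to a 386 and a modern chip alike.

    ```python
    import operator

    # Toy IR: nested tuples like ("add", ("mul", 2, 3), "x"), where strings
    # other than the operator name stand for runtime variables.
    OPS = {"add": operator.add, "mul": operator.mul, "sub": operator.sub}

    def fold(expr):
        """Recursively evaluate sub-expressions whose operands are all constants."""
        if not isinstance(expr, tuple):
            return expr
        op, *args = expr
        args = [fold(a) for a in args]
        if op in OPS and all(isinstance(a, int) for a in args):
            return OPS[op](*args)  # computed once at "compile time"
        return (op, *args)

    print(fold(("add", ("mul", 2, 3), ("sub", "x", 1))))
    # ("add", 6, ("sub", "x", 1)) -- the mul was folded, x stays symbolic
    ```

    The back end then lowers whatever the middle end leaves over, and only at that point do target details like instruction selection come into play.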

  9. #9
    Join Date
    Dec 2012
    Posts
    457

    Default

    Quote Originally Posted by ChrisXY View Post
    The main optimizations on the intermediate code should be agnostic to the programming language and to the architecture, but I guess in practice a compiler will do optimizations for the architecture and probably even ones based on the programming language's specification, if there is something to optimize (for example, for Bulldozer you could maybe use the knowledge that it has more integer units than floating point units).
    Quote Originally Posted by curaga View Post
    Only part of the improvements go towards new extensions like vectorizing. Part is the good old, "how do we make this specific combination of instructions faster", via better inlining, better ordering of instructions, or a number of other ways.
    I get it. So work on this 'intermediate step' and the 'language interpretation step' will also benefit old CPUs, as long as this 'CPU-specific step' is provided.

    I didn't know compilers were this modular.
    Last edited by Rexilion; 12-13-2012 at 12:40 PM. Reason: Grammar...

  10. #10
    Join Date
    Apr 2010
    Posts
    706

    Default

    Quote Originally Posted by curaga View Post
    Only part of the improvements go towards new extensions like vectorizing. Part is the good old, "how do we make this specific combination of instructions faster", via better inlining, better ordering of instructions, or a number of other ways.
    Yes, but all of those still depend on the target CPU - the answers will depend on what assumptions you can make about cache sizes, the size and number of execution pipelines, etc. Even without using modern extensions, optimisations for a modern CPU aren't necessarily optimal for an ancient 386...
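    A toy illustration of that dependence (hypothetical numbers and a made-up cost heuristic, nothing like GCC's real cost tables): picking a loop-unroll factor. Unrolling trades instruction-cache footprint for fewer branches, so the best factor differs between a cache-less 386 and a modern core with a large i-cache.

    ```python
    def pick_unroll(body_bytes, icache_bytes):
        """Pick the largest unroll factor whose code still fits in a
        quarter of the i-cache; with no cache at all, there is no room
        to trade, so stay at factor 1. Purely illustrative heuristic."""
        budget = max(icache_bytes // 4, body_bytes)
        best = 1
        for factor in (1, 2, 4, 8):
            if body_bytes * factor <= budget:
                best = factor
        return best

    # Hypothetical targets: an 80386 (no on-chip cache) vs. a modern CPU.
    print(pick_unroll(body_bytes=40, icache_bytes=0))         # 1
    print(pick_unroll(body_bytes=40, icache_bytes=32 * 1024)) # 8
    ```

    Same pass, same loop, different answer per target: that is why even non-extension optimizations tuned for modern CPUs aren't automatically optimal for a 386.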
