Given nVidia's hostility toward open source, I would never purchase from nVidia again.
All the gnashing of teeth and hand-wringing about the obvious shortcomings of AMD's open source driver offerings aside, I'd rather have specs and a fairly good driver now that allow for a better driver down the road (which is coming) than have a good driver now and be stuck up a creek without a paddle later on.
It's the Instruction Set that matters
It is instruction sets, not particular chip implementations, that matter. Current x86 chips don't actually have x86 as their internal ISA - instead x86 is translated into the internal ISA. Even if ARM gave up tomorrow, there would still be a need for the ARM ISA. Heck, it is even conceivable that Intel could add support for the ARM ISA to their products. (e.g. many Android applications include ARM shared libraries internally.)
Last edited by grotgrot; 06-15-2012 at 05:04 PM.
x86 is a horrible, ugly architecture, although well documented. x86 has never been pretty, not even back in the day.
Most architectures (ARM, POWER, SPARC, MIPS, Alpha, m68k, etc.) are prettier than x86.
ARM is not the prettiest architecture either though.
I don't know, but I could imagine that Alpha might be the prettiest architecture.
Donald Knuth's MMIX should be pretty, though, although it's a theoretical architecture made for education and might not be suitable for real use.
I suspect that IA-64 (Intel's Itanium architecture) might be a good architecture too. Itanium failed horribly, but that was because the implementations were horrible (also bad compilers, and it got a bad reputation because the x86-emulation mode performed badly, etc.). That does not mean the ISA per se is bad.
Maybe there are some very modern soft-core architectures that are nice too.
CIL or JVM would be pretty cool on silicon.
Sun Microsystems made a CPU with JVM (Java) as instruction set architecture, but it never caught on.
Are you saying that Intel would make an ARM CPU or that Intel would add the ARM instruction set to the x86 CPU?
Originally Posted by grotgrot
Not going to happen.
It would be huge overhead and would bloat the ISA horribly. Also, it's not just adding instructions from one ISA to another, because there are other differences between ISAs besides the instructions.
Intel tried integrating x86 (IA-32) into IA-64 (the Itanium architecture) and it went horribly.
One, I love how lazy Michael is in that even after multiple people pointed out that the link in the news post is wrong, it remains so.
Two, Kevin Kolfer is a joke and has been for a rather long while. He's a rather large control freak, and most of the projects he's been a part of that I have seen have resulted in either him or third parties forking due to disagreements. One he forked solely because the old head maintainer left someone other than him in charge; he has yet to make a release of his fork, while the person actually left in charge has made multiple releases. He has little respect in any of the communities he is a part of, and I doubt anyone higher up in the Fedora project would even bother to read his mail. But I'm sure Michael did proper research and already knows this, right?
Michael, this was a poor excuse for an article and you know it. Please stop posting clear, ad-money-grubbing, tabloid-quality crap. If you want hardware vendors and the Linux community to take you seriously, try to at least have some journalistic integrity.
Note that the architecture and the instruction set are not the same thing. The x86 does not execute x86 - the internal instruction set is micro-ops, and there is a decoder that translates x86 into the micro-ops. An additional translator could be added for the ARM ISA into micro-ops. (ARM is considerably more orthogonal, being RISC in the first place, so this is a lot simpler than translating x86.) This approach was not applicable to Itanic because it didn't have internal micro-ops, so they had to include what amounted to an entire copy of an x86 core.
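The decoder idea described above can be sketched in a few lines. This is a toy illustration, not how any real chip works: the micro-op names and instruction formats are invented, and real Intel/AMD micro-ops are undocumented internals. The point is only that two different frontends can target one shared internal format.

```python
# Toy sketch: two frontend decoders emitting one shared micro-op format.
# All micro-op names and instruction formats here are invented for
# illustration; real internal micro-ops are undocumented.

def decode_x86(insn):
    """Translate a simplified two-operand x86 instruction into micro-ops.
    An x86 'add eax, [mem]' splits into a load micro-op plus an ALU micro-op."""
    op, dst, src = insn
    uops = []
    if src.startswith("["):            # memory operand: split out a load
        uops.append(("LOAD", "tmp0", src.strip("[]")))
        src = "tmp0"
    uops.append(("ALU_" + op.upper(), dst, dst, src))
    return uops

def decode_arm(insn):
    """Translate a three-operand ARM-style instruction. ARM is load/store,
    so a data-processing instruction maps to a single ALU micro-op."""
    op, dst, src1, src2 = insn
    return [("ALU_" + op.upper(), dst, src1, src2)]

# Both frontends emit the same internal format, so one backend serves both.
print(decode_x86(("add", "eax", "[counter]")))
# → [('LOAD', 'tmp0', 'counter'), ('ALU_ADD', 'eax', 'eax', 'tmp0')]
print(decode_arm(("add", "r0", "r1", "r2")))
# → [('ALU_ADD', 'r0', 'r1', 'r2')]
```

Note how the ARM-style decode is a near-trivial one-to-one mapping, while the x86 decode has to break memory-operand instructions apart - which is the "considerably simpler" point made above.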
Originally Posted by uid313
So Intel *could* add the ARM ISA to existing x86 processors. Whether they should is a business decision, and in their shoes I would strongly consider it, since there is a sufficient installed base of ARM binaries.
Here is a wonderful talk from 2004 by Bob Colwell about Things CPU Architects Need To Think About: http://stanford-online.stanford.edu/...-ee380-100.asx
Intel already had ARM CPUs, but they abandoned them:
Originally Posted by uid313
Those were separate chips. ARM chips have a zillion suppliers in all sorts of flavours.
Originally Posted by TobiSGD
The problem is what to do if you accept the large ARM installed base but also believe it is dying (or want it to die). If an Intel x86 chip is swapped in for an ARM chip, you still have the problem that you need to recompile your OS and use different device drivers; that is to be expected. But you now also have the problem that much of the software won't work. For example, if you grab the Android package for Angry Birds, it won't run without the included ARM library. A platform that not many applications work on, or that requires the majority of apps to be recompiled, is going to have a hard time getting traction.
Intel *could* add an ARM-ISA-to-internal-micro-ops translator, which would let you run existing ARM software while "migrating" to Intel. This is more a business decision than a technical hardship.
CISC and RISC do not mean Complex or Simple Instructions; they actually mean Complex or Reduced Instruction Set, that is, complex or reduced relations between instructions. On RISC, instructions are related to each other, meaning one is a little like another and so on. So RISC can execute all kinds of instructions (float, integer, ...) with the same units and in the same way, while the ugly CISC cannot. All the above means that RISC wants 1 million transistors, without cache, for 2.5 DMIPS/MHz, while CISC wants 20 million; the difference also goes to 40:1 for stream processing like gaming.

Finally, CISC does not translate or decode anything to an internal RISC; decoding for processors is a totally different thing. CISC to RISC is recompiling, and it's the reason Intel insists on x86, because it was difficult to emulate a CISC on a RISC or on another CISC ISA. But that is in the past: http://en.wikipedia.org/wiki/Loongson See the LC3 part: 2x512-bit (FMAC) vectors for float and another same-length unit for integer (bigger than Ivy Bridge). It also has emulation instructions and MIPS3D instructions, in order to accelerate QEMU and software rasterizers like LLVMpipe, and it's not the smallest of the RISC processors (16 cores @ 2 GHz = 20 W). Don't invest in Intel, AMD, Nvidia, Microsoft, Apple, Adobe, Unreal, ...
That x86 smartphone shows that power consumption is just fine on x86. Most x86 machines just haven't optimized everything for power like they could - e.g. by creating SoCs instead of using big, power-hungry motherboards.
Originally Posted by sirdilznik
Although I agree it's silly to claim ARM is going away. It's getting more important every day.
Modern x86 processors are all RISC designs, with a frontend that translates x86 into RISC micro-ops.*
Originally Posted by artivision
*For the most part. Some of the processors then internally fuse multiple micro-ops back together into macro-ops, so in practice the chips are just hybrids of whatever Intel and AMD can figure out how to run fastest.
I wouldn't be surprised if Intel adds ARM support, but they might not implement the entire ISA. Instead, they could implement just the commonly used instructions, as an x86ARM extension similar to SSE or their virtualization acceleration hardware. Then maybe you'd have an emulator program that would run the ARM programs; it could translate the rarer instructions to x86 while using native x86ARM instructions for the majority.
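That hybrid native/emulated split could be dispatched like this toy sketch. Everything here is hypothetical - the "x86ARM" extension is the poster's speculation, and the instruction names and native set are invented for illustration:

```python
# Toy sketch of the hybrid approach: instructions covered by a hypothetical
# "x86ARM" hardware extension run natively; the long tail of rarer
# instructions falls back to software emulation. All names are invented.

NATIVE_SET = {"add", "sub"}            # assumed common subset the extension covers

def run_native(op, a, b):
    # Stand-in for issuing the instruction directly to the hardware extension.
    return a + b if op == "add" else a - b

def emulate(op, a, b):
    # Slow software path for instructions the extension doesn't cover.
    if op == "rbit":                   # e.g. ARM's 32-bit bit-reverse
        return int(f"{a:032b}"[::-1], 2)
    raise NotImplementedError(op)

def execute(op, a, b):
    """Dispatch: native where the extension helps, software emulation otherwise."""
    return run_native(op, a, b) if op in NATIVE_SET else emulate(op, a, b)

print(execute("add", 2, 3))            # fast path → 5
print(execute("rbit", 1, 0))           # emulated path → 2147483648
```

Since common instructions dominate real instruction streams, covering only them in hardware would capture most of the speedup while keeping the extension small - the same trade-off the post describes.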
Last edited by smitty3268; 06-15-2012 at 09:23 PM.