I only had hands-on experience with Merced. It was quick, and compilers absolutely could do a good job; some could do a great job. Really great? No. Close to optimal? No. Now, I worked for the competition and have some bias, but I believe the "blame the compiler" line is largely because Intel and HP assembled a shitty team that never produced anything interesting; to this date I'm still unaware of any interesting output. To be fair, they weren't run by a group with serious software chops, so blaming the software is an easy institutional explanation that feels better than saying Intel made a crappy chip.
EPIC instructions were basically three instructions packed into one 128-bit bundle (three 41-bit slots plus a 5-bit template); think of it as instruction fusion. Compilers absolutely could, and did, take that into consideration. Additionally, Merced would prefetch both sides of a branch, assuming your compiler (really just the instruction tiler) could encode that, and there was no misprediction penalty; this actually made the compiler easier to write in some ways. It also had surplus registers (128 general-purpose registers) and all sorts of other features that helped compilers; optimal register allocation was a very hot and challenging problem at the time. I don't know that Rice's theorem applies more to Merced than to any other processor.
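To make the bundle idea concrete, here is a minimal C sketch of packing three instruction slots plus a template into a 128-bit unit. The bit layout (template in bits 0-4, then three 41-bit slots) follows the IA-64 bundle format as I understand it, but the slot values are made-up placeholders, not real instruction encodings:

```c
#include <stdint.h>
#include <assert.h>

/* A 128-bit bundle held as two 64-bit halves.
 * Assumed layout: bits 0-4 template, bits 5-45 slot 0,
 * bits 46-86 slot 1, bits 87-127 slot 2. */
typedef struct {
    uint64_t lo, hi;
} bundle_t;

#define SLOT_MASK 0x1FFFFFFFFFFULL  /* 41 bits */

bundle_t pack_bundle(uint8_t template5,
                     uint64_t slot0, uint64_t slot1, uint64_t slot2)
{
    bundle_t b = {0, 0};
    b.lo  = (uint64_t)(template5 & 0x1F);      /* 5-bit template */
    b.lo |= (slot0 & SLOT_MASK) << 5;          /* slot 0: bits 5-45 */
    b.lo |= (slot1 & SLOT_MASK) << 46;         /* slot 1: low 18 bits */
    b.hi  = (slot1 & SLOT_MASK) >> 18;         /* slot 1: high 23 bits */
    b.hi |= (slot2 & SLOT_MASK) << 23;         /* slot 2: bits 87-127 */
    return b;
}

/* Extract slot i (0..2) back out of a bundle. */
uint64_t bundle_slot(bundle_t b, int i)
{
    switch (i) {
    case 0:  return (b.lo >> 5) & SLOT_MASK;
    case 1:  return ((b.lo >> 46) | (b.hi << 18)) & SLOT_MASK;
    default: return (b.hi >> 23) & SLOT_MASK;
    }
}
```

The template field is the interesting part for compilers: it tells the hardware which execution-unit types the three slots need and where the stop bits fall, so the "fusion" of three operations into one fetch unit is decided entirely at compile time.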
It failed for the same reasons almost all hardware architectures fail: it was expensive because of low volume, it wasn't enough faster to justify the loss of compatibility, and the biggest technical miss was that it had no energy story just as laptops were starting to match PCs in sales. It also lacked the enterprise reliability and serviceability capabilities that IBM and Sun needed, so they didn't adopt it, which further hurt volume, and then all of Intel's grand predictions turned into dramatic failures. Making x86 64-bit fixed the market's most immediate technical needs, and you could still run all your old DOS and Windows stuff on it.