
Thread: Modern Intel Gallium3D Driver Still Being Toyed With

  1. #21
    Join Date
    Apr 2011
    Posts
    386

    Default

    Quote Originally Posted by Kayden View Post
    For a completely new architecture, yes, I agree. For new generations of Intel hardware, though, it's a lot less effort to add incremental support to our existing driver than to write a whole new driver every year.


    With Gallium you only need a small amount of extension code and an LLVM back-end for new hardware. And your compiler suite for x86 is ridiculously outdated. Statically linked compilers should never have existed in the first place, and your x86 monopoly tactics will fail. I believe that in one or two years emulation techniques will let us emulate processors at 80-90% efficiency. Remember that when it happens. There is only a small chance for you to be forgiven, say, for your contribution to Mesa.

  2. #22
    Join Date
    Jun 2012
    Posts
    366

    Default

    Quote Originally Posted by artivision View Post
    I believe that in one or two years emulation techniques will let us emulate processors at 80-90% efficiency.
    Not likely, though if it ever happened, every OS would die.

    Seriously, if we got to the point where emulating different processors/instruction sets became trivial, you would simply launch VMs on a per-app basis. The need for a heavy-duty OS goes away.

  3. #23
    Join Date
    Nov 2007
    Posts
    1,353

    Default

    Welcome to my hobby. You guys don't seem to know history very well; remember the Pentium Pro? It was the first RISC-like x86 architecture with instruction decoders. Emulating x86 efficiently would require a native architecture with as many direct one-to-one instruction mappings as possible, and every x86-compatible processor in use today already does exactly that. So what would be the point of designing an ISA to emulate an ISA it is already equal to? That is the whole conundrum of why x86 is still in use after all these years.
    Last edited by duby229; 05-21-2013 at 01:05 PM.

  4. #24
    Join Date
    Apr 2011
    Posts
    386

    Default

    Quote Originally Posted by gamerk2 View Post
    Not likely, though if it ever happened, every OS would die.

    Seriously, if we got to the point where emulating different processors/instruction sets became trivial, you would simply launch VMs on a per-app basis. The need for a heavy-duty OS goes away.


    Today's solution with QEMU is instruction set to instruction set down to low-level micro-ops, so High-VM: High->High->Low. Tomorrow, using LLVM or other technology with QEMU, it will be High-VM: High->Low. From today's 40-45% we can reach 80-90%, possibly also by using the CPU's decoders and virtualization operations. Example: http://news.softpedia.com/news/X86-o...s-296390.shtml
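    The two-stage pipeline described above (guest ISA lifted to an intermediate form, then lowered to the host ISA, which is roughly how QEMU's TCG works) can be sketched as a toy translator. All instruction names and the one-instruction mappings here are hypothetical simplifications, not QEMU's actual API or IR.

    ```python
    # Toy sketch of two-stage dynamic binary translation: guest ISA -> IR -> host ISA.
    # Mnemonics and mappings are invented for illustration only.

    def lift_to_ir(guest_insn):
        """Stage 1: lift a guest (x86-like) instruction into a simple IR."""
        op, *args = guest_insn.split()
        operands = [a.rstrip(",") for a in args]
        if op == "ADD":                      # e.g. "ADD eax, ebx"
            dst, src = operands
            return [("ir_add", dst, dst, src)]
        if op == "MOV":                      # e.g. "MOV eax, 5"
            dst, src = operands
            return [("ir_mov", dst, src)]
        raise NotImplementedError(op)

    def lower_to_host(ir_op):
        """Stage 2: lower one IR op into host (ARM-like) instructions."""
        if ir_op[0] == "ir_add":
            _, dst, a, b = ir_op
            return [f"add {dst}, {a}, {b}"]  # three-operand add, RISC style
        if ir_op[0] == "ir_mov":
            _, dst, src = ir_op
            return [f"mov {dst}, {src}"]
        raise NotImplementedError(ir_op[0])

    def translate_block(guest_block):
        """Translate a basic block; one guest insn may become several host insns."""
        host = []
        for insn in guest_block:
            for ir_op in lift_to_ir(insn):
                host.extend(lower_to_host(ir_op))
        return host

    print(translate_block(["MOV eax, 5", "ADD eax, ebx"]))
    # → ['mov eax, 5', 'add eax, eax, ebx']
    ```

    The point of the intermediate step is that adding a new guest or host ISA only requires one new lifter or lowerer, not a new translator for every guest/host pair.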
    Last edited by artivision; 05-21-2013 at 01:11 PM.

  5. #25
    Join Date
    Nov 2007
    Posts
    1,353

    Default

    80% performance would require that ARM have something like 90% of x86 instructions available as direct one-to-one mappings. ARM doesn't have that. 40% seems far more likely.

  6. #26
    Join Date
    Apr 2011
    Posts
    386

    Default

    Quote Originally Posted by duby229 View Post
    80% performance would require that ARM have something like 90% of x86 instructions available as direct one-to-one mappings. ARM doesn't have that. 40% seems far more likely.


    It doesn't need that. Recompilation means reconstructing an instruction of one CPU into one or more instructions of another. A big x86 instruction becomes one ARM instruction, or two or three smaller, different ones; they don't mirror the x86 encoding, it's as if you had the program's source. The output code is usually efficient; the problem is the CPU-heavy recompilation itself. You can see this with Wine when a game like Rift or Tera needs heavy HLSL compilation: the CPU bottlenecks and can't keep the GPU fed. The final efficiency is the recompilation overhead combined with the output-code efficiency. So the targeted 80% is a combination of both; the output code alone is 70-80% efficient today, meaning a static translation would run at 70-80% today. If your recompiler costs only 10-20% of your CPU and the code it produces is as good as if you had the game's source, then you end up at 80+% efficiency. In the end, experts who know more than you say this is possible for 2014.
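    One reasonable reading of the claim above is a multiplicative model: overall efficiency is roughly what fraction of the CPU is left after translation overhead, times the quality of the generated code relative to a native build. That interpretation, and the specific percentages, are the post's own estimates, not measurements.

    ```python
    # Back-of-the-envelope model of dynamic-recompilation efficiency:
    #   overall ~= (1 - translator CPU overhead) * output-code quality.
    # The multiplicative combination is an assumed interpretation of the post.

    def overall_efficiency(translator_overhead, code_quality):
        """Fraction of native performance, both inputs in [0, 1]."""
        return (1.0 - translator_overhead) * code_quality

    # Post's optimistic case: 10% overhead, output as good as a native build.
    print(f"{overall_efficiency(0.10, 1.00):.0%}")   # → 90%

    # More conservative: 20% overhead, 70% output-code quality.
    print(f"{overall_efficiency(0.20, 0.70):.0%}")   # → 56%
    ```

    The model makes the disagreement in the thread concrete: to hit 80-90% overall, both the translator overhead must be small and the emitted code must be near-native, and a shortfall in either factor drags the product down quickly.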
    Last edited by artivision; 05-21-2013 at 03:58 PM.

  7. #27
    Join Date
    Nov 2007
    Posts
    1,353

    Default

    We'll see, but I don't believe it. Transmeta already tried and failed with Crusoe. Intel tried and failed with IA-64. Billions of dollars have been spent; the failures have already been made. I just don't see it happening.
    Last edited by duby229; 05-21-2013 at 06:02 PM.

  8. #28
    Join Date
    Apr 2011
    Posts
    386

    Default

    Quote Originally Posted by duby229 View Post
    We'll see, but I don't believe it. Transmeta already tried and failed with Crusoe. Intel tried and failed with IA-64. Billions of dollars have been spent; the failures have already been made. I just don't see it happening.


    I prefer a Loongson 3C.

  9. #29
    Join Date
    Nov 2007
    Posts
    1,353

    Default

    Quote Originally Posted by artivision View Post
    I prefer a Loongson 3C.
    Sorry but MIPS has even less chance of displacing x86 than ARM.
