Modern Intel Gallium3D Driver Still Being Toyed With


  • #16
    Originally posted by deanjo View Post
    I guess it comes down to what switching to Gallium would actually do for them. There are advantages to maintaining your own code base as well, rather than relying on more factions to support your product. Switching just for the sake of switching, with no real clear advantage, doesn't seem like a smart or cost-effective move.


    Gallium means LLVM can be used for many different functions: graphics, OpenCL, and C acceleration on the GPU. I would also propose that Intel drop their own compiler suite and support C11 through LLVM on both Linux and Windows.

    Comment


    • #17
      Originally posted by artivision View Post
      Gallium means LLVM can be used for many different functions: graphics, OpenCL, and C acceleration on the GPU. I would also propose that Intel drop their own compiler suite and support C11 through LLVM on both Linux and Windows.
      Using LLVM hasn't yielded any real improvements in performance, and OpenCL support on an IGP is a bit of a joke; no one is going to do any serious OpenCL work on an IGP. None of the listed reasons are compelling enough for them to change. It doesn't make them any more money, the advantages are very small, and they don't meaningfully improve performance. That is a lot of effort for next to no return on investment.

      Pretty much any entry-level video card can pummel the IGPs in compute performance, and nobody is going to use them for any serious GPGPU purposes.



      Please note that the Intel IGPs don't even have any double-precision (DP) capabilities. The DP results there are actually from the CPU doing the crunching on the Ivy Bridge processors.
      Last edited by deanjo; 05-19-2013, 02:44 PM.
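
      (One way to check this yourself is to query the device's OpenCL extension string and look for cl_khr_fp64. The sketch below is mine, not from the thread; it assumes an OpenCL 1.x runtime and headers are installed and simply lists GPU devices with their DP support.)

      /* fp64check.c - list OpenCL GPU devices and whether they expose cl_khr_fp64.
       * Build (assuming an OpenCL SDK is installed): cc fp64check.c -lOpenCL -o fp64check */
      #include <stdio.h>
      #include <string.h>
      #include <CL/cl.h>

      int main(void)
      {
          cl_platform_id platforms[8];
          cl_uint nplat = 0;
          clGetPlatformIDs(8, platforms, &nplat);

          for (cl_uint p = 0; p < nplat; ++p) {
              cl_device_id devs[8];
              cl_uint ndev = 0;
              if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8, devs, &ndev) != CL_SUCCESS)
                  continue;

              for (cl_uint d = 0; d < ndev; ++d) {
                  char name[256] = "", ext[8192] = "";
                  clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
                  clGetDeviceInfo(devs[d], CL_DEVICE_EXTENSIONS, sizeof(ext), ext, NULL);
                  /* cl_khr_fp64 in the extension string means double precision is exposed */
                  printf("%s: fp64 %s\n", name,
                         strstr(ext, "cl_khr_fp64") ? "supported" : "not supported");
              }
          }
          return 0;
      }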

      Comment


      • #18
        Originally posted by deanjo View Post
        Using LLVM hasn't yielded any real improvements in performance, and OpenCL support on an IGP is a bit of a joke; no one is going to do any serious OpenCL work on an IGP. None of the listed reasons are compelling enough for them to change. It doesn't make them any more money, the advantages are very small, and they don't meaningfully improve performance. That is a lot of effort for next to no return on investment.

        Pretty much any entry-level video card can pummel the IGPs in compute performance, and nobody is going to use them for any serious GPGPU purposes.



        Please note that the Intel IGPs don't even have any double-precision (DP) capabilities. The DP results there are actually from the CPU doing the crunching on the Ivy Bridge processors.


        Sorry, but you are misinformed. First, DP can be done in software. Second, Intel HD Graphics are powerful: the HD 4000 manages roughly 170 GFLOPS of 64-bit FMA, 340 GFLOPS of 32-bit FMA, and around 510 GFLOPS of 32-bit streaming throughput. GT3 is almost three times faster, and OpenCL matters for OpenGL compute shaders. Third, LLVM will get faster once the optimizations for graphics languages are ready; today we bolt extra passes onto some back-ends. Also, someone needs to give us a complete back-end, not one that hides functionality like AMD's. Of course they don't want us to have good performance in both games and professional 3D applications, but still.
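
        (Those figures are roughly consistent with a back-of-the-envelope peak calculation. The sketch below is mine, not from the thread, and assumes commonly cited HD 4000 parameters: 16 EUs, two 4-wide FP32 FMA units per EU, and a 1.30 GHz maximum clock; the 64-bit figure assumes a half-rate scheme, which on this hardware would have to come from software as noted above.)

        /* peak_gflops.c - back-of-the-envelope peak throughput for an Ivy Bridge GT2 iGPU.
         * Assumed figures (not from the thread): 16 EUs, 2 x SIMD4 FP32 FMA units per EU,
         * 1.30 GHz maximum clock. An FMA counts as two floating-point operations. */
        #include <stdio.h>

        int main(void)
        {
            const double eus        = 16.0;   /* execution units              */
            const double lanes      = 8.0;    /* 2 x SIMD4 FP32 lanes per EU  */
            const double fma_factor = 2.0;    /* multiply + add per lane      */
            const double clock_ghz  = 1.30;   /* max dynamic frequency        */

            double fp32 = eus * lanes * fma_factor * clock_ghz;  /* ~333 GFLOPS FP32      */
            double fp64 = fp32 / 2.0;                            /* ~166 GFLOPS half rate */

            printf("peak FP32: %.0f GFLOPS\n", fp32);
            printf("peak FP64 (half rate): %.0f GFLOPS\n", fp64);
            return 0;
        }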

        Comment


        • #19
          Originally posted by deanjo View Post
          Switching just for the sake of switching, with no real clear advantage, doesn't seem like a smart or cost-effective move.
          Choosing Gallium for new, yet-unwritten drivers isn't switching.

          Comment


          • #20
            Originally posted by Awesomeness View Post
            Choosing Gallium for new, yet-unwritten drivers isn't switching.
            For a completely new architecture, yes, I agree. For new generations of Intel hardware, though, it's a lot less effort to add incremental support to our existing driver than to write a whole new driver every year.
            Free Software Developer .:. Mesa and Xorg
            Opinions expressed in these forum posts are my own.

            Comment


            • #21
              Originally posted by Kayden View Post
              For a completely new architecture, yes, I agree. For new generations of Intel hardware, though, it's a lot less effort to add incremental support to our existing driver than to write a whole new driver every year.


              With Gallium you only need a small amount of glue code and an LLVM back-end for new hardware. Oh, and your compiler suite for x86 is ridiculous and past its time. Statically linked compilers should never have existed in the first place, and your x86 monopoly tactic will fail. I believe that within one or two years emulation techniques will let us emulate other processors at 80-90% efficiency. Remember us then. There is only a small chance for you to be forgiven, let's say for your contribution to Mesa.

              Comment


              • #22
                Originally posted by artivision View Post
                I believe that within one or two years emulation techniques will let us emulate other processors at 80-90% efficiency.
                Not likely, though if it ever happened, all OSes would die.

                Seriously, if we got to the point where emulating different processors/instruction sets became trivial, you would simply launch VMs on a per-app basis. The need for a heavy-duty OS goes away.

                Comment


                • #23
                  Welcome to my hobby. You guys don't seem to know history very well; remember the Pentium Pro. It was the first RISC-like x86 architecture with instruction decoders. Emulating x86 would require a native architecture with as many direct one-to-one instruction relationships as possible, and every x86-compatible processor in use today already does exactly that. What would be the point, then, of designing an ISA that emulates an ISA exactly equal to itself? That is the whole conundrum of why x86 is still used today, even after all these years.
                  Last edited by duby229; 05-21-2013, 12:05 PM.

                  Comment


                  • #24
                    Originally posted by gamerk2 View Post
                    Not likely, though if it ever happened, all OSes would die.

                    Seriously, if we got to the point where emulating different processors/instruction sets became trivial, you would simply launch VMs on a per-app basis. The need for a heavy-duty OS goes away.


                    Today's solution with QEMU translates guest instructions to host instructions and then down to micro-ops, so the path is roughly high-level -> VM high-level -> high-level -> low-level. Tomorrow, using LLVM or similar technology with QEMU, it will be high-level -> VM low-level -> low-level. From today's 40-45% efficiency we can reach 80-90%, possibly also by using the CPU's decoders and virtualization features. Example: http://news.softpedia.com/news/X86-o...s-296390.shtml
                    Last edited by artivision; 05-21-2013, 12:11 PM.
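
                    (To make the level distinction concrete, here is a toy sketch of the two-stage idea, written by me and not taken from QEMU: a fake guest instruction is first lowered to a tiny micro-op list, and only then turned into host "code". All names and opcodes are made up; real binary translators such as QEMU's TCG are far more involved.)

                    /* toy_translate.c - illustration only: guest instruction -> micro-ops -> host code.
                     * Names and opcodes are invented; this is not the QEMU API. */
                    #include <stdio.h>

                    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;
                    typedef struct { uop_kind kind; int dst, src; } uop;

                    /* Stage 1: lower one "big" guest instruction (add [mem], reg) into micro-ops. */
                    static int lower_guest_add_mem_reg(int mem, int reg, uop *out)
                    {
                        out[0] = (uop){ UOP_LOAD,  0,   mem };  /* tmp0 = [mem] */
                        out[1] = (uop){ UOP_ADD,   0,   reg };  /* tmp0 += reg  */
                        out[2] = (uop){ UOP_STORE, mem, 0   };  /* [mem] = tmp0 */
                        return 3;
                    }

                    /* Stage 2: emit a host instruction per micro-op (here just printed). */
                    static void emit_host(const uop *ops, int n)
                    {
                        static const char *mnem[] = { "ldr", "add", "str" };
                        for (int i = 0; i < n; ++i)
                            printf("%s  r%d, r%d\n", mnem[ops[i].kind], ops[i].dst, ops[i].src);
                    }

                    int main(void)
                    {
                        uop ops[3];
                        int n = lower_guest_add_mem_reg(5, 2, ops);
                        emit_host(ops, n);  /* one guest instruction became three host instructions */
                        return 0;
                    }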

                    Comment


                    • #25
                      80% performance would require that ARM have something like 90% of x86 instructions available as a direct one-to-one mapping. ARM doesn't have that. 40% seems far more likely.

                      Comment


                      • #26
                        Originally posted by duby229 View Post
                        80% performance would require that ARM have something like 90% of x86 instructions available as a direct one-to-one mapping. ARM doesn't have that. 40% seems far more likely.


                        It doesn't need that. Recompilation means reconstructing an instruction of one CPU as one or more instructions of another. A big x86 instruction becomes one ARM instruction, or two or three smaller but different ones; they don't mirror the x86 encoding at all, it's as if you had the program's source. The output is usually efficient; the problem is the CPU-heavy recompilation itself. You can see that with Wine when you run a game like Rift or Tera that needs heavy HLSL compilation: your CPU becomes the bottleneck and cannot keep the GPU fed. The final efficiency is the recompilation overhead combined with the quality of the output code. So the targeted 80% is a combination of both: the output code alone is 70-80% efficient today, meaning a purely static translation would already reach 70-80%. So if your recompiler costs only 10-20% of your CPU and the code it produces is as good as if you had the game's source, you end up above 80% efficiency. In the end, experts more qualified than you say this is possible by 2014.
                        Last edited by artivision; 05-21-2013, 02:58 PM.
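
                        (The arithmetic behind that claim, written out under the post's own assumptions: treat the translated code's quality and the translator's CPU overhead as two independent factors and multiply them. This small sketch is mine, not from the thread, and the numbers are the post's assumptions, not measurements.)

                        /* jit_efficiency.c - combine the two factors from the post above. */
                        #include <stdio.h>

                        int main(void)
                        {
                            double code_quality = 0.95;  /* translated code nearly as good as native      */
                            double jit_overhead = 0.15;  /* recompiler eats 10-20% of the CPU, take 15%   */

                            double effective = code_quality * (1.0 - jit_overhead);
                            printf("effective efficiency: %.0f%%\n", effective * 100.0);  /* about 81% */
                            return 0;
                        }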

                        Comment


                        • #27
                          We'll see, but I don't believe it. Transmeta already tried and failed with Crusoe. Intel tried and failed with IA-64. Billions of dollars have already been spent, and the failures have already been made. I just don't see it happening.
                          Last edited by duby229; 05-21-2013, 05:02 PM.

                          Comment


                          • #28
                            Originally posted by duby229 View Post
                            We'll see, but I don't believe it. Transmeta already tried and failed with Crusoe. Intel tried and failed with IA-64. Billions of dollars have already been spent, and the failures have already been made. I just don't see it happening.


                            I prefer a Loongson 3C.

                            Comment


                            • #29
                              Originally posted by artivision View Post
                              I prefer a Loongson 3C.
                              Sorry, but MIPS has even less chance of displacing x86 than ARM does.

                              Comment
