Intel's OpenCL Beignet Project Is Gaining Ground

  • #21
    Originally posted by uid313 View Post
    I see.
    Why is it that Intel makes their own DRI driver and refuses to use Gallium3D?
    In the past, I believe the reasons have been along the lines of:
    1. They've already put a lot of work into the DRI driver, and don't want to stall development for a year or so while they port everything to Gallium, and then also have to deal with all the bug reports of anything that gets mis-ported.
    2. They don't believe that Gallium would give them superior performance to the back-end that they already have been working on.
    3. Most of the Intel developers weren't familiar with the Gallium APIs and TGSI, and it would take training time to get up to speed... time during which they wouldn't be improving the DRI driver.


    So... that being said, if Chia-I Wu can get the 'ilo' driver to the point where it is competitive with the Intel DRI driver, maybe we'll see a re-evaluation of that policy. I believe it re-uses much of the existing Intel back-end code, which would make it easier for Intel to transition to the Gallium model, since their back-end code would still be familiar and most of the TGSI abstraction/conversion code would already be done.



    • #22
      Originally posted by uid313 View Post
      I see.
      Why is it that Intel makes their own DRI driver and refuses to use Gallium3D?
      Because they had a working driver long before Gallium3D was ready.

      AMD made the switch after somebody else did the original port of r300 and r600 to the Gallium architecture. Once these ports started outperforming the original drivers, AMD made them the default drivers and started working on them.

      Intel had a range of drivers working on classic Mesa when Gallium3D was still in its infancy. There was a proof-of-concept driver for one chipset which was kind-of-OK, but Intel never saw the need to switch to Gallium3D. They have a large Linux team and a codebase they are familiar with. Switching to Gallium would mean lots of short-term headaches, and they don't see any significant pay-off in the long term. At least that's my understanding.

      Nouveau was Gallium3D from the beginning.

      EDIT: Veerappan was faster.



      • #23
        Originally posted by pingufunkybeat View Post
        Because they had a working driver long before Gallium3D was ready.

        AMD made the switch after somebody else did the original port of r300 and r600 to the Gallium architecture. Once these ports started outperforming the original drivers, AMD made them the default drivers and started working on them.

        Intel had a range of drivers working on classic Mesa when Gallium3D was still in its infancy. There was a proof-of-concept driver for one chipset which was kind-of-OK, but Intel never saw the need to switch to Gallium3D. They have a large Linux team and a codebase they are familiar with. Switching to Gallium would mean lots of short-term headaches, and they don't see any significant pay-off in the long term. At least that's my understanding.

        Nouveau was Gallium3D from the beginning.

        EDIT: Veerappan was faster.
        There are benefits to going to Gallium that make those headaches worthwhile. I could pop off at least half a dozen right now just off the top of my head. But Intel isn't going to change their mind, so there really isn't any point in trying.



        • #24
          Originally posted by Veerappan View Post
          They don't believe that Gallium would give them superior performance to the back-end that they already have been working on.
          That was never the point of Gallium, just a possible side-effect. The point of using Gallium is to speed up development. Better performance *might* happen as a result (you spend less time reinventing the wheel and more time optimizing your code). The theoretical maximum would probably come from code written specifically for each driver, but that would take forever.



          • #25
            Originally posted by duby229 View Post
            There are benefits to going to Gallium that make those headaches worthwhile. I could pop off at least half a dozen right now just off the top of my head. But Intel isn't going to change their mind, so there really isn't any point in trying.
            I thought we were discussing this to learn more, not to convince anyone to change their mind. I'm interested in hearing about those benefits.



            • #26
              Originally posted by archibald View Post
              They don't agree that it's the best way to write their drivers. If you look for posts by Kayden on here, he's gone into detail about it.
              Then maybe he should have proposed a way to fix Gallium3D, or proposed something better than Gallium3D.

              I think a unified graphics architecture is a good idea.
              Windows has the Windows Display Driver Model (WDDM).



              • #27
                Originally posted by mrugiero View Post
                That was never the point of Gallium, just a possible side-effect. The point of using Gallium is to speed up development. Better performance *might* happen as a result (you spend less time reinventing the wheel and more time optimizing your code). The theoretical maximum would probably come from code written specifically for each driver, but that would take forever.
                The reason that Gallium speeds up development is that it allows a lot more code sharing. Intel could use the existing VDPAU state tracker instead of writing a VA-API state tracker and save time. But that is also exactly the reason they don't want to use Gallium.

                EDIT: or, more pertinent to this thread, they could use Clover instead of writing Beignet.
                Last edited by duby229; 19 August 2013, 05:22 PM.
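
                To make the code-sharing argument above concrete, here is a deliberately simplified sketch in plain C. The struct and function names below are invented for illustration and are not the real Mesa/Gallium interfaces (the real driver interface revolves around struct pipe_screen and struct pipe_context); the sketch only shows why several API front-ends can share a single hardware back-end:

                ```c
                /* Hypothetical illustration only: these names are NOT the real
                 * Gallium interfaces. The point is that every API front-end
                 * ("state tracker") drives the same small hardware interface,
                 * so the hardware-specific code is written once and shared. */
                #include <stdio.h>

                /* Stand-in for the per-hardware driver interface. */
                struct fake_pipe_driver {
                    void (*submit)(const char *api, const char *work);
                };

                /* One imaginary hardware back-end implementing that interface. */
                static void example_submit(const char *api, const char *work)
                {
                    printf("[example-hw] %s submitted: %s\n", api, work);
                }

                static const struct fake_pipe_driver example_hw = { example_submit };

                /* Each API front-end translates its own calls onto the shared interface. */
                static void opengl_draw(const struct fake_pipe_driver *d)
                {
                    d->submit("OpenGL", "draw call");
                }

                static void vdpau_decode(const struct fake_pipe_driver *d)
                {
                    d->submit("VDPAU", "video decode frame");
                }

                static void clover_run_kernel(const struct fake_pipe_driver *d)
                {
                    d->submit("OpenCL/Clover", "compute kernel");
                }

                int main(void)
                {
                    /* Three APIs, one hardware back-end; adding a VA-API front-end
                     * would not require any new hardware-specific code. */
                    opengl_draw(&example_hw);
                    vdpau_decode(&example_hw);
                    clover_run_kernel(&example_hw);
                    return 0;
                }
                ```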



                • #28
                  Originally posted by duby229 View Post
                  The reason that Gallium speeds up development is that it allows a lot more code sharing. Intel could use the existing VDPAU state tracker instead of writing a VA-API state tracker and save time. But that is also exactly the reason they don't want to use Gallium.

                  EDIT: or, more pertinent to this thread, they could use Clover instead of writing Beignet.
                  I'm aware of how it speeds up development. My point is that it doesn't inherently lead to better performance, and that's what I was correcting in the quote. It usually leads to better performance because of the faster development which, as you said, comes from the shared code. I already stated in a previous comment such facts about the use of Gallium and why I think they avoid it (since I'm not an Intel developer/executive, I can't do much more than speculate, but my guess is they don't want to benefit their competitors through shared code, even if that means more work for them).

                  EDIT: Anyway, I want to know the other reasons to use Gallium that you thought of.



                  • #29
                    Originally posted by mrugiero View Post
                    I'm aware of how it speeds up development. My point is that it doesn't inherently lead to better performance, and that's what I was correcting in the quote. It usually leads to better performance because of the faster development which, as you said, comes from the shared code. I already stated in a previous comment such facts about the use of Gallium and why I think they avoid it (since I'm not an Intel developer/executive, I can't do much more than speculate, but my guess is they don't want to benefit their competitors through shared code, even if that means more work for them).

                    EDIT: Anyway, I want to know the other reasons to use Gallium that you thought of.
                    You can take those as two examples of code sharing that Intel has chosen not to participate in. Don't misunderstand me: Intel has every right to want their OSS drivers to work with their OSS solutions, and I'm fine with that. Plus they do contribute a lot of code to a lot of projects. Nobody can really fault Intel for their OSS commitment.

                    I do feel that there is an argument to be made for Intel to port their OSS driver to Gallium, given the potential it would have to improve the whole stack. But that is really selfish of me to want.



                    • #30
                      Originally posted by mrugiero View Post
                      EDIT: Anyway, I want to know the other reasons to use Gallium that you thought of.
                      There are various cool tech things built on top of Gallium that all Gallium drivers can take advantage of, but the Intel drivers cannot.

                      For example, there's the on-screen HUD that Marek wrote a while back to display stats on the screen. Or there's the Direct3D 9 backend. It's unlikely Intel will ever create something like that for their own driver for legal reasons, but they could have taken advantage of the free community work. Instead, they'll be stuck with the same D3D -> OGL Wine translation that the binary drivers need.

                      Further, if the Intel drivers were merged into Gallium, a fair amount of cleanup could be done to the rest of the Mesa codebase, given that Intel's is essentially the only classic driver left, or at least the only modern one that is running shaders.
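
                      As a concrete aside on the HUD mentioned above: it is driven purely by the GALLIUM_HUD environment variable, so every Gallium driver gets it for free. Below is a minimal sketch of launching an application with the HUD enabled, assuming glxgears as an arbitrary stand-in test program; setting the variable in the shell (GALLIUM_HUD=fps,cpu glxgears) works just as well:

                      ```c
                      /* Minimal sketch: run an OpenGL program with the Gallium HUD
                       * overlay enabled. GALLIUM_HUD is read by Gallium drivers;
                       * "fps,cpu" asks for frame-rate and CPU-usage graphs. */
                      #include <stdlib.h>
                      #include <unistd.h>

                      int main(void)
                      {
                          setenv("GALLIUM_HUD", "fps,cpu", 1);          /* overwrite if already set */
                          execlp("glxgears", "glxgears", (char *)NULL); /* replaces this process */
                          return 1;                                     /* reached only if exec failed */
                      }
                      ```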
