Intel's OpenCL Beignet Project Is Gaining Ground


  • #16
    Originally posted by Alejandro Nova View Post
    I'm uninformed here.

    Is Intel implementing their own API, like NVIDIA CUDA? Or is this a free extension made to leverage OpenCL APIs but without Gallium?
    It's OpenCL support without using the existing code already present in Gallium.

    The thing is, that support is very flexible. Nouveau is sending it through TGSI to take advantage of their current driver support. AMD is sending it through an entirely different LLVM pipeline. There's no reason Intel couldn't just copy and paste their Beignet code into the same framework, getting the benefit of some shared code in the front while keeping their same custom backend code.
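
    (An illustrative aside, not part of the original post: whichever route an implementation takes underneath, Clover going through TGSI or an LLVM pipeline, or Beignet's own backend, applications only ever see the standard OpenCL entry points. A minimal host-side sketch in C, assuming the OpenCL headers and an ICD loader are installed:)

    Code:
    /* List every OpenCL platform and device visible on the system. The code
     * neither knows nor cares whether Clover, Beignet, or a proprietary
     * implementation is answering. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;

        if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS ||
            num_platforms == 0) {
            fprintf(stderr, "No OpenCL platforms found\n");
            return 1;
        }

        for (cl_uint i = 0; i < num_platforms; i++) {
            char name[256];
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);
            printf("Platform %u: %s\n", i, name);

            cl_device_id devices[8];
            cl_uint num_devices = 0;
            if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL,
                               8, devices, &num_devices) != CL_SUCCESS)
                continue;
            for (cl_uint j = 0; j < num_devices; j++) {
                char dev[256];
                clGetDeviceInfo(devices[j], CL_DEVICE_NAME,
                                sizeof(dev), dev, NULL);
                printf("  Device %u: %s\n", j, dev);
            }
        }
        return 0;
    }

    (Built with something like "gcc list_cl.c -lOpenCL"; list_cl.c is just a hypothetical file name. It prints whatever implementations are registered, whether that's Beignet, Clover, or both.)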

    Except, they are apparently allergic to anything named Gallium that might actually work to benefit their competitors. So we get a completely separate, out-of-Mesa implementation that shares no code at all. As with Mir, the original announcement came with a bunch of technical reasons behind the decision, which were all quickly debunked within a couple of hours. Since then, there's been no further discussion at all about why Beignet even exists, other than, well, they'd already started it.


    Yes, I agree Intel has every right to do whatever they want to do with their own time and money. Just as I'm free to call them out on what I think is stupid.
    Last edited by smitty3268; 08-18-2013, 05:21 PM.

    Comment


    • #17
      Originally posted by smitty3268 View Post
      Just in terms of the sheer "we don't like upstream so we're going to reimplement everything from scratch" NIH syndromeness.

      At least this project will still have the same OpenCL API that apps can all share. I'm half surprised they aren't making their own incompatible API that will "better fit Intel hardware".
      Have you ever been involved with a project being outsourced to India/China? If so, you would understand why the India/China team does not want to work with anybody outside of their team. Outsourcing firms are not "community" developers.

      The best programmers in India/China have the absolute worst English I've ever heard and they can't say much more than Hello and Bye in video conferences. They can't read anything more complicated than Dr. Seuss and so they need to have a translator/spokesman for communication.

      That's why they have a go-between to do all the communications. Expecting an outsourced India/China team to work together with an Open Source community like Mesa/Gallium that speaks English at such a high level? Not going to happen on the money they're getting paid to do the project. The spokesman/translator is often a linguist who understands absolutely nothing of source code and can become a big problem for communicating anything more than very specific requirements and specifications.

      If Intel implemented OpenCL using Gallium, they would have to work closely with other Gallium devs.

      Getting an outsourced team to work together with Gallium devs would be a lot more expensive than letting the outsourced team work on their own from very specific requirements and specifications and then just merging it in without using Gallium at all.

      Of course Intel is going to keep the work that absolutely needs to be coordinated with the open source community in-house instead of outsourcing it... And anything that can be developed separately, they'll outsource to India/China for a tiny fraction of the development cost. It's the wise thing to do from a business perspective, and it's exactly what they're doing.
      Last edited by Sidicas; 08-18-2013, 10:05 PM.

      Comment


      • #18
        Originally posted by pingufunkybeat View Post
        This is unfortunate, but probably inevitable when you employ more people to work on something than all of your competitors combined.

        Oh well, go clover!
        That's not inevitable at all. If you employ more people, you can actually put them to work on features not implemented by anyone else, instead of duplicating the same work. Just saying.

        Comment


        • #19
          Originally posted by uid313 View Post
          As I understand it AMD and Nouveau are implementing OpenCL support using the Gallium3D state tracker.

          Intel, however, instead of using the Gallium3D state tracker and sharing code with AMD and Nouveau so we can have a common OpenCL implementation shared by all drivers, is making their own OpenCL implementation outside of Gallium3D that will only be used by themselves.
          They can't implement their OpenCL within Gallium since they have a DRI driver, not a Gallium one. They'd need to port their driver(s) to Gallium first (which they've said countless times they won't) to use the Clover state tracker.
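
          (An illustrative aside, not part of the original post: the reason the DRI-vs-Gallium distinction matters is that Clover is a state tracker, i.e. it implements the OpenCL API purely in terms of Gallium's driver-side hooks and never touches hardware directly. A classic DRI driver exposes no such hooks, so Clover has nothing to sit on. A loose sketch in C, with all names made up for illustration rather than taken from the real Mesa headers:)

          Code:
          /* Hypothetical, simplified sketch of the Gallium layering. The real
           * interface lives in Mesa's pipe headers and is much larger. */

          struct pipe_context;   /* per-context handle owned by the hardware driver */

          /* A Gallium hardware driver (r600g, nouveau, the experimental ilo, ...)
           * fills in a table of hooks roughly like this one. */
          struct pipe_hooks {
              void *(*create_compute_state)(struct pipe_context *ctx,
                                            const void *ir /* e.g. TGSI */);
              void  (*launch_grid)(struct pipe_context *ctx,
                                   const unsigned block[3],
                                   const unsigned grid[3]);
          };

          /* A state tracker such as Clover implements the OpenCL API once,
           * purely in terms of those hooks, so the same front-end code runs on
           * every driver that provides them. A classic (non-Gallium) driver
           * like Intel's current one provides none of them. */
          void enqueue_kernel(struct pipe_context *ctx,
                              const struct pipe_hooks *hw,
                              const void *compiled_kernel_ir,
                              const unsigned block[3],
                              const unsigned grid[3])
          {
              void *cs = hw->create_compute_state(ctx, compiled_kernel_ir);
              (void)cs;   /* state binding and teardown omitted in this sketch */
              hw->launch_grid(ctx, block, grid);
          }

          (Again, the names above are invented for the sketch; the point is only the direction of the dependency: without a Gallium pipe driver underneath, Clover has nothing to call.)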

          Comment


          • #20
            Originally posted by Krejzi View Post
            They can't implement their OpenCL within Gallium since they have a DRI driver, not a Gallium one. They'd need to port their driver(s) to Gallium first (which they've said countless times they won't) to use the Clover state tracker.
            I see.
            Why is it that Intel makes their own DRI driver and refuses to use Gallium3D?

            Comment


            • #21
              Originally posted by uid313 View Post
              I see.
              Why is it that Intel makes their own DRI driver and refuses to use Gallium3D?
              They don't agree that it's the best way to write their drivers. If you look for posts by Kayden on here, he's gone into detail about it.

              Comment


              • #22
                Originally posted by uid313 View Post
                I see.
                Why is it that Intel makes their own DRI driver and refuses to use Gallium3D?
                In the past, I believe the reasons have been along the lines of:
                1. They've already put a lot of work into the DRI driver, and don't want to stall development for a year or so while they port everything to Gallium, and then also have to deal with all the bug reports of anything that gets mis-ported.
                2. They don't believe that Gallium would give them superior performance to the back-end that they already have been working on.
                3. Most of the Intel developers weren't familiar with the Gallium APIs and TGSI, and it would take training time to get up to speed... time during which they wouldn't be improving the DRI driver.

                So... that being said, if Chia-I Wu can get the 'ilo' driver up to the point where it is competitive with the Intel DRI driver, maybe we'll see a re-evaluation of policy. I believe that it re-uses much of the existing Intel back-end code, so it would make it easier for Intel to transition over to the new Gallium model, since the code for their back-end would still be familiar, and most of the TGSI abstraction/conversion code would already be done.

                Comment


                • #23
                  Originally posted by uid313 View Post
                  I see.
                  Why is it that Intel makes their own DRI driver and refuses to use Gallium3D?
                  Because they had a working driver long before Gallium3D was ready.

                  AMD made the switch after somebody else did the original port of r300 and r600 to the Gallium architecture. Once these ports started outperforming the original drivers, AMD made them the default drivers and started working on them.

                  Intel had a range of drivers working on classic Mesa when Gallium3D was still in its infancy. There was a proof-of-concept driver for one chipset which was kind-of-OK, but Intel never saw the need to switch to Gallium3D. They have a large Linux team and a codebase they are familiar with. Switching to Gallium would mean lots of short-term headaches, and they don't see any significant pay-off in the long term. At least that's my understanding.

                  Nouveau was Gallium3D from the beginning.

                  EDIT: Veerappan was faster.

                  Comment


                  • #24
                    Originally posted by pingufunkybeat View Post
                    Because they had a working driver long before Gallium3D was ready.

                    AMD made the switch after somebody else did the original port of r300 and r600 to the Gallium architecture. Once these ports started outperforming the original drivers, AMD made them the default drivers and started working on them.

                    Intel had a range of drivers working on classic Mesa when Gallium3D was still in its infancy. There was a proof-of-concept driver for one chipset which was kind-of-OK, but Intel never saw the need to switch to Gallium3D. They have a large Linux team and a codebase they are familiar with. Switching to Gallium would mean lots of short-term headaches, and they don't see any significant pay-off in the long term. At least that's my understanding.

                    Nouveau was Gallium3D from the beginning.

                    EDIT: Veerappan was faster.
                    There are benefits to going to Gallium that make those headaches worthwhile. I could pop off at least a half dozen right now just off the top of my head. But Intel isn't going to change their mind, so there really isn't any point in trying.

                    Comment


                    • #25
                      Originally posted by Veerappan View Post
                      They don't believe that Gallium would give them superior performance to the back-end that they already have been working on.
                      That was never the point of Gallium, just a possible side-effect. The point of using Gallium is to speed up development. Better performance *might* happen because of this (you spend less time reinventing the wheel, and more time optimizing your code), while the theoretical maximum probably comes from driver-specific code; it would just take forever to write.

                      Comment


                      • #26
                        Originally posted by duby229 View Post
                        There are benefits to going to Gallium that make those headaches worthwhile. I could pop off at least a half dozen right now just off the top of my head. But Intel isn't going to change their mind, so there really isn't any point in trying.
                        I thought we were discussing this to learn more, not to convince anyone to change their mind. I'm interested in knowing about those benefits.

                        Comment


                        • #27
                          Originally posted by archibald View Post
                          They don't agree that it's the best way to write their drivers. If you look for posts by Kayden on here, he's gone into detail about it.
                          Then maybe he should have proposed how to fix Gallium3D, or proposed something better than Gallium3D.

                          I think a unified graphics architecture is a good idea.
                          Windows has the Windows Display Driver Model (WDDM).

                          Comment


                          • #28
                            Originally posted by mrugiero View Post
                            That was never the point of Gallium, just a possible side-effect. The point of using Gallium is to speed up development. Better performance *might* happen because of this (you spend less time reinventing the wheel, and more time optimizing your code), while the theoretical maximum probably comes from driver-specific code; it would just take forever to write.
                            The reason that Gallium speeds up development is that it allows a lot more code sharing. Intel could use the existing VDPAU state tracker instead of writing a VA-API state tracker and save time. But that is also exactly the reason they don't want to use Gallium.

                            EDIT: or, more pertinent to this thread, they could use Clover instead of writing Beignet.
                            Last edited by duby229; 08-19-2013, 05:22 PM.

                            Comment


                            • #29
                              Originally posted by duby229 View Post
                              The reason that Gallium speeds up development is that it allows a lot more code sharing. Intel could use the existing VDPAU state tracker instead of writing a VA-API state tracker and save time. But that is also exactly the reason they don't want to use Gallium.

                              EDIT: or, more pertinent to this thread, they could use Clover instead of writing Beignet.
                              I'm aware of how it speeds up development. My point is, it doesn't inherently lead to better performance, and that's what I was correcting in the quote. It usually leads to better performance because of the faster development, caused, as you said, by the shared code. I already stated such facts about the use of Gallium in a previous post, along with why I think they avoid it (since I'm not an Intel developer/executive, I can't do much more than speculate about it, but my guess is they don't want to benefit their competitors through shared code, even if that means more work for them).

                              EDIT: Anyway, I want to know the other reasons to use Gallium that you thought of.

                              Comment


                              • #30
                                Originally posted by mrugiero View Post
                                I'm aware of how it speeds up development. My point is, it doesn't inherently lead to better performance, and that's what I was correcting in the quote. It usually leads to better performance because of the faster development, caused, as you said, by the shared code. I already stated such facts about the use of Gallium in a previous post, along with why I think they avoid it (since I'm not an Intel developer/executive, I can't do much more than speculate about it, but my guess is they don't want to benefit their competitors through shared code, even if that means more work for them).

                                EDIT: Anyway, I want to know the other reasons to use Gallium that you thought of.
                                You can take those as two examples of code sharing that Intel has chosen not to participate in. Don't misunderstand me: Intel has every right to want their OSS drivers to work with their own OSS solutions. I'm fine with that. Plus they do contribute a lot of code to a lot of projects. Nobody can really fault Intel for their OSS commitment.

                                I do feel that there is an argument to be made for Intel to port their OSS driver to Gallium, given the potential it would have for improving the whole stack. But that is really selfish of me to want.

                                Comment
