
The Embedded Linux GPU Mess & How It Can Be Fixed


  • #11
    Originally posted by airlied View Post
    ...in the embedded space, Windows is to Linux, what Linux is to Windows in desktops, but they seem to be trying to enforce the same mindset.
    Well, you know how much everyone hates having to come up with a new mindset all by themselves.


    • #12
      Originally posted by Henri View Post
      In that context, it has always surprised me a bit that both Mesa itself and most (all?) of the drivers are X11 licensed rather than something like LGPL.
      Well... X11 isn't the same thing as the 3D stuff, to start with. Couple that with inertia on licensing (I know there was no impetus to change it with Utah-GLX when it was the thing, and I was one of the contributors to it...) and it kind of makes sense that it's still MIT/X11.



      • #13
        Originally posted by Henri View Post
        In that context, it has always surprised me a bit that both Mesa itself and most (all?) of the drivers are X11 licensed rather than something like LGPL.
        Because Linux isn't the only operating system to use them - the BSD users would probably be a little upset if somebody attempted to change the licence.



        • #14
          Originally posted by archibald View Post
          Because Linux isn't the only operating system to use them - the BSD users would probably be a little upset if somebody attempted to change the licence.
          Sure. (Although really, since X11 and BSD licences allow you to do exactly that, it seems to me that if you're upset when people actually use that ability, you should have been using the GPL/LGPL or similar in the first place. As a Wine developer I may be a bit biased here, though.)

          However, my point is more that if AMD, Intel, or other hardware vendors are concerned about competitors reusing driver code, I would have expected r600 for example to be released under the LGPL. In the broader view you'd also expect the hardware vendors and distributions (RedHat, really) to push for (or at least be in favor of) LGPL Mesa / Gallium3D, but as far as I know that discussion has never come up, which suggests to me that people are mostly happy with the way things are. I could see how in the past Tungsten Graphics could have made the argument that they depend on the ability to create closed drivers based on Mesa, but I don't think the same argument would work so well for VMware.



          • #15
            Originally posted by bridgman View Post
            You would be surprised how little of the magic voodoo is hardware-specific. Quite a bit of the proprietary driver techniques would be portable to competitors' hardware, and even more so to a new competitor who independently designed hardware that was architecturally similar to yours.



            If all of the hardware vendors contribute fairly to the common development effort, that's true. The problem is that it also effectively funnels benefits from vendors who do contribute to vendors who do not (ie the ones who aren't spending much on R&D in the first place). I suspect all the vendors agree that allowing open source developers to program their hardware is a Good Thing - the challenge is making that possible without giving up too much of your hardware advantage. That said, it can be done (and IMO should be done); it's just not as easy as some of the claims suggest.



            Strictly speaking I don't think many hardware manufacturers would agree with that. If you can give your hardware a competitive advantage by investing in software development, it's not clear how removing that advantage and commoditizing the hardware benefits the hardware vendor.

            Again, I'm not saying that vendors shouldn't support open source driver development, just that many of the seemingly obvious arguments don't actually hold up when you look at them more closely. I still think opening up hardware is a Good Thing, but if you want to understand why you haven't seen the wholesale rush to open source driver support that those arguments predict it's good to try to see things from the hardware vendor's POV as well.

            Bottom line is that you need arguments which make sense to a hardware vendor. Making sense to everyone else is nice but doesn't get the hardware opened up. Dave's post seemed like a good step towards refining those arguments.
            I find this line of reasoning quite suspect. R&D on the software side has, in my observation, mostly taken this form: buy up software designers who already had the knowledge and/or interesting designs going before a penny was spent, and let them apply that to one company or another's pet product. This happened with the rise of virtual machines - Sun/Java, Microsoft/CLR, etc. - and GPUs would appear to be going the same way.

            I would be quite surprised if R&D funding for software design under the driver stack has seriously resulted in developments that a very technical person could not independently produce with sufficient expertise. I suspect that most of what is considered proprietary magic voodoo has really been implemented independently by most serious graphics vendors and that it's nothing more than a strawman to keep drivers closed. Sounds good to the layman and the media, and that's what counts?

            Especially as the hardware becomes increasingly generic, the technology issue is funneled back into the usual compilation and memory management strategies for which there is already plenty of publicly available literature on effective techniques.



            • #16
              Somedev, are you talking about open source drivers (where I agree) or proprietary drivers (where I disagree strongly) ?

              If you look at the size of the proprietary driver teams for major HW vendors you'll find at least one order of magnitude more developers than can be "bought up" trained and ready to go. Not sure I understand your point about "developments that a very technical person could not independently produce with sufficient expertise" -- what matters is what you have today, not what you *could* have in the future.

              Dave's point is that the dynamics may be different for embedded GPUs, which are typically below the lowest end discrete GPUs in terms of performance and features. That's an interesting point and worth looking at.


              • #17
                The other point Dave made was that the performance requirements (and hence driver complexity) for these small-footprint embedded systems seem to be lower than the norm for PCs, presumably because of the smaller screen size.

                One thing that surprised me was how many of these systems have been using pure software rendering, with hardware acceleration only showing up very recently.


                • #18
                  Originally posted by bridgman View Post
                  Somedev, are you talking about open source drivers (where I agree) or proprietary drivers (where I disagree strongly) ?

                  If you look at the size of the proprietary driver teams for major HW vendors you'll find at least one order of magnitude more developers than can be "bought up" trained and ready to go. Not sure I understand your point about "developments that a very technical person could not independently produce with sufficient expertise" -- what matters is what you have today, not what you *could* have in the future.

                  Dave's point is that the dynamics may be different for embedded GPUs, which are typically below the lowest end discrete GPUs in terms of performance and features. That's an interesting point and worth looking at.
                  I don't deny that by now the major graphics vendors have accumulated significant in-house teams with a mature base of experience. However, the basis on which that experience originally formed is still the shared programmer culture; those teams are no more than one or two decades removed from it, and significant publicly available research keeps advancing for new players to draw from, driving development in a rather singular direction: genericity and ever more extreme parallelism.

                  "Today" even seems like we're amidst a house of cards with CPUs and GPUs, so I'm not even sure today matters. CPU because the homogeneity of the x86 instruction set and its computation model is being attacked on both sides by the embedded sector with ARM and even incompatible x86 instruction sets, and also the ascendence of GPUs. GPUs because the CPUs are pushing for more parallelism simply because of the difficulties of making single-threaded performance any better.

                  While there is little risk of CPUs dethroning GPUs any time in the near future, I don't think the same is true in reverse: as GPUs become general-purpose accelerators, they become commodities for which proprietary software stacks are irrelevant, and the graphics-only driver model will fail somewhere along the line, because in trying to dethrone CPUs, GPUs will start accumulating CPU culture.

                  Originally posted by bridgman View Post
                  The other point Dave made was that the performance requirements (and hence driver complexity) for these small-footprint embedded systems seem to be lower than the norm for PCs, presumably because of the smaller screen size.

                  One thing that surprised me was how many of these systems have been using pure software rendering, with hardware acceleration only showing up very recently.
                  Lowered expectations? I mean, consumers don't believe serious games are possible on cell phones or netbooks, so they don't buy these devices on that basis, or fault them for not managing it. But as the capabilities of these devices ramp up, and better gaming and user-interface experiences become the norm on them, software rendering or hybrids like the older Intel GMA will simply become unacceptable because they can't keep up by any stretch. The increasing presence of NV, ATI, and PowerVR in this sector is surely going to upset the status quo at some point - with embedded GPUs becoming every bit as complex as desktop ones.



                  • #19
                    Originally posted by somedev View Post
                    However, the basis on which this experience originally formed is still the shared programmer culture, and those teams are not more than 1-2 decades removed from this,
                    I don't get this part at all. Graphics has changed so much in the last 10-20 years that driver techniques from that time are pretty much irrelevant.

                    Originally posted by somedev View Post
                    with significant publicly available research that keeps advancing to date for new players to draw from, and driving developments into a rather singular direction - genericity and more extreme amounts of parallelism.
                    Other than a few "look Mom, I can use SIMD instructions to make video go really fast" papers I haven't seen much at all in terms of publicly available research on graphics driver implementation. There's a larger body of work on GPU compute but again that has very little to do with driver implementation.


                    • #20
                      BTW I'm not saying that you couldn't find some techniques in modern drivers that were also in use 20 years ago (the basic concepts, like "don't let the hardware go idle if there's work to be done" and "use the hardware in the most efficient way", never go away), but those are pretty much motherhood statements, not specific techniques.

                      Let's take multithreading as an example. You can say "multithreading is not a proprietary technique" and you would be absolutely right in the generic sense, but multithreading a graphics driver is something you don't normally see outside of proprietary code. I'm not saying that the proprietary driver teams are changing our understanding of the universe, but they are doing a lot of work going from "possibility" and "concept" to shipping drivers, and there is significant proprietary value in that work.
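                      To make the multithreading example concrete, here's a minimal sketch (not taken from any real driver; the names FakeGpu and record_frames are made up for illustration) of the basic split: one thread records command buffers while a separate submission thread feeds them to the hardware, so the front end never stalls waiting on submission.

```python
# Illustrative producer/consumer split for a multithreaded driver:
# the "front end" records command buffers, a dedicated thread submits
# them to the (simulated) hardware queue.
import queue
import threading

class FakeGpu:
    """Stands in for the hardware queue; just records submissions."""
    def __init__(self):
        self.submitted = []

    def submit(self, cmdbuf):
        self.submitted.append(cmdbuf)

def submission_loop(work: queue.Queue, gpu: FakeGpu):
    # Drain command buffers until the sentinel arrives.
    while True:
        cmdbuf = work.get()
        if cmdbuf is None:
            break
        gpu.submit(cmdbuf)

def record_frames(work: queue.Queue, n_frames: int):
    # The front end: record commands and hand them off without blocking
    # on the hardware.
    for frame in range(n_frames):
        work.put(f"cmdbuf-{frame}")
    work.put(None)  # sentinel: no more work

gpu = FakeGpu()
work = queue.Queue()
t = threading.Thread(target=submission_loop, args=(work, gpu))
t.start()
record_frames(work, 3)
t.join()
print(gpu.submitted)  # ['cmdbuf-0', 'cmdbuf-1', 'cmdbuf-2']
```

                      The concept is trivial; the proprietary value bridgman describes is in making a real driver correct and fast under this kind of concurrency, which is far harder than the sketch suggests.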

                      Again, Dave's point was not an attack on the current PC environment as much as pointing out the probability that fewer of these considerations apply to the embedded "small device" market.