The Embedded Linux GPU Mess & How It Can Be Fixed


  • archibald
    replied
    Originally posted by monraaf View Post
    Yeah, those angry BSD folk with their pitchforks on a jihad against the GPL. But isn't BSD's biggest problem the shortage of developers working on the kernel side of the graphics stack (i.e. KMS/DRI2)?

    Maybe they should ask Steve Jobs to throw them some crumbs
    I'm not qualified to answer what BSD's biggest problem is (would anybody be?), but I certainly don't think that all the BSD folk have a jihad against the GPL. Personally, I think they are different tools, each good for different jobs, but I'm sure many people disagree, and I harbour no ill-will towards them for doing so.

    With that said, as a BSD user I should really be saying 'oh noes, teh non-freez GeePeeEll will eat ur codez!'

    Or, more seriously, I don't think that what would amount to a licence-motivated fork of X is in the best interests of either camp.



  • droidhacker
    replied
    The big important question still sitting on the table is this:
    the Snapdragon's graphics core -- the "Adreno 200" -- was designed by AMD and originally called the "Z430".

    HOW MUCH does it have in common with GPUs for which AMD has released programming specs, e.g. the R600? They certainly didn't reinvent the wheel, so it must be based on some common design. Do the open source kernel drivers that Qualcomm just released give away any noteworthy similarities? Might it be reasonable to ADAPT the R600 driver to the Adreno?
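
    One crude way to poke at that from the outside - purely a sketch, with made-up file names rather than real kernel-tree paths - is to pull the #define'd register names out of one driver header from each side and intersect them; a large overlap would hint at shared register-level ancestry:

    Code:
    /* Hypothetical sketch: print register macro names that two kernel
     * driver headers have in common, e.g. an Adreno header vs. an R600
     * one. The file names passed on the command line are placeholders,
     * not real paths. Build: cc -o regdiff regdiff.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_NAMES 8192
    #define NAME_LEN  128

    static int read_defines(const char *path, char names[][NAME_LEN])
    {
        FILE *f = fopen(path, "r");
        char line[512];
        int n = 0;

        if (!f) { perror(path); exit(1); }
        while (fgets(line, sizeof(line), f) && n < MAX_NAMES) {
            /* crude match: "#define NAME value" */
            if (sscanf(line, " #define %127s", names[n]) == 1)
                n++;
        }
        fclose(f);
        return n;
    }

    int main(int argc, char **argv)
    {
        static char a[MAX_NAMES][NAME_LEN], b[MAX_NAMES][NAME_LEN];
        int na, nb, i, j;

        if (argc != 3) {
            fprintf(stderr, "usage: %s <header1> <header2>\n", argv[0]);
            return 1;
        }
        na = read_defines(argv[1], a);
        nb = read_defines(argv[2], b);

        /* print every macro name the two headers share */
        for (i = 0; i < na; i++)
            for (j = 0; j < nb; j++)
                if (strcmp(a[i], b[j]) == 0)
                    printf("%s\n", a[i]);
        return 0;
    }

    Run it as "./regdiff adreno_regs.h r600_regs.h" (again, placeholder names). Shared names would only be a hint, of course, since registers can be renamed while keeping the same layout.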



  • monraaf
    replied
    Originally posted by archibald View Post
    Because Linux isn't the only operating system to use them - the BSD users would probably be a little upset if somebody attempted to change the licence.
    Yeah, those angry BSD folk with their pitchforks on a jihad against the GPL. But isn't BSD's biggest problem the shortage of developers working on the kernel side of the graphics stack (i.e. KMS/DRI2)?

    Maybe they should ask Steve Jobs to throw them some crumbs



  • bridgman
    replied
    Originally posted by somedev View Post
    but I think the extent to which a proprietary environment is needed to produce an experienced GPU driver developer is overstated, and this underappreciates the value of a developer's pre-existing competences
    I don't think anyone has stated that at all, actually, let alone overstated it. The open source development community produces some extraordinarily good developers... it just doesn't produce enough of them.

    The only things a HW vendor reasonably expects from proprietary development are (a) the potential for proprietary advantage, and (b) the ability to move more quickly by managing most of the proprietary IP as trade secrets rather than having to rely entirely on the patent process to protect that advantage.



  • somedev
    replied
    Originally posted by bridgman View Post
    Agree 100%; I don't think there's enough cross-domain synthesis going on today, but it really can help. That said, that's not the same as saying:

    R&D on the software side has, in my observation, followed merely this form: buy up software designers who already had the knowledge and/or interesting designs going for them before a penny was spent, and now they just get to apply that to one company or another's pet product

    ... unless you mean that only in the most generic sense. My interpretation of your statement was that you were talking about "buying up" experienced GPU driver developers, but maybe you just meant "buying up" experienced programmers with general knowledge, not necessarily developers who had worked on GPU drivers before?
    Sometimes the former, sometimes the latter. In the case of GPU development, more the latter than the former, but I think the extent to which a proprietary environment is needed to produce an experienced GPU driver developer is overstated, and this underappreciates the value of a developer's pre-existing competences. Due to the nature of patents and intellectual property in the hardware industry, the proprietary GPU vendors are currently the only environments really positioned to put the cherry on top of the programmer-experience sundae, but it need not be so.



  • bridgman
    replied
    Agree 100%; I don't think there's enough cross-domain synthesis going on today, but it really can help. That said, that's not the same as saying:

    R&D on the software side has, in my observation, followed merely this form: buy up software designers who already had the knowledge and/or interesting designs going for them before a penny was spent, and now they just get to apply that to one company or another's pet product
    ... unless you mean that only in the most generic sense. My interpretation of your statement was that you were talking about "buying up" experienced GPU driver developers, but maybe you just meant "buying up" experienced programmers with general knowledge, not necessarily developers who had worked on GPU drivers before?



  • somedev
    replied
    Originally posted by bridgman View Post
    I don't get this part at all. Graphics has changed so much in the last 10-20 years that driver techniques from that time are pretty much irrelevant.

    Originally posted by somedev View Post
    with significant publicly available research that keeps advancing to this day for new players to draw from, driving developments in a rather singular direction - genericity and more extreme parallelism.

    Other than a few "look Mom, I can use SIMD instructions to make video go really fast" papers I haven't seen much at all in terms of publicly available research on graphics driver implementation. There's a larger body of work on GPU compute but again that has very little to do with driver implementation.
    The nice thing about things having changed so much in the ways they have, and about starting from scratch at this point in time, is that you get to skip the foreplay of the intervening years. It is possible to dredge stuff up from other areas of computing and apply it where before you couldn't. If anything, it seems like all the stuff I was interested in before I got myself into the games ghetto - programming language design and implementation - is becoming more relevant as time goes on, not less.
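
    As one concrete example of that relevance: every modern driver stack embeds a shader compiler, so classic programming-language techniques apply directly. A toy constant-folding pass over an invented three-address IR (none of these types exist in any real driver) might look like:

    Code:
    /* Illustrative only: constant folding over a made-up shader IR.
     * Real drivers do the same thing over much richer IRs. */
    #include <stdio.h>

    enum opcode { OP_CONST, OP_ADD, OP_MUL };

    struct inst {
        enum opcode op;
        int dst, src0, src1;   /* register indices */
        int imm;               /* value for OP_CONST */
    };

    int main(void)
    {
        /* shader-like snippet: r0 = 2; r1 = 3; r2 = r0 + r1; r3 = r2 * r0 */
        struct inst prog[] = {
            { OP_CONST, 0, 0, 0, 2 },
            { OP_CONST, 1, 0, 0, 3 },
            { OP_ADD,   2, 0, 1, 0 },
            { OP_MUL,   3, 2, 0, 0 },
        };
        int n = sizeof(prog) / sizeof(prog[0]);
        int known[16] = { 0 };   /* is regs[i] a compile-time constant? */
        int regs[16];

        /* fold: if both sources are known constants, replace the ALU op
         * with an OP_CONST carrying the computed value */
        for (int i = 0; i < n; i++) {
            struct inst *in = &prog[i];
            if (in->op == OP_CONST) {
                regs[in->dst] = in->imm;
                known[in->dst] = 1;
            } else if (known[in->src0] && known[in->src1]) {
                int v = (in->op == OP_ADD) ? regs[in->src0] + regs[in->src1]
                                           : regs[in->src0] * regs[in->src1];
                in->op = OP_CONST;
                in->imm = v;
                regs[in->dst] = v;
                known[in->dst] = 1;
            }
        }

        for (int i = 0; i < n; i++)
            if (prog[i].op == OP_CONST)
                printf("r%d = const %d\n", prog[i].dst, prog[i].imm);
            else
                printf("r%d = alu(r%d, r%d)\n", prog[i].dst,
                       prog[i].src0, prog[i].src1);
        return 0;
    }

    Real shader compilers run passes like this over SSA form with vector types, but the reasoning framework is the same.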

    And, well, yeah, the majority of academic research is nonsense designed to get names on as many papers as possible - that is the nature of the game. I got published once, and that was more than enough to turn me off trying it again. But if you look in the right places, and consistently, you occasionally find gems. I have many books and academic papers in my private collection that I think stand the test of time well in terms of applicability, and that offer general reasoning frameworks from which, without expending huge amounts of effort, one can devise ways to get within striking distance of the state of the art.



  • bridgman
    replied
    BTW I'm not saying that you couldn't find some techniques in modern drivers that were also in use 20 years ago (the basic concepts, like "don't let the hardware go idle if there's work to be done" and "use the hardware in the most efficient way", never go away), but those are pretty much motherhood statements, not specific techniques.

    Let's take multithreading as an example. You can say "multithreading is not a proprietary technique" and you would be absolutely right in the generic sense, but multithreading a graphics driver is something you don't normally see outside of proprietary code. I'm not saying that the proprietary driver teams are changing our understanding of the universe, but they are doing a lot of work going from "possibility" and "concept" to shipping drivers, and there is significant proprietary value in that work.
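
    To make that concrete, here is a minimal toy sketch of the idea - every name is invented, nothing here is from a real driver: the API frontend only records commands into a bounded queue, and a single worker thread is the only code that ever "submits" to the (pretend) hardware:

    Code:
    /* Toy multithreaded-dispatch sketch: producer records commands,
     * a worker thread drains the queue and "submits" them.
     * Build: cc -pthread -o mt mt.c */
    #include <pthread.h>
    #include <stdio.h>

    #define QUEUE_SIZE 64

    struct cmd { int op; int arg; };      /* stand-in for a GPU command */

    static struct cmd queue[QUEUE_SIZE];
    static int head, tail, done;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  nonfull  = PTHREAD_COND_INITIALIZER;

    /* called from the API frontend: cheap, never touches hardware */
    static void record_cmd(struct cmd c)
    {
        pthread_mutex_lock(&lock);
        while ((tail + 1) % QUEUE_SIZE == head)
            pthread_cond_wait(&nonfull, &lock);
        queue[tail] = c;
        tail = (tail + 1) % QUEUE_SIZE;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }

    /* worker thread: the only place that talks to the pretend hardware */
    static void *submit_thread(void *unused)
    {
        (void)unused;
        for (;;) {
            struct cmd c;
            pthread_mutex_lock(&lock);
            while (head == tail && !done)
                pthread_cond_wait(&nonempty, &lock);
            if (head == tail && done) {
                pthread_mutex_unlock(&lock);
                return NULL;
            }
            c = queue[head];
            head = (head + 1) % QUEUE_SIZE;
            pthread_cond_signal(&nonfull);
            pthread_mutex_unlock(&lock);
            printf("submit op=%d arg=%d\n", c.op, c.arg);  /* pretend MMIO */
        }
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, submit_thread, NULL);
        for (int i = 0; i < 10; i++)
            record_cmd((struct cmd){ .op = 1, .arg = i });
        pthread_mutex_lock(&lock);
        done = 1;
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
        pthread_join(t, NULL);
        return 0;
    }

    The point is just the split between cheap command recording on the application's thread and submission on a dedicated thread; a real driver adds fences, reordering, and per-context queues on top, and that is where the engineering effort (and the proprietary value) goes.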

    Again, Dave's point was not an attack on the current PC environment so much as pointing out the probability that fewer of these considerations apply to the embedded "small device" market.



  • bridgman
    replied
    Originally posted by somedev View Post
    However, the basis on which this experience originally formed is still the shared programmer culture, and those teams are not more than 1-2 decades removed from it,
    I don't get this part at all. Graphics has changed so much in the last 10-20 years that driver techniques from that time are pretty much irrelevant.

    Originally posted by somedev View Post
    with significant publicly available research that keeps advancing to this day for new players to draw from, driving developments in a rather singular direction - genericity and more extreme parallelism.
    Other than a few "look Mom, I can use SIMD instructions to make video go really fast" papers I haven't seen much at all in terms of publicly available research on graphics driver implementation. There's a larger body of work on GPU compute but again that has very little to do with driver implementation.



  • somedev
    replied
    Originally posted by bridgman View Post
    Somedev, are you talking about open source drivers (where I agree) or proprietary drivers (where I disagree strongly)?

    If you look at the size of the proprietary driver teams at the major HW vendors, you'll find at least an order of magnitude more developers than can be "bought up" trained and ready to go. Not sure I understand your point about "developments that a very technical person could not independently produce with sufficient expertise" -- what matters is what you have today, not what you *could* have in the future.

    Dave's point is that the dynamics may be different for embedded GPUs, which are typically below the lowest-end discrete GPUs in terms of performance and features. That's an interesting point and worth looking at.
    I don't deny that by now the major graphics vendors have accumulated significant in-house teams that have matured their own base of experience. However, the basis on which this experience originally formed is still the shared programmer culture, and those teams are not more than 1-2 decades removed from it, with significant publicly available research that keeps advancing to this day for new players to draw from, driving developments in a rather singular direction - genericity and more extreme parallelism.

    "Today" even seems like we're amidst a house of cards with CPUs and GPUs, so I'm not even sure today matters. CPU because the homogeneity of the x86 instruction set and its computation model is being attacked on both sides by the embedded sector with ARM and even incompatible x86 instruction sets, and also the ascendence of GPUs. GPUs because the CPUs are pushing for more parallelism simply because of the difficulties of making single-threaded performance any better.

    While there is little risk of CPUs dethroning GPUs any time in the near future, I don't think the same is true the other way around: as GPUs become general-purpose accelerators, they become commodities for which proprietary software stacks are irrelevant, and the graphics-only driver model will fail somewhere along the line, because in trying to dethrone CPUs, GPUs will start accumulating CPU culture.

    Originally posted by bridgman View Post
    The other point Dave made was that the performance requirements (and hence driver complexity) for these small-footprint embedded systems seem to be lower than the norm for PCs, presumably because of the smaller screen size.

    One thing that surprised me was how many of these systems have been using pure software rendering, with hardware acceleration only showing up very recently.
    Lowered expectations? I mean, consumers don't believe serious games are possible on cell phones or netbooks, so they don't go out buying the devices on that principle, or fault them for not being able to do it. But as the capabilities of these devices ramp up, and better gaming and user-interface experiences become the norm on them, software rendering or hybrids like the older Intel GMA will just become unacceptable, because they can't keep up by any stretch. The increasing presence of NV, ATI, and PowerVR in this sector is surely going to upset the status quo at some point - with embedded GPUs becoming every bit as complex as desktop ones.

