Broadcom Crystal HD Support For MPlayer, FFmpeg

  • #31
    Yep, I can agree with "just as important".

    The issue is that right now we have enough resources to do 2D/3D and *just* keep up with the rate of new hardware introduction while working on opening up video accel hardware as a background task. If we divert any more resources to video accel then we're going to start falling behind on 2D/3D accel, and since 2D/3D/Xv are all accelerated with the same hardware these days it's kind of an all-or-nothing deal. We have to get the initial driver code working in order to understand the hardware and provide a useful reference for the rest of the development community (writing docs first didn't work so well), and that uses up pretty much all of our time.

    I do believe that the current balance is the best use of our developers for now, but if you think assigning priorities differently would make more users happy I'm open to discussing it.

    Comment


    • #32
      Originally posted by deanjo View Post
      The results of the survey seem to correspond to what I'm saying.
      Not sure I agree with that interpretation. The first chart didn't include "lighting up the display and getting initial 2D/3D acceleration working so other developers can build on the initial support", which is where most of our time goes, and the second chart didn't include "using the computer for everyday tasks".

      Originally posted by deanjo View Post
      Also, the fact is that even on the blob side of AMD, the efforts on video decoding acceleration haven't been all that great either. Even S3 has a better solution than what ATI is offering at the moment. Take a look at the nvnews.net forums: Stephen Warren is in there like a rabid dog, hammering out and resolving VDPAU issues.
      I'm not sure it makes sense to say "hey, the (much bigger) proprietary driver team isn't moving as fast as I want in that area so the (much smaller) open source team should do the work instead". Supporting video decode in open source drivers has all of the problems of video decode with binary drivers plus a lot more.

      Comment


      • #33
        Originally posted by bridgman View Post
        Yep, I can agree with "just as important".

        The issue is that right now we have enough resources to do 2D/3D and *just* keep up with the rate of new hardware introduction while working on opening up video accel hardware as a background task. If we divert any more resources to video accel then we're going to start falling behind on 2D/3D accel, and since 2D/3D/Xv are all accelerated with the same hardware these days it's kind of an all-or-nothing deal. We have to get the initial driver code working in order to understand the hardware and provide a useful reference for the rest of the development community (writing docs first didn't work so well), and that uses up pretty much all of our time.

        I do believe that the current balance is the best use of our developers for now, but if you think assigning priorities differently would make more users happy I'm open to discussing it.
        Well bridgman, the FOSS drivers are always going to require heavy work for 2D/3D anyway. It's not like AMD is suddenly going to say "OK, no new GPUs for the next 3 years so that FOSS Linux drivers can catch up in features and performance". Eventually AMD has to bite the bullet and start working on these features. If it means starting by giving the blobs some well-supported capabilities because it would be easier legal-wise, then start it on the blobs, but if AMD is going to support FOSS development it has to start supporting more than just the basics.

        Comment


        • #34
          Like I said, I'm pretty sure his comment was out of frustration. He probably associates the lack of effort in that area with lack of interest.

          Comment


          • #35
            All fair points, but they don't help to answer the question, which was why Dieter said I didn't understand the need for video decode. I don't manage the blob (Catalyst) drivers, and I don't manage the allocation of resources to open source driver work.

            If you (or Dieter) think that the developers we have should be given different priorities, I'm all ears.

            Comment


            • #36
              If you want my 2 cents then, yes, I really do think you have to prioritize video decode capability higher on the list. Like I said, eventually you guys have to bite the bullet on that and "just get it done". AMD is always going to be bringing out new cards with new capabilities that will require new 2D/3D code. If you at least gave developers a good basis for using the UVD, it would grow along with the 2D/3D work, but right now they have nothing to work with at all.

              Comment


              • #37
                We can't open up UVD right now so any work we do in the short term would be shader based, and would have to be for something like Theora or VP8.

                I'm not sure that would be the best use of developer time, particularly since it would mean that we would start falling behind pretty rapidly on basic 2D/3D/modesetting support.

                Comment


                • #38
                  Maybe some time could be spent on deciding which direction OSS video should be going in.

                  Which API, for example; at least then volunteers might be able to get some groundwork in on a Gallium state tracker.

                  Would it be possible for any info regarding UVD to be released? Or at least some clues so that the clever clogs might be able to reverse engineer some of its features. I realise that with poor blob video support this might be hard under Linux.

                  I feel video on Linux is going through an API war, much akin to the Blu-ray / HD DVD war, which only slows everything down for the consumer.

                  I would really love for a few Linux developers from Intel, ATI, Nouveau and maybe even Broadcom to be locked in a room until they can decide at least what direction they should all go in.

                  They should be competing on features and the quality of implementation, not on the chosen implementation.

                  Comment


                  • #39
                    Originally posted by bridgman View Post
                    We can't open up UVD right now so any work we do in the short term would be shader based, and would have to be for something like Theora or VP8.

                    I'm not sure that would be the best use of developer time, particularly since it would mean that we would start falling behind pretty rapidly on basic 2D/3D/modesetting support.
                    Well, let's put it this way: until those capabilities come to the AMD camp I can't see myself switching to an AMD card / IGP / hybrid solution. I build far too many HTPCs for myself and friends, who also do common everyday tasks like watching video off the web.

                    Comment


                    • #40
                      Originally posted by FireBurn View Post
                      Would it be possible for any info regarding UVD to be released? Or at least some clues so that the clever clogs might be able to reverse engineer some of its features. I realise that with poor blob video support this might be hard under Linux.
                      If the information is safe to have in the public then we would release it ourselves rather than forcing the community to reverse-engineer it. If the information is not safe to have in the public then providing enough information to enable reverse engineering would be the worst of all worlds.

                      Determining whether the information *is* safe to release is a slow and expensive process, unfortunately. We have started the process but it's not particularly predictable in terms of either duration or outcome.

                      Comment


                      • #41
                        Originally posted by bridgman View Post
                        If the information is safe to have in the public then we would release it ourselves rather than forcing the community to reverse-engineer it. If the information is not safe to have in the public then providing enough information to enable reverse engineering would be the worst of all worlds.

                        Determining whether the information *is* safe to release is a slow and expensive process, unfortunately. We have started the process but it's not particularly predictable in terms of either duration or outcome.
                        That really is a shame, and it's what really annoys me about "IP".

                        Could you at least state which API you'd prefer any software or shader implementations to use? I'm thinking specifically of VA-API, VDPAU and XvBA. The latter, of course, seems to be stillborn.

                        If the 3 major OSS camps could decide on which is best, I'm sure there would be a lot of shared code in implementing acceleration on the different chipsets, especially if shaders are used.

                        Comment


                        • #42
                          I suspect the thing that really annoys you in this case is actually "DRM" more than "IP".

                          AFAIK the open source developers don't care too much about API - writing the decoder is the hard part, and the same code could be wrapped in any API that supports the appropriate level of abstraction. If you are doing VLD-level decode then any of the APIs should work, but if you are only offloading a subset of the decode tasks (say IDCT/MC/deblock) then VA-API is a good choice since it offers a wider choice of entry points.

                          Comment


                          • #43
                            Originally posted by bridgman View Post
                            I suspect the thing that really annoys you in this case is actually "DRM" more than "IP"

                            AFAIK the open source developers don't care too much about API - writing the decoder is the hard part, and the same code could be wrapped in any API that supports the appropriate level of abstraction. If you are doing VLD-level decode then any of the APIs should work, but if you are only offloading a subset of the decode tasks (say IDCT/MC/deblock) then VA-API is a good choice since it offers a wider choice of entry points.
                            Is there really a difference? Which territories are affected by DRM issues?

                            As for the API not being important: it would stop Adobe bitchin' ;-)

                            Would that mean two state trackers, with one auxiliary module powering both of them?

                            Comment


                            • #44
                              Yeah, the difference is pretty significant. More specifically, the impact of being wrong on a DRM decision is much higher than the impact of being wrong on an IP decision. The information we have released so far mostly involved IP decisions, while UVD decisions are primarily related to DRM.

                              If the goal was to share code across multiple APIs on an ongoing basis then some kind of shared module would work, or the code could simply be built once for each API supported. It might also be possible to have a single chunk of code expose multiple APIs, but I don't think we have looked at that.
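                              The "shared module" idea above can be sketched in miniature: one decoder core written once, wrapped by thin front-ends that mimic the calling styles of two different APIs. All names below (DecoderCore, VdpauFrontend, VaapiFrontend) are hypothetical illustrations of the pattern, not the real VDPAU or VA-API interfaces.

```python
class DecoderCore:
    """The hard part: shared decode logic, written exactly once."""

    def decode_frame(self, bitstream: bytes) -> str:
        # Real code would submit shader/decode jobs here; we just tag the result.
        return f"decoded {len(bitstream)} bytes"


class VdpauFrontend:
    """Thin adapter exposing a single VDPAU-style render call."""

    def __init__(self, core: DecoderCore):
        self.core = core

    def decoder_render(self, bitstream: bytes) -> str:
        return self.core.decode_frame(bitstream)


class VaapiFrontend:
    """Thin adapter exposing a VA-API-style begin/render/end sequence."""

    def __init__(self, core: DecoderCore):
        self.core = core
        self._pending = b""

    def begin_picture(self) -> None:
        self._pending = b""

    def render_picture(self, buf: bytes) -> None:
        self._pending += buf

    def end_picture(self) -> str:
        return self.core.decode_frame(self._pending)


core = DecoderCore()
print(VdpauFrontend(core).decoder_render(b"\x00" * 16))  # one-shot style
va = VaapiFrontend(core)
va.begin_picture()
va.render_picture(b"\x00" * 8)
va.render_picture(b"\x00" * 8)
print(va.end_picture())  # buffered style, same core underneath
```

                              Both wrappers funnel into the same decode path, which is the point being made above: the expensive decoder logic exists once, and each additional API costs only a thin adapter.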

                              Comment


                              • #45
                                deanjo:
                                >> Just curious, what would make you say something like that ?
                                > Probably out of frustration from the lack of AMD's efforts bringing hardware decoding to linux.

                                s/linux/FLOSS/

                                In my case FreeBSD.

                                bridgman:
                                > assuming you don't count any of the work that *has* been done over the last year in that area

                                Work has been done? I've been skimming the headlines on phoronix and reading
                                anything that looks remotely relevant. Did I miss something? Are there other
                                news sources I should be watching? Progress on video decoding would have been
                                big news.

                                > video decoding is more important than the 2D and 3D acceleration required to
                                > support compositing and a modern desktop and agree that we should stop
                                > implementing and documenting 2D/3D acceleration hardware on new GPUs for a
                                > year or two and focus on video acceleration instead

                                Code:
                                #     # #######  #####    ###
                                 #   #  #       #     #   ###
                                  # #   #       #         ###
                                   #    #####    #####     #
                                   #    #             #
                                   #    #       #     #   ###
                                   #    #######  #####    ###
                                Instead of doing a half-assed job on all chips, pick one chip and get *everything*
                                working and working well. Then we will have a chip we can buy and use. As it
                                is there is no chip we can buy and use.

                                Warning: lame car analogy ahead
                                Imagine going to a car dealership and there are say, 15 different models
                                available. But you can only use reverse gear. You can't use a forward gear
                                in any of them. Wouldn't you rather there be one model that allows you
                                to use the forward gears as well as reverse? Would you buy a car that
                                only had a reverse gear?

                                We don't need compositing. We don't need 3D. We don't need a "modern" desktop.
                                A simple window manager works just fine. But there is no alternative for
                                video decoding.

                                It has been 3.3 years since the first batch of documentation was released, and therefore more than 3.3 years since the effort started. Where is the revised UVD with the DRM crapola separated out, making it easy to document the decoding part?

                                deanjo:
                                > He probably associates the lack of effort in that area with lack of interest.

                                Bridgman has posted that he doesn't care about video. So it is pretty obvious
                                why video decode is at the bottom of his priority list.

                                What percentage of the general population (the general population, not just the
                                phoronix users) cares about video? We can use TV ownership as a proxy. What
                                percentage of households have a TV? Probably 99%. What percentage of the
                                population cares about a "modern desktop"? A whole lot less than 99%.

                                According to yahoo, AMD has a market cap of $6.07 Billion, a profit margin of
                                19.97%, and 10,400 full time employees. And you are telling us that AMD does
                                not have the resources to provide documentation for its products?

                                Comment
