FFmpeg Moves Closer To 1.0 Release


  • #16
    Originally posted by HokTar View Post
    This might not be entirely true, because I read that UVD will be divided in GCN: one part for decode, another one for DRM. So when I read that article I was like: this sounds like what bridgman has been telling us for years.

    Obviously, it was phrased something like: this new architecture will allow us to better serve customer demand by handling DRM better. So it's PR stuff, but we actually know that this is for Linux.
    Well, that's a really inventive way to interpret what these AMD/ATI "PR innovators" want you to think they mean...

    And the key point you make is "telling" rather than showing, while the code for a full generation of the older GPU/UVD products remains non-existent. A direct URL to this new "Graphics Core Next" text you read might be useful as a context reference, so if you have it, add it somewhere perhaps...

    Remember that GCN (Graphics Core Next) is billed as the next-generation AMD graphics architecture, and as I said/implied elsewhere they more than likely won't replace UVD with something that actually works for both encode and decode in Linux. So even if they do as you think they will, and separate it out as someone here said a very long time ago they should, that really isn't enough today, is it?

    Where's the real-time hardware High Profile H.264 encode? Where's the Level 5.1 decode, or the stereo 3D 1080p encode/decode capability that will be all the rage in a few months as the vendors try to get your cash before the real spending on 2K and 4K super-high-definition display kit comes along? x264 and FFmpeg/avconv can encode and decode all of that on the CPU now, alongside their 10-bit encode/decode improvements. I've not seen any proof that AMD kit will be able to do any of that from inside Linux, even with a slightly tweaked UVD4 and some yet-to-be-written (or never-to-be-written) software.
    Last edited by popper; 12-12-2011, 04:40 PM.



    • #17
      Originally posted by popper View Post
      "The FFmpeg project encourages everyone to upgrade to version 0.9 unless they are followers of Git master."
      Actually, what they say, and have said for some time, is "unless they use current git master" - as in, everyone who can is recommended to use git over a point release.

      I noticed you apparently finally went and got 0.8.7 (and git x264) the other day, after one of my posts, to replace your antiquated version, and imported it into your test suite. I hope you do the same really soon with this newer version, and every version hence, preferably git as they recommend, so you don't end up having yourself and your test-suite users running older, slower code before you even start the 2011 retrospective and 2012 tests.

      And of course, change or add a real-life 1080p, or at least 720p, sample encode test (using CRF 18, not two-pass) to your test-suite product to reflect what people actually do today, not some useless and ineffective VCD encode that no one uses any more...
      I just had to bite here... Fetching the absolute latest version AND git all the time is the last thing that Michael should do. If he wants the search functionality on OpenBenchmarking.org to be useful, the versions of the software being tested need to remain consistent for a period of time. In order to be able to compare multiple processors against each other, you need to have a stable software base to test on. At least by keeping the ffmpeg version static for a while, he can reduce the number of variables that change between tests.

      Yes, he should update the ffmpeg software periodically, but he shouldn't grab every point release that comes out. He could create an ffmpeg-git test profile that would clone/build the latest git version, but that should be a separate test profile from the stable releases.
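      If an ffmpeg-git profile did exist, one cheap way to keep its results comparable over time would be for the test to record exactly which libavcodec it was built and run against. A minimal sketch (just an illustration, assuming the harness links against libavcodec; the file name and build line are hypothetical):

      Code:
      /* version_probe.c - report the libavcodec the benchmark was built against
       * and the one it is actually running with, so results can be grouped by a
       * stable software base instead of "whatever git was that day".
       * Hypothetical build line: gcc version_probe.c -lavcodec -o version_probe
       */
      #include <stdio.h>
      #include <libavcodec/avcodec.h>

      int main(void)
      {
          unsigned v = avcodec_version();   /* runtime (shared library) version */

          printf("built against : %d.%d.%d\n",
                 LIBAVCODEC_VERSION_MAJOR,
                 LIBAVCODEC_VERSION_MINOR,
                 LIBAVCODEC_VERSION_MICRO);
          printf("running with  : %u.%u.%u\n",
                 v >> 16, (v >> 8) & 0xff, v & 0xff);
          printf("configuration : %s\n", avcodec_configuration());
          return 0;
      }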



      • #18
        Originally posted by Veerappan View Post
        I just had to bite here... Fetching the absolute latest version AND git all the time is the last thing that Michael should do. If he wants the search functionality on OpenBenchmarking.org to be useful, the versions of the software being tested need to remain consistent for a period of time. In order to be able to compare multiple processors against each other, you need to have a stable software base to test on. At least by keeping the ffmpeg version static for a while, he can reduce the number of variables that change between tests.

        Yes, he should update the ffmpeg software periodically, but he shouldn't grab every point release that comes out. He could create an ffmpeg-git test profile that would clone/build the latest git version, but that should be a separate test profile from the stable releases.
        Yep, right.
        Michael Larabel
        http://www.michaellarabel.com/



        • #19
          Originally posted by Veerappan View Post
          I just had to bite here... Fetching the absolute latest version AND git all the time is the last thing that Michael should do. If he wants the search functionality on OpenBenchmarking.org to be useful, the versions of the software being tested need to remain consistent for a period of time. In order to be able to compare multiple processors against each other, you need to have a stable software base to test on. At least by keeping the ffmpeg version static for a while, he can reduce the number of variables that change between tests.

          Yes, he should update the ffmpeg software periodically, but he shouldn't grab every point release that comes out. He could create an ffmpeg-git test profile that would clone/build the latest git version, but that should be a separate test profile from the stable releases.
           LOL, no Veerappan, I didn't mean to imply that Michael should grab every single ffmpeg/x264 git update as it happens, only that he should update them now and again, such as every quarter when he grabs the quarterly Intel-related git trees to compile, for instance. I'd hope that would be acceptable and sane, keeping old and new separate for comparison as you say.
          Last edited by popper; 12-13-2011, 09:25 AM.



          • #20
            Originally posted by popper View Post
            Same as it has always been since they split: the libav devs write the bulk of the patches you might use today (AVX/SIMD speed-ups, audio, new ARM code, etc.), ffmpeg/Michael runs a script to pull these patches into ffmpeg now and again, and ffmpeg/carl is pretty much the main contributor of patches to ffmpeg itself, with help from other random devs... or so it seems.


            http://lists.libav.org/pipermail/libav-devel/
            You mean the ffmpeg team is just taking libav's work and getting all the press? In FLOSS development this sort of thing is usually pretty clear (development is done in the open...). I'm reading in some places that ffmpeg development is moving faster than libav's. How can that be possible if they're just importing the other team's work? I also read somewhere that Debian is choosing libav over ffmpeg, but I'm not sure this is true.

            I'm genuinely asking. I have no preference for either party "winning". I do have some interest in knowing who's "winning", though. In a few months' time I'm planning to start a project based on these libs, and I'd rather go with "the winners".



            • #21
              Originally posted by Aleve Sicofante View Post
              You mean the ffmpeg team is just taking libav's work and getting all the press? In FLOSS development this sort of thing is usually pretty clear (development is done in the open...). I'm reading in some places that ffmpeg development is moving faster than libav's. How can that be possible if they're just importing the other team's work? I also read somewhere that Debian is choosing libav over ffmpeg, but I'm not sure this is true.

              I'm genuinely asking. I have no preference for either party "winning". I do have some interest in knowing who's "winning", though. In a few months' time I'm planning to start a project based on these libs, and I'd rather go with "the winners".
              Debian has indeed gone with libav. (the package is still called "ffmpeg" though)



              • #22
                Originally posted by Aleve Sicofante View Post
                I'm reading in some places that ffmpeg development is moving faster than libav's. How can that be possible if they're just importing the other team's work?
                One good thing that came from the ffmpeg/libav schism is that development of both now happens at a fast pace, and they are more welcoming of external patches than ever. When ffmpeg still had the "monopoly", developers could afford to reject or delay patches.

                As always with free software, you can pick what you want as long as the license permits it. Even from a non-friendly fork.
                Originally posted by Aleve Sicofante View Post
                I'm genuinely asking. I have no preference for either party "winning". I do have some interest in knowing who's "winning", though. In a few months' time I'm planning to start a project based on these libs, and I'd rather go with "the winners".
                While there is no firm commitment to keeping libav and ffmpeg drop-in replacements of each other, I think this is at least a goal. So using either for your project will be fine.
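                For what it's worth, the basic libavcodec entry points are the same in both trees, so code written against that shared API should build with either project. A minimal sketch (the codec names and build line are only illustrative, and exact availability depends on how the library was configured):

                Code:
                /* codecs.c - look up a few decoders through the libavcodec API that both
                 * ffmpeg and libav ship (a sketch, not a full decoder).
                 * Hypothetical build line: gcc codecs.c -lavcodec -lavutil -o codecs
                 */
                #include <stdio.h>
                #include <libavcodec/avcodec.h>

                static void report(const char *name)
                {
                    /* NULL if the codec was not compiled into this particular build */
                    AVCodec *dec = avcodec_find_decoder_by_name(name);

                    if (dec)
                        printf("%-8s -> %s\n", name, dec->long_name ? dec->long_name : dec->name);
                    else
                        printf("%-8s -> not available in this build\n", name);
                }

                int main(void)
                {
                    avcodec_register_all();   /* register every codec compiled into the library */

                    report("h264");
                    report("vp8");
                    report("theora");
                    return 0;
                }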



                • #23
                  Originally posted by bridgman View Post
                  Until we can release UVD programming info, what patches to avconv/ffmpeg do you think we should be writing and sending?
                  Seeing as how Nvidia's vdpau is going to work with your hardware before your own UVD, just keep it.

                  Nothing like putting the last nail in the coffin of your own specification due to hangups over imaginary property.



                  • #24
                    Using UVD is preferable especially in mobile devices, as it is more energy efficient than decoding in shaders.



                    • #25
                      Originally posted by chithanh View Post
                      Using UVD is preferable especially in mobile devices, as it is more energy efficient than decoding in shaders.
                      I noticed according to Wikipedia that vdpau supports way more codecs. It doesn't say anything about either one of them supporting open standards like VP8 or Theora though, just proprietary codecs controlled by the MPEG cartel.



                      • #26
                        I was under the impression that VP8 is sufficiently similar to H.264 that it could at least partially be decoded by H.264 decoders.

                        VDPAU is the analog to XvBA. It makes no sense to compare it with UVD (which would be the analog to PureVideo).



                        • #27
                          Originally posted by popper View Post
                          And the key point you make is "telling" rather than showing, while the code for a full generation of the older GPU/UVD products remains non-existent. A direct URL to this new "Graphics Core Next" text you read might be useful as a context reference, so if you have it, add it somewhere perhaps...
                          Well, I tried to find it but came up almost empty. I guess the SAMU (Secure Asset Management Unit) was the one which reminded me of bridgman's posts. Re-reading it made me unsure...
                          Btw, the site was not even in English, so apologies.



                          • #28
                            Wow. Looks like I missed an entire page of posts in this thread.

                            Popper, I often have trouble responding to your posts because you make "statements of fact" where not only the statement is incorrect but the assumptions on which you base your statement appear to be wrong as well. It makes it really hard to respond with anything smaller than a white paper, and it's been years since I've had time for something like that.

                            In the same way that kernel maintainers like big changes to be broken up into a set of smaller patches (reviewable-sized chunks), would it be possible for you to take a bit more of a bottom-up approach and validate your assumptions first?

                            Originally posted by popper View Post
                            well clearly if you John as the head of the AMD Linux projects
                            I'm not the "head of AMD Linux" and never have been - part of my job includes managing the open source graphics effort and being part of a group which helps with proposals for releasing information to the open source community. The other parts of my job tend to be cross-OS rather than being Linux-specific.

                            Originally posted by popper View Post
                            can't walk into the boardroom (or know someone above you who can) and put a good case, by now, for releasing this antiquated UVD programming data (can't do L5.1 etc.), and can't actually get them to write the OpenCL OpenVideo driver library for Linux, since you say they haven't even bothered to do that after far more than a year... (so you imply the Windows version is not written in standard C99 code, then? Perhaps it's written in MS Basic and doesn't conform to the spec, so it can't simply be compiled and changed where needed for the generic Linux framework, as is usual)
                            Programming language has nothing to do with it - the internal APIs of different operating systems are significantly different, and it's not a "search and replace" effort to port code to a totally different OS and video framework. It's not a few hundred lines, or even a few thousand.

                            I think I can safely reveal that the code is not written in MS Basic

                            Originally posted by popper View Post
                            THEN it seems clear YOU, as the head of the AMD Linux projects, NEED to do one of two things... say fuck it, we can't help Linux end users use any form of AMD/ATI hardware-assisted video decode directly, other than what's available from third parties...
                            Or, we could pick our battles, prioritize the ones where we can deliver useful benefit to our customers relatively quickly (eg display & 3D engine) and continue to release new information there, while working on the harder problems (open source video decode is the poster child for hard problems) in the background, and releasing code and programming information in other areas like GPU compute where we can. I'm sure it all looks boring and terribly slow from the outside, but you seem much more willing to give up on this area than we are.

                            Things might be more clear (although not as simple) if you didn't write off some of AMD's efforts as "third party", btw...

                            Originally posted by popper View Post
                            or get the board to stop fucking about and give you some money and resources (if that really is the problem) to write something the users can use and that you and legal can be happy with (at this point I suppose users don't really care "what it is", as long as it works in ffmpeg and can be openly ported from there to everywhere else as usual). End of story really... but that's your choice, as the head of Linux, to do something or not, as has been the case for way more than a year.
                            I guess this is what I don't get. We released the XvBA API and some sample "how to use it" code a year or so ago. That API info is finally starting to be used (which is great news!), and AFAIK Tim is talking with the developers about issues they found (eg Tim was investigating a reported issue with thread safety under certain conditions).

                            Is your point that we somehow snubbed the ffmpeg community by not treating them as "special", or that we should have written all the patches ourselves rather than just providing API information, sample code, a mailing list, and a bit of support?
                            Last edited by bridgman; 12-18-2011, 12:13 PM.



                            • #29
                              Originally posted by bridgman View Post
                              would it be possible for you to take a bit more of a bottom-up approach and validate your assumptions first?
                              A guy called "Popper" being told that - *falls to the floor laughing*



                              • #30
                                Originally posted by johnc View Post
                                That laundry list of changes and yet still no mkv ordered chapters support.
                                It does, however, support OS/2 threads now.

                                They added TwinVQ file format support a while back.

                                Priorities.

