FFmpeg Moves Closer To 1.0 Release


  • #11
    Originally posted by bridgman View Post
    Until we can release UVD programming info, what patches to avconf/ffmpeg do you think we should be writing and sending ?
    well clearly, if you, John, as the head of AMD's Linux projects, can't walk into the boardroom by now (or reach someone above you who can) and make a good case for releasing this antiquated UVD programming data (it can't even do L5.1, etc.), and can't get them to write the OpenCL/OpenVideo driver library for Linux, which you say they haven't even bothered to start after far more than a year... (so you imply the Windows version is not written in standard C99 code then; perhaps it's written in MS BASIC and doesn't conform to the spec, so it can't simply be compiled and changed where needed for the generic Linux framework, as is usual)

    THEN it seems clear that YOU, as the head of AMD's Linux projects, NEED to do one of two things: either say "forget it, we can't help Linux end users use any form of AMD/ATI hardware-assisted video decode directly, other than what's available from third parties"...

    or get the board to stop messing about and give you some money and resources (if that really is the problem) to write something the users can use and that you and legal can be happy with; at this point I suppose users don't really care what it is, as long as it works in FFmpeg and can be openly ported from there everywhere else, as usual. End of story, really... but that's your choice as the head of the Linux effort: to do something or not, as has been the case for well over a year.
    Last edited by popper; 12 December 2011, 01:27 PM.



    • #12
      Originally posted by curaga View Post
      Not following the "scene" actively, what's the latest state of ffmpeg and libav?
      Same as it's always been since the split: the libav devs write the bulk of the patches you'd actually use today (AVX/SIMD speed-ups, audio, new ARM code, etc.); FFmpeg's Michael runs a script to pull those patches into FFmpeg now and again; and FFmpeg's Carl is pretty much the main direct contributor of patches to FFmpeg, with help from other occasional devs... or so it seems.


      Last edited by popper; 12 December 2011, 01:54 PM.
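      The merge workflow described above, FFmpeg periodically pulling libav's commits, boils down to fetching the other project's branch and merging it. A minimal dry-run sketch, where the remote URL and branch names are illustrative assumptions, that only prints the steps rather than touching the network:

```shell
#!/bin/sh
# Dry-run sketch of the merge workflow described above: the steps a script
# could run to pull libav's master branch into an FFmpeg checkout.
# The remote URL and branch names here are illustrative assumptions.
STEPS="git remote add libav https://git.libav.org/libav.git
git fetch libav
git merge libav/master"
printf '%s\n' "$STEPS"
```

      In a real merge script the last step is where conflicts get resolved, keeping FFmpeg-specific changes on top of the imported work.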



      • #13
        Originally posted by popper View Post
        so you imply the windows version is not written in standard C99 code
        Of course it isn't written in C99; MSVC can't understand that. And he did not so much imply as explicitly mention (in another thread, but in reply to you; maybe you should read a bit more than you write) that Windows has different APIs than Linux. Does that surprise you?



        • #14
          Originally posted by AnonymousCoward View Post
          Of course it isn't written in C99; MSVC can't understand that. And he did not so much imply as explicitly mention (in another thread, but in reply to you; maybe you should read a bit more than you write) that Windows has different APIs than Linux. Does that surprise you?
          (I did read it; he's right, it was off-topic there and fits much better here, so...) Yeah, sure, of course they have different APIs, but the code is written for Windows and it runs, and there can't be so much difference that a simple replace couldn't handle the bulk of the code, if it's written properly and isn't just a few hundred lines not worth bothering to port. But that's not the point, as you well know: he as much as said the open Linux decode code hasn't even been started yet, and that's what really matters. You can't use it if they don't make it, so what's the point of buying their kit if hardware decoding is what you want? Intel and ARM have it; AMD doesn't, and won't any time soon on Linux or other non-Windows platforms.
          Last edited by popper; 12 December 2011, 02:17 PM.



          • #15
            Originally posted by popper View Post
            He as much as said the open Linux decode code hasn't even been started yet, and that's what really matters. You can't use it if they don't make it, so what's the point of buying their kit if hardware decoding is what you want? Intel and ARM have it; AMD doesn't, and won't any time soon on Linux or other non-Windows platforms.
            That might not be entirely true, because I read that UVD will be divided in two in GCN: one part for decode, another for DRM. When I read that article I thought: this sounds like what bridgman has been telling us for years.

            Obviously it was phrased more like "this new architecture will allow us to better serve customer demand by handling DRM better". So it's PR stuff, but we actually know this is for Linux.



            • #16
              Originally posted by HokTar View Post
              That might not be entirely true, because I read that UVD will be divided in two in GCN: one part for decode, another for DRM. When I read that article I thought: this sounds like what bridgman has been telling us for years.

              Obviously it was phrased more like "this new architecture will allow us to better serve customer demand by handling DRM better". So it's PR stuff, but we actually know this is for Linux.
              Well, that's a really inventive way to interpret what these AMD/ATI "PR innovators" want you to think they mean...

              And that's the key point you make: "telling" rather than showing, with the code still non-existent for a full generation of older GPU/UVD products. A direct URL to this GCN text you read would be useful as a context reference, so if you have it, please add it somewhere.

              Remember that GCN (Graphics Core Next) is billed as AMD's next-generation graphics architecture, and as I said/implied elsewhere, they more than likely won't replace UVD with something that actually works for both encode and decode on Linux. So even if they do as you think they will, and separate it out as someone here said long ago they should, that really isn't enough today, is it?

              Where's the real-time hardware High-profile H.264 encode? Where's the L5.1 decode, or the stereo-3D 1080p encode/decode capability that will be all the rage in a few months, as vendors try to get your cash before the real spending on 2K and 4K super-high-definition display kit comes along? x264 and FFmpeg/avconv can already make and decode all of that, alongside their 10-bit encode/decode CPU improvements... I've seen no proof that AMD kit will be able to do any of that from inside Linux, even with a slightly tweaked UVD4 and software that is yet to be written, if ever.
              Last edited by popper; 12 December 2011, 05:40 PM.



              • #17
                Originally posted by popper View Post
                "The FFmpeg project encourages everyone to upgrade to version 0.9 unless they are followers of Git master."
                Actually they say, and have said for some time, "unless they use current git master", i.e. everyone who can is recommended to use git over a point release.

                I noticed you apparently finally went and got 0.8.7 (and git x264) the other day, after one of my posts, to replace your antiquated version, and imported it into your test suite. I hope you do the same really soon with this newer version, and every version hence, preferably git as they recommend, so you and your test-suite users aren't running older, slower code before you even start the 2011 retrospective and 2012 tests.

                And of course, change or add a real-life 1080p, or at least 720p, sample encode test (using CRF 18, not two-pass) to your test-suite product, to reflect what people do today, not some useless and ineffective VCD target that no one uses any more...
                I just had to bite here... Fetching the absolute latest version AND git all the time is the last thing that Michael should do. If he wants the search functionality on OpenBenchmarking.org to be useful, the versions of the software being tested need to remain consistent for a period of time. In order to be able to compare multiple processors against each other, you need to have a stable software base to test on. At least by keeping the ffmpeg version static for a while, he can reduce the number of variables that change between tests.

                Yes, he should update the ffmpeg software periodically, but he shouldn't grab every point release that comes out. He could create an ffmpeg-git test profile that would clone/build the latest git version, but that should be a separate test profile from the stable releases.
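                A single-pass CRF 18 encode of the kind being argued over above maps to a one-line ffmpeg invocation. As a sketch, where the input sample name and the x264 preset are illustrative assumptions, the command can be built and inspected like this:

```shell
#!/bin/sh
# Sketch of the single-pass, quality-targeted encode test discussed above:
# CRF 18 with libx264, instead of an outdated two-pass VCD-style target.
# The input sample and preset here are illustrative assumptions.
INPUT="sample_720p.y4m"
OUTPUT="out.mkv"
ENCODE_CMD="ffmpeg -i $INPUT -c:v libx264 -preset medium -crf 18 $OUTPUT"
# Print the command; run it directly wherever ffmpeg and the sample exist.
echo "$ENCODE_CMD"
```

                CRF mode targets constant quality in a single pass, which also makes the benchmark simpler to keep reproducible than a bitrate-targeted two-pass run.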



                • #18
                  Originally posted by Veerappan View Post
                  I just had to bite here... Fetching the absolute latest version AND git all the time is the last thing that Michael should do. If he wants the search functionality on OpenBenchmarking.org to be useful, the versions of the software being tested need to remain consistent for a period of time. In order to be able to compare multiple processors against each other, you need to have a stable software base to test on. At least by keeping the ffmpeg version static for a while, he can reduce the number of variables that change between tests.

                  Yes, he should update the ffmpeg software periodically, but he shouldn't grab every point release that comes out. He could create an ffmpeg-git test profile that would clone/build the latest git version, but that should be a separate test profile from the stable releases.
                  Yep, right.
                  Michael Larabel
                  https://www.michaellarabel.com/



                  • #19
                    Originally posted by Veerappan View Post
                    I just had to bite here... Fetching the absolute latest version AND git all the time is the last thing that Michael should do. If he wants the search functionality on OpenBenchmarking.org to be useful, the versions of the software being tested need to remain consistent for a period of time. In order to be able to compare multiple processors against each other, you need to have a stable software base to test on. At least by keeping the ffmpeg version static for a while, he can reduce the number of variables that change between tests.

                    Yes, he should update the ffmpeg software periodically, but he shouldn't grab every point release that comes out. He could create an ffmpeg-git test profile that would clone/build the latest git version, but that should be a separate test profile from the stable releases.
                    LOL, no Veerappan, I didn't mean to imply Michael should grab every single ffmpeg/x264 git update as it happens, only that he update them now and again, for instance every quarter when he grabs the quarterly Intel-related git trees to compile. I'd hope that would be acceptable and sane, keeping old and new separate for comparison, as you say.
                    Last edited by popper; 13 December 2011, 10:25 AM.



                    • #20
                      Originally posted by popper View Post
                      Same as it's always been since the split: the libav devs write the bulk of the patches you'd actually use today (AVX/SIMD speed-ups, audio, new ARM code, etc.); FFmpeg's Michael runs a script to pull those patches into FFmpeg now and again; and FFmpeg's Carl is pretty much the main direct contributor of patches to FFmpeg, with help from other occasional devs... or so it seems.


                      http://lists.libav.org/pipermail/libav-devel/
                      You mean the FFmpeg team is just taking libav's work and getting all the press? In FLOSS development this sort of thing is usually pretty clear (development happens in the open...). I'm reading in some places that FFmpeg development is getting faster than libav's; how can that be, if they're just importing the other team's work? I also read somewhere that Debian is choosing libav over FFmpeg, but I'm not sure that's true.

                      I'm genuinely asking. I have no preference for either party "winning", though I do have some interest in knowing who is. In a few months I'm planning to start a project based on these libraries, and I'd rather go with "the winners".

