
AMD Open-Sources VCE Video Encode Engine Code


  • #51
    Originally posted by agd5f View Post
    Why didn't Intel use omx or vdpau or some other existing APIs to begin with? omx is a lot more flexible in being able to support different types of hw. vaapi is very much tied to the way Intel's hw works (on both the encode and decode sides), which makes it a poor fit for other hw.
    Probably because they want to sell more of their hardware and not yours



    • #52
      Originally posted by agd5f View Post
      The vaapi encode interface was designed before we started the open source vce project. Why didn't Intel use omx or vdpau or some other existing APIs to begin with? omx is a lot more flexible in being able to support different types of hw. vaapi is very much tied to the way Intel's hw works (on both the encode and decode sides), which makes it a poor fit for other hw.
      Guess it's a good thing now, though, that there *is* radeon support for VA-API, since libva-vdpau-driver can promptly give VDPAU for radeon on top of that. So by supporting VA-API you end up supporting not one but two acceleration APIs at little additional cost.



      • #53
        Originally posted by curaga View Post
        - the mentioned flexibility doesn't include slice output?
        The flexibility is more on the input side. E.g. you can push a variety of data into it, but you always get an elementary stream as the result. We might get slice-level data output in a future VCE generation, but I'm not really sure about this.
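
        As a rough illustration of what pushing data in from outside could look like, here is a brute-force block-matching motion search in plain Python/NumPy. This is purely a sketch of the concept; how such vectors would actually be handed to VCE is not shown here, and no public interface is assumed.

        Code:
        # Purely illustrative: exhaustive block-matching motion search, the
        # kind of computation that could be done with shaders or on the CPU
        # instead of inside the fixed-function encoder.
        import numpy as np

        def motion_vector(ref, cur, bx, by, block=16, radius=8):
            """Return the (dx, dy) offset into `ref` that best matches the
            block of `cur` at (bx, by), minimising the sum of absolute
            differences (SAD)."""
            h, w = cur.shape
            target = cur[by:by + block, bx:bx + block].astype(np.int32)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    x, y = bx + dx, by + dy
                    if x < 0 or y < 0 or x + block > w or y + block > h:
                        continue
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(cand - target).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best_mv = sad, (dx, dy)
            return best_mv

        # Toy check: shift a random "luma plane" and recover the shift.
        ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        cur = np.roll(ref, shift=(2, 3), axis=(0, 1))  # rows down 2, cols right 3
        print(motion_vector(ref, cur, 16, 16))  # -> (-3, -2)

        A shader version would run the same SAD search for many blocks in parallel; the point is only that this work does not have to happen inside the encoder itself.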



        • #54
          Originally posted by madbiologist View Post
          Probably because they want to sell more of their hardware and not yours
          Actually, they are more likely to sell hardware by using the existing libraries, since software supporting them already exists, whereas creating a new library means there is none yet. You sell more of your hardware if your customers can make better use of it.



          • #55
            My 50c:

            Obviously hardware companies will design software in a way that favours their own product (hardware). Nobody should expect anything different.

            It should be up to the distribution maintainers to patch and maintain the upstream projects whose own maintainers are not interested in doing so because of their own bias. After all, the distributions have the most to gain or lose from having a sane, well-documented and well-supported set of APIs that developers can target and users can enjoy.

            Microsoft and Google do this and are very successful in doing so. Valve is doing the same thing in the form of SDL, but distros are probably too understaffed, too lazy or too preoccupied with cosmetic changes and petty political bickering to actually do what has to be done: in this case, unify VA-API, VDPAU and whatever other APIs are available into a single API that app developers can target and be done with it.



            • #56
              Originally posted by Deathsimple View Post
              Currently we only expose the "normal" 4:2:0 YUV to H264 encoding process. But you can, for example, aid encoding by calculating the best motion vectors with shaders (or on the CPU, or by taking them from the source video while transcoding, etc.). In general it's quite flexible regarding which parts of the encode it handles; it could even do only things like bitstream encoding and leave the rest elsewhere.
              So does this mean that there are ways to take advantage of VCE without losing much (if any) quality? That's been my main issue with hardware encoding vs. regular CPU encoding.
              (Sorry if this was actually answered in what I quoted; I'm just not very knowledgeable about this kind of stuff.)



              • #57
                Originally posted by agd5f View Post
                The vaapi encode interface was designed before we started the open source vce project. Why didn't Intel use omx or vdpau or some other existing APIs to begin with? omx is a lot more flexible in being able to support different types of hw. vaapi is very much tied to the way Intel's hw works (on both the encode and decode sides), which makes it a poor fit for other hw.
                This is addressed more to the managers, twriter and bridgman. I would have expected some forward thinking here. Even though the VCE open-sourcing project was yet to be started, you likely knew such a unit would be included in the generations being planned at the time, and so would have been expected to send a few emails as vaapi 0.1 first started making waves.

                Mentioning such high-level details would hardly have given away competitive advantage.

                /me ends public flogging for failing to predict the future



                • #58
                  Now I just need OpenCL support to drop fglrx completely with the 7770.

                  After that, I'll be looking towards hUMA.



                  • #59
                    Originally posted by curaga View Post
                    This is addressed more to the managers, twriter and bridgman. I would have expected some forward thinking here. Even though the VCE open-sourcing project was yet to be started, you likely knew such a unit would be included in the generations being planned at the time, and so would have been expected to send a few emails as vaapi 0.1 first started making waves.

                    Mentioning such high-level details would hardly have given away competitive advantage.

                    /me ends public flogging for failing to predict the future
                    Yeah, the timing didn't quite work. The va-api decode work started around the same time we were kicking off the whole open source initiative (when we had no idea how far we would be able to go), and the encode interface for va-api was being worked out when the big news on Phoronix was:

                    [embedded link to a Phoronix news article]

                    I'm not saying we couldn't have looked sufficiently far ahead to anticipate open source VCE support back in early 2009, but I guess I am saying that I couldn't.



                    • #60
                      Hello,

                      I am working on an embedded AMD solution (Kabini) and need to use hardware-accelerated video encoding. So far I have been unable to make it work, but I was using ffmpeg for my tests and I read here that it does not support H.264 HW acceleration. So I am about to test with gstreamer as suggested here, but before I start searching everywhere, I thought I could ask for some guidance! :-)

                      So the question is simple: can someone point me to where to start if I want to use HW-accelerated H.264 encoding on an AMD G-series?

                      For information, the system is Linux (for the tests, we run a plain Ubuntu on the SoC) and we need to grab the X11 screen to an H.264 video file. For now, we have just installed the latest AMD drivers (13.25?).
                      I already saw that there is a gstreamer plugin to grab video from X11, but I don't know how to make it use hardware acceleration.
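
                      To make the question concrete, here is roughly the pipeline I have in mind, sketched in Python with GStreamer. I am assuming the OpenMAX encoder element omxh264enc from gst-omx is the one that reaches VCE on this stack; that element name, the NV12 caps and the output file name are guesses on my side.

                      Code:
                      #!/usr/bin/env python3
                      # Sketch: grab the X11 screen with ximagesrc and mux H.264 into
                      # Matroska. Assumption: omxh264enc (gst-omx) is the element that
                      # reaches VCE here; substitute another H.264 encoder if it is not.
                      import time
                      import gi
                      gi.require_version('Gst', '1.0')
                      from gi.repository import Gst

                      Gst.init(None)
                      pipeline = Gst.parse_launch(
                          "ximagesrc use-damage=false ! videoconvert "
                          "! video/x-raw,format=NV12 "
                          "! omxh264enc ! h264parse ! matroskamux "
                          "! filesink location=capture.mkv"
                      )
                      pipeline.set_state(Gst.State.PLAYING)
                      time.sleep(10)  # record ~10 seconds of the screen

                      # Send EOS so the muxer can write a proper file trailer,
                      # then wait for the pipeline to drain before tearing down.
                      pipeline.send_event(Gst.Event.new_eos())
                      bus = pipeline.get_bus()
                      bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                                             Gst.MessageType.EOS | Gst.MessageType.ERROR)
                      pipeline.set_state(Gst.State.NULL)

                      (gst-inspect-1.0 omxh264enc should at least tell me whether the encoder element exists on the system.)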

                      Any experience with that? Maybe a link to a tutorial or whatever?

                      Best regards,

                      Meigetsu

