AMD Open-Sources VCE Video Encode Engine Code


  • #41
    How about multi-GPU setups, like an A10-7850K with an R9-series GPU? Would it be able to make use of both, or would just one be used, and if so, which?



    • #42
      Originally posted by GreatEmerald View Post
      It would be nice if this was available in FFmpeg. (Does FFmpeg support VA-API encoding, by the way?)
      ffmpeg has no H.264 encoder of its own. They use libx264, and that project doesn't want hardware encoders; the result is not as nice as what the CPU generates.
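
      For reference, a plain software encode through libx264 via ffmpeg looks something like this (the file names and quality settings are just placeholders):

        ffmpeg -i input.mkv -c:v libx264 -preset medium -crf 23 output.mkv

      Since x264 itself stays CPU-only, any VCE path would have to be wired in as a separate encoder rather than through libx264.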



      • #43
        Originally posted by Kivada View Post
        How about multi-GPU setups, like an A10-7850K with an R9-series GPU? Would it be able to make use of both, or would just one be used, and if so, which?
        I haven't tested that yet, but in theory it should work fine to specify the GPU just by setting DRI_PRIME=x.
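
        A minimal sketch of what that would look like on a PRIME setup (glxinfo comes from mesa-utils; the encoder command below is only a hypothetical placeholder):

          # Check which GPU the default and the offloaded contexts map to
          glxinfo | grep "OpenGL renderer"
          DRI_PRIME=1 glxinfo | grep "OpenGL renderer"

          # Run the encoding application on the secondary GPU instead of the default one
          DRI_PRIME=1 some_encoder_app input.yuv output.264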



        • #44
          Originally posted by agd5f View Post
          Is there some specific app you have in mind? There doesn't seem to be much that supports hw encoding in general on Linux (either API). Most people use gstreamer for encoding, which supports both omx and vaapi. Additionally, omx has some nice features for implementing more efficient transcoding.
          I use mencoder (which is used by the major GUIs like handbrake). Mplayer supports vaapi for playback, so adding it for encode ought to be easier than adding a whole new api.

          (seriously, people use gstreamer for encoding? It is unusable for playback due to being 3-10x slower than mplayer derivatives...)
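
          Regarding the GStreamer route mentioned in the quote, a rough sketch of a hardware-encode pipeline would look something like the following; the encoder element names (vaapiencode_h264 from gstreamer-vaapi, renamed vaapih264enc in later releases, and omxh264enc from gst-omx) depend on which plugins and versions are installed, so treat them as assumptions:

            # VA-API encoder path (gstreamer-vaapi)
            gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,format=I420,width=1280,height=720 ! vaapiencode_h264 ! h264parse ! mp4mux ! filesink location=test-vaapi.mp4

            # OpenMAX encoder path (gst-omx)
            gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,format=I420,width=1280,height=720 ! omxh264enc ! h264parse ! mp4mux ! filesink location=test-omx.mp4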



          • #45
            Originally posted by curaga View Post
            Mplayer supports vaapi for playback, so adding it for encode ought to be easier than adding a whole new api.
            Sorry, I forgot to explain that. The encoding part of VA-API works with slice-level data, but the output of VCE is an elementary stream.

            So to support VA-API the software stack would look something like this:
            1. VCE encodes the frame to an elementary stream
            2. The driver decodes the elementary stream back to slice-level data
            3. VA-API passes the slice-level data to the application
            4. The application encodes the slice-level data back into an elementary stream

            That makes no sense in terms of both CPU and implementation overhead, and it is the main reason why we dropped VA-API support.



            • #46
              Originally posted by Nille View Post
              ffmpeg has no H.264 encoder of its own. They use libx264
              Well yes, that's what I meant.

              Originally posted by Nille View Post
              and that project doesn't want hardware encoders; the result is not as nice as what the CPU generates.
              Oh... Well, that's too bad. Hardware acceleration may be limited, but it's fast, and it would be nice to have that option.



              • #47
                Originally posted by Redi44 View Post
                I'm glad that I chose the AMD HW in my two recent builds over the Intel HW
                Now if only nVidia could do that...



                • #48
                  Originally posted by GreatEmerald View Post
                  Oh... Well, that's too bad. Hardware acceleration may be limited, but it's fast, and it would be nice to have that option.
                  The biggest hope for faster encoding with x264 is HSA. The first tries with OpenCL (only for the lookahead) are already done.
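
                  For anyone who wants to try it, x264 builds with OpenCL support expose this through the --opencl switch, which only offloads the lookahead stage (the input file here is just a placeholder):

                    x264 --opencl --preset medium -o out.264 input.y4m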



                  • #49
                    Originally posted by Deathsimple View Post
                    Sorry, I forgot to explain that. The encoding part of VA-API works with slice-level data, but the output of VCE is an elementary stream.
                    Oh, that makes sense, though it does raise some questions:
                    - doesn't the mentioned flexibility include slice output?
                    - if it was known from the start that AMD's solution would have this output, why didn't they take part in the VA-API design to get it included there?



                    • #50
                      Originally posted by curaga View Post
                      Oh, that makes sense, though it does raise some questions:
                      - doesn't the mentioned flexibility include slice output?
                      - if it was known from the start that AMD's solution would have this output, why didn't they take part in the VA-API design to get it included there?
                      The vaapi encode interface was designed before we started the open-source VCE project. Why didn't Intel use omx or vdpau or some other existing API to begin with? omx is a lot more flexible in being able to support different types of hw. vaapi is very much tied to the way Intel's hw works (on both the encode and decode sides), which makes it a poor fit for other hw.

