
RV600, OpenCL, ffmpeg and blender


  • RV600, OpenCL, ffmpeg and blender

    Hi!

    My head is now close to bursting with API acronym overload, so I was hoping I could just describe to you guys what I want to see my Linux box do, and how far off we are from seeing it happen.

    My laptop has a Mobility Radeon HD 2400, so of course I was very excited to see the recent open X driver code drop as well as the announcement of OpenCL, as I'd really like to see ffmpeg and/or mencoder harness my GPU to greatly accelerate video encoding and rendering.

    I understand the drivers are dev-only at the moment and we'll need to wait until at least the next big xorg release before mortals can get open source RV600 accel without rolling their own and hoping for the best, but how does OpenCL fit into this? I presume OpenCL can work independently of X, seeing as it isn't just for accelerating graphics, so am I right in thinking that someone first needs to write an OpenCL driver for the RV600? Is this already being worked on?

    Then of course someone needs to update ffmpeg so it can take advantage of OpenCL. Has this work already begun, or are there no finished OpenCL drivers yet?

    Then what about Blender? Is Blender just going to run straight on top of Gallium, or Gallium plus OpenCL, or something else? I know Gallium isn't finished yet, so I would imagine no work has been done to get Blender playing nice with it yet, right?
    Last edited by danboid; 01-02-2009, 11:46 AM.

  • #2
    OK, let's see. Not very simple questions for a first post!

    OpenCL can't be completely independent of graphics unless you plan to run with a separate GPU dedicated to compute work. This might actually be an easier way to get OpenCL going quickly but you lose some things in the process (see below).

    If OpenCL is running on the same GPU as graphics, then you need a way to ensure that the two drivers (compute and graphics) don't both access the chip at the same time and lock it up or mess up what the other driver is doing. That means the OpenCL driver will need to use the drm (kernel module) at minimum.

    The next point is that one of the cool things about OpenCL from a programmer perspective is that it can interoperate with OpenGL, ie you can share data between compute and graphics to allow efficient visualization of the compute results. This is great for the app developers but a big pain for the driver developers, who actually have to *implement* OpenCL in such a way that it can share with OpenGL. For the open source world, this means tying into Mesa, I expect.

    So... at first glance OpenCL can be independent of graphics, but the closer you look the less independence there seems to be.
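
    To make the interop point above concrete, here's a rough sketch (illustrative only, not code from any real driver or app) of what CL/GL buffer sharing looks like through the cl_khr_gl_sharing extension. It assumes an OpenCL context that was created against the current GL context, plus an existing command queue, kernel and GL buffer object; the helper name is made up.

        /* Hypothetical helper: run an OpenCL kernel directly on a GL buffer
         * object, so compute results can be drawn without a copy. */
        #include <CL/cl.h>
        #include <CL/cl_gl.h>
        #include <GL/gl.h>
        #include <stdio.h>

        int run_kernel_on_gl_buffer(cl_context ctx, cl_command_queue queue,
                                    cl_kernel kernel, GLuint gl_buffer,
                                    size_t work_items)
        {
            cl_int err;

            /* Create a CL view of the GL buffer; no copy is made, both APIs
             * see the same memory on the GPU. */
            cl_mem shared = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE,
                                                 gl_buffer, &err);
            if (err != CL_SUCCESS) {
                fprintf(stderr, "clCreateFromGLBuffer failed: %d\n", err);
                return -1;
            }

            /* GL must be finished with the buffer before CL touches it. */
            glFinish();
            err = clEnqueueAcquireGLObjects(queue, 1, &shared, 0, NULL, NULL);
            if (err != CL_SUCCESS)
                goto out;

            /* Run the compute kernel on the shared buffer. */
            err = clSetKernelArg(kernel, 0, sizeof(cl_mem), &shared);
            if (err == CL_SUCCESS)
                err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                             &work_items, NULL, 0, NULL, NULL);

            /* Hand the buffer back to GL so it can be drawn from. */
            clEnqueueReleaseGLObjects(queue, 1, &shared, 0, NULL, NULL);
            clFinish(queue);

        out:
            clReleaseMemObject(shared);
            return err == CL_SUCCESS ? 0 : -1;
        }

    All of that sharing and synchronisation has to be coordinated with whatever already owns the GPU on the graphics side, which is why it drags Mesa and the drm into the picture.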

    Bottom line is that practically speaking an OpenCL implementation is likely to either be an extension to an existing OpenGL implementation or (more likely) a separate code base sharing some code and structures with the OpenGL tree. I know we have been working on adding OpenCL to the Catalyst driver stack for quite a while (since that gives us a solution for a wide range of OSes), but I am not aware of anyone working on an open source OpenCL implementation yet.

    For things like video processing (ffmpeg) I think coding over Gallium3D would probably make more sense than coding over OpenCL for the long term; it'll be interesting to see which shows up first: proprietary OpenCL drivers, or Gallium3D getting merged into the mainstream open source driver stack. I have been meaning to check whether the current Gallium3D API includes support for resolving conflicts between multiple clients trying to draw through it at the same time (eg. video and gl at the same time); maybe I'll remember today.

    Blender, on the other hand, may make more sense to run over OpenCL than Gallium, but I don't know enough about the internals to really say. Having the code be open-source is nice, but it still doesn't help with finding time to actually *look* at it all.
    Last edited by bridgman; 01-02-2009, 10:49 AM.



    • #3
      Thanks for the reply bridgman!

      I didn't know Gallium was also going to be usable for accelerating video encoding with ffmpeg etc.

      So if/when Gallium support makes it into xorg, compiz(++) and ffmpeg, will I be able, for example, to run a composited desktop (xorg/compiz accelerated by Gallium) and also encode some video using ffmpeg/Gallium on the same RV600 GPU?

      What is the state of Gallium now, especially with respect to running on RV6/700? Do you think it would be unreasonable to say we might have a Gallium stack in time for Ubuntu 9.10 (ie a finished Gallium and a working xorg with Gallium support, not ffmpeg Gallium too)?



      • #4
        I think we all see 2009 as the year Gallium3D becomes part of the mainstream driver stack. Nobody is quite sure how smoothly the transition will go, although the folks working on Gallium3D seem to be more confident than the rest of us, which is a good sign.

        The state of Gallium3D right now is (roughly):

        - framework is integrated into Mesa in a branch, not yet merged to Mesa master (IT EXISTS)

        - most testing seems to have been done with the softpipe (CPU) driver, not sure about the implementation on Intel 915. There is a Cell implementation which apparently works pretty well, but I don't think that one is in a public repository

        - the Nouveau developers are working on the low level code for older NVidia chips and have some working code; glisse started writing some support for ATI R300 and MostAwesomeDude is working on LLVM integration

        - we aren't doing anything in house with Gallium3D yet but will probably switch over to it once we have basic 3D support going on 6xx/7xx using a copy of the "classic Mesa" HW driver code for 3xx-5xx. My guess is that Gallium3D will come up first on 3xx-5xx followed a month or two later by 6xx/7xx.

        Will it all be running this year? My guess is yes. Not so sure it will make it into a major distro this year, although I think that's what everyone would like to see.

        The main challenges are:

        1. A lot of other things, particularly memory management and kernel modesetting, are also in the pipe for this year. Memory management is, practically speaking, a pre-requisite for Gallium3D so a lot of things have to come together quickly.

        2. I don't know if there is a plan worked out for merging Gallium3D into Mesa master. Mesa-over-Gallium3D may end up as a separate project alongside mesa/mesa, or Gallium3D may end up as another driver sub-tree alongside all of the other driver options in the Mesa tree today.

        The right solution may be a third option, where Gallium3D lives separately from Mesa and Mesa is changed to call the separate Gallium3D code; it all depends on how the devs see multiple instances of Gallium3D being used at the same time, ie is it a library linked into multiple clients, or a standalone thing handling multiple clients?

        So, I dunno... maybe a 50/50 chance?
        Last edited by bridgman; 01-02-2009, 12:19 PM.



        • #5
          multitasking?

          Hi bridgman!

          Thanks for your great replies, you've really cleared up a lot of questions I had about gallium and opencl, 'cept one.

          multitasking!

          If compiz, quake3 and ffmpeg all get ported to Gallium, will I be able to run all three at once, all taking advantage of my RV600? Obviously ffmpeg is going to take a significant performance hit if I fire up a game (or two?) such as q3, but is this type of multitasking a planned feature?

          Wouldn't that be cool though?



          • #6
            Originally posted by bridgman:
            For things like video processing (ffmpeg) I think coding over Gallium3D would probably make more sense than coding over OpenCL for the long term; it'll be interesting to see which shows up first: proprietary OpenCL drivers, or Gallium3D getting merged into the mainstream open source driver stack. I have been meaning to check whether the current Gallium3D API includes support for resolving conflicts between multiple clients trying to draw through it at the same time (eg. video and gl at the same time); maybe I'll remember today.
            This doesn't make a lot of sense to me. I can see playback going straight to Gallium, but bypassing OpenCL for encoding purposes (filtering, effect transitions, pre-processing etc.) doesn't seem to make a whole lot of sense when OpenCL is not limited to GPUs but can also be used with DSPs and CPUs.



            • #7
              It doesn't make sense to me to use G3D instead of OCL because I would like to have my programs run on Windows/OSX/Linux from the same source. Let's not create another ALSA here.



              • #8
                Originally posted by RealNC:
                It doesn't make sense to me to use G3D instead of OCL because I would like to have my programs run on Windows/OSX/Linux from the same source. Let's not create another ALSA here.

                That is another big reason why it would not make sense. You would essentially be limiting GPU processing in ffmpeg etc. to Linux, which is a backwards solution and exactly the kind of painted-in corner projects are trying to avoid nowadays.



                • #9
                  I was actually thinking about playback (ie decoding) rather than encoding when I wrote that (ie "coding" the ffmpeg support). For rendering Gallium3D is attractive because the decode processing can easily be integrated with the render processing (done by Xv today) to draw directly to the screen.

                  For encoding or transcoding (where display is not an integral part of the task) I agree that OpenCL would probably be the way to go.
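
                  As a toy illustration (my own sketch, nothing from ffmpeg), the filter stage of such an encode/transcode path boils down to OpenCL C kernels along these lines. The kernel name and parameters are made up; it just does a brightness/contrast pass over an 8-bit luma plane:

                      /* Illustrative per-pixel pre-processing kernel: scale
                       * contrast around mid-grey and add a brightness offset,
                       * saturating back into the 0..255 range. */
                      __kernel void adjust_luma(__global const uchar *src,
                                                __global uchar *dst,
                                                const int width,
                                                const int height,
                                                const float contrast,   /* e.g. 1.1f */
                                                const float brightness) /* e.g. 4.0f */
                      {
                          int x = get_global_id(0);
                          int y = get_global_id(1);
                          if (x >= width || y >= height)
                              return;

                          int idx = y * width + x;
                          float v = ((float)src[idx] - 128.0f) * contrast
                                    + 128.0f + brightness;
                          dst[idx] = convert_uchar_sat(v);
                      }

                  The host side would enqueue it with a 2D global work size of (width, height), one work-item per pixel, and feed the result to the encoder without the display stack being involved at all.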



                  • #10
                    Originally posted by bridgman:
                    I was actually thinking about playback (ie decoding) rather than encoding when I wrote that (ie "coding" the ffmpeg support). For rendering Gallium3D is attractive because the decode processing can easily be integrated with the render processing (done by Xv today) to draw directly to the screen.

                    For encoding or transcoding (where display is not an integral part of the task) I agree that OpenCL would probably be the way to go.

                    Even then, utilizing OpenCL on decoding would allow for useful features such as cleaning up the source through filtering etc. before firing it off for accelerated playback, something Gallium could not handle alone. Gallium in conjunction with OpenCL would allow for a far more flexible solution. This would allow processing such as what MotionDSP is trying to do with Carmel:

                    http://www.motiondsp.com/products/Carmel
                    Last edited by deanjo; 01-02-2009, 04:52 PM.



                    • #11
                      Not sure I agree there. If Gallium were present it would do filtering just fine; you would have to write your filters in TGSI rather than OpenCL but if you're running natively over Gallium then you're already using TGSI.

                      A bigger issue is that non-Linux platforms will typically already have a fair amount of video filtering and processing happening in the playback stack (eg DXVA). That gets into difficult tradeoffs; Gallium3D is potentially portable across OSes but OpenCL *will* be ported across OSes so you're likely to be trading off better portability (OpenCL) against better integration and efficiency (Gallium3D or something else OS-specific).

                      I'm glad I don't have to choose.



                      • #12
                        Originally posted by bridgman:
                        Not sure I agree there. If Gallium were present it would do filtering just fine; you would have to write your filters in TGSI rather than OpenCL but if you're running natively over Gallium then you're already using TGSI.

                        A bigger issue is that non-Linux platforms will typically already have a fair amount of video filtering and processing happening in the playback stack (eg DXVA). That gets into difficult tradeoffs; Gallium3D is potentially portable across OSes but OpenCL *will* be ported across OSes so you're likely to be trading off better portability (OpenCL) against better integration and efficiency (Gallium3D or something else OS-specific).

                        I'm glad I don't have to choose.
                        Real-time filtering and image enhancing, video stabilization, etc., for jobs such as improving Flash video, would be hard to do with TGSI. Using OpenCL would allow such processing to easily be done cross-platform, where such abilities are not even present in DXVA etc. You could also make the code pull double duty instead of coding one method for encoding and another for decoding.



                        • #13
                          We are getting further and further from the original question here. The OP asked about the kinds of things ffmpeg did today. If you're talking about the kind of generic processing which is not likely to already be implemented in the OS native video stack (and most of your examples fall into that category) then I agree OpenCL is a better choice.

                          If you're trying to get me to argue that there is no place for OpenCL, you're talking to the wrong guy.



                          • #14
                            Originally posted by bridgman:
                            We are getting further and further from the original question here. The OP asked about the kinds of things ffmpeg did today. If you're talking about the kind of generic processing which is not likely to already be implemented in the OS native video stack (and most of your examples fall into that category) then I agree OpenCL is a better choice.

                            If you're trying to get me to argue that there is no place for OpenCL, you're talking to the wrong guy.
                            Oh, I'm not arguing that you're saying there is no place for OpenCL. The original post did ask about encoding, though. I would just hate to see projects such as ffmpeg paint themselves into a corner by not making it as flexible as it could be. OpenCL can make a huge improvement to the playback experience if pointed in the right direction. There is nothing limiting it to just the video part of playback either. OpenCL could be used for realtime audio DSP processing as well (such as realtime upmixing, or stream-processing the signal into AC-3 or DTS while doing psychoacoustic enhancements using impulse files, for example).
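
                            On the audio side, here's a deliberately naive sketch (made-up names, and a real upmixer would do far more than this) of what a 2.0-to-4.0 upmix kernel could look like in OpenCL C, one work-item per sample frame:

                                /* Toy upmix: pass the fronts through and feed the
                                 * rears an attenuated copy. Interleaved stereo
                                 * float samples in, interleaved quad out. */
                                __kernel void upmix_2_to_4(__global const float2 *stereo_in,
                                                           __global float4 *quad_out,
                                                           const float rear_gain) /* e.g. 0.5f */
                                {
                                    int i = get_global_id(0);
                                    float2 fr = stereo_in[i];   /* (left, right) */
                                    quad_out[i] = (float4)(fr.x, fr.y,
                                                           fr.x * rear_gain,
                                                           fr.y * rear_gain);
                                }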



                            • #15
                              What I want is GPU-accelerated Fffmpeg (extra capitalised F intentional), so it looks like what I'm waiting for first is a working FOSS implementation of OpenCL.

                              The logical birthplace of such code would be within Mesa, but I'd have presumed that if they had announced such a project then someone here would know about it, as I'm presuming at least one Mesa dev frequents these forums.

                              Surely Apple have a working implementation of OpenCL (for OS X) already? Is that under NDA or a closed license?

                              EDIT

                              Formats I'd be looking to encode would include DV, H.264, Dirac (Schroedinger), MPEG-2 and MPEG-4.
                              Last edited by danboid; 01-02-2009, 06:43 PM.

