
Thread: There May Still Be Hope For R600g Supporting XvMC, VDPAU

  1. #31
    Join Date
    Dec 2009
    Posts
    338

    Default

    Quote Originally Posted by tball View Post
Anyone know where to contact König? I think it would be sad to create a lot of duplicate work. We are currently three developers looking into a shader-based decoder via gallium3d.
    Could you please elaborate this a little more? I'm very interested!
    What formats do you plan to support?

  2. #32
    Join Date
    Jan 2009
    Posts
    515

    Default

    Quote Originally Posted by HokTar View Post
    Could you please elaborate this a little more? I'm very interested!
    What formats do you plan to support?
    We have worked on a vdpau state_tracker, but are considering moving to vaapi instead due to complications with implementing a brand new bitstream decoder.

    So it will go something like this:
    libva -> va-gallium-state-tracker -> video-pipe -> hw-drivers

Our goal right now is a working mpeg2 decoder, with the entrypoint right before IDCT.

    Currently the video-pipe already partially supports MC, due to some work done by Younen.

Jrch2k8 is working on the IDCT part and already has a splendidly fast SSE2 implementation. AFAIK, he is working on its TGSI equivalent right now. (Correct me if I am wrong, jrch2k8.)

I am porting the current work done on the vdpau state_tracker to a vaapi state_tracker.

  3. #33
    Join Date
    Dec 2009
    Posts
    338

    Default

    Quote Originally Posted by tball View Post
    We have worked on a vdpau state_tracker, but are considering moving to vaapi instead due to complications with implementing a brand new bitstream decoder.

    So it will go something like this:
    libva -> va-gallium-state-tracker -> video-pipe -> hw-drivers

Our goal right now is a working mpeg2 decoder, with the entrypoint right before IDCT.

    Currently the video-pipe already partially supports MC, due to some work done by Younen.

Jrch2k8 is working on the IDCT part and already has a splendidly fast SSE2 implementation. AFAIK, he is working on its TGSI equivalent right now. (Correct me if I am wrong, jrch2k8.)

I am porting the current work done on the vdpau state_tracker to a vaapi state_tracker.
    Wow! Fantastic!

The worst kind of question: any guesstimates for the project's timeframe to reach, e.g., beta quality? Or are you still in the really early days?

So what does each part do? I thought the state tracker would convert, e.g., the ffmpeg va-api calls to TGSI. If those are supported by the hw driver -> it works.
What does video-pipe do?

If you implement va-api, then how much additional work is necessary to support different formats? That is, how far are you from h264 acceleration (assuming the current work is done)?

  4. #34
    Join Date
    Jan 2009
    Posts
    515

    Default

    Quote Originally Posted by HokTar View Post
The worst kind of question: any guesstimates for the project's timeframe to reach, e.g., beta quality? Or are you still in the really early days?
We are still in a very early state. I can't say anything about when we will reach a usable state.

So what does each part do? I thought the state tracker would convert, e.g., the ffmpeg va-api calls to TGSI. If those are supported by the hw driver -> it works.
What does video-pipe do?
Well, besides the state_tracker, a video-pipe has to be developed for every driver that should be supported. Right now a video-pipe exists for the nouveau and softpipe drivers. I don't know how good they are.

If you implement va-api, then how much additional work is necessary to support different formats? That is, how far are you from h264 acceleration (assuming the current work is done)?
Well, the video-pipe within the softpipe and nouveau drivers only partially supports MC and compositing. So the hard work will be implementing a good framework in the video-pipe, so that our IDCT and MC implementations can also be used for h264.

  5. #35
    Join Date
    Jan 2009
    Posts
    515

    Default

@HokTar
BTW, if you are interested in helping us out, please PM me your email.

  6. #36
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

tball: search for "deathsimple koenig" and you'll get an email address... I'd put it here but don't want to make his spam situation any worse.

  7. #37
    Join Date
    Oct 2007
    Location
    Toronto-ish
    Posts
    7,386

    Default

    Quote Originally Posted by gbeauche View Post
No, VDPAU is VLD only. VDPAU might evolve into something more detailed like VA-API (slice-level bitstream info), for better error recovery and checks, but nothing is set in stone yet. And nothing other than VLD anyway.
    Yep, you're right. I thought I remembered seeing an entry point to submit YCbCr surfaces for an Xv-like interface but it's certainly not there now - just compressed video and RGB.

    Quote Originally Posted by HokTar View Post
    @ bridgman & gbeauche
So what is the conclusion? Would va-api be a better choice as our "shiny new state tracker"? Or are the differences minor, so it could be the individual developer's call?
    I don't think it really matters, since even if a low-level VA-API entry point was used it wouldn't be the one apps use today anyways. The only reasons for defining an intermediate API at all are (a) to insulate the bitstream decoder code from future changes in Gallium3D and (b) to allow the same low-level code to be used across multiple bitstream decoders if desired.

    In principle you could dump the entire bitstream decoder into the state tracker and use a higher level API but (a) you run into license problems -- most decoders are (L)GPL while mesa is X11 -- and (b) you are basically forking the code so you have to maintain the copies separately. Adding XvMC-level API calls to existing bitstream decoders seems like a more manageable approach.

    BTW I am of two minds regarding the implementation of IDCT in the GPU portion of the code -- the "IDCT" step for newer decoders is relatively cheap on CPU because the algorithm was tweaked to allow shifts & adds rather than multiplies, but it will eat up a bunch of shader instructions and texture fetches which may be a problem for low end GPUs. The advantage of doing IDCT on GPU (IIRC from 6 months ago) is that it makes it easier to do inter-prediction on GPU, although I have to admit I don't remember why I came away with that conclusion.

The reality is that IDCT is going to have to be implemented anyway, if only to determine whether running it on the GPU was a good idea or not.

  8. #38
    Join Date
    Jan 2007
    Posts
    459

    Default

    Quote Originally Posted by tball View Post
    We have worked on a vdpau state_tracker, but are considering moving to vaapi instead due to complications with implementing a brand new bitstream decoder.

    So it will go something like this:
    libva -> va-gallium-state-tracker -> video-pipe -> hw-drivers

Our goal right now is a working mpeg2 decoder, with the entrypoint right before IDCT.

    Currently the video-pipe already partially supports MC, due to some work done by Younen.

Jrch2k8 is working on the IDCT part and already has a splendidly fast SSE2 implementation. AFAIK, he is working on its TGSI equivalent right now. (Correct me if I am wrong, jrch2k8.)

I am porting the current work done on the vdpau state_tracker to a vaapi state_tracker.
So hang on, keeping in mind you also say:
"We are still in a very early state. I can't say anything about when we will reach a usable state."

So, if I'm getting this right, Jrch2k8 has written or ported an existing IDCT SSE2 implementation, and he likes and understands x86 assembly?

I also get the impression (from reading some post or other of his here) that he knows TGSI assembly and will be writing that too?

So you're at a very early stage and basically want to take all the other missing parts (MC, CABAC, CAVLC, etc.) for your decoder from elsewhere, or write new wrapper code where needed, yes?

Hmm. Yet it appears that Jrch2k8, you and whoever else have not actually shown up and asked questions where you are most likely to get in-depth help, feedback and actual working code to look at and learn from:

the codebases and IRC dev channels for x264 and FFmpeg, or Doom10.

Dark Shikari and his team are VERY helpful (though they will call you on it if you claim to have understood something and it turns out you haven't) and encourage all new devs to ask video encode/decode and related code questions as soon as possible, preferably before they make too many beginner mistakes or waste time working out problems that don't exist. They don't care who you are, as a high-ranking Intel guy found out once he got on IRC and finally understood their working practices.

For instance, see http://doom10.org/index.php?topic=658.0
http://doom10.org/index.php?topic=571.0

For the x264 OpenCL code that some non-x264 devs patched, see http://www.gpucomputing.net/?q=node/1143
http://li5.ziti.uni-heidelberg.de/x264gpu/ (no one seems to know if it even runs, but it might give you hints for your decode project and is worth a look; apparently they never talked to Dark Shikari etc. to get it submitted, and submitting is easy).

There's also http://x264dev.multimedia.cx/archives/157 . Search on "x264 IDCT" or whatever video-related term you like, and chances are Dark Shikari or a related x264 dev has written an in-depth blog post about it.

For instance, the x264 devs helped make the FFmpeg VP8 decoder the world's fastest (http://x264dev.multimedia.cx/), adjusting and expanding the x264 code framework to help. Some of the x264 devs are right now working on x262, an MPEG-2 encoder built on that same x264 framework. Remember also that FFmpeg is their preferred input/decode code base and they hack on that too; Dark Shikari etc. have write access to the repo and post patches and new code there.

All told, you, and indeed any devs wanting to know the detailed inner workings of video, are well advised to go over to freenode.net and actually talk to Dark Shikari and the other x264/FFmpeg devs in #x264dev and #ffmpeg-devel, as that is where actual development happens.

You can even use any part of their code and get algorithm advice and details...

  9. #39
    Join Date
    Jan 2009
    Posts
    515

    Default

    Quote Originally Posted by bridgman View Post
    tball; search for "deathsimple koenig" and you'll get an email address... I'd put it here but don't want to make his spam situation any worse
    Thx. Found him :-)

  10. #40
    Join Date
    Jan 2009
    Posts
    515

    Default

    Quote Originally Posted by popper View Post
All told, you, and indeed any devs wanting to know the detailed inner workings of video, are well advised to go over to freenode.net and actually talk to Dark Shikari and the other x264/FFmpeg devs in #x264dev and #ffmpeg-devel, as that is where actual development happens. [...]
Thx for your advice. I will look into it.
But as I have stated before, we haven't even reached h264 yet. The first milestone is to get an mpeg2 decoder up and running, with e.g. a vaapi state_tracker.
