AMD Releases Open-Source R600/700 3D Code


  • RobbieAB
    replied
    Originally posted by bridgman View Post
    I don't think Via extended the actual XvMC API to support H.264 - that would have been a much larger task. AFAIK they just added a "slice level" API and used that to feed into the slice level hardware on the chip, bypassing all the places where the XvMC details didn't match the H.264 details. Since a lot of our GPUs don't have slice level hardware, and at the moment we do not have plans to open up the slice level hardware on the chips which *do* have it, that approach is probably not feasible unless we implement the slice level support in software inside the driver (which, I guess, is an option). I have only skimmed the code so far; I need to look through it in more detail. Hey, that's what weekends are for, right?
    IF the Via XvMC is "slice level" based, would using a slice-level approach in the AMD drivers allow cross-driver code reuse? After all, if one of the problems is a lack of developers, maximising the use of existing code is a good idea...



  • bridgman
    replied
    Originally posted by highlandsun View Post
    Still, it sounds like you're equating "open source" == "easy to do" and "proprietary" == "more sophisticated". I should point out that this is an old-fashioned mentality; for example, the highest-performance directory software in the world is open source (OpenLDAP); it's generally 5-10x faster than any/all of the proprietary directory software packages out there, and it implements the specs correctly where all the proprietary vendors cut corners. Sophistication and performance don't require closed-source proprietary developers. They don't even require the highest-paid development teams. I did the profiling and refactoring of OpenLDAP's code simply because I saw it needed to be done, not because anybody paid me to do it...
    If you read enough of my posts you'll see that I don't believe in that line of thinking (proprietary automatically = more sophisticated) at all. The problem here is the sheer size of the work relative to the size of the development community.

    Right now getting drivers with features and performance comparable to the proprietary drivers on other OSes takes more development work than the community can do on its own *or* than HW vendors can fund based on the size of the Linux client market. That means the HW vendors will need to share the costs (and the code) across multiple OSes, and so far the business realities of those OTHER OSes dictate that the resulting code remain closed on Linux as well.

    In that context closed source drivers offer a way to tap into more development resources than we could get access to otherwise; nothing more.

    Originally posted by highlandsun View Post
    It annoys me that no one has jumped in here yet re: XvMC, and I regret that I don't have the time to do it myself.
    Yeah, that is my whole argument in a nutshell; the expectations and demands of the Linux market are growing faster than the development community or the market share (and, of course, market share drives the funding which HW vendors and commercial distros can put in).

    One of our hopes is that by providing sample code and by continuing to work on drivers for older GPUs we will make it easier for new developers to get started and allow more people to participate in graphics driver development than we have today. The experienced volunteer X developers we have today are extremely good and can take on a project like this on their own, but we don't have anywhere near enough of them.

    Originally posted by highlandsun View Post
    And it still sounds like XvMC is worth investing in, given that Via already extended their implementation to work with H.264 etc; it was obviously the path that gave them the most bang (software compatibility) for their development buck. But if something like VAAPI is suddenly getting adopted, as it now appears to be, then that'd be fine instead.
    I don't think the developers extended the detailed XvMC API to support H.264 on Via HW - that would have been a much larger task. AFAIK they just added a "slice level" API and used that to feed into the slice level hardware on specific GPUs, bypassing all the places where the XvMC details didn't match the H.264 details. Since a lot of our GPUs don't have slice level hardware, and at the moment we do not have plans to open up the slice level hardware on the chips which *do* have it, that approach is probably not feasible unless we implement the slice level support in software inside the driver (which, I guess, is an option).

    I have only skimmed the code so far; I need to look through it in more detail. Hey, that's what weekends are for, right?
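To make the distinction concrete, here is a minimal C sketch of the two models. Every struct and function name here is hypothetical (this is not the XvMC API or any AMD driver interface), and the slice check only inspects the first byte of an H.264 NAL unit:

```c
#include <stddef.h>
#include <stdint.h>

/* Classic XvMC works at the macroblock level: the client hands the
 * driver pre-parsed motion vectors and iDCT coefficients per block.
 * H.264 details (CABAC entropy coding, in-loop deblocking, MBAFF)
 * don't fit that model. */
struct mb_desc {
    int mb_x, mb_y;
    int16_t mv_fwd[2];      /* forward motion vector */
    int16_t coeffs[64];     /* iDCT block coefficients */
};

/* A "slice level" API instead passes a whole coded slice through
 * unparsed, and lets dedicated hardware (or a software fallback
 * inside the driver) do the entropy decode and reconstruction. */
struct slice_buf {
    const uint8_t *data;
    size_t len;
};

/* Hypothetical driver entry point: accept a slice if its first byte
 * looks like an H.264 NAL unit header (forbidden_zero_bit clear),
 * and report the nal_unit_type on success, -1 on rejection. */
int submit_slice(const struct slice_buf *s)
{
    if (s == NULL || s->data == NULL || s->len < 1)
        return -1;
    uint8_t nal = s->data[0];
    if (nal & 0x80)               /* forbidden_zero_bit must be 0 */
        return -1;
    return nal & 0x1F;            /* nal_unit_type, e.g. 5 = IDR slice */
}
```

With a typical IDR slice header byte of 0x65, submit_slice() reports nal_unit_type 5; the point is that the driver sees an opaque coded slice, not pre-chewed macroblock data.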
    Last edited by bridgman; 01-04-2009, 06:49 AM.



  • rbmorse
    replied
    What bridgman said holds true for video drivers.



  • highlandsun
    replied
    Originally posted by bridgman View Post
    Yeah

    It's probably obvious but when I said "high performance expectations" I was talking about the graphics subsystem where the drivers are hugely complex and proprietary drivers still have an edge.
    Still, it sounds like you're equating "open source" == "easy to do" and "proprietary" == "more sophisticated". I should point out that this is an old-fashioned mentality; for example, the highest-performance directory software in the world is open source (OpenLDAP); it's generally 5-10x faster than any/all of the proprietary directory software packages out there, and it implements the specs correctly where all the proprietary vendors cut corners. Sophistication and performance don't require closed-source proprietary developers. They don't even require the highest-paid development teams. I did the profiling and refactoring of OpenLDAP's code simply because I saw it needed to be done, not because anybody paid me to do it... It annoys me that no one has jumped in here yet re: XvMC, and I regret that I don't have the time to do it myself.

    And it still sounds like XvMC is worth investing in, given that Via already extended their implementation to work with H.264 etc; it was obviously the path that gave them the most bang (software compatibility) for their development buck. But if something like VAAPI is suddenly getting adopted, as it now appears to be, then that'd be fine instead.
    Last edited by highlandsun; 01-03-2009, 11:45 PM.



  • bridgman
    replied
    The library itself is binary so it is presumably covered by the same EULA as the rest of the binary driver. The header files etc... are covered by NVidia's usual open source license, which is a slightly modified MIT license (looks like the one SGI used to use).

    EDIT - popper, did you make another (long) post around 8 AM ? I can't find it anywhere although I have the notification email from it.

    I thought the post was pretty good, although you may still be missing that we are arguing on the same side. I was arguing AGAINST investing in a vanilla MPEG2/XvMC implementation because I felt that time would be better spent working on code which would work with H.264 and VC-1.
    Last edited by bridgman; 01-03-2009, 04:13 PM.



  • rbmorse
    replied
    Originally posted by bridgman View Post
    The announcement talks about it being cross-platform so presumably it may show up on Linux or MacOS at some point.
    I wonder what license nVidia put on them.



  • bridgman
    replied
    Thanks popper, that answered my question. I put up the "getting mad" icon because I kept asking "what is this library you say we need" and you just kept telling me how great it would be for us to do... whatever it was...

    The library appears to be an NVidia-supplied Windows-only binary called NVCUVID, which allows a CUDA program to use the VP2 dedicated video processor (like one piece of our UVD) to do the initial decoding of H.264 or VC-1 frames followed by CUDA code doing motion comp and filtering. The CUDA video program can either display the decoded frames through DX or return the frames to the calling application. The second option is obviously what a ripper/frameserver would want.

    This collection (CUDA run time + NVCUVID binary + CUDA video decode program) is then built into a frame serving program written by Don Graft (neuron2 on the doom9 forum), called either DGAVCDecNV or DGVC1DecNV depending on whether you are using the AVC (H.264) or VC-1 version.

    The announcement talks about it being cross-platform so presumably it may show up on Linux or MacOS at some point.
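The two-stage pipeline and the two output paths described above can be modelled with a toy C sketch. All names here are hypothetical stand-ins (the real NVCUVID interface is a closed Windows-only binary) and the stages are stubs:

```c
#include <stdint.h>

/* How a decoded frame leaves the pipeline: presented through DX,
 * or handed back to the calling application. */
enum output_mode { DISPLAY_FRAME, RETURN_FRAME };

struct frame { uint32_t id; int decoded; int filtered; };

/* Stage 1: dedicated hardware (VP2, analogous to one piece of UVD)
 * does the initial decode of an H.264 or VC-1 frame. */
static void fixed_function_decode(struct frame *f) { f->decoded = 1; }

/* Stage 2: shader (CUDA) code does motion compensation and filtering. */
static void shader_motion_comp(struct frame *f) { f->filtered = 1; }

/* Run both stages; return nonzero when the caller gets the frame
 * back instead of it being displayed. */
int decode_one(struct frame *f, enum output_mode mode)
{
    fixed_function_decode(f);
    shader_motion_comp(f);
    return mode == RETURN_FRAME;
}
```

A player would pass DISPLAY_FRAME and let the frame go out through DX; a ripper/frameserver like DGAVCDecNV would pass RETURN_FRAME so it can hand the decoded picture back to the calling application.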
    Last edited by bridgman; 01-03-2009, 03:33 PM.



  • popper
    replied
    "I'm sure bridgeman is very aware of all the people who want accelerated video decode" i wouldnt make that assumption though, if bridgeman is one of several high ranking people inside ATI/AMD, its more likely he and they are mostly only hearing the commercial vendors wants, and generic 3D linux driver mantra.

    knowing that a high % of people want some form of video processing is one thing,picking the right app (FFMPEG)to support and show off your HW is another matter all together.

    i find the "you want Transcoding" rather telling, as theres far more to video processing than mearly transcoding, and it comes down to prioritys again and how you support your end game.

    the windows AVIVO internal driverset transcoder was a real messy implimentation,the old 2005 stand alone AVIVO was a far betetr idea thats for sure ,and its results are bad when compared to the lowest comparable x264 settings, however, for "quick and dirty" and remember its still only using the CPU, no matter what the ATI/AMD PR try and tell you, it can still take a 90minute HD AVC [email protected] MKV and transcode it to an xbox360 VC1 WMV container in 20minutes on a simple core2 dual 2GHz machine, good enough for your tversity streamed 360.

    with some work, you can also bypass the internal 4 year old childrens GUI with virtually no codec parameta output control, and just use all the ATI AVIVO installed directshow drivers directly to make a working graph, and improve the output etc for quick HD 360 streaming use, but thats OT (but related)here....

    im not asking that he/they write all the optimised SIMD code for FFMPEG inclusion, mearly supply a good fully working example that improves/provides frame accurate stream decoding and MBAFF and PAFF at the very least, OC AVCHD and AVC intra lossless would give many more semi pro people reason to look at ATI/AMD GPUs

    that simple base code wouldnt need some new set in stone linux HD Video API, but would give masses of people around the world a good reason to give ATI a fighting chance in the PR and long term HD video processing mindset.

    im not fully up on OpenCL but i seem to remember that ATI current implimentation isnt working right now,or in the near future, and even then, OpenCL doesnt and isnt intended to cover all the tipical HD video stream processing you might expect today anyway !

    but OC if someone patchs OpenCL ATI into the FFMPEG framwork, and people get to save 10% of their time on a given workflow because of it, then thats a good thing too.
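As a sanity check on the numbers in this post: 90 minutes of source transcoded in 20 minutes of wall time works out to 4.5x realtime, even CPU-only. A trivial C helper (hypothetical, just the arithmetic):

```c
/* Ratio of source running time to transcode wall time, i.e. how many
 * times faster than playback the transcode runs. For the AVIVO claim
 * above: 90 minutes of source in 20 minutes of wall time. */
double realtime_factor(double source_minutes, double wall_minutes)
{
    return source_minutes / wall_minutes;
}
```

realtime_factor(90.0, 20.0) gives 4.5; a further 10% workflow saving from a hypothetical OpenCL-accelerated FFmpeg path would come on top of that baseline.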
    Last edited by popper; 01-03-2009, 07:08 AM.



  • Nille
    replied
    Originally posted by smitty3268 View Post
    I agree that CUDA seems to have a much stronger mind share right now, for whatever reason.
    Nvidia has better marketing. You hear about Nvidia and CUDA everywhere, while ATI's Stream promo software is Folding@home and the crappy AVIVO converter.



  • smitty3268
    replied
    Originally posted by popper View Post
    You consider asking for, and airing with the readers here, the subject of GPU-assisted HD video processing "rambling"? Very odd.
    ...
    I take it then that you don't care about any potential future ATI-assisted GPU code (whatever that might be, if any) being written, and diffs against the current FFmpeg codebase being submitted.
    Not at all, I just found that post in particular hard to read and confusing. Perhaps "rambling" was the wrong description. I'm guessing that maybe you aren't a native English speaker?

    More to the point, I didn't feel it really added much to the discussion. I'm sure bridgman is very aware of all the people who want accelerated video decode, and if he was spending 30 minutes looking through a single post which I figured was just telling him what he already knew, then I figured he should just drop the matter.

    I think adding some OpenCL code to the FFmpeg decoders is a pretty good idea, but I'm not really sure I see it as AMD's responsibility. OpenCL is a standard, and I think the FFmpeg devs probably have a better idea of how to write a codec than AMD does. Certainly if you look at the transcoding tool in the Windows 8.12 drivers, I think you may want to keep AMD as far away as possible and have them stick to the underlying video driver code.
    Last edited by smitty3268; 01-03-2009, 05:38 AM.

