Radeon* video (R600+) ?


  • Radeon* video (R600+) ?

    Lately there has been some news on R600+ progress, also mentioning XV.

    It would be very nice to know more about the current status and plans for video.
    Does the initial XV implementation have bicubic scaling (or better)?
    Are there any limitations on the supported formats? (Hopefully everything just works.)
    Does V-sync etc. work so there won't be tearing?

    Any thoughts on this live-TV / streaming issue:
    Local playback is the easy case, where you can fine-tune the playback rate to match the (primary) display device frame rate (or actually the master clock of the GPU).
    With live TV and (continuous) streaming, the data rate is dictated by the transmitting end, so the receiving end must somehow adapt to this foreign master clock. (In addition there could be some drift and jitter...) Simply skipping or duplicating frames won't do, because that causes motion artefacts (hesitation, judder, etc.).
    There are some experimental patches which try to synchronize the local frame rate to follow the incoming frame rate exactly. (And if there is too much mismatch, then the target is some integer multiple or simple ratio...)
    See: http://linuxtv.org/pipermail/vdr/2008-July/017347.html and http://lowbyte.de/vga-sync-fields/
    (and http://www.vdr-portal.de/board/threa...172#post751172 and http://www.vdr-portal.de/board/threa...threadid=80567 etc.)
    IMHO: It would be best for all to have one properly thought-out solution.
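
    To make the idea concrete, here is a rough sketch in C of what such a sync mechanism could look like. This is purely illustrative and not taken from the linked patches; pick_ratio(), clock_trim_ppm() and the set_pixel_clock_trim() hook mentioned in the comments are invented names, and the constants are guesses:

      #include <math.h>

      /* Pick the nearest simple display:stream ratio (1:1, 2:1, 5:2, 3:1). */
      double pick_ratio(double display_hz, double stream_fps)
      {
          static const double candidates[] = { 1.0, 2.0, 2.5, 3.0 };
          double best = 1.0, best_err = 1e9;
          for (unsigned i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
              double err = fabs(display_hz - stream_fps * candidates[i]);
              if (err < best_err) { best_err = err; best = candidates[i]; }
          }
          return best;
      }

      /* Called once per incoming frame.  phase_err is how far from the ideal
       * presentation vsync the frame arrived, in seconds; returns a pixel
       * clock trim in ppm for a hypothetical set_pixel_clock_trim() hook. */
      double clock_trim_ppm(double phase_err, double frame_period)
      {
          double p = phase_err / frame_period;   /* normalized error, ~-0.5..0.5 */
          double trim = -200.0 * p;              /* simple proportional control  */
          if (trim > 100.0)  trim = 100.0;       /* clamp to a range the monitor */
          if (trim < -100.0) trim = -100.0;      /* should still tolerate        */
          return trim;
      }

    The point being: a small continuous trim of the local display clock can track the sender's clock without ever skipping or duplicating a frame.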

    Any plans for a future roadmap?
    Motion compensation with shaders was mentioned somewhere.
    Are all implemented shader instructions and their special variants documented?
    (Or are there only the 3D essentials?)
    Any plans for deblocking / post-processing filters?
    Proper motion adaptive/aware de-interlacing / framerate conversion?
    Any chance for UVD+... or is it RE then?
    IMHO: I'm not sure how beneficial full bitstream HW acceleration actually is. If the HW is not a fully programmable processor, then there will always be some not-so-standard content which just won't work. Perhaps the high flexibility of (existing) CPU software is more beneficial here.

    Finally, big thanks to all those contributing to the open drivers!

    ;-PWMx

  • #2
    Originally posted by PWMx View Post
    Does the initial XV implementation have bicubic scaling (or better)?
    Are there any limitations on the supported formats? (Hopefully everything just works.)
    Does V-sync etc. work so there won't be tearing?
    First priority is getting basic, working accel code and associated HW info into users' and developers' hands so that subsequent work can happen in public. I would not expect bicubic filtering out of the box unless we could do it natively with hardware (which I think may be possible with 6xx and up). More formats = more time required to implement, so I imagine the first release will only have a small set of formats supported. Since the vsync support is primarily software work, not HW dependent, there has been no attempt to add that into the first release.
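
    For reference, "bicubic filtering" here means weighting the 4x4 nearest source texels per output pixel with cubic weights. A minimal sketch of the per-axis math (a Catmull-Rom variant; the function names are mine, this is not driver code, and the hardware may well use a different kernel):

      /* Catmull-Rom weights for the 4 nearest source samples along one axis;
       * t is the fractional sample position in [0,1).  A bicubic scaler
       * evaluates this for x and for y, i.e. 16 taps per output pixel. */
      static void catmull_rom_weights(float t, float w[4])
      {
          float t2 = t * t, t3 = t2 * t;
          w[0] = 0.5f * (-t3 + 2.0f * t2 - t);
          w[1] = 0.5f * ( 3.0f * t3 - 5.0f * t2 + 2.0f);
          w[2] = 0.5f * (-3.0f * t3 + 4.0f * t2 + t);
          w[3] = 0.5f * ( t3 - t2);
      }

      /* One filtered sample in 1-D from the 4 neighbours around the target. */
      static float cubic_sample(const float src[4], float t)
      {
          float w[4], acc = 0.0f;
          catmull_rom_weights(t, w);
          for (int i = 0; i < 4; i++)
              acc += w[i] * src[i];
          return acc;
      }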

    Originally posted by PWMx View Post
    Any plans for a future roadmap?
    Motion compensation with shaders was mentioned somewhere.
    Are all implemented shader instructions and their special variants documented?
    (Or are there only the 3D essentials?)
    Any plans for deblocking / post-processing filters?
    Proper motion adaptive/aware de-interlacing / framerate conversion?
    All this post-processing stuff is done in shaders today in our other drivers, and I expect we would want to implement it the same way in the open drivers. AFAIK all of the required information for 5xx and below has already been published; it just needs someone with enough time to give it a try. Note that for deblocking I'm talking about deblocking done during post-processing, not the in-loop deblocking filter used in some HD decode protocols.
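
    As an illustration of the kind of per-pixel logic that ends up in a shader, here is a toy motion-adaptive de-interlacing decision in C. The names, the motion measure and the threshold are all invented for this sketch; real implementations are considerably more elaborate:

      #include <stdlib.h>

      /* Toy motion-adaptive choice for one output pixel of the missing line:
       * weave (take the opposite-field pixel) where the picture is static,
       * bob (average the lines above/below) where it is moving. */
      static unsigned char deinterlace_pixel(unsigned char above, /* current field, line above */
                                             unsigned char below, /* current field, line below */
                                             unsigned char weave, /* opposite field, same line */
                                             unsigned char prev)  /* same pixel two fields ago */
      {
          int motion = abs((int)weave - (int)prev);    /* crude motion measure   */
          int bob    = ((int)above + (int)below) / 2;  /* spatial interpolation  */
          if (motion < 10)                             /* threshold is made up   */
              return weave;                            /* static: keep detail    */
          return (unsigned char)bob;                   /* moving: avoid combing  */
      }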

    Originally posted by PWMx View Post
    Any chance for UVD+... or is it RE then?
    IMHO: I'm not sure how beneficial full bitstream HW acceleration actually is. If the HW is not a fully programmable processor, then there will always be some not-so-standard content which just won't work. Perhaps the high flexibility of (existing) CPU software is more beneficial here.
    Here we get into the usual icky tradeoffs. If you make the video processor completely programmable and generic then the power & efficiency benefits start to diminish - in general the dedicated video processors are intended to be extremely efficient on a small number of formats (typically those used for DVD/BD playback).



    • #3
      First, thank you for the very quick reply.

      Originally posted by bridgman View Post
      AFAIK all of the required information for 5xx and below has already been published
      I was referring to R600+. Also, somewhere it was stated that there is some extra functionality / future-proofing included. Will it all be documented, even the odd bits? (You never know when those turn out to be handy...)

      Originally posted by bridgman View Post
      Note that for deblocking I'm talking about deblocking done during post-processing, not the in-loop deblocking filter used in some HD decode protocols.
      Actually I had both in mind. And one could also add deringing etc.
      Not sure how the whole thing would come together with shaders...

      Originally posted by bridgman View Post
      Here we get into the usual icky tradeoffs. If you make the video processor completely programmable and generic then the power & efficiency benefits start to diminish - in general the dedicated video processors are intended to be extremely efficient on a small number of formats (typically those used for DVD/BD playback).
      Indeed. The end result seems to be quite uneven with relatively inflexible HW implementations: some content works very well and efficiently, and some simply not at all. With SW almost anything can be made to work, but not with such high efficiency. (I think the majority of users would prefer the latter.)

      I just hope the published specifications will eventually include all even remotely useful functionality, so that one could implement basically whatever later on. Preferably GPU documentation would be on par with CPU and chipset documentation. (Intel seems to be heading that way. And the internal ATI documentation I happened to see some time ago was not impressive at all; hopefully things are getting better now.)

      ;-PWMx



      • #4
        Originally posted by PWMx View Post
        I was referring to R600+. Also, somewhere it was stated that there is some extra functionality / future-proofing included. Will it all be documented, even the odd bits? (You never know when those turn out to be handy...)
        We haven't finished the initial IP release for R600+ - so far we just have the shader instruction set docs and initial DRM support in freedesktop.org. We're working on the first "full" code & info drop now.

        The extra functionality was not really for future-proofing but to include some potential "future features" in current silicon in case they turned out to be required today. Any documentation for those features would come with the future GPUs where we use the features in production driver code, not in the current GPUs.

        Originally posted by PWMx View Post
        Actually I had both in mind. And one could also add deringing etc. Not sure how the whole thing would come together with shaders...
        All that stuff is really just a function of how the driver stack gets designed and isn't really hardware specific. In general we tend to treat shader code as just another part of the driver code.
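
        To illustrate what "shader code as just another part of the driver code" can mean in practice, here is a toy C sketch where a pre-assembled shader ships as a constant token array inside the driver and gets uploaded at init time. The struct, the token values and upload_shader() are all invented for this example; real R600 instruction encodings are in the published ISA docs:

          #include <stdint.h>
          #include <stddef.h>

          struct gpu { const uint32_t *ps; size_t ps_len; };   /* toy device state */

          /* Stand-in for whatever command-buffer write a real driver would do. */
          static int upload_shader(struct gpu *gpu, const uint32_t *code, size_t len)
          {
              gpu->ps = code;
              gpu->ps_len = len;
              return 0;
          }

          /* The pixel shader ships pre-assembled inside the driver, exactly
           * like any other driver data.  0xDEADBEEF is a placeholder, not a
           * real R600 instruction word. */
          static const uint32_t xv_scaler_ps[] = { 0xDEADBEEF, 0xDEADBEEF };

          static int init_video(struct gpu *gpu)
          {
              return upload_shader(gpu, xv_scaler_ps,
                                   sizeof(xv_scaler_ps) / sizeof(uint32_t));
          }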

        Originally posted by PWMx View Post
        I just hope the published specifications will eventually include all even remotely useful functionality, so that one could implement basically whatever later on. Preferably GPU documentation would be on par with CPU and chipset documentation. (Intel seems to be heading that way.
        Yes and no. To the extent that GPU functionality is being used in the same general-purpose way as CPU functionality I think we will end up with comparable documentation. Problem is that GPUs also contain all kinds of dedicated functionality which is woven together with things like DRM implementations, so those areas will probably never get documented in the same way as the rest of the GPU.

        Originally posted by PWMx View Post
        And the internal ATI documentation I happened to see some time ago was not impressive at all; hopefully things are getting better now.)
        The perceived quality of the documentation depends a lot on what you are trying to do with it. For its intended purpose (getting agreement within the HW teams re: exactly what will be developed and how all the blocks will work together) the docs have been pretty good -- they just don't do so well for OUR purpose, which is "show someone how to write a driver from scratch". That's where there have been big gaps and also where we are starting to see significant improvement.



        • #5
          Originally posted by bridgman View Post
          The extra functionality was not really for future-proofing but to include some potential "future features" in current silicon in case they turned out to be required today.
          Isn't that exactly the same thing?
          Anyway, I would prefer everything possible to be documented.
          (Especially when it comes to the shader instruction set, the various addressing modes, etc.)

          Originally posted by bridgman View Post
          woven together with things like DRM implementations
          You must not say the name of You Know What out loud!

          Seriously, there are ways to handle any cryptographic issues in an open fashion. When the mathematical foundations are strong, you can have an open and unobfuscated mechanism in place.
          Well, hopefully with the future generations then.
          (And probably the whole idea of DRM is as long-lived as trusted computing.)

          Originally posted by bridgman View Post
          The perceived quality of the documentation depends a lot on what you are trying to do with it.
          Not necessarily. How to use the HW should already be quite well documented by the system design phase. Very detailed implementation-specific documentation should then follow to clarify all the small details.

          Gigapixel used to have quite an impressive methodology for all of this. Basically they had a library of functional blocks, and a "compiler" analyzed the target performance and feature requirements. The whole thing was then combined in a fully automatic way, including all HW documentation etc. For the overall system they had common system design documentation. (It also generated some sort of driver skeleton, in addition to various test benches etc.)

          But not all technical excellence ends well...

          ;-PWMx

