ATI R600g Gains Mip-Map, Face Culling Support


  • droidhacker
    replied
    Originally posted by bridgman View Post
    That wasn't how I understood Q's question, but you may be right.
    Q has a very mischievous streak. I suspect that he knows what he's saying a lot better than he makes it look. He probably worded the question precisely to ask both things.

  • bridgman
    replied
    That wasn't how I understood Q's question, but you may be right.

  • Agdr
    replied
    Originally posted by bridgman View Post
    One essential part of holding something back is, well, holding it back. I'm not going to give you a list of the stuff we *didn't* release.
    I believe he was asking about stuff you *initially* didn't release, but where you changed your mind and actually released it later (which according to you happened in several instances).

    I listed what you added in R5xx 3D acceleration 1.5 that wasn't in 1.4: probably some of that meets the above criteria.

  • droidhacker
    replied
    You don't need to make any promises. If you tell them of your interests, they may HIRE you, or hire someone to do some grunt work, and/or even take over the project and do it FOR you.

    Wishful thinking? It never hurts to ask. Especially Google -- this could be exactly what they need to get VP8 really off the ground... and they have TONS of money to throw around.

  • tball
    replied
    Originally posted by Hans View Post
    No, I haven't. This is a spare-time project only, and I don't want to be bound by any promises. I am going to the USA for some time in a couple of months, and I don't know if I'll have time to work on GPU decoding over there.
    I guess I am confusing people here. Well, Hans = tball :-)
    I created the user Hans because I forgot my password for tball. Luckily, Firefox had stored the password, so I am now back as tball.

    Once in a while I use the Chromium browser, which logs in automatically as Hans :-)

  • Hans
    replied
    Originally posted by droidhacker View Post
    tball: have you considered asking either of them for assistance or funding?
    No, I haven't. This is a spare-time project only, and I don't want to be bound by any promises. I am going to the USA for some time in a couple of months, and I don't know if I'll have time to work on GPU decoding over there.

  • bridgman
    replied
    Agreed. The thing that works in our favour, though, is that as long as anyone working on shader decode starts at the end of the pipe and works backwards, they'll probably run out of development time at about the same time the smallest GPUs run out of shader power.

    I haven't had time to tinker with any code yet, but my feeling is that everything from bitstream parsing through IDCT and intra-prediction should stay on the CPU, while motion comp (inter-prediction) and deblock filtering should go on the GPU. That seems like a good split in the sense that the computationally expensive stuff would be on the GPU while "moving fiddly little bits around" would stay on the CPU.

    It's not clear that moving more of the work onto the GPU (i.e. going further back up the decode pipe) would be a win anyway.
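
    To make that split concrete, here is a rough sketch in C (hypothetical stage names and a made-up table, not code from any actual driver) of which stages would run where under that scheme:

    [CODE]
    /* Sketch of the CPU/GPU split described above: bit-level, branchy stages
     * stay on the CPU, while the arithmetic-heavy per-pixel stages run as
     * GPU shader passes.  All names here are hypothetical. */
    #include <stdio.h>

    enum decode_stage {
        STAGE_BITSTREAM_PARSE,   /* CPU: syntax parsing, entropy decode        */
        STAGE_IDCT,              /* CPU: inverse transform of residuals        */
        STAGE_INTRA_PREDICT,     /* CPU: prediction from neighbouring blocks   */
        STAGE_MOTION_COMP,       /* GPU: inter-prediction, expensive per pixel */
        STAGE_DEBLOCK,           /* GPU: in-loop deblocking filter             */
        STAGE_COUNT
    };

    /* 1 = run this stage as a shader pass, 0 = keep it on the CPU. */
    static const int stage_on_gpu[STAGE_COUNT] = {
        [STAGE_BITSTREAM_PARSE] = 0,
        [STAGE_IDCT]            = 0,
        [STAGE_INTRA_PREDICT]   = 0,
        [STAGE_MOTION_COMP]     = 1,
        [STAGE_DEBLOCK]         = 1,
    };

    static const char *stage_name[STAGE_COUNT] = {
        "bitstream parse", "idct", "intra predict", "motion comp", "deblock",
    };

    int main(void)
    {
        for (int s = 0; s < STAGE_COUNT; s++)
            printf("%-15s -> %s\n", stage_name[s],
                   stage_on_gpu[s] ? "GPU shader pass" : "CPU");
        return 0;
    }
    [/CODE]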

  • droidhacker
    replied
    Originally posted by bridgman View Post
    Re: video decode, one of the cool things about shader-assisted decode is that once you have it working with one API it can be adapted to other APIs fairly easily. The key point, though, is that you want to be able to lean on an existing pure-SW decoder, since some of the processing is going to stay on the CPU and you don't want to have to write all that code from scratch for each new standard.
    And one of the *complexities* of it is balancing the decode functions between the CPU and the GPU so that you can leverage as much shader-assist as that GPU is capable of, without overloading it to the point that you end up with inadequate performance. This needs to be dynamic, since you have a huge range of GPU capabilities, from fairly weak IGPs to insanely powerful discrete cards (which can obviously handle a much greater portion of the work).
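
    As a rough illustration of that balancing (the stage costs and GPU budgets below are made up, not measurements from any real hardware), something along these lines could decide how far back up the pipe to offload, starting at the end and walking backwards:

    [CODE]
    /* Hedged sketch of dynamic CPU/GPU balancing: offload stages from the end
     * of the decode pipe backwards until the GPU's estimated spare shader
     * budget runs out.  Names and numbers are hypothetical. */
    #include <stdio.h>

    struct stage {
        const char *name;
        double      gpu_cost;   /* made-up cost units per frame on the GPU */
    };

    static const struct stage pipe[] = {
        { "bitstream parse", 0.0 },  /* never offloaded: serial and branchy */
        { "idct",            2.0 },
        { "intra predict",   3.0 },
        { "motion comp",     6.0 },
        { "deblock",         4.0 },
    };

    /* Returns the index of the first stage to run on the GPU; everything from
     * that stage to the end of the pipe is offloaded. */
    static int first_gpu_stage(double gpu_budget)
    {
        int n = (int)(sizeof(pipe) / sizeof(pipe[0]));
        int first = n;                     /* default: offload nothing */

        for (int i = n - 1; i > 0; i--) {  /* walk backwards from the end */
            if (pipe[i].gpu_cost > gpu_budget)
                break;
            gpu_budget -= pipe[i].gpu_cost;
            first = i;
        }
        return first;
    }

    int main(void)
    {
        int    n         = (int)(sizeof(pipe) / sizeof(pipe[0]));
        double budgets[] = { 3.0, 10.0, 50.0 };   /* weak IGP ... big discrete */

        for (int b = 0; b < 3; b++) {
            int first = first_gpu_stage(budgets[b]);
            printf("budget %4.1f: GPU takes over from '%s'\n", budgets[b],
                   first < n ? pipe[first].name : "nothing");
        }
        return 0;
    }
    [/CODE]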

  • bridgman
    replied
    Originally posted by Qaridarium
    Why not tell us an example of a held-back technique? I have tried to make an example.
    One essential part of holding something back is, well, holding it back. I'm not going to give you a list of the stuff we *didn't* release.

    Re: video decode, one of the cool things about shader-assisted decode is that once you have it working with one API it can be adapted to other APIs fairly easily. The key point, though, is that you want to be able to lean on an existing pure-SW decoder, since some of the processing is going to stay on the CPU and you don't want to have to write all that code from scratch for each new standard.

    Did I mention how much I hate having to delete and re-post every time I want to edit something?
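
    A rough sketch of what leaning on an existing SW decoder might look like (a hypothetical interface, not ffmpeg's or Gallium's real API): the software decoder provides a default for every stage, and the shader-assist backend overrides only the stages it accelerates, so porting to another codec means swapping the SW defaults rather than rewriting the whole path:

    [CODE]
    /* Hypothetical per-stage hook table: pure-SW functions are the defaults,
     * and a GPU backend replaces only the stages it can accelerate. */
    #include <stdio.h>

    struct frame { int dummy; };

    struct decode_ops {
        void (*motion_comp)(struct frame *f);
        void (*deblock)(struct frame *f);
    };

    /* Stand-ins for the existing pure-software decoder. */
    static void sw_motion_comp(struct frame *f)  { (void)f; puts("MC on CPU"); }
    static void sw_deblock(struct frame *f)      { (void)f; puts("deblock on CPU"); }

    /* Stand-ins for shader-assisted passes. */
    static void gpu_motion_comp(struct frame *f) { (void)f; puts("MC on GPU"); }
    static void gpu_deblock(struct frame *f)     { (void)f; puts("deblock on GPU"); }

    int main(void)
    {
        struct frame f = { 0 };
        struct decode_ops ops = { sw_motion_comp, sw_deblock }; /* SW defaults */

        int have_capable_gpu = 1;          /* pretend probing found shaders  */
        if (have_capable_gpu) {            /* override only the GPU stages   */
            ops.motion_comp = gpu_motion_comp;
            ops.deblock     = gpu_deblock;
        }

        ops.motion_comp(&f);               /* the rest of the decode loop is */
        ops.deblock(&f);                   /* the untouched software decoder */
        return 0;
    }
    [/CODE]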

  • droidhacker
    replied
    Originally posted by hal2k1 View Post
    Is there any chance that hardware-accelerated video decoding support could go beyond mpeg2, beyond h264, and extend to Theora and/or WebM?

    Pretty please?

    It would be legendary if that could be done. It would remove all of the impetus of claims that "open codecs have no hardware support". It could be a real boost to open video, IMO.
    From what I've read, a lot of the h264 work can be leveraged for VP8 (WebM is just the container and needs no acceleration), so it is a natural progression once h264 is working -- that's why ffmpeg already has a functional VP8 decoder that blows Google's out of the water: it reuses a lot of code from their h264 decoder.

    Theora shouldn't be much of a priority -- it was never in a place where it could be considered "successful", and with VP8 now being free, it doesn't look like it ever will be.


    *** I wonder if it would be possible to get any kind of support for this from Google and/or ffmpeg? One would think that Google would jump at the opportunity to get out some free universal VP8 acceleration, and it seems right up ffmpeg's alley. tball: have you considered asking either of them for assistance or funding?
