AMD Releases R600/700 3D Documentation

  • 89c51
    replied
    First, a big thanks to AMD for the docs and for supporting Open Source.

    Secondly, as stated above, people with C skills want to help but don't know where to start. (Sadly, I fall into the category of people with lower than basic skills when it comes to C.)

    Bridgman proposed a way, but wouldn't it be a bit easier if some of the main developers published something like a task list with a short description of every task (i.e. what it should do, where to start, difficulty, etc.)?



  • bridgman
    replied
    I don't think so. Everyone has "announced support" (rah rah !!), but AFAIK nothing has been released yet.

    An unaccelerated reference implementation is a bit tricky because of the interoperability requirements with OpenGL -- you would need to either build it around an existing software OpenGL renderer or hook into the driver stack for a single GPU. Building OpenCL into Mesa would be useful in many ways.



  • Ex-Cyber
    replied
    Has an OpenCL implementation (even a non-accelerated reference implementation) even been released yet? From my perspective, it looks like that's the biggest barrier to OpenCL work right now - the spec is out, but there doesn't seem to be any way to actually run OpenCL code (unless you're a developer at ATI/Nvidia/Apple). I think that has more to do with the lack of excitement over it than anything about its technical merits.



  • bridgman
    replied
    OpenCL falls into the same category as Gallium3D, CUDA, or any purely shader-based implementation. It's probably going to be quite good for the back-end part of the decoding pipe (motion comp, filtering), might or might not be good for the middle part of the pipe (inverse quantization, IDCT) depending on the implementation, and probably not good for the start of the pipe (bitstream processing, entropy decoding).

    The good news is that the processing at the front of the pipe tends to be easier to do on the CPU than the processing at the end of the pipe, so with luck it will all balance out.
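
    To make that front/back split concrete, here is a minimal sketch of what the GPU-friendly end of the pipe could look like as an OpenCL kernel: the CPU still does the bitstream parsing and entropy decoding, then hands the motion-compensated prediction and the IDCT residuals to the device for the final reconstruction step. The kernel name, buffer layout, and 8-bit single-plane assumption are purely illustrative, not taken from any real decoder.

        /* Hypothetical reconstruction kernel: add IDCT residuals to the
         * motion-compensated prediction and clamp back to 8-bit pixels.
         * This is the data-parallel "end of the pipe" stage; bitstream
         * parsing and entropy decoding stay on the CPU. */
        __kernel void add_residual(__global const uchar *pred,   /* motion-compensated prediction */
                                   __global const short *resid,  /* residuals after inverse quant + IDCT */
                                   __global       uchar *dst,    /* reconstructed plane */
                                   int stride, int width, int height)
        {
            int x = get_global_id(0);
            int y = get_global_id(1);
            if (x >= width || y >= height)
                return;

            int i = y * stride + x;
            /* integer clamp() is part of OpenCL C */
            dst[i] = (uchar)clamp((int)pred[i] + (int)resid[i], 0, 255);
        }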



  • Louise
    replied
    OpenCL hasn't been mentioned as a way to decode H.264.

    Is that because OpenCL isn't good for that?



  • dashcloud
    replied
    Originally posted by bridgman
    I took a quick skim through the forums; looks like the work has been started but is not fully there. It may be that slice-level decoding is working (for video which has multiple slices per frame) but apparently not all encoders make heavy use of slices.

    Slices sure seem like the most obvious option for multi-threading and the only one which doesn't involve building and balancing a pipeline.

    EDIT - here we go :



    In short, ffmpeg supports multithreading only if the video is encoded with multiple slices per frame, so the most common H.264 encoder creates video that can't be multi-threaded by the most common H.264 decoder. Boo
    I'd just like to add that this is (thankfully) not totally correct: there's an experimental tree that adds frame-level parallel decoding for MPEG-1/2/4 and H.264. If you check out http://gitorious.org/projects/ffmpeg/repos/ffmpeg-mt you can get that tree. It still needs some work, so if you'd like to see multi-threaded decoding of H.264 videos in the main tree, poke the owner of that tree.

    If you look at the ffdshow-tryout thread on doom9, you can see some benchmarking numbers for ffmpeg-mt; they're pretty good.
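
    For anyone who wants to experiment, here is a minimal sketch of enabling threaded decoding through the libavcodec API. It is written against the current API and assumes a build where slice and/or frame threading is available (frame threading is what ffmpeg-mt adds); the exact calls and fields vary between versions, so treat it as a starting point rather than a recipe.

        #include <libavcodec/avcodec.h>

        /* Open a decoder with n worker threads. Slice threading only helps
         * when the encoder wrote multiple slices per frame; frame threading
         * (the ffmpeg-mt work) parallelises across whole frames instead. */
        static AVCodecContext *open_threaded_decoder(enum AVCodecID id, int n_threads)
        {
            const AVCodec *codec = avcodec_find_decoder(id);
            if (!codec)
                return NULL;

            AVCodecContext *ctx = avcodec_alloc_context3(codec);
            if (!ctx)
                return NULL;

            ctx->thread_count = n_threads;
            ctx->thread_type  = FF_THREAD_FRAME | FF_THREAD_SLICE;  /* whichever the codec supports */

            if (avcodec_open2(ctx, codec, NULL) < 0) {
                avcodec_free_context(&ctx);
                return NULL;
            }
            return ctx;
        }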



  • tmpdir
    replied
    Originally posted by bridgman
    BTW, for anyone not following IRC, not only did MostAwesomeDude implement a lot of the 5xx 3D support (including the shader compiler for ARB_vertex_program and ARB_fragment_program) but he has been working on a Gallium3D implementation for 3xx-5xx and saw the first screen output from that in the last few days.
    Respect

    Excellent news about the 3D documentation.
    Last edited by tmpdir; 27 January 2009, 05:45 PM.



  • bridgman
    replied
    Originally posted by MostAwesomeDude
    Doing r6xx support on Mesa is kind of silly in my opinion, but Gallium work requires a bunch of experimental pieces, and there's still bugs here and there.
    Heck, even we agree with that, but there's a "but..."

    From a developer's perspective, working in the classic Mesa HW driver model is silly. Nobody feels that more strongly than the devs actually doing the work. From a user perspective, though, it's different -- until all the experimental bits and pieces fetch up in at least a few distros, anything 6xx-ish we do in Gallium is not going to be broadly accessible to them.

    The best compromise we could come up with was to get the basic programming sequences worked out in classic Mesa so that users of current distros will have Compiz support, then port the working 6xx code across to Gallium and never look back.

    BTW, for anyone not following IRC, not only did MostAwesomeDude implement a lot of the 5xx 3D support (including the shader compiler for ARB_vertex_program and ARB_fragment_program) but he has been working on a Gallium3D implementation for 3xx-5xx and saw the first screen output from that in the last few days.
    Last edited by bridgman; 27 January 2009, 02:53 PM.



  • MostAwesomeDude
    replied
    Wow, nice to see people wanting to get involved.

    I'm one of those who wasn't born yet when X first came around. (I'm only 20!) I got into this kind of work because AMD put out the r5xx documentation, and at the time my only working computer was an Asus laptop with a Radeon Mobility X1700.

    So, being the enterprising entity that I am, I walked into the IRC channel (#radeon on Freenode) and inquired. It turned out that there wasn't really anybody working on it, and that only one piece of the puzzle was missing: if somebody could write a fragment program compiler for r5xx, it should all just magically work.

    So I did. It was not exactly easy; it took me a few months before I came anywhere near actual understanding of the code. I knew C, but I didn't *know* C. But, as I worked, I kept reading code, and reading docs, and bugging airlied and glisse with stupid questions, and eventually, things started to come together.

    I'll even dish a few pointers for free. Doing r6xx support on Mesa is kind of silly in my opinion, but Gallium work requires a bunch of experimental pieces, and there's still bugs here and there. If I were to start r6xx drivers today, I'd start by getting a mug of hot chocolate and sitting down with the r6xx docs, and read those front to back a few times.

    ~ C.



  • steefjeqv
    replied
    Originally posted by chaos386
    Not sure about what the minimum is, but my 2.2 GHz C2D laptop can play back 1080p H.264 just fine. AFAIK, none of the open-source video players are multithreaded, either, so the number of cores shouldn't be too important.
    I think Xine can multithread. You can change the number of threads in the Xine settings.
    It uses both cores of my AMD 5000x2. Together with my Sapphire X550 and the latest xorg driver (ati) + ffmpeg, I can now watch 1080i BBC HD. 1080p is not possible.


    Greetings,
    Steven

