AMD Releases R600/700 3D Documentation


  • #71
    OpenCL

    Originally posted by bridgman:
    I expect that any OpenCL implementation would be in userspace, interacting with the GPU through the drm (kernel) module. If you think about it as being "just like OpenGL" you won't be that far off.

    All of the participating companies worked together on the spec but I am not aware of any plans to work together on the implementation. One of the requirements for OpenCL is the ability to share data with OpenGL, which more-or-less requires that the OpenCL implementation be tied to the OpenGL implementation for that specific GPU family.

    I haven't heard about any plans for an open source OpenCL implementation yet; presumably if one were created it would be based on or work with the Mesa OpenGL driver because of the need for OpenCL/OpenGL data sharing. I am starting to wonder if it would be worth doing an initial OpenCL implementation which did not directly support interoperability with OpenGL, in order to allow more design options.
    TechMage: I agree that implementing over Gallium3D probably makes the most sense; the unknown question is how closely the implementation would need to be integrated with an OpenGL implementation also running over Gallium3D. I suspect that building it around Mesa-over-Gallium3D is the way to go.

    bugmenot: AFAIK all necessary information has already been released, including the new double-precision opcodes (in the latest r600isa manual available on the AMD Stream site):

    http://ati.amd.com/technology/stream...g/R600_ISA.pdf
    Ohh, well there's a surprise. Perhaps you can ask Jason Yang, Christopher Oat, and Justin Hensley how they did that...

    What, you ask?

    "That was the demo that we gave when we announced OpenCL support back in December at Siggraph Asia."

    "AMD OpenCL parallel computing demo from Siggraph Asia 2008
    Posted by Tony DeYoung on February 03, 2009
    The first public demonstration of OpenCL functionality was given by AMD at Siggraph Asia 2008.

    OpenCL is the new vendor-independent standard designed to extract high performance parallel computing out of GPUs, DSPs and multicore CPUs.

    Basically the idea is that you can write your core computational code in OpenCL and voila! - your code scales to whatever processors are available.

    OpenCL will greatly improve speed and responsiveness for a wide spectrum of applications from entertainment to scientific and 3D visualization.

    ...."

    It's not clear whether they were actually running CL on the GFX card or what that might have been; they talk about cores, implying CPU-only for this first test code...



    • #72
      Originally posted by bridgman:
      Not really, we're basically starting one task when we finish the one before, ie the plan is really a sequence not a schedule. If you can live with guesses I would say somewhere between 1 and 3 months for the first 6xx/7xx power management info.

      We have already released enough info to do a fair amount of video decode work on all GPUs up to and including the HD4xxx parts; that's really just waiting for a dev with the time, interest and experience to start designing and coding.

      Note that current power management work on 5xx is primarily being done in the usermode X driver; while that does offer some benefits, I don't think we are really going to see comparable power savings to fglrx until the open source power management work moves to the kernel. I don't know any devs who are thinking about that right now.
      Perhaps it might suit some of the C coders here looking to help the ATI efforts in some way.

      Some tips and links to the right documentation might be a good way to get them reading and learning in the right direction... some work and learning FUN is better than none after all, at least until any UVD review/movement happens....
      Last edited by popper; 09 February 2009, 06:48 PM.



      • #73
        Originally posted by popper:
        Ohh, well there's a surprise. Perhaps you can ask Jason Yang, Christopher Oat, and Justin Hensley how they did that...

        What, you ask?

        "That was the demo that we gave when we announced OpenCL support back in December at Siggraph Asia."
        <snip>

        It's not clear whether they were actually running CL on the GFX card or what that might have been; they talk about cores, implying CPU-only for this first test code...
        Not sure I understand the question. I suspect everyone involved in the spec creation had OpenCL code running in house before the spec was finalized - we sure did. The code we are running in house is closed source, but my quoted comments were mostly about open source. The first comment (to Louise) was about *all* OpenCL implementations, both open and closed source.

        Since I made those comments, Zack posted about TG's plans to release an open source implementation soon (over Gallium as we suspected), once it had passed IP review.

        Originally posted by popper:
        Perhaps it might suit some of the C coders here looking to help the ATI efforts in some way. Some tips and links to the right documentation might be a good way to get them reading and learning in the right direction... some work and learning FUN is better than none after all, at least until any UVD review/movement happens....
        Fair point. The big question for me is which API we should be using - there was some discussion about this at FOSDEM but nothing conclusive. The pros and cons seem to be:

        XvMC :
        - already supported in the X protocol, so easy to implement in the server (no multi-client drm/dri hassles)
        - straightforward to implement for MPEG2 but needs a lot of work to extend for H.264/VC-1 (Via notwithstanding)

        VA-API :
        - API spec already covers all the formats of interest
        - seems to have considered both server-side and direct-rendering implementations
        - very few real-world implementations

        VDPAU :
        - seems to have been conceived as direct-rendering only (not sure what would be involved in a server-side implementation)
        - API spec already covers all the formats of interest
        - at least one working implementation on commonly available PC hardware

        The big issue for me is whether we should be implementing in the X server (which removes a number of potential problems) or as a new, separate direct rendering client. It seems pretty clear that the long-term implementation will be as a direct-rendering client, but for the short term it's not so clear that multiple DRI clients (ie 3D plus the new video driver) can coexist reliably with broadly available drm code.

        The good news is that the decision sequence seems pretty simple:

        1. does the drm/dri code which is broadly available now (or real soon) support multiple direct rendering clients?

        2. if we reach agreement on a single open source HD video API, can support for that API be added quickly to the X protocol and server?

        3. based on the answers to 1 and 2, which protocol do we go with, and do we implement in X or as a dri client?

        There appears to be broad consensus that an agreement has not yet been reached, but that we really need to do so.

        Once we have that agreement it will be clear where to point developers on the API side. All our docs are in one place already.
        Last edited by bridgman; 09 February 2009, 07:07 PM.



        • #74
          Originally posted by bridgman:
          Not sure I understand the question. I suspect everyone involved in the spec creation had OpenCL code running in house before the spec was finalized - we sure did. The code we are running in house is closed source, but my quoted comments were mostly about open source. The first comment (to Louise) was about *all* OpenCL implementations, both open and closed source.

          Since I made those comments, Zack posted about TG's plans to release an open source implementation soon (over Gallium as we suspected), once it had passed IP review.
          Ohh right, it was just that when you said you were "wondering about doing an initial OpenCL implementation which did not directly support interoperability with OpenGL", I thought your ATI demo would help people choose the way; it's closed in-house test code though, so no matter to the open devs here.

          It wasn't so much a question as just bringing the first-ever (ATI) OpenCL demo to people's attention here.



          • #75
            Got it. Thanks!



            • #76
              Seems to me that with video formats there needs to be a standard. A real standard, not a bunch of proprietary, look-what-I-made-up standards. Then for video the CPU could just say, "Hey, this is video; I'll send it to the hardware decoder part of the GPU, send the audio data straight to the hardware audio decoder, and be done with it. Job done."

              This way nVidia and ATI can focus more on speed and power monitoring, instead of just throwing out hardware specs that even they can't figure out how to make work properly with all the different APIs.

              A CPU shouldn't have to waste even one cycle on graphics or audio; it should just be decoded on the proper chip.

              That's my thought about it. Things are more complicated than they have to be in my opinion.

              It's like the advances in hardware are not keeping up with the leaps forward in software bloat.



              • #77
                It's not a question of figuring out how to make the hardware work with all the different APIs -- most of the APIs support the same set of video formats anyways (XvMC is the exception, unfortunately).

                The issue is that video acceleration is not a priority for any of the community devs right now so we're discussing ways of making it easier for new devs to get started. Given the shortage of devs interested in working on video acceleration right now, focusing resources on a single API rather than trying to implement *all* of them seems like a good idea.
