AMD Already Has Open-Source Fusion Drivers


  • #31
    Originally posted by smitty3268
    It has UVD3, the same version that the new 6xxx series has. The problem is that they don't support using it under Linux, even with their proprietary fglrx driver. Especially not with the OSS ones.
    Hm, I guess support for UVD3 will arrive eventually in the closed driver (in time for an Ubuntu 11.04 early preview?), and the open drivers will have to make do with some shader-based solution, which has yet to be tackled.

    Cool stuff, the 0-day support only strengthens the advancement of open drivers in the mobile space. Also, support for MeeGo can only benefit all parties involved.
    Now I have to save money for January to get a Bobcat-based netbook with a decent battery and keyboard.



    • #32
      Originally posted by curaga
      I'm thinking of the Intel-designed gpus. After all, the Ontario is not competing against Atom Z.
      Sure it is. The fact that it wasn't Intel's INTENTION for the Atom Z to be used where it is doesn't mean that it isn't being used there...



      • #33
        Yeah, the crappy netbooks with it do exist. However, the Z Atoms have a TDP of ~2 W; how is that anywhere close to Ontario's 18 W?


        Also, I take bridgman's silence as "no, the 80 shaders won't be enough".



        • #34
          In this case my silence means I was busy and didn't see your question.

          Quick answer is "we don't know yet". I was concerned that the 40-ALU cores in the IGP parts would not be enough, and recommended that folks go one model up (e.g. HD 2600 vs. HD 2400 at the time) if they wanted to have some shader cycles free for decode assist. The low-clocked Ontario parts are going to be in the same ballpark as the (40-ALU but higher-clock) IGP parts, while the higher-clocked Ontario parts should have ~2x the graphics throughput, which should be enough to help with decode.

          One of the interesting things will be the degree to which code can be written that takes advantage of the IGP and Fusion parts using the same memory for video and system RAM. With the right cache-fu it should be possible to avoid one of the major pains of GPU/CPU cooperative programming where you have to shuttle data between memory spaces in order to get decent performance.

          The CPU wouldn't normally be set up to cache video memory IIRC (this is a driver/OS thing, not a hardware thing), so CPU accesses to video RAM wouldn't necessarily be fast, but I *think* the GPU should be able to texture from and render into system memory more quickly than on a discrete video card, where the system memory is on the far side of a PCIe bus. That, in turn, should mean that CPU and GPU processing could be intermixed to a greater extent than usual. We'll have to see, but I think this is going to be a fun year for driver development.
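          The shuttling cost described above can be sketched with a toy model (a rough illustration only; the class names and copy-counting are invented for the example, and real drivers obviously don't work like this):

```python
# Toy model of discrete vs. unified memory. A discrete card pays a bus
# transfer for every hand-off between CPU and GPU steps, while a
# unified-memory part can work on the same system-RAM buffer in place.

class DiscreteGPU:
    """Separate memory spaces: every hand-off is a 'PCIe' transfer."""

    def __init__(self):
        self.bus_transfers = 0
        self.vram = []

    def upload(self, host_buf):
        self.vram = list(host_buf)   # copy host -> device
        self.bus_transfers += 1

    def run_kernel(self, fn):
        self.vram = [fn(x) for x in self.vram]

    def download(self):
        self.bus_transfers += 1      # copy device -> host
        return list(self.vram)


class UnifiedGPU:
    """Shared memory: CPU and GPU touch the same buffer, nothing to shuttle."""

    def __init__(self):
        self.bus_transfers = 0       # stays at zero

    def run_kernel(self, buf, fn):
        for i, x in enumerate(buf):  # operates on the host buffer in place
            buf[i] = fn(x)
```

          In a cooperative CPU/GPU loop the discrete model pays two transfers per round trip; that per-iteration copy overhead is exactly what the shared-memory design removes.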



          • #35
            Originally posted by bridgman
            One of the interesting things will be the degree to which code can be written that takes advantage of the IGP and Fusion parts using the same memory for video and system RAM. With the right cache-fu it should be possible to avoid one of the major pains of GPU/CPU cooperative programming where you have to shuttle data between memory spaces in order to get decent performance.
            I thought the Intel GEM was written with these exact issues in mind.



            • #36
              Originally posted by curaga
              Yeah, the crappy netbooks with it do exist. However, the Z atoms have a tdp of ~2W, how is that anywhere close to Ontario's 18W?


              Also, I take bridgman's silence as "no, the 80 shaders won't be enough".
              The shader-based H.264 work on the HD 2900 targets its 320 shader cores.

              But the HD 2900's shaders do less work per clock cycle and also run at a lower clock.

              And the HD 2900 wastes time transferring the data from the CPU to the GPU over PCIe.

              The HD 2900 also has no per-shader-block cache and no global GPU cache, so if the radeon driver uses those caches, an 80-shader Fusion part could beat a big HD 2900 at video acceleration.

              That's the key point: the HD 2900 can't handle OpenCL because of that missing cache hardware.



              • #37
                What I, and arguably most Linux-using potential AMD Fusion buyers, want to know is simply whether distros will support the new platform when it's released (or if later, when) *and* how well it will be supported (features, power management, etc.).

                The other chipzilla tends to roll out full open-source support well in advance of hardware availability, whereas from what I gather, AMD is still figuring out how and which features to support.

                I'm all for supporting the underdog, provided that they're "doing the right thing". However I'm not keen on buying half-supported hardware in the hope that full support may arrive a year down the line.

                I trust Phoronix and the knowledgeable forum here to keep the wider community updated on upcoming (Fusion support) developments.



                • #38
                  Originally posted by misGnomer
                  What I and arguably most Linux-using potential AMD Fusion buyers want to know is simply whether distros support the new platform when it's released (or if later, when) *and* how well it is supported (features, power management etc.)
                  Initial support should be basically the same as Evergreen running 600c.

