AMD Releases OpenCL ATI GPU Support For Linux


  • #31
    Originally posted by mattmatteh View Post
    This is great that AMD/ATI is supporting Linux, but I assume this is proprietary. Is there an open source alternative, either as an independent package or with xorg-ati? Is there any additional documentation needed for this?

    I wonder how hard it would be for this to work on older cards, and whether an open source driver could take advantage of this.

    matt
    There's an open source Gallium state tracker currently under development, but it's going to be a while before the r600g driver is working. They haven't even started it yet (12 months?).

    I don't think it will work for anything earlier than r600 because of hardware limitations - at that point you'd just be running it through the software pipe on the CPU.



    • #32
      If we thought the r600g driver was going to take a year to write, we would have started it a long time ago.

      Once we have a solid r300g base I don't think it should take too long to port the 6xx/7xx code across from the classic Mesa driver.



      • #33
        Gallium's OpenCL state tracker is pre-alpha, IIRC. Free OpenCL will happen, just not yet. AFAIK there's no complete free OpenCL platform for any OS, so I'm not going to split any hairs.

        NVIDIA's and AMD's OpenCL Linux development is (probably) shared with their Windows driver development, so now that fglrx is synced with the Windows codebase it's a cheap ride. Of course there's packaging, testing, etc., which is actually a lot of work - but the core code is shared.

        Pre-4xxx cards can do interesting GPGPU work, but OpenCL demands more than older cards can deliver. It won't run fully on a pre-8000-series GeForce either!

        As for me, I'm planning to put my 4830 back in my quad-core and start playing with it this weekend. If I really get into it I'll probably snag a 58xx whenever it's a good enough value for me to upgrade.



        • #34
          Originally posted by mattmatteh View Post
          This is great that AMD/ATI is supporting Linux, but I assume this is proprietary. Is there an open source alternative, either as an independent package or with xorg-ati? Is there any additional documentation needed for this?

          I wonder how hard it would be for this to work on older cards, and whether an open source driver could take advantage of this.

          matt
          Remember that OpenCL is fundamentally hardware-agnostic. Multi-core CPUs and multi-core GPUs are both targets (albeit for differing workloads).

          The AMD OpenCL driver that was released supports both CPU (AMD and Intel) and GPU (ATI).

          As others have suggested, OpenCL expects a certain level of capability that is very difficult to provide on older GPU hardware. If the hardware capability isn't there, then you would have to simulate the missing functionality on the CPU - and if a punt to software occurs, you may as well be using the CPU OpenCL implementation anyway.
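          To make the "both targets" point concrete, here is a minimal host-side sketch in C that simply enumerates whatever OpenCL platforms and devices an installed implementation exposes and reports whether each device is a CPU or a GPU. It is illustrative only: it assumes the OpenCL headers and an ICD are installed, and error handling is omitted.

          #include <stdio.h>
          #include <CL/cl.h>

          int main(void)
          {
              cl_platform_id platforms[8];
              cl_uint nplat = 0;
              clGetPlatformIDs(8, platforms, &nplat);

              for (cl_uint p = 0; p < nplat; ++p) {
                  char pname[256] = "";
                  clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof pname, pname, NULL);
                  printf("Platform: %s\n", pname);

                  /* Ask for every device type; the same code path covers CPU and GPU targets. */
                  cl_device_id devices[16];
                  cl_uint ndev = 0;
                  clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &ndev);

                  for (cl_uint d = 0; d < ndev; ++d) {
                      cl_device_type type = 0;
                      char dname[256] = "";
                      clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
                      clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
                      printf("  %s device: %s\n",
                             (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                             (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other",
                             dname);
                  }
              }
              return 0;
          }

          Something like "gcc list_cl.c -lOpenCL" should build it against the ICD loader shipped with the vendor SDKs.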

          Regards,

          Matthew



          • #35
            Originally posted by bridgman View Post
            If we thought the r600g driver was going to take a year to write, we would have started it a long time ago.

            Once we have a solid r300g base I don't think it should take too long to port the 6xx/7xx code across from the classic Mesa driver.
            Well, that's very good news to hear. I know people complain when they think you guys are missing deadlines they've heard about, but a little more info on when these upcoming things are going to be ready would be nice, even if it's very vague - like "sometime next summer".

            I was just guessing, based on what I've heard, that the 300g driver wouldn't be ready until Mesa 7.8, which I figured would be about six months out, and then tacked on another six for the 600g driver.
            Last edited by smitty3268; 15 October 2009, 01:47 AM.



            • #36
              Only r7xx (and rv670) GPUs support double precision ops.
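              For what it's worth, an application does not have to hard-code GPU generations for this; it can ask at runtime whether a device advertises a double-precision extension. A small sketch, assuming a valid cl_device_id obtained as in the enumeration example earlier in the thread (the extension names checked - cl_khr_fp64 and the vendor variant cl_amd_fp64 - are the usual ones, but which string a particular driver reports is an assumption):

              #include <string.h>
              #include <CL/cl.h>

              /* Returns nonzero if the device advertises a double-precision extension. */
              int device_has_fp64(cl_device_id dev)
              {
                  char ext[4096] = {0};
                  clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, sizeof ext - 1, ext, NULL);
                  return strstr(ext, "cl_khr_fp64") != NULL ||
                         strstr(ext, "cl_amd_fp64") != NULL;
              }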



              • #37
                Originally posted by smitty3268 View Post
                Well, that's very good news to hear. I know people complain when they think you guys are missing deadlines they've heard about, but a little more info on when these upcoming things are going to be ready would be nice, even if it's very vague - like "sometime next summer".
                Well, there's the first mistake (not yours).

                There are no deadlines. We make rough estimates for how long a small group of developers, all working on multiple projects, will take to do something they've never done before. We try to update the estimates as new information comes in, but if anyone thinks of that as a deadline the discussion is out of control already.

                If we were talking about the third or fourth Gallium3D driver to go into everyday use we could talk about hard plans and maybe even deadlines, but the primary schedule risk for 300g is the fact that (right now) there's a good chance it will be the first. That implies more of a learning curve and even more schedule uncertainty.

                Originally posted by smitty3268 View Post
                I was just guessing, based on what I've heard, that the 300g driver wouldn't be ready until Mesa 7.8, which I figured would be about six months out, and then tacked on another six for the 600g driver.
                Ahh, I see your reasoning. It makes sense, but the thinking is 300g in 7.8 because 7.7 is too close for anyone to have confidence, so 7.8 is the next option. However, the devs don't need to wait until 7.8 before starting work on 600g, just until 300g has progressed far enough that the remaining work looks like bug fixing rather than architectural or API changes. My guess is that 300g and 600g will finish quite close together in time, and hopefully by doing all the heavy lifting in 300g rather than "learning everything twice" the overall time will be minimized. That's the hope, anyway.
                Last edited by bridgman; 15 October 2009, 10:46 AM.



                • #38
                  Originally posted by bridgman View Post
                  Well, there's the first mistake (not yours).

                  There are no deadlines. We make rough estimates for how long a small group of developers, all working on multiple projects, will take to do something they've never done before. We try to update the estimates as new information comes in, but if anyone thinks of that as a deadline the discussion is out of control already.
                  A poor choice of words on my part there. What I was referring to is when Michael writes a post about how some feature is going to be done next week, or later this month, and then nothing visible to users happens on that front for another six months. People get frustrated because they were expecting something to be done even if it actually wasn't anywhere close, so I know the usual tendency is to be as vague as possible about when things are going to be done.

                  I know it's tough to give good estimates the first time you do something, especially when it involves a framework as large as Gallium that may not itself be working 100% yet, since you are one of the first to try to get it all working.



                  • #39
                    Originally posted by smitty3268 View Post
                    I know it's tough to give good estimates the first time you do something, especially when it involves a framework as large as Gallium that may not itself be working 100% yet, since you are one of the first to try to get it all working.
                    When you hear an estimate on software, a good bet is to double it and then increase it by an order of magnitude. In other words, when you hear "it will be ready by tomorrow", you should translate that to "it should be ready two weeks from now".

                    As an added bonus, sometimes you'll be pleasantly surprised ("hey cool, it only took one week after all!")



                    • #40
                      Ah yes, the good old "times two and add thirty" rule.

                      When I joined ATI I made myself unpopular in a variety of ways. One of them was asking "what level of confidence do you want?" when someone asked me for a schedule. If you look at the typical distribution of completion times for a given project, the probability of completing at a given time rises quickly to a "most likely" value as time increases, then trails off slowly with a long tail, i.e. the time to complete may be 5x, 10x or more than the most likely time.



                      If you measure the area under the curve you can calculate the probability of the project finishing on or before a certain time. The 50% point (half the time early, half the time late) is normally close to, but not the same as, the most likely time to complete.

                      Each project has:

                      - a lower bound (essentially no chance of finishing before this time)
                      - a "most likely" schedule
                      - a 50% confidence or "50/50" schedule, i.e. half the time you'll finish before, half the time you'll finish after
                      - various "high confidence" points, typically 80% and 90% confidence are used
                      - maximum time is usually unbounded... projects are sometimes just doomed and can suck up resources forever

                      The exact shape of the curve, and therefore the relationship between the different points, is a complex function of risks, task sensitivity to those risks, and task interdependence. I'm not including even scarier things like changes to requirements or priorities during the project.

                      This is where it gets complicated, of course. The "high confidence" schedules are quite far down the tail of the curve, and are much longer than the most likely or 50/50 times.
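                      To put toy numbers on that (purely illustrative - the lognormal shape and the figures below are assumptions, not data from any real project): suppose a task's completion time is lognormal with median 10 weeks, so $\mu = \ln 10$, and spread $\sigma = 0.6$. Then

                      $$\text{mode} = e^{\mu - \sigma^2} \approx 7\text{ weeks}, \qquad \text{median} = e^{\mu} = 10\text{ weeks}, \qquad t_{80\%} = e^{\mu + 0.84\sigma} \approx 16.6\text{ weeks}, \qquad t_{90\%} = e^{\mu + 1.28\sigma} \approx 21.6\text{ weeks},$$

                      so even in this mild example the "high confidence" answer is more than double the "most likely" one, before any of the scarier risks above are considered.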

                      One of the great joys of project management is that when you are managing a portfolio of projects with shared resources you need to use the 50/50 point for each task so that over time your resource usage matches the estimate, but you don't want to make commitments based on the 50/50 point because that means you'll be late half the time. You end up having to keep two schedules - one that you use for internal management and one that you use to make commitments, and the two numbers are usually pretty far apart. Eliyahu Goldratt's Critical Chain describes how to integrate the two schedules in a manageable way.

                      Where am I going with all this?

                      1. When an individual developer talks about how long an individual task might take, they are usually talking about either the minimum (i.e. "it will take at least this long") or the "most likely" time. This is what you would normally call a SWAG.

                      2. When we talk about project schedules (a project being a collection of tasks) within a larger rearchitecture initiative we are normally talking about the 50/50 point, which is optimum for allocating resources and figuring out how much work should be bitten off at a time. This is what you would normally call "a plan".

                      3. When something "really needs to be done by a certain time" you need to plan with 80% or 90% confidence, which either means significantly longer schedules or significantly fewer features. This is what you would normally call "a deadline".

                      So... one more time... how long is Gallium3D gonna take?
                      Last edited by bridgman; 16 October 2009, 07:13 PM.

