ATi OSS driver confusion?

  • #11
    Originally posted by rbmorse View Post
    Michael: Could you make Bridgman's #2 above a sticky somewhere? I'm sure it will help a lot of people as the projects continue to progress.

    Yep: http://www.phoronix.com/forums/showthread.php?t=7032
    Michael Larabel
    https://www.michaellarabel.com/



    • #12
      >>Just a correction: I don't know who Stefan Marcheau is; maybe you were thinking of Stephane Marchesin, but he now works only on nouveau code. And Daniel Stone was the other co-guilty guy who started avivo.

      D'oh!! I knew there were three people, but for some reason my brain dredged up Stefan's name rather than Daniel's. Now I need to figure out who Stefan is.

      >>My next question is not what information will be made available but rather what information won't be made available?

      We're still working that out one area at a time, which is why we're being a bit imprecise on the details. Right now it's looking like a pretty small list, mostly things which we can't separate from output protection or digital rights management stuff. UVD will probably be the most contentious "holdback" right now.

      >>And how much abstraction does AtomBIOS actually do?

      AtomBIOS primarily covers initialization, output configuration, modesetting and some display detection. What we're recommending is "make AtomBIOS calls by default, but feel free to go around AtomBIOS where necessary on known chips" and we're trying to provide enough information and support to allow direct programming where necessary.
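
      Purely as an illustration of that "AtomBIOS by default, direct programming where necessary" recommendation, the code tends to take roughly the shape of the C sketch below. The function names (atombios_exec_table, rv630_set_mode_direct), the table index and the chip list are hypothetical placeholders, not real radeon/radeonhd driver symbols.

      ```c
      /* Illustrative sketch only: these functions, the table index and the chip
       * IDs are hypothetical placeholders, not real radeon/radeonhd symbols. */
      #include <stdio.h>

      enum chip_family { CHIP_RV610, CHIP_RV630 };

      struct mode_params { int width, height, refresh; };

      #define ATOM_TABLE_SET_CRTC_TIMING 0x21   /* made-up table index */

      /* Stand-in for executing a parameterized AtomBIOS command table. */
      static int atombios_exec_table(int table, const struct mode_params *m)
      {
          printf("AtomBIOS table 0x%02x: %dx%d@%dHz\n",
                 table, m->width, m->height, m->refresh);
          return 0;
      }

      /* Stand-in for direct register programming on a chip we know well. */
      static int rv630_set_mode_direct(const struct mode_params *m)
      {
          printf("direct register path: %dx%d@%dHz\n",
                 m->width, m->height, m->refresh);
          return 0;
      }

      static int set_mode(enum chip_family chip, const struct mode_params *m)
      {
          /* Go around AtomBIOS only where the chip is understood well enough... */
          if (chip == CHIP_RV630)
              return rv630_set_mode_direct(m);

          /* ...and use the AtomBIOS abstraction by default everywhere else. */
          return atombios_exec_table(ATOM_TABLE_SET_CRTC_TIMING, m);
      }

      int main(void)
      {
          struct mode_params mode = { 1280, 1024, 60 };
          set_mode(CHIP_RV610, &mode);   /* default: AtomBIOS path       */
          set_mode(CHIP_RV630, &mode);   /* known chip: direct registers */
          return 0;
      }
      ```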

      Our current plan for 3d is to provide a similar abstraction layer, in open source form, derived from the bottom end of our new OpenGL code base. The plan is to provide this as sample code and to help get 3d running quickly, although if Gallium works out well I expect that code will get re-written pretty quickly to fit into the Gallium model. The main question is whether we will need to help write a shader compiler/assembler or whether the llvm-based compiler being used in Gallium will come along in time. My current assumption is that we will probably help with a relatively basic shader assembler/packer and that llvm will come along before anything fancier is required.
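
      To give a rough feel for what a "relatively basic shader assembler/packer" means, here is a toy C sketch that packs a three-operand ALU operation into a 64-bit instruction word. The opcodes and bit layout are invented for illustration and have nothing to do with the real AMD shader ISA.

      ```c
      /* Toy shader instruction packer: the opcodes and bit layout are invented
       * for illustration and do not correspond to any real AMD shader ISA. */
      #include <stdint.h>
      #include <stdio.h>

      enum toy_op { TOY_ADD = 0x01, TOY_MUL = 0x02, TOY_MAD = 0x03 };

      /* Pack opcode, destination and two source registers into one 64-bit word:
       * [63:56] opcode, [55:48] dst, [47:40] src0, [39:32] src1, low bits reserved. */
      static uint64_t pack_alu(enum toy_op op, uint8_t dst, uint8_t src0, uint8_t src1)
      {
          return ((uint64_t)op   << 56) |
                 ((uint64_t)dst  << 48) |
                 ((uint64_t)src0 << 40) |
                 ((uint64_t)src1 << 32);
      }

      int main(void)
      {
          /* r2 = r0 * r1 */
          uint64_t insn = pack_alu(TOY_MUL, 2, 0, 1);
          printf("packed instruction: 0x%016llx\n", (unsigned long long)insn);
          return 0;
      }
      ```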

      >>i remember reading somewhere on the forums that 'fglrx will offer superior features and performance'. and that radeonhd will be a generic "fallback" driver in case fglrx doesn't work.

      I see it the other way round. I expect that the open source driver will be the default that everyone starts with and that users will upgrade to the proprietary driver to get additional features, functionality or performance. We are steering OEMs towards the proprietary driver in most cases since they tend to want things we can only deliver with fglrx, but until Linux becomes a bit more of a mainstream offering at OEMs it's too early to say how this will all work out.

      >>so i'm afraid that there will be something important left out of the specs. something that's not 3rd party IP and something that will give fglrx the upper hand.

      Initially we had announced that this *would* likely be the case -- not to give fglrx the upper hand but to protect some technology and protect revenues from a couple of business units such as workstation -- but we are making good progress finding ways to get that protection without holding back 3d information. I think the limiting factors are going to be (a) open source developer's time to work on 3d drivers, and (b) our ability to provide support for understanding and using all the gory details in the chip.

      >>Will gamers have the same 3D performance with RadeonHD as with fglrx?

      Honestly, I doubt it. There are some really fine developers in the open source world, quite capable of writing an equally good or potentially even better driver, but none of the developers I have spoken with expect to have even a fraction of the time required to make a top-performing driver. What I do expect is a clean, elegant implementation which runs "pretty fast" but realistically nobody in the open source community is going to have time to work on per-application performance optimization.

      As a result the closed source driver will probably continue to be faster on the most popular apps. The interesting thing, though, is that I expect many users won't care. GPUs are pretty fast these days and 50-80% of the potential performance is still pretty good. For apps which don't justify performance optimization work, it's quite possible that the open and closed source drivers will have similar performance over time. Some of this will depend on how well the open source shader compilers work out -- packing work into the shader arrays optimally does make a big difference.
      Last edited by bridgman; 21 December 2007, 11:50 AM.



      • #13
        hmm

        I'm still using a Radeon 9200SE PCI 128MB card.
        While the 3d performance is less than exciting, especially under radeon, the thing that bugs me most is that my 2.8GHz CPU can't keep up with tasks that are ripe to be made parallel.

        I must admit I have no intention to buy another coprocessor until I know it can do /anything/ in a completely open fashion, not just graphics. I believe that open source software will be the most able to take advantage of such a paradigm shift, and that's what I'm looking for. Two separate cards, perhaps with a crossfire link for 3d:
        A video/audio input and output card.
        A 2^x-core RISC vector coprocessor with 512MB of GDDR3 on a PCIe 2^(4|5|6) card.
        Full open specs for both, with the exception of that useless (to me) HDCP stuff.

        I understand CUDA requires proprietary software, which immediately takes it out of the picture for most FOSS. I'm wondering which company is going to provide us with this kind of hardware at transistor-for-transistor rates competitive with most GPUs on the market today, along with the tools we need to showcase it for them, with no reliance on any particular CPU architecture.

        AMD could theoretically create a linux distribution specifically designed to take advantage of their hardware in every possible way, and use it to outperform every other consumer level platform in existence. There is nothing between their staff and the code that comprises the Linux operating system and userland at every possible level.
        The trick is to remember what you're selling. /hardware/. We'll help you make the most of it, if you'll help us make the most of it.

        No other machine on the market will be able to run Fyre in real time as the desktop background, or decode HD video in real time with a CPU load of 3%, or perform real-time mixing and effects on 64 audio channels without breaking a sweat.
        Every power user needs that, whether they know it or not. They just need to see it to realize that. If you'll let us, we can show them.



        • #14
          don't worry, the fglrx devs will mess it up

          there will be no superior fglrx

          if there's a chance to mess it up, fglrx developers won't miss it.

          they are good at keeping release cycles and making fun of users, but not at developing drivers.

          the only part that isn't so funny is how much of users' time fglrx has wasted. may that trend come to a stop some day.

          Originally posted by yoshi314 View Post
          i remember reading somewhere on the forums that 'fglrx will offer superior features and performance'. and that radeonhd will be a generic "fallback" driver in case fglrx doesn't work.

          so i'm afraid that there will be something important left out of the specs. something that's not 3rd party IP and something that will give fglrx the upper hand.



          • #15
            In all seriousness, a Cell processor or 4 on a card might be what you are looking for, or any of the modern GPUs. The big question is ease of programming though -- traditional CPUs waste transistors and performance in order to provide an easy programming model where things like pipelining and cache coherency are taken care of for you, mostly invisibly. GPUs are at the other end -- extremely difficult to program by comparison but potentially capable of much higher throughput. Cell processors kinda split the difference -- you are still painfully aware that you are processing on an array, and you have maybe 1/4 the potential throughput for the $$, but they are a bit easier to program efficiently than a GPU.

            This is the big debate in the processing world right now -- ease of programming vs. potential processing throughput, and how independent the processing threads can be. CPUs are good with collaborative threads -- GPUs, in general, are not, and the question is how effectively the programming paradigms (or compiler technology) can change to make use of potentially more effective hardware implementations.

            An off the shelf HD3850 or 3870 gives you 320 parallel RISC cores today, with the caveat that they run fastest when chewing on vectors because of the 5-way superscalar and 16-way SIMD nature of the shader arrays. We are working on making them easier to program (as are other groups) but there will always be a tradeoff between ease of programming and raw throughput. The question is how best to make those tradeoffs.
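
            A small, purely illustrative C example of the "fastest when chewing on vectors" point: the first loop below carries a dependency from one iteration to the next, so wide SIMD/VLIW hardware (or an auto-vectorizer) cannot spread it across lanes, while the second loop is made of independent iterations and maps naturally onto a wide shader array. Plain C only, no GPU API involved.

            ```c
            /* Illustration of vector-friendly vs. vector-hostile loops; plain C only. */
            #include <stddef.h>
            #include <stdio.h>

            #define N 1024

            /* Each iteration depends on the previous one (loop-carried dependency),
             * so the work cannot simply be spread across SIMD lanes. */
            static float serial_sum(const float *a, size_t n)
            {
                float acc = 0.0f;
                for (size_t i = 0; i < n; i++)
                    acc += a[i];
                return acc;
            }

            /* Every iteration is independent, which is exactly the shape of work a
             * 5-way superscalar / 16-way SIMD shader array is built to chew through. */
            static void saxpy(float *y, const float *x, float alpha, size_t n)
            {
                for (size_t i = 0; i < n; i++)
                    y[i] = alpha * x[i] + y[i];
            }

            int main(void)
            {
                static float x[N], y[N];
                for (size_t i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

                saxpy(y, x, 0.5f, N);
                printf("y[0] = %.1f, sum = %.1f\n", y[0], serial_sum(y, N));
                return 0;
            }
            ```
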
            Last edited by bridgman; 22 December 2007, 03:05 AM.



            • #16
              I hope that Intel will enter the gfx market soon with dedicated GPUs, not only onboard solutions. The current speed is not really high enough for fast 3d apps, but maybe when they use more parallel execution units it could get faster. Then there will be a real competition for the fastest open source driver *g*



              • #17
                Originally posted by bridgman View Post
                This is the big debate in the processing world right now -- ease of programming vs. potential processing throughput, and how independent the processing threads can be. CPUs are good with collaborative threads -- GPUs, in general, are not, and the question is how effectively the programming paradigms (or compiler technology) can change to make use of potentially more effective hardware implementations.
                i just remembered an article about the playstation2 architecture, which is heavily biased towards raw data throughput with vector processing (little system memory and cache, but tons of bandwidth and data processing power) and requires a totally different approach than the pc does. that proved to be very difficult for developers.

                [ read here : http://arstechnica.com/articles/paedia/cpu/ps2vspc.ars ]

                i think sony got it right with the ps2. even though the hardware was very difficult to program, it proved to be perfect for multimedia (video, 3d graphics etc.). and the ps3 with cell seems more like an evolution than a revolution to me, as it looks like an upgraded, multicore ps2 in that regard.

                the ps2 and ps3 aren't the easiest hardware to program. but by exposing (and avoiding) the pc architecture's weakest points in multimedia they seem to be a superior solution.

                perhaps the GPU's should follow the same path.



                • #18
                  Originally posted by yoshi314 View Post
                  perhaps the GPU's should follow the same path.
                  I think the GPUs are leading the way, in the sense that they are the most strongly biased towards "maximum performance at the expense of easy programming".

                  When I talk about ease of programming here I'm talking about using them as general purpose computing elements, not running a standard graphics API.



                  • #19
                    Originally posted by bridgman View Post
                    I think the GPUs are leading the way, in the sense that they are the most strongly biased towards "maximum performance at the expense of easy programming".

                    When I talk about ease of programming here I'm talking about using them as general purpose computing elements, not running a standard graphics API.
                    Bring on message passing. I know people hope to make programming parallel applications easier, but it's always going to be hard. Frameworks can help if your algorithm matches the pattern, but otherwise they often get in the way.
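
                    For anyone who hasn't touched message passing before, a minimal point-to-point exchange with MPI in C looks roughly like the sketch below (this assumes an MPI implementation such as Open MPI or MPICH, compiled with mpicc and launched with something like mpirun -np 2 ./ping).

                    ```c
                    /* Minimal MPI point-to-point sketch; assumes an installed MPI implementation. */
                    #include <mpi.h>
                    #include <stdio.h>

                    int main(int argc, char **argv)
                    {
                        int rank, value = 0;

                        MPI_Init(&argc, &argv);
                        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

                        if (rank == 0) {
                            value = 42;
                            /* Explicitly hand the data to rank 1 -- nothing is shared implicitly. */
                            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
                            printf("rank 0 sent %d\n", value);
                        } else if (rank == 1) {
                            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                            printf("rank 1 received %d\n", value);
                        }

                        MPI_Finalize();
                        return 0;
                    }
                    ```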

                    As a computer scientist, I'm sickened by programmers who want to be lazy and just hope something works rather than understand the theory and algorithms so they can have much more confidence in it working. I'm also sickened by how they want a magic fix when we know the problems are bloody 'hard'.

