"Ask ATI" dev thread


  • I'm not sure if this hasn't been asked already, but can you give us some information on how the work in the private branches for 6xx/7xx 3D support is progressing?

    Will we have 3D support "very soon" (how soon would be the next question) after the doc release?



    • Sure. First thing I should stress is that we're not just talking about 3D support - the 3D engine is now used for basic 2D acceleration, upload/download, Textured Video *and* 3D. The shadowfb support works nicely but apparently won't co-exist with 3D acceleration so we really need to roll out 2D and video at the same time as (or before) 3D in order to avoid giving you a crappy user experience.

      That said...

      - programming sequences are worked out for all the basic acceleration functions and ready to be integrated into the X driver

      - Alex thinks he has figured out how to use the CB (pixel write logic) to implement 2D-style ROPs natively rather than simulating them with alpha blend; we don't plan to hold up the initial release to implement this, though, since performance seems OK with just solid fill & copy accelerated (i.e. enough to avoid hosing performance when we disable shadowfb)

      - Richard's basic arb_f/vp shader assembler seems to be drawing pictures on the screen correctly - setting up the engine state is a lot more complicated on 6xx and up relative to previous generations, so we're going with a shader assembler from day 1

      - drm seems to have enough functionality for EXA and TexVid acceleration; don't think we have tested arb_f/vp assembler over drm yet

      - code has gone from only working on one specific system to working on maybe half of the systems we try, seems to be sensitive to some combination of OS distro and base driver, Matthias and others investigating to try to identify the dependencies
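For the curious, the "simulate ROPs with alpha blend" approach Alex is replacing can only express a handful of the 16 X11 raster ops. A minimal sketch of that mapping, assuming a simplified driver-internal representation (the enum and factor names are illustrative, not actual radeon register programming):

```c
/* Sketch: which X11 ROPs map cleanly onto alpha-blend factors.
 * Names are illustrative, not real radeon register values. */
#include <assert.h>
#include <stdbool.h>

enum rop { GXclear, GXcopy, GXnoop, GXset, GXxor };
enum blend_factor { BF_ZERO, BF_ONE };   /* subset sufficient here */

struct blend_state { enum blend_factor src, dst; };

/* Returns true and fills *out when the ROP is expressible as a blend. */
static bool rop_to_blend(enum rop op, struct blend_state *out)
{
    switch (op) {
    case GXclear: *out = (struct blend_state){BF_ZERO, BF_ZERO}; return true;  /* dst = 0   */
    case GXcopy:  *out = (struct blend_state){BF_ONE,  BF_ZERO}; return true;  /* dst = src */
    case GXnoop:  *out = (struct blend_state){BF_ZERO, BF_ONE};  return true;  /* dst = dst */
    default:      return false;  /* GXset needs a constant color; GXxor and
                                    friends cannot be done with blending,
                                    which is why native CB ROPs are attractive */
    }
}
```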

      There is also a bunch of stuff that isn't done (primarily integration into the current driver framework) but that list has gotten a lot smaller over the last couple of weeks. I think the general consensus is that we understand how the chips work now.

      One of the open questions is whether the initial implementation should use the same drm ioctls as we do for 5xx or the new ones being created as part of the GEM work being done in DRM by Dave, Jerome and others. The argument for the current ioctls is that they are known to work (albeit with serious known problems); the argument for the new ioctls is that they fix most of those problems but are brand new and will probably introduce others -- and troubleshooting those could add big delays along the way. The real question is whether the first implementation is likely to be rewritten anyways (my guess) - if so, using the existing ioctls is fastest, but if the first implementation is likely to last then the new ioctls are the way to go. It does seem that with the passage of time the new ioctls will become the approach of choice anyways, so we are kinda leaning that way.

      Also want to remind everyone that for the 6xx/7xx 3D engine the first release will likely be code, not docs. For previous generations we kinda documented everything and used everything; beginning with 6xx there is a lot of chip functionality we don't use in the closed source drivers (and don't test in production), so separating out just the parts we actually use is the primary task. IP review is being done based on the info we need to make the driver work rather than on a document this time, since most of the detailed documentation is already out in the form of the shader instruction set manual (June).

      EDIT - I just noticed this is in the "ask the ATI fglrx devs" thread; oh well, thread was already derailed anyways
      Last edited by bridgman; 23 October 2008, 01:30 AM.


      • Originally posted by bridgman View Post
        For previous generations we kinda documented everything and used everything; beginning with 6xx there is a lot of chip functionality we don't use in the closed source drivers (and don't test in production) so separating out just the part that we actually use is the primary task.
        Derailing it further but I just have to ask: Why are you putting a bunch of stuff in there you don't need? I thought silicon was rather precious, or are you talking about an ABI for future chips aka "here be dragons and we can't talk about it"? In any case, code that works is better than everything else.



        • Thx bridgman!
          That sounds promising.



          • Originally posted by Kjella View Post
            Derailing it further but I just have to ask: Why are you putting a bunch of stuff in there you don't need? I thought silicon was rather precious, or are you talking about an ABI for future chips aka "here be dragons and we can't talk about it"? In any case, code that works is better than everything else.
            It seems the chipmaking company is just a front for a secret organisation fighting aliens and defending planet Earth from destruction! The extra silicon must be forming a huge planet-wide force field to defend us, and running distributed computing apps to research the development of giant robots to save us.... (I've been watching too much anime)

            Originally posted by bridgman
            EDIT - I just noticed this is in the "ask the ATI fglrx devs" thread; oh well, thread was already derailed anyways
            hehe - in that case, how do you use UVD/UVD2?



            • I would imagine there's quite a bit on the graphics cards that isn't used in Linux specifically, and so isn't used with Linux drivers. I guess there are also some areas that aren't used anywhere for a specific model - complex devices like video cards aren't redesigned from scratch for each new model, and if a block (like a math unit) works just fine but has extra stuff you don't use, you're often better off reusing it rather than making (and testing/debugging) a completely new design.



              • Originally posted by grantek View Post
                It seems the chipmaking company is just a front for a secret organisation fighting aliens and defending planet Earth from destruction! The extra silicon must be forming a huge planet-wide force field to defend us, and running distributed computing apps to research the development of giant robots to save us....
                Nuts, someone blabbed. Look at this neuralizer for a minute

                Seriously, most high-end GPUs include some speculative capabilities based on the vendor's best guess of what OSes and applications will be looking for two years down the road. With the 6xx the silicon impact was small (and was zero on later chips) but the documentation impact was huge. Normally that is not the case.
                Last edited by bridgman; 23 October 2008, 07:11 PM.


                • I was wondering: what is actually the best place to report a bug in the proprietary drivers? I try the drivers every month on two different systems, and every now and then I would like to report a bug to the driver developers. But again and again I wonder: will my bug report even reach the developers?

                  For example: will a bug report written in the "unofficial driver bugzilla" (found from the unofficial wiki, which is linked from the AMD website) ever be seen by a driver developer?

                  Or would I be better off using the "Linux crew driver feedback" found on the AMD website?



                  • Question on pbuffers

                    Hey bridgman!

                    Totally awesome of AMD/ATI to stay in touch with their userbase like this. After years of supporting ... "that other company" I just bought my first ATI card last week, which came 2 days ago. I've heard good things about ATI and their growing relationship with Linux, plus my 7900GS died, hence the switch. Besides, I love AMD and ATI is now uber cool by association.

                    Anyway, I did some looking around and finally selected an HD 4850 for my Ubuntu 8.04 gaming box, intending to switch to any flavor Linux needed to get it running right.

                    Firstly, WOW. That's a nice card.

                    Secondly (the real reason for this post), I'm a WoW gamer so getting 3D up and running was definitely a priority. After wrestling with what turned out to be a very simple driver install (kept Ubuntu), I was able to get WoW running with Catalyst 8.9 but I was dismayed when I started Warcraft and saw the minimap turn all white. Oh Noes!

                    I did some research and found it was said to be a driver issue, and confirmed that research here with sksk's post and your reply (posts #236 and #239 of this thread, respectively).

                    As I understand it, for whatever reason the Catalyst drivers stopped supporting pbuffers. Blizzard programmed the minimap to use pbuffers instead of framebuffers for "inside/outside" transitions, which is what causes the glitch. (http://bugs.winehq.org/show_bug.cgi?id=11826)
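For reference, "pbuffer support" at the GLX level just means the driver advertises at least one FBConfig whose GLX_DRAWABLE_TYPE mask includes GLX_PBUFFER_BIT. A minimal sketch of that check, with configs modeled as plain bitmasks so it runs without an X server (in a real client you would fetch the masks via glXGetFBConfigAttrib):

```c
/* Sketch: does any FBConfig's drawable-type mask allow pbuffers?
 * Bit values are from the GLX 1.3 spec; the mask-array model is a
 * simplification for illustration. */
#include <stdbool.h>
#include <stddef.h>

#define GLX_WINDOW_BIT  0x00000001
#define GLX_PIXMAP_BIT  0x00000002
#define GLX_PBUFFER_BIT 0x00000004

static bool have_pbuffer_config(const int *drawable_masks, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (drawable_masks[i] & GLX_PBUFFER_BIT)
            return true;
    return false;   /* the white-minimap case: no pbuffer-capable config */
}
```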

                    My actual question is: since sksk's question was a few months ago and the issue still seems to be around, will pbuffer support be re-enabled anytime soon, and if not, is there a hack/workaround to allow or emulate pbuffers?

                    Really I don't care that much about pbuffers, I just want to see the minimap issue fixed. By driver, addon, or hack, I care not.
                    WotLK is coming soon and I need my minimap!

                    Thanks for your time!
                    Rooksr

                    PS. I've read a few posts now where people who had over 2GB RAM running removed all but 2GB and it cleared up a lot of graphic/freezing issues. Since I have you here, is there anything about ATI video card RAM addressing limits/issues in Linux I should know about? Being a new ATI user I'm not aware of a lot of things that may be old hat to others. I'm running 4GB of which only 3GB is recognized. Thanks!



                    • Originally posted by Rooksr View Post
                      PS. I've read a few posts now where people who had over 2GB RAM running removed all but 2GB and it cleared up a lot of graphic/freezing issues. Since I have you here, is there anything about ATI video card RAM addressing limits/issues in Linux I should know about? Being a new ATI user I'm not aware of a lot of things that may be old hat to others. I'm running 4GB of which only 3GB is recognized. Thanks!
                      With the release of Ubuntu 8.10 (Linux kernel 2.6.27-7) it appears that the 4GB issue with Intel chipsets (P35, X38, X48) has been fixed (when using AMD64). The issue of only 3GB being available to you may be due to running an i386 (32-bit) kernel without PAE enabled (this can also be remedied by switching to a BIGMEM kernel).
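The 3GB symptom is the classic 32-bit address-space squeeze: a non-PAE 32-bit kernel can address 2^32 bytes (4 GiB) of physical address space, and the video card's aperture plus other PCI/MMIO windows are mapped below 4 GiB, leaving roughly 3 GiB of that space for RAM. PAE extends physical addressing to 36 bits. A quick arithmetic sketch (these are the architectural limits, not your machine's exact memory layout):

```c
/* Physical address-space limits for 32-bit vs PAE addressing. */
#include <stdint.h>

/* Bytes of physical address space reachable with the given number of
 * address bits (32 without PAE, 36 with PAE on x86). */
static uint64_t addr_space_bytes(unsigned bits)
{
    return 1ULL << bits;
}

/* Convert bytes to whole GiB. */
static uint64_t to_gib(uint64_t bytes)
{
    return bytes >> 30;
}
```

With ~1 GiB of that 4 GiB window consumed by the graphics aperture and other MMIO, roughly 3 GiB remains visible as RAM, matching what you're seeing.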

