Open-Source 2D, 3D For ATI Radeon HD 5000 Series GPUs


  • #61
    Have you already filed a bug on your problems, btw?

    • #62
      I hijacked this report for xv:
      https://bugs.freedesktop.org/show_bug.cgi?id=29788

      I tried to write a simple testcase that'd show the gfx corruption I mentioned, but after stripping imagemagick down as far as I could, the result was an uncorrupted screenshot. oops.
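      In case anyone wants to script something similar, here's a rough sketch of the screenshot-and-compare idea (Python, assuming ImageMagick's import and compare tools are installed; known_good.png and the /tmp paths are made-up names for illustration, not anything from the bug report):

      import subprocess

      def capture_root(path="/tmp/current.png"):
          # Grab the whole root window with ImageMagick's `import`
          subprocess.run(["import", "-window", "root", path], check=True)
          return path

      def differing_pixels(reference, candidate, diff_out="/tmp/diff.png"):
          # `compare -metric AE` writes the count of differing pixels to stderr
          # and exits 1 when the images differ, so don't treat that as an error.
          result = subprocess.run(
              ["compare", "-metric", "AE", reference, candidate, diff_out],
              capture_output=True, text=True)
          return int(float(result.stderr.split()[0]))

      if __name__ == "__main__":
          current = capture_root()
          print("differing pixels:", differing_pixels("known_good.png", current))

      A nonzero count means the capture doesn't match the reference, and the diff image shows where they disagree.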

      btw:
      Originally posted by monraaf:
      It seems that I have to run fglrx first to set the GPU to a sane state before I can use any of the open source acceleration stuff.
      That hasn't been a problem for me; the OSS drivers work equally well (or not well) after a cold boot.

      • #63
        How about running it under Valgrind, if that segfault relates to your case as well?
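        Something along these lines, where the program name is just a placeholder for whatever is segfaulting:

        valgrind --log-file=valgrind.log <the-crashing-program>

        Then attach valgrind.log to the bug report.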

        • #64
          (and really, reopening bugs on different issues is a bad habit)

          • #65
            Originally posted by bridgman:
            OK, that seems like a useful clue. Do you still get hangs eventually when running the open drivers after fglrx?
            No, as far as I have tested it, I haven't seen any hangs after that.

            Which GPU are you using?
            This is on an HD 5750.

            • #66
              Does anyone know what the plan is (if there is one!) for continued development on this? I just checked the evergreen_accel branch and it appears there hasn't been a commit for 9 days.

              In the face of the (seemingly unexpected) corruption and crashes, is it still the case that the AMD guys are being moved off Evergreen and onto HD6000?

              • #67
                Originally posted by Wingfeather:
                Does anyone know what the plan is (if there is one!) for continued development on this? I just checked the evergreen_accel branch and it appears there hasn't been a commit for 9 days.

                In the face of the (seemingly unexpected) corruption and crashes, is it still the case that the AMD guys are being moved off Evergreen and onto HD6000?
                Most significant changes the ATI guys make to their open source driver have to go through legal red tape. This red tape takes many times longer than actually writing the code, from what I hear. It's possible they have many fixes queued up but are waiting for them to be cleared.

                I think the system is broken, TBH. ATI thinks their own 3d driver algorithms are "intellectual property". They're mathematics for crying out loud. This is another instantiation of the Bilski debate, right here in our driver.

                I would rather that ATI go ahead and commit un-obfuscated drivers that contain the most maintainable and efficient algorithms they are aware of to provide the required functionality, with no regard to how much information about the hardware it potentially exposes. It might save Nvidia a week or two on figuring out what ATI is doing, but I have no doubt that if Nvidia really feels that ATI has techniques they lack, they've already taken apart a dozen or more HD5870s in their office, and run a $10,000/license disassembler on the Windows driver.

                It's the kind of double-tongued scenario that we have with most corporate-controlled "openness". They play the openness for the PR and approval of their customers, but it's really a sham, because they intentionally censor and filter everything before they "open" it up. In this case, it's not information that's being withheld from the public, but rather correct, efficient, and timely drivers.

                And if you disagree with the above sentiment, then I expect that the asymptotic performance of Mesa3D on Radeon hardware will approach the performance of fglrx within 2% or so in the long run (or hey, even exceed fglrx performance, since you don't have to go through an abstract kernel interface like the one fglrx has). I'll be waiting for you to show me when relative performance parity has been reached - on any radeon hardware. Any at all.

                Just think: if there were no clever algorithms arms race between Nvidia and ATI, they would actually have to compete on more difficult terrain, like customer service, timeliness and license of drivers, price, and hardware bells and whistles. They would still be motivated to develop innovative algorithms (or at least use the ones others have developed), because as soon as they let their guard down, their cards would immediately get outperformed by the competition, and they'd have to find a way to make up for it. And they could still use timing as a way to gain a "landslide" market share over the competition: make a whole bunch of new cards, push out a great new driver with innovations, customers realize the superior solution and buy it... by the time your competitor has figured out what you've done to beat them and made a release of their own, you've already made your sales.

                Of course it makes sense to keep the gory details of the chip design secret, but I think the discrepancy between the closed and open source driver performance is not about that, but rather about software algorithms. Again, Bilski.

                Bring down the iron curtain and give us true openness; cut out the red tape.

                • #68
                  Well that reply went awfully off-track.

                  • #69
                    Originally posted by etnlWings:
                    Well that reply went awfully off-track.
                    Did it? I provided some background on why there haven't been any commits for 9 days, and why it may be a while until we see any more coming from ATI.

                    To put it apolitically, then, how about this: yes, it is being worked on, but you'll have to wait n more days for the code to clear ATI legal before it's released. They'll eventually get it to a point where it's rendering correctly, but you've just got to wait.

                    How's that?

                    • #70
                      There's no evergreen accel code waiting for red tape right now. In addition to fixing issues on older asics, I've been tracking down the GPU hangs in the evergreen code. It's a complex, time-consuming process.

                      • #71
                        Originally posted by allquixotic:
                        Most significant changes the ATI guys make to their open source driver have to go through legal red tape. This red tape takes many times longer than actually writing the code, from what I hear. It's possible they have many fixes queued up but are waiting for them to be cleared.
                        Huh?

                        The IP review is about making sure we don't accidentally expose too many "gory details of the chip design" along with the programming info. The focus is on the hardware info itself - registers & bitfields, PM4 packets, data structures, microcode binaries - and the rest of the code is untouched except for the HW info it contains. Once the initial release of programming info goes through IP review and gets released, subsequent driver changes are made directly in the public repo without further review, *except* in the cases where additional programming info needs to be exposed.

                        The reasons there have been no commits since the initial release are simple:

                        - we haven't found a fix for the corruption & hangs yet, although we have confirmed that running fglrx first does address them, so Alex is working through the registers to find out which ones make the difference (see the illustrative register-diff sketch below)

                        - the problems do not appear on Richard's systems, either at home or at the office, so it's hard for him to "fix them"

                        - the other devs are focusing on getting r600g closer to production readiness rather than jumping on the Evergreen code... this may be a problem for the 7.9 release

                        Nothing to do with IP review. There are no "queued up fixes".
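
                        For illustration only (the dump format and file names here are made up, and this isn't the actual tooling): given two text dumps of register state, one from a cold boot and one taken after fglrx has initialized the card, with one "offset value" pair per line, finding the registers that differ is a small scripting job:

                        # Toy register-dump diff: report registers whose values differ
                        # between a cold-boot dump and a post-fglrx dump.
                        # Assumes each line of a dump is "offset value".
                        def load_dump(path):
                            regs = {}
                            with open(path) as f:
                                for line in f:
                                    parts = line.split()
                                    if len(parts) >= 2:
                                        regs[parts[0]] = parts[1]
                            return regs

                        cold = load_dump("regs_cold_boot.txt")
                        warm = load_dump("regs_after_fglrx.txt")

                        for offset in sorted(set(cold) | set(warm)):
                            if cold.get(offset) != warm.get(offset):
                                print(offset, cold.get(offset, "missing"), "->", warm.get(offset, "missing"))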

                        Originally posted by allquixotic:
                        I think the system is broken, TBH. ATI thinks their own 3d driver algorithms are "intellectual property". They're mathematics for crying out loud. This is another instantiation of the Bilski debate, right here in our driver.
                        Nothing to do with algorithms. I don't know whose system you are describing but it's not ours.

                        Originally posted by allquixotic:
                        And if you disagree with the above sentiment, then I expect that the asymptotic performance of Mesa3D on Radeon hardware will approach the performance of fglrx within 2% or so in the long run (or hey, even exceed fglrx performance, since you don't have to go through an abstract kernel interface like the one fglrx has). I'll be waiting for you to show me when relative performance parity has been reached - on any radeon hardware. Any at all.

                        Of course it makes sense to keep the gory details of the chip design secret, but I think the discrepancy between the closed and open source driver performance is not about that, but rather about software algorithms. Again, Bilski.
                        The performance difference between the open and closed drivers is a simple function of available developers -- the Catalyst driver shares a lot of code across multiple OSes and as a result enjoys maybe 50x the number of developers.

                        The "60% to 70% of fglrx" performance estimate was based on the tiny size of the open source driver development community relative to proprietary driver teams, not the amount and nature of programming information we release. Proprietary driver teams share code across 100% of the client PC market (ie all OSes) and are staffed accordingly, while the open source driver teams are essentially staffed to the size of the Linux client PC market.

                        Algorithms only go so far... at some point you just need great honkin' piles of hardware-specific code in order to get the most performance out of a piece of hardware, and the size of the development team *does* make a difference.

                        • #72
                          Originally posted by bridgman:
                          - the problems do not appear on Richard's systems, either at home or at the office, so it's hard for him to "fix them"
                          Could you help his effort by giving him a system on which it is currently broken?

                          • #73
                            Sure, if we could find one. First impression was that the problems were being seen with HD58xx but not with lower end hardware, but that turned out not to be the case. There were indications that the problems seemed to be Northbridge-related but that theory isn't holding up either.

                            Right now Alex has one machine that shows the problems and that's it.

                            • #74
                              Normally we rely on user and developer feedback to help determine hardware dependencies for problems, but we aren't seeing much feedback yet. I suspect that when the first user published that they had problems, other users simply didn't try the code, which meant that we didn't get as many of the "it works on my system" and "it doesn't work on my system" reports that help to track down problems. We did get the feedback about running fglrx first, which is a big help.

                              The community developers would normally have been testing the code by now but most of them are working on r600g instead.

                              • #75
                                Originally posted by bridgman:
                                ...Insightful comments...
                                Sorry, sounds like I was a bit off my rocker earlier in the day.

                                I was under the (perhaps incorrect) impression of the following: once a driver is rendering correctly and seems to nominally support the features it's supposed to, making it work fast is basically an effort that is left to non-ATI developers. If they can reverse engineer fglrx to figure out how to get the last bit of performance out of it, great. If they can come up with the code themselves, great. But because fglrx itself has techniques that ATI isn't comfortable with exposing to potential competitors, they leave these last optimizations out, allowing the driver's ultimate performance level (long term) to be decided by external contributors.

                                Now if that's off base, then I'm sorry. If the ATI legal team would accept a commit from an ATI employee who is putting forth a "crown jewels" style commit that may contain ATI-patented software algorithms (which obviously would have some unique benefit for the features or performance of the driver), that's heartening. What you've claimed is that, if there were only time and manpower to get it done, the open source drivers could be made to be as fast as fglrx, and the legal review team would be fine with that? If I were hearing this from any other person, I would be reluctant to buy it; but coming from you, I accept it.

                                How about a stronger, more abstract claim? Consider whether getting the most out of the hardware (in terms of performance and features) is compatible with the filtering imposed by the legal review process; i.e., the review process would not filter out potentially beneficial code just because it has some sort of proprietary value to ATI.

                                If the two are indeed compatible, that is great. If not, we may reach a point where ATI's open source devs can not contribute further, even if they know what needs to be done. I would be more understanding if there were, in principle, a way to accomplish the same task without falling under the scope of your "IP", but what if -- hypothetically -- there were no other way? The hardware only functions a certain way, and maybe this is far-fetched, but perhaps there is only one best way to accomplish a certain task, or reach a certain level of performance. If that way is under some sort of trade secret or patent protection, I would wager that the legal team would hesitate, if not outright block the contribution, provided that the contribution is coming directly from an ATI employee. No? Then maybe I've misjudged the character of ATI as a for-profit corporate entity with an aggressive "intellectual property" protection policy.

                                I regret that I've interpreted your policies as being bad for software freedom if in fact they are not, which now seems the more likely alternative. I don't want the bigwigs to conclude that those FOSS zealots are more trouble than they're worth, so I'll leave you be.

                                Bah ... Now I feel like I'm just looking for a way to salvage my original argument by making some spin-off claim that may have more validity, but maybe I'm still wrong, so I'll quit while I'm ahead. I am so accustomed to corporate competitiveness screwing over open source projects that I've almost come to expect it. Sometimes people are just trying to get things done. I respect that.
