
Radeon Gallium3D Still Long Shot From Catalyst


  • #16
    Buy lots of RAM and store the apitrace in a Snappy- or LZ4-compressed ramdisk. That should provide a faster load time for the apitrace...
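    A minimal sketch of that setup, assuming a kernel with the zram module; the device name, size, mount point, and trace filename below are illustrative, and lz4 support depends on the kernel build:

```shell
# Sketch: an lz4-compressed ramdisk via zram for holding apitrace files.
# zram0, the 8G size, and /mnt/traces are assumptions -- adjust to taste.
sudo modprobe zram num_devices=1
echo lz4 | sudo tee /sys/block/zram0/comp_algorithm  # if built in; lzo otherwise
echo 8G  | sudo tee /sys/block/zram0/disksize
sudo mkfs.ext4 /dev/zram0
sudo mkdir -p /mnt/traces
sudo mount /dev/zram0 /mnt/traces
cp app.trace /mnt/traces/
glretrace /mnt/traces/app.trace  # replay straight from compressed RAM
```

    Reads then come back at RAM speed, while lz4 keeps the memory footprint well under the raw trace size.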

    • #17
      Originally posted by drag View Post
      I think it's very promising. The Xonotic benchmarks are very pleasing.

      My guess from the benchmarks is that some things are still falling back to software, which is killing performance in certain cases. With some optimization of applications and the filling in of some missing pieces in the drivers, we'll be golden. Once open source gets within about 70-80% of proprietary, I'd call it a success.
      Indeed. The thing about open source drivers is that they can be debugged. It is possible to find out where they are slow, and then further optimise those parts of the code.

      After adding HiZ and chasing down further performance bottlenecks in the open source code, performance can be expected to reach perhaps 80% of the closed binary drivers. Since almost no-one needs 200 fps, and the difference between 160 fps and 200 fps is all but imperceptible anyway, the performance issue with open source drivers will essentially be solved.

      • #18
        Bridgman, given that GCN moved to hardware scheduling, I assume the lack of an advanced compiler in Mesa becomes less of a bottleneck. How would you estimate the effect of that move?

        E.g. do you see GCN cards getting to 80% of Catalyst, where earlier generations get 70%, etc.?

        • #19
          Yeah, I don't have any real numbers, but from a pure shader-compiler POV my guess is that half the gap between the open source and proprietary drivers might go away with GCN.

          For compute the impact will probably be even greater (since graphics is naturally short-vector work while compute is naturally scalar). We're also picking up some compiler improvements at the same time by using LLVM, so it could get interesting.

          The bigger question is how much of today's performance delta comes from the shader compiler rather than from things like HyperZ, since the impact of both increases with display resolution.
          Last edited by bridgman; 03-24-2012, 12:03 PM.

          • #20
            I also think the opposite will happen with nouveau/kepler, because they removed hw scheduling there. Half the fps on a newer-gen card, on a shader-heavy workload?

            • #21
              Originally posted by droidhacker View Post
              Sure you can, it just costs about $10 bazallion.
              I don't think so. You can't.

              • #22
                I've always wondered why ATI/AMD doesn't just hire an additional five developers for OSS development. I'd assume that it would take 6 months of training to get them to the point where they could produce something useful, but we'd see real results by the end of a year, and have a performant replacement for Catalyst in two years.

                JB,

                What's the deal with that? $750k buys a team for two years. Does the revenue from Linux-related sales not justify the cost? (Admittedly, I have no idea how much of AMD's revenue is generated via Linux-related sales, nor do I understand how your SD org is run.) I do know that disappointed customers are far less likely to make subsequent purchases, so this is probably something that should have been done a couple of years ago, when Gallium was coming about.

                On a slightly related note, I'm a bit disheartened to see everyone working so hard on legacy technology. I really thought that we would all have 10-bit/chan monitors by now. I really thought that we would all have ray tracing by now. I really thought that 'everyone' would be able to play back a 1080p Main-Profile H264 file by now. Even if I had one of the dozen 10-bit/chan panels on the market, I doubt that I'd be able to drive the thing with X/Mesa (I could be totally wrong). I don't want to diminish the efforts of everyone working on radeon, but when the next CG generation or innovation becomes mainstream, we're going to be back at the starting line again.

                What a strange world we live in.

                F

                • #23
                  Originally posted by russofris View Post
                  I've always wondered why ATI/AMD doesn't just hire an additional five developers for OSS development. I'd assume that it would take 6 months of training to get them to the point where they could produce something useful, but we'd see real results by the end of a year, and have a performant replacement for Catalyst in two years. [...]
                  The problem with the AMD open-source driver team is this: they already have devs for a 2% market share, but they think Linux only has a 1.5% market share.

                  In capitalist terms, this means they want to cut one or two open-source devs to fit their market-share calculations.

                  Because they don't "invest" in Linux; they only play at earning market share.

                  Or, yes, you can count the 0.5% of overstaffing as an "investment".

                  • #24
                    Originally posted by russofris View Post
                    On a slightly related note, I'm a bit disheartened to see everyone working so hard on legacy technology. I really though that we would all have 10-bit/chan monitors by now. I really thought that we would all have ray-tracing now. I really thought that 'everyone' would be able to play back a 1080p Main-Profile H264 file by now. Even if I had one of the dozen 10-bit/chan panels on the market, I doubt that I'd be able to drive the thing with X/Mesa (I could be totally wrong). I don't want to diminish the efforts of everyone working radeon, but when the next CG generation or innovation becomes mainstream, we're going to be back at the starting line again.
                    I think 10bit has been supported for a while:

                    $ grep -i 10bit *
                    atombios.h:#define PANEL_10BIT_PER_COLOR 0x03
                    radeon_encoders.c:    args.v4.ucBitPerColor = PANEL_10BIT_PER_COLOR;

                    • #25
                      Originally posted by russofris View Post
                      I've always wondered why ATI/AMD doesn't just hire an additional five developers for OSS development. I'd assume that it would take 6 months of training to get them to the point where they could produce something useful, but we'd see real results by the end of a year, and have a performant replacement for Catalyst in two years.
                      Well,

                      A) Hiring more programmers does not make a project go faster in the same way that getting more girls pregnant won't get you a child faster.

                      B) AMD needs to be able to justify spending more money to hire more people. Five experienced programmers cost, when you take into account the extra wages/taxes/government regulation/benefits/administrative overhead, close to an extra million dollars a year. The people who make the decisions to spend more money have to be able to prove the benefit, because _it's_not_their_money_. It's not their money, it's not their bosses' money, it's not their bosses' bosses' money, it's not even the president's money... it's the board of directors' money. If you can't prove you will make more money by spending money, then you are wasting _their_ money. Everybody requires a return on investment, and this sort of thing can get people fired, and in that case you risk losing all the existing programmers.

                      You have to remember that within a corporation there are any number of 'mini' corporations. A large organization is run by delegation (otherwise it can't survive the administrative burden), so a number of mostly independent and autonomous units form within the larger corporate structure. Nobody is going to be able to answer the question of "why doesn't AMD hire more programmers?" without being a manager (or higher) at AMD, because we can't know how the budget is set up or how the internal politics work.

                      • #26
                        Originally posted by drag View Post
                        Nobody is going to be able to answer the question of "why doesn't AMD hire more programmers" without being a manager (or higher) in AMD. This is because we can't know how the budget is setup and how the internal politics work.
                        I am a "manager (or higher)" at AMD, and I do answer the question, but when I do, people just fill my mailbox up with PMs, so I don't do it very often.

                        The bottom line is that (as drag said) the cost/benefit numbers for spending more on Linux need to be favourable *and* be more attractive than the other requests which compete for the same incremental engineering budget.

                        Coming up with good numbers for Linux is complicated by a number of obvious issues:

                        - since most of the distros aren't sold, there are no sales numbers, only download numbers, which aren't the same

                        - counting OEM preloads doesn't help to measure the end user OS distribution, partly because Linux users buy Windows preloads either to get the hardware configuration they want or to get the cheap copy of Windows for the cases where they may need to use it

                        - when SKUs *are* sold either with Linux or without any OS there is wild disagreement on what OS they end up with... pirated Windows, some form of Linux, or something else

                        - there's a "clustering" issue... if you use Windows, most of the people you know probably use Windows too, so the status quo seems fine... but if you use Linux, most of the people you know probably use Linux as well. That makes it obvious to you that companies are spending money in the wrong places, and justifies being sufficiently hostile to the people who *are* trying to find good answers that they go away with a bad impression of Linux users and don't come back

                        - all supply chains tend to magnify demand for the highest volume SKUs and minimize or ignore demand for the lower volume SKUs, which has the effect of playing up Windows demand and downplaying Linux demand... and right now I don't think anyone has sufficiently good models to compensate for this

                        So far consumer Linux support from *all* HW vendors has mostly been a function of (a) how much "comes for free" as a consequence of doing engineering work for more tangible markets like 3D workstation, and (b) how much spare cash is washing around to "do nice things" whether they make money or not, and (c) how long each company has been working on the current driver stack and hence the "polish" that stack has been able to build up over the years. Point (c) in particular requires you to look at trends in order to get an accurate picture of what's going on.

                        Going forward there are some obviously interesting things happening. Android is growing in popularity, and is interesting to the embedded market as well as more obvious markets like tablets and smartphones, but the graphics stack is quite different from desktop Linux so the obvious question for you is "if we spend additional money on Android does that count as spending more on Linux even if there is no direct benefit for consumer PC Linux users ?".

                        We have also recently added two experienced developers (three actually, but since one was replacing Richard I'm only counting two) who IMO are, and over the next year or so will be, as productive as or more productive than a half dozen developers hired and brought up to speed on the job. Just a thought...
                        Last edited by bridgman; 03-25-2012, 12:44 PM.

                        • #27
                          I would rather AMD invest more effort into somehow lowering the legal-review overhead. I'm not sure exactly how that could be done, but surely there are processes or communication channels that could be improved. Basically, if you can reduce the amount of time and effort required to clear the initial code drops and documentation releases, you don't even really need your own developers after that point -- I mean you do, obviously, but the size of your team doubles or triples as soon as the ASIC is fully out there and documented (except for UVD; not even going there). And you don't have to pay a dime for the help. Those developers are personally or commercially motivated to do it.

                          I mean: what financial benefit does Dave Airlie get (aside perhaps from a paycheck if Red Hat is paying him to work on it) from helping you guys with r600g support?

                          What financial benefit does Jerome Glisse get? How about Marek Olsak?

                          I really doubt you guys are paying any of these folks. They either do it because they want to, or because their own employer (i.e. not AMD) is paying them to do it.

                          And the good news is I've seen the number of contributors to the Radeon drivers steadily increase over the years. I thought it might decrease as people get frustrated/tired with it, but it seems that the old hands are mostly sticking around, and the new folks are making a big splash.

                          In effect, the sooner you guys get that first documentation and initial code (for both kernel and userspace) out there for people to build upon, the better off the entire development cycle will be. It still won't get us full OpenGL 2.1 support on release day of the flagship card, but having to wait the better part of a year for the initial code drop is a little ridiculous. And I'm betting a very significant portion of that wait is spent in the legal department, or in development after receiving feedback from legal. That's what drives me up a wall; I am fine if you guys can only convince management to provide a certain number of programmer seats, and I'm fine if they can only do a certain amount of work in a given time... what's not fine is adding weeks and months to the community's wait through completely useless legal overhead (because legal isn't contributing anything to improve the drivers; all they do is cost time and possibly even take away features or functionality that isn't allowed to be exposed in the open drivers).

                          • #28
                            OK, I've posted all this before but looks like it's time to do it again...

                            1. It's primarily technical review, not legal review. Michael calls it "legal" but ~95% of the time is spent either getting access to senior technical folks or responding to issues/concerns they raise. Most of the review effort relates to either (a) understanding the future impact of what we release, ie "will exposing X set of registers affect security work being done for an OS release a year or two in the future ?" or (b) understanding exactly whose IP is included in the information we want to release (this is a special challenge in the video/audio areas).

                            2. Initial review doesn't take that long. Getting a "pass" from the review does, and typically involves multiple reviews after either changing what we propose to release or researching & building justification for why something unsafe-looking is actually OK to release. In some cases people need to revisit and change their own views on the sensitivity of specific hardware information -- that takes a while at the best of times and takes a really long time when they are 110% busy with other work.

                            3. The actual impact of review on community participation isn't as big as you might think, since most of the developers with time to work on new ASIC support have early access to the code under NDA.

                            4. We usually do early releases of partial code so that the delay for release of a working stack is pretty small... ie I expect the delay from "working userspace code for SI" to "public release of same" will be pretty short. There have been three interim releases for SI already -- multiple ring support, GPU VM, and kernel driver -- to keep a lot of the review & revision off the critical path.

                            That said, the release of SI kernel code ended up later relative to the merge window than any of us wanted, although since it's new HW enablement with very low chance of breaking existing functionality I was perhaps less worried about merge window alignment than I should have been.

                            5. Until we have initial support working, a *lot* of the development work requires access to internal HW and SW development teams... once the initial support is in place it's a lot easier for community developers to work independently.

                            6. Remember that the documentation has to be *written*, not just released, and that until we have working initial code for a new ASIC we usually don't understand the details enough to write complete & correct documentation in the first place.

                            Register spec docs are easier to generate but much more expensive to review, since the review effort is pretty much a function of the amount of information being exposed. Releasing initial code first lets us focus the review effort on the core set of information required to make a working driver... as time permits we can go back and release additional information without it being on the critical path for getting initial support into a public repository.

                            That said, we aren't going back and releasing quite as much documentation as is really needed, so we do need to improve there, but I think that will happen automatically now that we're getting caught up with the rate of new hardware introduction.

                            7. One of the systemic challenges we have had is that internal re-orgs and/or changes in senior technical staff for specific areas cause us to lose the go-to contacts who are familiar with open source requirements/reviews and can respond quickly to requests. This is an area we do intend to improve in the future.

                            8. The most effective solution for all this is to start earlier so that more (or all) of the effort happens invisibly to you, before the part is launched. That has been our main focus -- SI was the first GPU core where we were able to get a non-trivial amount of work done before launch, and for the next generation we are starting sufficiently early that we should be able to hide both development and review in the pre-launch window where it doesn't impact users or community developers. The real win is to have all these discussions happen while the hardware is being designed and can still be tweaked... and that's where we are heading.
                            Last edited by bridgman; 03-25-2012, 02:12 PM.

                            • #29
                              Originally posted by bridgman View Post
                              Going forward there are some obviously interesting things happening. Android is growing in popularity, and is interesting to the embedded market as well as more obvious markets like tablets and smartphones, but the graphics stack is quite different from desktop Linux so the obvious question for you is "if we spend additional money on Android does that count as spending more on Linux even if there is no direct benefit for consumer PC Linux users ?".
                              You can have both at the same time if you do it right. Desktop Linux and Android share the kernel, which means you only need to push the graphics driver into the kernel!

                              Problem fixed.

                              In fact, Android is a "Linux", and if the drivers ARE in the KERNEL, then it doesn't matter whether it's Android or Ubuntu.

                              • #30
                                That's like putting my pickup truck in my pocket so it's always there when I need it.

                                The open source userspace stack is >10x the size of the kernel driver, and the proprietary userspace stack is >50x and approaching the size of the entire Linux kernel. I don't think there would be a lot of joy among the kernel developers if we tried to move all that into kernel space.

                                The multimedia API framework is quite different between Android and X/Wayland. They may converge over time but right now they are very different.
                                Last edited by bridgman; 03-25-2012, 02:52 PM.
