Radeon Gallium3D Still Long Shot From Catalyst


  • #21
    Originally posted by russofris
    On a slightly related note, I'm a bit disheartened to see everyone working so hard on legacy technology. I really thought that we would all have 10-bit/chan monitors by now. I really thought that we would all have ray-tracing by now. I really thought that 'everyone' would be able to play back a 1080p Main-Profile H264 file by now. Even if I had one of the dozen 10-bit/chan panels on the market, I doubt that I'd be able to drive the thing with X/Mesa (I could be totally wrong). I don't want to diminish the efforts of everyone working on radeon, but when the next CG generation or innovation becomes mainstream, we're going to be back at the starting line again.
    I think 10bit has been supported for a while:

    grep -i 10bit *
    atombios.h:#define PANEL_10BIT_PER_COLOR 0x03
    radeon_encoders.c: args.v4.ucBitPerColor = PANEL_10BIT_PER_COLOR;
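    For illustration only -- this is a hedged sketch, not the actual radeon_encoders.c logic, and panel_bpc_to_atom()/radeon_panel_bpc() are made-up names -- the idea is simply to map the panel depth reported by the EDID onto the ATOM constants from atombios.h:

    /* Sketch: translate EDID-reported bits per color into the ATOM constant.
     * The PANEL_*BIT_PER_COLOR values come from atombios.h; the helper names
     * used here are hypothetical. */
    static u8 panel_bpc_to_atom(int bpc)
    {
        switch (bpc) {
        case 6:
            return PANEL_6BIT_PER_COLOR;
        case 10:
            return PANEL_10BIT_PER_COLOR;
        case 8:
        default:
            return PANEL_8BIT_PER_COLOR;
        }
    }

    /* ...later, when filling in the DIG encoder arguments: */
    /* args.v4.ucBitPerColor = panel_bpc_to_atom(radeon_panel_bpc(encoder)); */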



    • #22
      I've always wondered why ATI/AMD doesn't just hire an additional five developers for OSS development. I'd assume that it would take 6 months of training to get them to the point where they could produce something useful, but we'd see real results by the end of a year, and have a performant replacement for Catalyst in two years.
      Well,

      A) Hiring more programmers does not make a project go faster, in the same way that getting more women pregnant won't get you a child any faster.

      B) AMD needs to be able to justify spending more money to hire more people. Five experienced programmers are expensive: when you take into account the extra wages/taxes/government regulation/benefits/administrative overhead, you are looking at close to an extra million dollars a year (see the rough sketch below). The people that make the decision to spend that money have to be able to prove the benefit, because _it's_not_their_money_. It's not their money, it's not their bosses' money, it's not their bosses' bosses' money, it's not even the president's money... it's the board of directors' money. If you can't prove you will make more money by spending money, then you are wasting _their_ money. Everybody requires a return on investment, and this sort of thing will get people fired; in that case you risk losing all the existing programmers.
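      A back-of-envelope version of that number, as a hedged sketch only (the salary and overhead multiplier are assumptions for illustration, not AMD's real figures):

      #include <stdio.h>

      int main(void)
      {
          /* Assumed figures, for illustration only. */
          const double base_salary = 120000.0; /* senior developer salary, USD/year */
          const double overhead    = 1.6;      /* taxes, benefits, admin, etc.      */
          const int    headcount   = 5;

          /* 5 * 120000 * 1.6 = 960000 -- roughly the "extra million a year" above */
          printf("approx annual cost: $%.0f\n", headcount * base_salary * overhead);
          return 0;
      }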

      You have to remember that in a corporation there are any number of different 'mini' corporations. A large organization is run by delegation (otherwise it can't survive the administrative burden), and a number of mostly independent and autonomous units form within the larger corporate structure. Nobody is going to be able to answer the question of "why doesn't AMD hire more programmers" without being a manager (or higher) in AMD. This is because we can't know how the budget is set up and how the internal politics work.



      • #23
        Originally posted by drag
        Nobody is going to be able to answer the question of "why doesn't AMD hire more programmers" without being a manager (or higher) in AMD. This is because we can't know how the budget is set up and how the internal politics work.
        I am a "manager (or higher)" in AMD, and I do answer the question, but when I do, people just fill my mailbox up with PMs, so I don't do it very often.

        The bottom line is that (as drag said) the cost/benefit numbers for spending more on Linux need to be favourable *and* be more attractive than the other requests which compete for the same incremental engineering budget.

        Coming up with good numbers for Linux is complicated by a number of obvious issues:

        - since most of the distros aren't sold, there are no sales numbers, only download numbers, which aren't the same thing

        - counting OEM preloads doesn't help to measure the end user OS distribution, partly because Linux users buy Windows preloads either to get the hardware configuration they want or to get the cheap copy of Windows for the cases where they may need to use it

        - when SKUs *are* sold either with Linux or without any OS there is wild disagreement on what OS they end up with... pirated Windows, some form of Linux, or something else

        - there's a "clustering" issue... if you use Windows, most of the people you know probably use Windows too, so the status quo seems fine... but if you use Linux, most of the people you know probably use Linux as well. That makes it obvious to you that companies are spending money in the wrong places, and justifies being sufficiently hostile to the people who *are* trying to find good answers that they go away with a bad impression of Linux users and don't come back

        - all supply chains tend to magnify demand for the highest volume SKUs and minimize or ignore demand for the lower volume SKUs, which has the effect of playing up Windows demand and downplaying Linux demand... and right now I don't think anyone has sufficiently good models to compensate for this

        So far, consumer Linux support from *all* HW vendors has mostly been a function of (a) how much "comes for free" as a consequence of doing engineering work for more tangible markets like 3D workstation, (b) how much spare cash is sloshing around to "do nice things" whether they make money or not, and (c) how long each company has been working on its current driver stack and hence how much "polish" that stack has been able to build up over the years. Point (c) in particular requires you to look at trends in order to get an accurate picture of what's going on.

        Going forward there are some obviously interesting things happening. Android is growing in popularity, and is interesting to the embedded market as well as more obvious markets like tablets and smartphones, but its graphics stack is quite different from desktop Linux's, so the obvious question for you is "if we spend additional money on Android, does that count as spending more on Linux even if there is no direct benefit for consumer PC Linux users?".

        We have also recently added two experienced developers (three actually, but since one was replacing Richard I'm only counting two) who IMO are, and over the next year or so will be, as productive as or more productive than a half dozen developers hired and brought up to speed on the job. Just a thought...
        Last edited by bridgman; 25 March 2012, 12:44 PM.
        Test signature



        • #24
          I would rather that AMD invest more effort into somehow lowering the legal review overhead. I'm not sure exactly how that could be done, but surely there are processes or communication channels that could be improved. Basically, if you can reduce the amount of time and effort required to clear the initial code drops and documentation releases, you don't even really need your own developers after that point -- I mean you do, obviously, but the size of your team doubles or triples as soon as the ASIC is fully out there and documented (except UVD; not even going there). And you don't have to pay a dime for their help. They are personally or commercially motivated to do it.

          I mean: what financial benefit does Dave Airlie get (aside perhaps from a paycheck if Red Hat is paying him to work on it) from helping you guys with r600g support?

          What financial benefit does Jerome Glisse get? How about Marek Olsak?

          I really doubt you guys are paying any of these folks. They either do it because they want to, or because their own employer (i.e. not AMD) is paying them to do it.

          And the good news is I've seen the number of contributors to the Radeon drivers steadily increase over the years. I thought it might decrease as people get frustrated/tired with it, but it seems that the old hands are mostly sticking around, and the new folks are making a big splash.

          In effect, the sooner you guys get that first documentation and initial code (for both kernel and userspace) out there for people to build upon, the better off the entire development cycle will be. It still won't get us full OpenGL 2.1 support on release day of the flagship card, but having to wait the better part of a year for the initial code drop is a little ridiculous. And I'm betting a very significant portion of that wait is in the legal department, or in development after receiving feedback from legal. That's what drives me up a wall: I'm fine if you guys can only convince management to provide a certain number of programmer seats, and I'm fine if they can only do a certain amount of work within a given time... what's not fine is adding weeks and months to the community's wait through useless legal overhead (because the legal review isn't contributing anything to improve the drivers; all it does is consume time, and possibly even take away features or functionality that isn't allowed to be exposed in the open drivers).



          • #25
            OK, I've posted all this before but looks like it's time to do it again...

            1. It's primarily technical review, not legal review. Michael calls it "legal" but ~95% of the time is spent either getting access to senior technical folks or responding to issues/concerns they raise. Most of the review effort relates to either (a) understanding the future impact of what we release, ie "will exposing X set of registers affect security work being done for an OS release a year or two in the future?" or (b) understanding exactly whose IP is included in the information we want to release (this is a special challenge in the video/audio areas).

            2. Initial review doesn't take that long. Getting a "pass" from the review does, and typically involves multiple reviews after either changing what we propose to release or researching & building justification for why something unsafe-looking is actually OK to release. In some cases people need to revisit and change their own views on the sensitivity of specific hardware information -- that takes a while at the best of times and takes a really long time when they are 110% busy with other work.

            3. The actual impact of review on community participation isn't as big as you might think, since most of the developers with time to work on new ASIC support have early access to the code under NDA.

            4. We usually do early releases of partial code so that the delay for release of a working stack is pretty small... ie I expect the delay from "working userspace code for SI" to "public release of same" will be pretty short. There have been three interim releases for SI already -- multiple ring support, GPU VM, and kernel driver -- to keep a lot of the review & revision off the critical path.

            That said, the release of SI kernel code ended up later relative to the merge window than any of us wanted, although since it's new HW enablement with very low chance of breaking existing functionality I was perhaps less worried about merge window alignment than I should have been.

            5. Until we have initial support working, a *lot* of the development work requires access to internal HW and SW development teams... once the initial support is in place it's a lot easier for community developers to work independently.

            6. Remember that the documentation has to be *written*, not just released, and that until we have working initial code for a new ASIC we usually don't understand the details enough to write complete & correct documentation in the first place.

            Register spec docs are easier to generate but much more expensive to review, since the review effort is pretty much a function of the amount of information being exposed. Releasing initial code first lets us focus the review effort on the core set of information required to make a working driver... as time permits we can go back and release additional information without it being on the critical path for getting initial support into a public repository.

            That said, we aren't going back and releasing quite as much documentation as is really needed, so we do need to improve there, but I think that will happen automatically now that we're getting caught up with the rate of new hardware introduction.

            7. One of the systemic challenges we have had is that internal re-orgs and/or changes in senior technical staff for specific areas cause us to lose the go-to contacts who are familiar with open source requirements/reviews and can respond quickly to requests. This is an area we do intend to improve in the future.

            8. The most effective solution for all this is to start earlier so that more (or all) of the effort happens invisibly to you, before the part is launched. That has been our main focus -- SI was the first GPU core where we were able to get a non-trivial amount of work done before launch, and for the next generation we are starting sufficiently early that we should be able to hide both development and review in the pre-launch window where it doesn't impact users or community developers. The real win is to have all these discussions happen while the hardware is being designed and can still be tweaked... and that's where we are heading.
            Last edited by bridgman; 25 March 2012, 02:12 PM.
            Test signature



            • #26
              That's like putting my pickup truck in my pocket so it's always there when I need it.

              The open source userspace stack is >10x the size of the kernel driver, and the proprietary userspace stack is >50x and approaching the size of the entire Linux kernel. I don't think there would be a lot of joy among the kernel developers if we tried to move all that into kernel space.

              The multimedia API framework is quite different between Android and X/Wayland. They may converge over time but right now they are very different.
              Last edited by bridgman; 25 March 2012, 02:52 PM.
              Test signature



              • #27
                I'm sure if bridgman wasn't in the way all the time, r600g would be in a much better state. I wish this kid would just go away.
                Last edited by idontlikebridgman; 25 March 2012, 02:56 PM.



                • #28
                  I figured out how to make radeon better than catalyst: every time quaridarium writes some bullshit, instead of wasting your time replying you should write a couple of lines of code. At such a pace I'm pretty sure radeon will double catalyst's performance in no more than a month.
                  ## VGA ##
                  AMD: X1950XTX, HD3870, HD5870
                  Intel: GMA45, HD3000 (Core i5 2500K)



                  • #29
                    That is a *great* idea!
                    Test signature



                    • #30
                      Originally posted by Qaridarium
                      sure but your argument about the kernel source size is still bullshit.

                      20 years ago the kernel source was so much smaller maybe we should use a time travelling engine to go back in time to be sure the kernel source is smaller...

                      this is just bullshit!

                      and hey don't write code this makes the kernel source bigger and we all know this is "bad" LOL

                      Bridgman bullshit logic at work..
                      Size isn't the only factor to consider in something like this.

                      I think what you're suggesting is to move all of the 3D stack into the kernel. This would include the GLSL compiler and a lot of other core Mesa components. The GLSL compiler, for instance, is C++, which is a no-go for the kernel. Then there's the fact that you can't freely do floating-point operations in the kernel, and those are obviously quite important for OpenGL.
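                      As a rough sketch of what that restriction looks like (hedged example, not real driver code): on x86, kernel code that touches FP/SIMD state has to bracket it explicitly, and preemption is disabled while it does so, which is one more reason nobody wants a GL stack living in there:

                      /* Sketch only: the kernel has no implicit FPU context like userspace.
                       * On x86, any FP/SIMD use must be wrapped in kernel_fpu_begin()/
                       * kernel_fpu_end(); the exact header location varies by kernel version. */
                      #include <asm/fpu/api.h>

                      static void scale_vertices(float *v, int n, float s)
                      {
                          int i;

                          kernel_fpu_begin();  /* save FPU state, disable preemption */
                          for (i = 0; i < n; i++)
                              v[i] *= s;
                          kernel_fpu_end();    /* restore FPU state */
                      }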

                      Also there's the issue of security -- moving thousands of lines of code into kernel space...

