A Valve Linux Developer Managed Another Small Performance Optimization For RADV

  • #31
    Originally posted by Veerappan View Post

    Yes, please. I need a new laptop, and a 14" Ryzen APU thinkpad would be perfect.

    Just don't gimp it with single-channel memory, spinning drive, crappy screen, or small battery. The 13" Ideapad 720s looked almost good enough to buy until I got to the single-channel memory part of its specs.

    I've got a t440p on my desk at work (haswell quad-core, 16GB ram, 14" 1080p, 480-500GB SSD), and an AMD-based equivalent at home would be just splendid, especially if they can share docking stations, chargers, etc.

    Edit: A little research shows me that the A485 should arrive in Q3. I can probably stretch my (dual-booted) 2009 13" Macbook just a little longer, as long as an end-goal is in sight.
    Sometimes I get the feeling Ryzen laptops are always coming next quarter, at least here in Europe. I've been trying to get my hands on one since they were announced around New Year; now HP might finally roll out in April, and I haven't found anything solid on Acer. My current laptop is literally breaking apart, and I want to spend on a Ryzen laptop already.



    • #32
      Originally posted by duby229 View Post
      Yep, I totally agree. Back in 2007 when AMD really committed to open source the only reason they could was because of community based projects. I just don't get them, their own early involvement happened because they were able to make themselves part of the community. They know how important collaboration is yet when they get a chance to break free from pre-existing dogma they choose a distribution model that makes collaboration really hard. WTF.
      I'll try to answer your "WTF?" question.

      Short answer

      What you saw in the early days was a small Linux-only team, writing Linux-only code, with the freedom (in fact the mandate) to align fully with existing upstream project practices. What you are seeing now is the open source effort expanding into the rest of RTG software and moving from a small Linux-only team to much larger teams that work across multiple OSes and who have to maintain support for the primarily closed-source world of other OSes. That involves a lot more people, a lot more management, a lot more learning, and a lot more time, but we are getting there.

      Long answer

      When we started back in 2007 the initial goals were less ambitious than what we are trying to do today. The objective was building a fully open source driver that would be upstream-based and which would improve things for consumer/desktop users who were not being well supported by fglrx. That meant separate drivers written by a separate team, with that team made up almost entirely of experienced open source developers.

      The good thing about that approach was the ability to do all the work in existing upstream projects, totally "by the book". The bad thing was that there were limits on how big the team could get, which translated into limits on how much functionality we could provide.

      The first big step towards integrating open source driver work with the larger SW org was the amdgpu initiative, which basically integrated the lower level code and teams between the open driver and the closed workstation driver. There were some wobbly moments during integration but it settled down OK... that allowed us to go further in terms of features and faster in terms of supporting new hardware, but we still hit limits on how much we could do - the limits were just a lot higher than before. This was the first time a closed-source team had moved to open source development within RTG SW, but it was easier because both teams were already part of the Linux GPU SW org and because the fglrx team was not required to support other (closed source) OSes.

      We did start building the first links between open source kernel teams and closed source userspace teams at this point, but it just involved the closed source components making their code work on the amdgpu kernel rather than the fglrx kernel - they still shipped everything closed source and still developed with a totally closed-source "throw it over the wall" release model.

      Display/DC

      In parallel with this we started working with the display team on a new kernel-compatible code base that could live upstream while still being shareable with other OSes. This was a much bigger conceptual challenge - it was the first time we would be sharing open source code (other than header files) between Linux & other OSes. It was also a much bigger learning curve - the first time we had a team that was not Linux-specific maintaining code in the kernel, in addition to the first time we were sharing code between upstream & other OSes - and so it took more time & more adjustments, but with help from the maintainers and a lot of work internally we were able to get everything more-or-less aligned to the point where DC is now upstream and enabled by default.

      For us at least, I think that makes DC the first code base that is able to expose a somewhat community-friendly face while still maintaining a closed-source upstream internally. Before anyone recoils in horror, this is not really that much different from what every HW vendor has to do with new HW support - maintain an internal-only topic branch for a while and constantly update it to reflect evolution of the public upstream before finally moving it from internal to public upstream. It's a pain but it works.
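
      To make that topic-branch dance a bit more concrete, here is a rough sketch of the workflow; the branch and remote names are invented for illustration and this is not a description of our actual tooling:

      # Hypothetical sketch of the "internal-only topic branch" workflow: keep an
      # internal branch rebased on top of the evolving public upstream until the
      # hardware is public, then push it out. Branch/remote names are made up.
      import subprocess

      def git(*args):
          subprocess.run(["git", *args], check=True)

      def refresh_internal_branch():
          git("fetch", "upstream")                  # the public upstream tree
          git("checkout", "internal/new-asic")      # internal-only topic branch
          git("rebase", "upstream/master")          # follow upstream as it evolves

      def publish_internal_branch():
          # once the HW is announced, the topic branch finally goes public
          git("push", "public", "internal/new-asic:new-asic")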

      ROC

      Also in parallel with DC and amdgpu we started building a fully open source compute environment (ROC) and are pretty far along with getting that upstream. In that case it was another all-new effort so we were able to start with upstream-based development models even if upstreaming itself remained hard because of fundamental differences between upstream expectations for memory management ("no process should be able to interfere with any other process") and HPC expectations ("I want process X to be able to use every last scrap of memory and if process Y comes along and can't get enough memory that's too damn bad, it shouldn't be running until X is finished anyways").

      ROC involved new teams but the code was largely Linux-specific and the foundation code was already open source.

      Vulkan

      Vulkan adds yet another level of complexity, since the code is not only shared between OSes like DC but also shared between a number of APIs, most of which are regarded as closed-source-only by the API owners, at least at a big-company-to-big-company level. If this changes over time our lives will get a lot easier, but in the meantime it means that we need to maintain an internal code base that supports a lot more functionality than what we are shipping for Linux Vulkan.

      As with DC, it takes a lot of architectural tweaking to get to the point where one code base can expose both a "community face" and a multi-OS, multi-platform tree, but Vulkan has the added challenge of supporting multiple APIs off the same code where those other APIs are nearly all "closed source only". This in turn means that every update to the community tree needs to be extracted from a much larger internal tree.

      At the moment this extraction is done at file level, but it should be possible to do it at commit level instead. The challenge is doing that in at least a partially automated way (which we can do for file-level extract) rather than having to double-process every commit manually.
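
      To make the file-level extract a bit more concrete, here is a minimal sketch of what an automated snapshot-style extract can look like; the repository locations and the allowlist of releasable paths are purely hypothetical, not our actual tooling:

      # Hypothetical file-level "code dump" extract: copy an allowlist of releasable
      # paths from the internal tree into the public tree and commit the result as a
      # single snapshot. All paths below are invented for illustration.
      import shutil
      import subprocess
      from pathlib import Path

      INTERNAL = Path("/work/internal-tree")    # large multi-API internal tree (assumed)
      PUBLIC = Path("/work/public-tree")        # community-facing git checkout (assumed)
      ALLOWLIST = ["pal/src/core", "pal/inc", "icd/api"]   # releasable subdirectories

      def extract_snapshot(message):
          for rel in ALLOWLIST:
              dst = PUBLIC / rel
              if dst.exists():
                  shutil.rmtree(dst)            # replace the previous copy wholesale
              shutil.copytree(INTERNAL / rel, dst)
          subprocess.run(["git", "-C", str(PUBLIC), "add", "-A"], check=True)
          # assumes there is something new to commit since the last extract
          subprocess.run(["git", "-C", str(PUBLIC), "commit", "-m", message], check=True)

      extract_snapshot("Code dump from internal tree")

      Doing the same thing at commit level would mean replaying each internal commit through a path filter instead of snapshotting the end result, which is where the manual double-processing would come in today.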

      One of the internal debates was whether we should hold off on opening the code until we had a community interaction model worked out, or whether we should open as early as possible and work out the community interaction as a next step. There was general agreement that we should open the code rather than wait until we could finish moving to a community-friendly development model, which is why you are seeing "code dump" updates at the moment.

      In order to make this work in the long term we may also have to move to slightly different development models for each subcomponent - at the moment the whole tree is managed more or less the same way. If you think of the subcomponents as...

      - Vulkan API
      - compiler
      - PAL
      - interface between PAL and amdgpu

      ... the first and last components (particularly the last one) should be good candidates for managing in a community-friendly way. At the moment the compiler is also a good candidate, although our intention is to use the same compiler code in other places as well, so there may be some ongoing glitches there; I think we can deal with most of the pain points by continuing to get more aggressive about upstreaming LLVM changes.

      I think it's fair to say that every project using LLVM experiences some initial pain until the developers are able to plan their work not only around HW and upstream schedules (plus those of the other OSes they support) but also around LLVM schedules, which is a non-trivial learning curve even on its own. I think we can do it though.

      The last component is PAL, which is the one that has to serve the most masters. Because of the need to support multiple independently evolving APIs, most of which are not used in Linux, I don't think it is ever going to be able to feel "community managed" to the extent that the other sub-components are - but by the same token I also think that PAL is likely to become the primary path for new HW userspace support and knowledge to flow from AMD into the open source world. The boundaries of PAL should stay fairly consistent over time, and so it should be possible to treat it as more of a "helper library" like addrlib, albeit a larger one requiring finer-grained updates.

      If we can...

      (a) confine "code dump" updates to just the PAL component and try to make even that go away over time,
      (b) expose internal commits (after a bit of sanitizing) rather than one big weekly update for the rest of the components, and
      (c) take advantage of (b) to let community commits flow through the internal tree and re-appear on the other side in a recognizable form

      ... then I think we can make this work. I hope we can, because PAL is going to become increasingly useful over time.

      Does that help answer the question?

      P.S. During the early days of the project we were also looking at trying to use PAL in (or to replace) radeonsi, but the differences between OpenGL and newer APIs (PAL is focused on newer APIs only) were too great for that to work effectively without substantially rewriting PAL and extending its support footprint to include OpenGL.
      Last edited by bridgman; 26 March 2018, 01:46 AM.



      • #33
        Originally posted by bridgman View Post
        (a) confine "code dump" updates to just the PAL component and try to make even that go away over time,
        (b) expose internal commits (after a bit of sanitizing) rather than one big weekly update for the rest of the components, and
        (c) take advantage of (b) to let community commits flow through the internal tree and re-appear on the other side in a recognizable form
        I don't understand the reason for the code dumps. There are clearly individual commits here, so why can't each commit be sanitized and applied individually? Is that too much legal trouble?

        And (c) kinda frustrates me in that I don't think that's ever going to fly, at least not if the project is meant to eventually have community participation. This requires everything go through the AMD team and I can imagine a power struggle already (though from which side, I'm unsure).



        • #34

          Originally posted by computerquip View Post
          I don't understand the reason for the code dumps. There are clearly individual commits here, so why can't each commit be sanitized and applied individually? Is that too much legal trouble?
          You might have missed the part about "and try to make even that go away over time". The problem is that for PAL many of the commits touch both code that we can release and code that we can't release, so "individually sanitized" is actually more like "individually rewritten". Doing that manually for all the commits (remember PAL covers a lot of different APIs) seems impractically expensive at this point so the question is to what extent it can be automated.

          The first obvious step is to see if we can split commits at development time so that one commit only touches stuff we can release while the other commit touches the rest, but that does make the commit history kind of confusing internally and defeats the idea of having related changes that need to be applied atomically in a single commit. That in turn means we need to formalize some kind of "atomic commit set" in the tools and automation, which brings its own kind of mess.
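
          As a rough illustration of the "split at development time" idea (not our actual tooling - the path prefixes are invented), even a simple pre-commit check could flag commits that mix releasable and internal-only paths so they get split before they land:

          # Hypothetical pre-commit style check: warn when a single staged commit
          # touches both releasable and internal-only paths, so it can be split.
          import subprocess
          import sys

          PUBLIC_PREFIXES = ("pal/src/core/", "pal/inc/")   # assumed releasable code

          def is_public(path):
              return path.startswith(PUBLIC_PREFIXES)

          def main():
              changed = subprocess.run(
                  ["git", "diff", "--cached", "--name-only"],
                  capture_output=True, text=True, check=True,
              ).stdout.split()
              public = [p for p in changed if is_public(p)]
              private = [p for p in changed if not is_public(p)]
              if public and private:
                  print("Commit mixes releasable and internal-only paths; please split it:")
                  print("  public :", *public)
                  print("  private:", *private)
                  return 1
              return 0

          sys.exit(main())

          The part this does not solve is exactly the "atomic commit set" problem: once a change is split into a public half and a private half, the tools still have to remember that the two halves belong together.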

          Originally posted by computerquip View Post
          And (c) kinda frustrates me in that I don't think that's ever going to fly, at least not if the project is meant to eventually have community participation. This requires everything go through the AMD team and I can imagine a power struggle already (though from which side, I'm unsure).
          I don't understand this part. Every project requires that changes go through a central project tree and that every developer rebases to whatever the changes ended up as by the time they were accepted - the only difference is that the maintainer would be an AMD employee, which would not be a first. We would need to make sure that any "invisible trip into the internal tree and back" did not interfere with submissions, but that is a must-have anyways.

