AMDGPU's Scheduler Might Get Picked Up By Other DRM Drivers


  • #21
    Originally posted by GruenSein View Post
    My experience in this context is quite limited, so I am not sure if this makes sense, but wouldn't it make sense to offer the generic, self-contained parts of the driver (tall order, I know), such as a scheduler, as libraries? Whenever you need to make significant changes, you release a new version. Other driver developers can then choose to upgrade their driver to take advantage of the new version. If they lack the resources, they could also stick with the previous version and simply submit the occasional maintenance patch for it. This is not a "one copy" code-sharing approach, but it invites community feedback and input without tying down the progress of the original developer. The big question here seems to be how modular the driver is and whether it lends itself well to swapping out specific parts.
    This is about kernel drivers, so the driver is the module. My understanding is that they're considering pushing behaviour from the module into the common framework (DRM). What you're thinking of largely makes sense for userland.
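
    For context, the shared scheduler lives in the kernel's DRM core (drivers/gpu/drm/scheduler), and a driver hooks into it by filling in a drm_sched_ops callback table. The sketch below is an illustration only: callback names and signatures have changed across kernel versions, and to_my_job(), my_ring_submit(), my_timedout_job(), and my_free_job() are hypothetical driver-private helpers, not real API.

    ```c
    /* Sketch only: how a DRM driver might wire into the shared GPU scheduler.
     * Not a buildable module; signatures vary between kernel versions. */
    #include <drm/gpu_scheduler.h>

    /* Called by the shared scheduler when a job's dependencies are met;
     * the driver pushes the job to its hardware ring and returns the HW fence. */
    static struct dma_fence *my_run_job(struct drm_sched_job *sched_job)
    {
            struct my_job *job = to_my_job(sched_job); /* hypothetical wrapper */
            return my_ring_submit(job);                /* hypothetical submit helper */
    }

    static const struct drm_sched_ops my_sched_ops = {
            .run_job      = my_run_job,
            .timedout_job = my_timedout_job, /* hang detection / GPU reset path */
            .free_job     = my_free_job,     /* release job resources */
    };
    ```

    The point of the split is that scheduling policy (queues, priorities, dependency tracking) stays in the shared core, while everything hardware-specific stays behind these callbacks in the individual driver.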

    Comment


    • #22
      Originally posted by bridgman View Post

      If you mean "discussion about this specific topic" that's certainly going to happen. If you mean "discussion about reworking all our code to make it as shareable as possible" that is one of a bunch of background activities that ends up in the "as time permits" bucket simply because there is so much urgent work that needs to be done as well.

      Every time another driver shares the same copy of code that we use it puts a non-trivial burden on anyone needing to make major changes to that code - either in the form of pro-active monitoring of how every other driver uses the code to avoid getting stuck in a situation where we can't make the changes we need without breaking other drivers or in the form of having to rework a bunch of other drivers we know nothing about in order to accommodate changes we need for future hardware, so part of any decision to share a single copy of code needs to be knowing where you are going to find the developer time to fund that extra work. You also need to find the senior developer time for the initial discussion about whether live-sharing a single copy is a good idea in the first place.

      Code sharing ends up as a fairly ad-hoc affair, driven primarily by people identifying candidates for sharing (ie exactly what happened here) followed by a case-by-case determination of how likely it is that the original code will either be able to stay relatively stable or be extended with reasonable effort to support future hardware. Unfortunately the planning horizon for "what we are likely to need" is relatively short in the grand scheme of things, so the decision-making process generally ends up as "I dunno, I guess it should be OK" followed maybe 1/3 of the time by "Oh crap now what are we going to do... we're probably going to end up having to fork a private copy of that shared code".

      Where that leads, obviously, is the conclusion that the code-sharing decision also needs to consider how likely it is that the shared code is going to get wired into something larger and more complex over time to meet someone else's requirements, with the consequence that "forking our own code back" will end up being a lot more expensive "after sharing" than it would have been to maintain a private copy of the code in the first place unless we rip out a lot of the changes that were made to support other hardware... and of course future hardware plans are one of the things that generally nobody is allowed to talk about except in the vaguest possible terms which makes the discussion even more fun.

      Bottom line I guess is that this is the kind of thing that does get discussed every year at developer conferences, but not even companies 10x-20x our size can afford to pro-actively drive or support these discussions for all of their code. As a result the process does end up being fairly ad-hoc as I mentioned earlier, with maybe a couple of candidates for sharing being "hallway-discussed" each year.
      Thanks a lot for your explanation! People like you should consider putting your experience in a book. Most books about managing code projects I have found are just too theoretical and usually don't apply to the real world.

      So as I understand it, FOSS is full of improvisation, and mistakes are part of the learning and evolution of the projects themselves. Am I right?

      Comment


      • #23
        Originally posted by bridgman View Post

        If you mean "discussion about this specific topic" that's certainly going to happen. If you mean "discussion about reworking all our code to make it as shareable as possible" that is one of a bunch of background activities that ends up in the "as time permits" bucket simply because there is so much urgent work that needs to be done as well.

        Every time another driver shares the same copy of code that we use it puts a non-trivial burden on anyone needing to make major changes to that code - either in the form of pro-active monitoring of how every other driver uses the code to avoid getting stuck in a situation where we can't make the changes we need without breaking other drivers or in the form of having to rework a bunch of other drivers we know nothing about in order to accommodate changes we need for future hardware, so part of any decision to share a single copy of code needs to be knowing where you are going to find the developer time to fund that extra work. You also need to find the senior developer time for the initial discussion about whether live-sharing a single copy is a good idea in the first place.

        Code sharing ends up as a fairly ad-hoc affair, driven primarily by people identifying candidates for sharing (ie exactly what happened here) followed by a case-by-case determination of how likely it is that the original code will either be able to stay relatively stable or be extended with reasonable effort to support future hardware. Unfortunately the planning horizon for "what we are likely to need" is relatively short in the grand scheme of things, so the decision-making process generally ends up as "I dunno, I guess it should be OK" followed maybe 1/3 of the time by "Oh crap now what are we going to do... we're probably going to end up having to fork a private copy of that shared code".

        Where that leads, obviously, is the conclusion that the code-sharing decision also needs to consider how likely it is that the shared code is going to get wired into something larger and more complex over time to meet someone else's requirements, with the consequence that "forking our own code back" will end up being a lot more expensive "after sharing" than it would have been to maintain a private copy of the code in the first place unless we rip out a lot of the changes that were made to support other hardware... and of course future hardware plans are one of the things that generally nobody is allowed to talk about except in the vaguest possible terms which makes the discussion even more fun.

        Bottom line I guess is that this is the kind of thing that does get discussed every year at developer conferences, but not even companies 10x-20x our size can afford to pro-actively drive or support these discussions for all of their code. As a result the process does end up being fairly ad-hoc as I mentioned earlier, with maybe a couple of candidates for sharing being "hallway-discussed" each year.
        I also propose to start a conversation about AMD selling retail MXM GPUs.

        Comment


        • #24
          Originally posted by artivision View Post
          I also propose to start a conversation about AMD selling retail MXM GPUs.
          Why? Are there still laptops using that interface?

          Comment


          • #25
            Originally posted by timofonic View Post
            Thanks a lot for your explanation! People like you should consider putting your experience in a book. Most books about managing code projects I have found are just too theoretical and usually don't apply to the real world.
            Thanks - I have just had the opportunity to make more mistakes than most people.

            Originally posted by timofonic View Post
            So as I understand it, FOSS is full of improvisation, and mistakes are part of the learning and evolution of the projects themselves. Am I right?
            Yep, although I would say all software development is like that. FOSS development just happens in public so you often get better discussion before making decisions and better opportunities to learn afterwards. The ad-hoc part is more "what gets worked on" than "how it gets worked on".
            Last edited by bridgman; 12-02-2017, 06:15 PM.

            Comment


            • #26
              Originally posted by schmidtbag View Post
              Hopefully, AMD won't see this as inadvertently helping competition.
              Don't worry, this was our intention all along, that's why we created and have been maintaining the scheduler code in a separate directory from the rest of the driver code. I hope Intel will join in as well.

              Comment


              • #27
                Originally posted by MrCooper View Post
                Don't worry, this was our intention all along, that's why we created and have been maintaining the scheduler code in a separate directory from the rest of the driver code. I hope Intel will join in as well.
                I figured this was the case. Glad to see open-source working the way it should.

                Comment
