Asahi Linux May Pursue Writing Apple Silicon GPU Driver In Rust


  • #81
    Originally posted by Quackdoc View Post

    how are either of these two things risks? risks to what? people's egos?
    I think you do know what a risk is: an uncertainty. And for a company, a risk to profits.

    As a company: Rust in the kernel, does it work? And the tooling? How much time will it take (more or less compared to a driver in C)? How much does that uncertainty cost?
    Writing a driver for an undocumented GPU: can it be reverse engineered? Up to what level (video decode, power management)? How much time will it take? How much does that uncertainty cost?

    I think you will agree that if it takes 10 years to create a usable driver, it's likely a waste of time and money if the plan was to spend 1-2 years?

    Comment


    • #82
      Originally posted by ferry View Post

      I think you do know what a risk is: an uncertainty. And for a company, a risk to profits.

      As a company: Rust in the kernel, does it work? And the tooling? How much time will it take (more or less compared to a driver in C)? How much does that uncertainty cost?
      Writing a driver for an undocumented GPU: can it be reverse engineered? Up to what level (video decode, power management)? How much time will it take? How much does that uncertainty cost?

      I think you will agree that if it takes 10 years to create a usable driver, it's likely a waste of time and money if the plan was to spend 1-2 years?
      I usually take the definition that a risk is "an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives". Once you identify a risk, you need to understand its probability and, more importantly, its impact.

      Your previous post only alluded to risks but named none. Now you mention some specific ones, which is clearer. Let's look into them:
      • Rust in the kernel, does it work?
      Considering that the Rust patches for the kernel are experimental but expected to be merged by Linus Torvalds soon, probably in Linux 6.1, it seems that they work. It is too early to say whether it will succeed in the long term; that's why the patches are experimental, after all.
      • And the tooling?
      What tooling is missing? Please clarify.
      • How much time will it take (more or less compared to a driver in C)?
      This is a question, not a risk. In any case, more experience is needed to answer it. If we defined the risk as "the project uses a C-driver estimate and falls short", I would rate the probability high for a first project in a new language and the impact probably medium. As mitigation, the estimate should include a factor to account for that rather than assume the effort will be the same without further experience in the technology. There is nothing wrong with trying something new if you are aware of this.
      • How much does that uncertainty cost?
      Again, you are asking questions instead of defining risks. A risk could be that the uncertainty introduced by the lack of tooling or experience increases the cost of developing the driver. Since this is being done by a community project in their own time, the impact seems low. At worst they could get discouraged and cancel it.
      • Writing a driver for an undocumented GPU: can it be reverse engineered?
      The article mentions that "I have a prototype driver written in Python" already running, so the question is already answered and the chance of this risk triggering seems low, or at least mitigated.
      • Up to what level (video decode, power management)?
      Good questions for the developer. We could assume there's a risk there, but only the developer can provide more details.
      • How much time will it take? How much does that uncertainty cost?
      These just repeat the questions above.

      Overall, the risks seem low or medium impact at most. If this were a for-profit project I would rate them higher, but that also depends on many other factors: the talent involved, the scope of the project, the timeline, etc.
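To make the probability/impact framing above concrete, here is a minimal sketch of a risk register. The risk names and ratings are my own illustrative assumptions, not figures from the thread:

```rust
// Minimal risk-register sketch: each risk carries a probability and an
// impact rating, and its severity score is their product.
struct Risk {
    name: &'static str,
    probability: f64, // likelihood of the risk triggering, 0.0..=1.0
    impact: f64,      // relative impact if it triggers, 0.0..=1.0
}

impl Risk {
    // Severity as probability times impact, the common qualitative scoring.
    fn severity(&self) -> f64 {
        self.probability * self.impact
    }
}

fn main() {
    // Hypothetical ratings for the kinds of risks discussed above.
    let risks = [
        Risk { name: "Rust-in-kernel tooling immature", probability: 0.3, impact: 0.4 },
        Risk { name: "C-based schedule estimate falls short", probability: 0.7, impact: 0.5 },
        Risk { name: "GPU cannot be fully reverse engineered", probability: 0.1, impact: 0.9 },
    ];
    for r in &risks {
        println!("{}: severity {:.2}", r.name, r.severity());
    }
}
```

Ranking by the severity score is what lets a project prioritize mitigations rather than treating every open question as equally fatal.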

      Comment


      • #83
        Originally posted by darkonix View Post

        I usually take the definition that a risk is "an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives". Once you identify a risk, you need to understand its probability and, more importantly, its impact.

        Your previous post only alluded to risks but named none. Now you mention some specific ones, which is clearer. Let's look into them:
        • Rust in the kernel, does it work?
        Considering that the Rust patches for the kernel are experimental but expected to be merged by Linus Torvalds soon, probably in Linux 6.1, it seems that they work. It is too early to say whether it will succeed in the long term; that's why the patches are experimental, after all.
        • And the tooling?
        What tooling is missing? Please clarify.
        • How much time will it take (more or less compared to a driver in C)?
        This is a question, not a risk. In any case, more experience is needed to answer it. If we defined the risk as "the project uses a C-driver estimate and falls short", I would rate the probability high for a first project in a new language and the impact probably medium. As mitigation, the estimate should include a factor to account for that rather than assume the effort will be the same without further experience in the technology. There is nothing wrong with trying something new if you are aware of this.
        • How much does that uncertainty cost?
        Again, you are asking questions instead of defining risks. A risk could be that the uncertainty introduced by the lack of tooling or experience increases the cost of developing the driver. Since this is being done by a community project in their own time, the impact seems low. At worst they could get discouraged and cancel it.
        • Writing a driver for an undocumented GPU: can it be reverse engineered?
        The article mentions that "I have a prototype driver written in Python" already running, so the question is already answered and the chance of this risk triggering seems low, or at least mitigated.
        • Up to what level (video decode, power management)?
        Good questions for the developer. We could assume there's a risk there, but only the developer can provide more details.
        • How much time will it take? How much does that uncertainty cost?
        These just repeat the questions above.

        Overall, the risks seem low or medium impact at most. If this were a for-profit project I would rate them higher, but that also depends on many other factors: the talent involved, the scope of the project, the timeline, etc.
        If you read back the thread you'll see the specific risks were mentioned before. A distinction was also made between commercial and hobby projects.

        And, well, we are not writing a manual on project management here, are we? I agree you can start with probability, multiply by impact (measured in money), and then sum over all risks. That gives you the expected value (in $ or €) and the variance (in $ or €). If you want to take it further, you can account for the cost of risk (or calculate the price of insuring against the cost of failure, which is basically the same thing) using Black-Scholes option theory.

        If this were a commercial project (this is my point), introducing two large risks would make such insurance prohibitively expensive. Or, in plain words: it's doomed.
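The probability-times-impact arithmetic described above can be sketched as follows. The probabilities and euro amounts are invented purely for illustration:

```rust
// Expected value and variance of total risk cost, treating each risk as an
// independent event that either occurs (costing `impact`) or does not.
fn expected_cost(risks: &[(f64, f64)]) -> f64 {
    // Sum of probability * impact over all risks.
    risks.iter().map(|(p, impact)| p * impact).sum()
}

fn cost_variance(risks: &[(f64, f64)]) -> f64 {
    // Variance of a Bernoulli-distributed cost is p * (1 - p) * impact^2;
    // variances of independent risks add.
    risks.iter().map(|(p, impact)| p * (1.0 - p) * impact * impact).sum()
}

fn main() {
    // (probability, impact in EUR) — illustrative numbers only.
    let risks = [(0.3, 50_000.0), (0.1, 200_000.0)];
    println!("expected cost: {:.0} EUR", expected_cost(&risks));
    println!("std deviation: {:.0} EUR", cost_variance(&risks).sqrt());
}
```

The standard deviation is what an insurer would care about: two low-probability but high-impact risks can dominate it even when the expected cost looks modest.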

        Comment


        • #84
          Originally posted by ferry View Post

           If you read back the thread you'll see the specific risks were mentioned before. A distinction was also made between commercial and hobby projects.

           And, well, we are not writing a manual on project management here, are we? I agree you can start with probability, multiply by impact (measured in money), and then sum over all risks. That gives you the expected value (in $ or €) and the variance (in $ or €). If you want to take it further, you can account for the cost of risk (or calculate the price of insuring against the cost of failure, which is basically the same thing) using Black-Scholes option theory.

           If this were a commercial project (this is my point), introducing two large risks would make such insurance prohibitively expensive. Or, in plain words: it's doomed.
           The issue with your method of reasoning is that you can argue anything is too risky, because you are not properly quantifying your metrics. For every point you list you don't say how much of a risk it is, and to be honest you don't really know. All we have to work with is what we know right now from concrete technical facts, and in that regard there is almost no risk in Rust, considering it's already been used very successfully in other projects.

           And from a different angle, depending on which metrics are a priority, you can actually argue that C is riskier than Rust: if you care about bugs, for example, it is *much* easier, both technically and statistically, to create bugs in C than in Rust.
          Last edited by mdedetrich; 17 August 2022, 10:55 AM.

          Comment


          • #85
            Originally posted by ferry View Post
            As a company: Rust in the kernel, does is work? And the tooling? How much time will it take (more or less compared to a driver in C). How much does that uncertainty cost?
             Rust adoption is driven by the risk of security vulnerabilities and bugs. Rust categorically disallows a large class of memory and concurrency bugs in the first place.
             If for some reason a company doesn't want the safer code, it can simply not compile in any Rust. There's no risk to existing users.
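As a minimal illustration of the concurrency point (my own sketch, not code from any driver): handing a plain mutable counter to several threads is a compile error in Rust, and one pattern the compiler does accept is `Arc<Mutex<_>>`, which makes the result deterministic:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` threads that each add `per_thread` to a shared counter.
// Sharing a bare `&mut usize` across threads would be rejected at compile
// time; the compiler forces explicit synchronization such as Arc<Mutex<_>>.
fn parallel_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // The lock guarantees each increment is observed.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // With an unsynchronized data race this total would be nondeterministic;
    // here it is exact by construction.
    println!("total: {}", parallel_count(4, 1000));
}
```

The equivalent C code with a forgotten mutex compiles silently and miscounts intermittently; in Rust that version never builds, which is the "categorically disallows" claim made above.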

            Originally posted by ferry View Post
            Writing a driver for an undocumented GPU: can it be reverse engineered? Up to what level (video decode, power management)? How much time will it take. How much does that uncertainty cost?
             The M1 GPU has already been reverse engineered, and a nearly standards-compliant, 100% open source OpenGL driver already exists. Yes, it's written in Python and runs very slowly over USB 2.0, but everything required to write the "real" driver in Rust is already known and well documented. There's no risk there.

            Originally posted by ferry View Post
            I think you will agree that if it takes 10 years to create a usable driver then it's likely a waste of time and money if the plan was to spend 1 - 2 years?
             An open source driver already exists; the rewrite in Rust won't take long.

            Comment


            • #86
              Originally posted by ferry View Post

               The C language and GCC are proven tools for device driver development. Rust is not a proven tool for device driver development; in fact, there is no kernel code in Rust at all, except some very trivial examples.

              Writing a driver for undocumented hardware is challenging, doing so with an experimental compiler is ... hobby-ism?
               Proven by whom? Proven by a government? Proven by vulnerabilities?

              Comment


              • #87
                I feel like half of Phoronix has never worked a real job and doesn't understand the future benefits of investing in new technologies.

                Comment


                • #88
                  Originally posted by Quackdoc View Post
                  I feel like half of Phoronix has never worked a real job and doesn't understand the future benefits of investing in new technologies.
                  Sounds about right lol.

                  But I'll say more than that: half of the companies out there don't understand investing in new technologies either.

                  Comment
