Linus Torvalds On Linux 6.8 DRM: "Testing Is Seriously Lacking"


  • #41
    Originally posted by caligula View Post

    Yes, they would probably scale, but my experience with GitLab has shown that it has a huge overhead. E.g. short CI pipelines run really long; it's not that well optimized. The same system would do the build in much less time outside of CI, even if you mount the Docker images yourself and don't just build with native system dev tools.
    That may be, but it's still much better than what we have now (i.e. manual human review, if at all).



    • #42
      Originally posted by varikonniemi View Post
      Because it's much easier to enforce something on a single outside developer than on a megacorporation that sponsors you.
      I agree with your point. Nevertheless, given that Linus has complained about Intel and their poor practices before now (although never to the level of ire he has aimed at nVidia), if they were going to throw their toys out of the pram and pull funding, I think they would have done so by now.



      • #43
        Judgmental jerks on this forum are so depressingly quick to assume the worst in people so they can spew their predictable biased rants. Dave Airlie responded Sunday night/Monday (click 'next' in the LKML link):
        the fix was in a pull request in my inbox about an hour after I sent the PR, it just wasn't marked urgent and it passes all my usual test builds.

        It turns out there is a Kconfig bug without EXPERT that was masking this in my builds, hope to get that fix in soon.

        ... I'm not seeing the c in h, you reading that backtrace correctly?

        It was build tested in a few scenarios by different people and in CI, but it does appear the Kconfig screwup was masking people from seeing the actual bug. We had a report a few days ago and a fix was posted, just not marked as urgent, and since I never saw the build fails here I didn't escalate it.
        So the submitter actually did build and test and did run CI.

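        To illustrate the mechanism Airlie describes: the sketch below is purely hypothetical (CONFIG_FOO_DEBUG and struct foo_state are made-up names with no relation to the actual Xe code), but it shows how code that is only compiled behind a config symbol can hide a real build error from everyone whose config never enables that symbol, e.g. because the symbol accidentally ended up gated behind EXPERT.

        ```c
        /*
         * Hypothetical illustration only -- none of these names exist in the
         * real Xe driver. Pretend CONFIG_FOO_DEBUG is a Kconfig symbol that,
         * by mistake, is only selectable when EXPERT is enabled, and that the
         * code it gates was broken by an unrelated change which removed
         * struct foo_state.
         */
        #include <stdio.h>

        #ifdef CONFIG_FOO_DEBUG
        void dump_foo_state(void)
        {
                struct foo_state s;     /* error: storage size of 's' isn't known */

                s.flags = 0;
                printf("flags=%d\n", s.flags);
        }
        #endif

        int main(void)
        {
                /* cc -c demo.c                     -> compiles cleanly         */
                /* cc -DCONFIG_FOO_DEBUG -c demo.c  -> fails; the bug shows up  */
                puts("builds cleanly unless CONFIG_FOO_DEBUG is defined");
                return 0;
        }
        ```

        A build config that flips on as many symbols as possible (which is presumably what Linus runs) catches the kind of breakage that narrower test builds and CI configs can miss.
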
        Originally posted by skeevy420 View Post
        Well, the problem isn't that Linux lacks automation, it's that Intel seems to lack any automation or testing before they send their stuff out to Linus for review. Apparently, they couldn't even be bothered to compile their patches before they sent them out.
        Wrong.
        Originally posted by NotMine999 View Post
        If that maintainer heads up the code branch where this code belongs, then I would demote that maintainer all the way back to mailing list janitor.
        You don't have to be a judgmental jerk; start by reading the Wikipedia article on the Fundamental Attribution Error.



        • #44
          As far as I know, Linus uses a workstation based on an AMD processor, and such systems have no integrated Intel GPU. Why would he build the new Intel driver on his system at all? Is this just curiosity, or is it part of his day-to-day workflow to build a new kernel with all the drivers in the tree?



          • #45
            Originally posted by B.eq View Post
            As far as I know, Linus uses a workstation based on an AMD processor, and such systems have no integrated Intel GPU. Why would he build the new Intel driver on his system at all? Is this just curiosity, or is it part of his day-to-day workflow to build a new kernel with all the drivers in the tree?
            I'm certain he has a standard config that just builds all drivers, in order to catch stuff like this. However, do note that this is the Intel Xe driver, which is designed to run their discrete Arc graphics cards. So technically speaking it's possible Linus is actually using it on that machine, although I imagine there's no way that's actually what's happening.



            • #46
              Originally posted by qarium View Post

              GitHub is closed source and controlled by Microsoft... GitLab is open source and independent of Microsoft.

              It would be insane to go with GitHub.
              But GitLab sucks.



              • #47
                Originally posted by reba View Post

                For our projects we host a GitLab instance that gets very heavy traffic, be it pulls/commits, CI/CD, hooks, schedules, etc., and it's very reliable and clean to use.

                GitLab (the company) would probably host halo projects like the kernel for free and throw in support and cake.

                GitHub, OTOH, is pretty much a no-go FMPOV, both from a usability standpoint and because they've been bought.
                Linus has a GitHub account already. I'm not saying it's the best option, but pretending it's not a realistic one is silly.

                Linux kernel source tree: https://github.com/torvalds/linux

