DRM Updates Submitted For Linux 4.11, Torvalds Explodes Over Code Quality

  • #41
    As always, +1 for Linus: if you don't reject bad code early and violently, it becomes a binary cancer later on. And this should help airlied stay sharp on commits before pushing them to Linus.

    No criticism of airlied here; he does a great job, but sometimes people get too comfortable and need to be shaken every once in a while to stay sharp, and that is always good for keeping Linux stable and solid.



    • #42
      Originally posted by M@yeulC View Post

      Wasn't it caused by hugepage defrag?
      Code:
      echo never > /sys/kernel/mm/transparent_hugepage/defrag
      seems to give quite good results for me. I wish this problem was solved in default configurations.
      That's an interesting trick. I'll have to try it out.
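
      For anyone who wants finer-grained control than the global knob, here is a minimal C sketch (not from the thread, just an illustration) of the per-mapping alternative: with defrag set to madvise, only regions explicitly flagged with madvise(MADV_HUGEPAGE) are eligible for the expensive synchronous compaction.
      Code:
      #define _DEFAULT_SOURCE
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>

      int main(void)
      {
          size_t len = 16 * 1024 * 1024;  /* 16 MiB, a multiple of the 2 MiB hugepage size */
          void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (buf == MAP_FAILED) {
              perror("mmap");
              return EXIT_FAILURE;
          }
          /* Opt this region in to THP (and, under the "madvise" defrag setting,
           * to synchronous defrag); other regions then behave much as if
           * defrag were set to "never". */
          if (madvise(buf, len, MADV_HUGEPAGE) != 0)
              perror("madvise(MADV_HUGEPAGE)");
          /* ... use buf ... */
          munmap(buf, len);
          return 0;
      }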



      • #43
        Originally posted by notanoob View Post
        Personally I would rather be corrected rather than continue in error
        Correction incoming:
        Originally posted by notanoob View Post
        Why use pointers in a memory constrained function or object like the kernel?
        This is a really stupid question. Without pointers you only have pass-by-value, i.e. you can't modify the original and you incur copy overhead. A pointer is also just one machine word, while the pointed-to object is usually much larger (see the C sketch at the end of this post).
        Originally posted by notanoob View Post
        I think a better question is why red hat is employing noobs (or intentionally sabotaging kernel code). Pick your poison for which is worse.
        The best question is why noobs call themselves notanoob.
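
        And here is the promised tiny C sketch of the pointer point above (made-up struct and function names, purely illustrative): pass-by-value copies the whole object and can't change the caller's copy, while pass-by-pointer passes one word and modifies the original.
        Code:
        #include <stdio.h>

        struct stats {
            long hits;
            long misses;
            char padding[4096];  /* pretend this is a big kernel structure */
        };

        /* Pass-by-value: the callee gets a full copy, so the caller's
         * struct is untouched and ~4 KiB is copied on every call. */
        static void bump_by_value(struct stats s)
        {
            s.hits++;
        }

        /* Pass-by-pointer: one machine word is passed and the caller's
         * struct really is modified. */
        static void bump_by_pointer(struct stats *s)
        {
            s->hits++;
        }

        int main(void)
        {
            struct stats st = { 0 };

            bump_by_value(st);
            printf("after by-value:   hits = %ld\n", st.hits);  /* still 0 */

            bump_by_pointer(&st);
            printf("after by-pointer: hits = %ld\n", st.hits);  /* now 1 */
            return 0;
        }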



        • #44
          Originally posted by nanonyme View Post
          The tinydrm stuff *did* land rather late
          It's a new driver; who cares?



          • #45
            Originally posted by carewolf View Post
            It didn't compile. It is a rather big flaw in software.
            It didn't compile on Linus' machine, so maybe it is a big flaw in Linus' machine?



            • #46
              Originally posted by robclark View Post

              to be fair, it sounded like a kconfig issue (ie. it compiles properly if you have the right kernel config, but not otherwise.. so probably missing some "selects" in Kconfig file).. that sort of thing is rather easy to mess up, given how much build-time configurability there is in the kernel.
              This kind of mistake is relatively common: the developer forgets about the prep work/environment changes needed to build it, causing the famous "works on my machine" error.



              • #47
                Originally posted by andrei_me View Post

                This kind of mistake is relatively common: the developer forgets about the prep work/environment changes needed to build it, causing the famous "works on my machine" error.
                Or you thoroughly test it on all configurations, are asked in review to make some minor changes, and even sanity-check those changes on your own machine, but afterwards it turns out they broke something on a configuration you had otherwise checked before the trivial change (probably due to a typo in some big-endian, who-cares, RISC-specific code).

                It's surprising how often that can happen.



                • #48
                  I can understand Linus' frustration - I've dealt with a lot of crappy code (including my own) in the past few years. Basic best practices were discovered at least 20 years ago. It's frustrating when the same mistakes are made again and again, resulting in easily avoidable bugs and making future changes difficult.



                  • #49
                    Finally! It's been a long time since the last Torvalds shit show! I was almost starting to think that maturity had toned down his attitude. Thank god I was wrong!



                    • #50
                      Originally posted by yoshi314 View Post

                      https://kernelnewbies.org/Linux_4.10...2c71a859dcc184

                      did you try that? Also, that is not the cpu scheduler's fault when i/o is concerned.


                      Personally i think Linus makes the right decisions, as they prevent technical debt increase in the future. Not merging some dubious code now means that next time it comes up, it will be in much better shape and less end users will complain about build issues or instability.
                      If Linus and crew had actually merged pretty much any of the buffered AIO patches, iowait wouldn't occur as often and many fewer chunks of coal would've been burned in the meantime.
                      Sometimes these people get a bug up the bum, bikeshed endlessly, and make no forward progress in thought over time.
                      It's certainly not as though this process is foolproof. The kernel had, and has, so much crap that gets merged, and then later, sometimes not more than a couple of years on, you'll see them removing said code, wondering why it was merged in the first place.
                      What Linux really needs is a linux2 that scraps all the versioned syscalls and all the crap implementations of crap ideas that they, and consequently we, had to live with because they either have some users or can't really be fixed, and, for Frigg's sake, keeps security and RT in mind from the beginning! These aren't things that are going away, and RT has been slowly making its way up from the radioactive cellars to your pocket.

