OpenJDK Java's Native Wayland Support Progressing


  • #41
    Originally posted by Weasel View Post
    I don't give a shit about your benchmarks. It's objectively worse, uses more power, has potential to be slower, and has increased code size (for compiled languages).

    If you insert automatic checks even where they are not needed you need to become a better programmer.

    Ok, here's the simple rundown:

    1) Software with no bounds checks in some parts where input must be validated: security issue, but can be fixed.
    2) Software with automatic bounds checks everywhere, even when input is known to be good (for example because it was fucking validated by a parent function already): hopeless, will FOREVER remain shit quality code with redundancy.
    3) Software with proper bounds checks only where validation is required: quality software.

    (1) can be fixed. (2) is a lost cause, it will forever remain shit. (3) is what everyone should aim for, of course.
    The problem is that in cases 1) and 3) programmers almost always end up being wrong, which is why Rust and Java have bounds checks by default. No one gives a shit about theoretically perfect programmers who write perfect software; they don't exist, and people with such an attitude are exhibiting the Dunning-Kruger effect.

    Comment


    • #42
      My likely only and limited use-case for this would be Old School Runescape, but this would have to be implemented before they release their C++ client.

      Comment


      • #43
        Originally posted by Weasel View Post
        I don't give a shit about your benchmarks. It's objectively worse, uses more power, has potential to be slower, and has increased code size (for compiled languages).
        If you use the iterator API, process the array in reverse, or stick something like assert!(array.len() > max_index_you_access); before the actual work (Rust eliminates debug_assert! from release builds, not assert!), then LLVM optimizers will collapse the bounds checks to just one initial lookup like what you'd get in C with the terminal condition on a for loop over a counted/non-sentinel-terminated/Pascal-style string/array.
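        A minimal sketch of the assert-hoisting pattern described above (the function names are made up for illustration, not from any real codebase):

        ```rust
        // Sum the first `n` elements of a slice. The explicit `assert!` up
        // front lets LLVM prove every `data[i]` below is in bounds, so the
        // per-iteration checks can be optimized out. Unlike `debug_assert!`,
        // `assert!` survives release builds, which is what feeds the optimizer.
        fn sum_prefix(data: &[u64], n: usize) -> u64 {
            assert!(data.len() >= n);
            let mut total = 0;
            for i in 0..n {
                total += data[i];
            }
            total
        }

        // The iterator form avoids manual indexing altogether, so there is
        // no per-element bounds check to eliminate in the first place.
        fn sum_prefix_iter(data: &[u64], n: usize) -> u64 {
            data[..n].iter().sum()
        }

        fn main() {
            let v = [1, 2, 3, 4, 5];
            assert_eq!(sum_prefix(&v, 3), 6);
            assert_eq!(sum_prefix_iter(&v, 3), 6);
        }
        ```

        Both forms panic cleanly if `n` exceeds the slice length, which is the point: the check happens once, up front.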

        Comment


        • #44
          Originally posted by mdedetrich View Post
          The problem is that in cases 1) and 3) programmers almost always end up being wrong, which is why Rust and Java have bounds checks by default.
          You're missing the point. I'm not using the term "good" or "real" programmer to imply someone who doesn't make mistakes. I'm using it to mean actual good practice: validating input (but not input to your own code helpers / internal functions, for example) should be standard practice, but only where it is needed.

          How about code review huh? How about, you know, having a proper test suite that reveals what happens with invalid inputs? That too alien concept for you? It's ok, you're one of the low quality outsourced programmers who just want to "get shit done asap".

          Software that's crudely coded just to get it out there asap is, by DEFINITION, low quality software. Some people code for the beauty of software rather than the end result, especially in open source software, which is why most closed source software is such shit quality to begin with: attitudes like yours.

          Oh I'm aware there's tons of CVEs on even popular open source libraries (fixed or not), but they only have themselves to blame for not having strict quality control. I mean you need that quality control regardless of bounds checks; bounds checks are just one of many issues with not validating input.

          Comment


          • #45
            Originally posted by ssokolow View Post
            If you use the iterator API, process the array in reverse, or stick something like assert!(array.len() > max_index_you_access); before the actual work (Rust eliminates debug_assert! from release builds, not assert!), then LLVM optimizers will collapse the bounds checks to just one initial lookup like what you'd get in C with the terminal condition on a for loop over a counted/non-sentinel-terminated/Pascal-style string/array.
            Such bounds checks should only be enabled in debug builds and all asserts should be removed in release builds, IMO. Then you should obviously run a test suite that stresses your code with invalid inputs on the debug build and see what happens. This isn't just about preventing bounds checks, obviously; it's standard quality control if you truly care about software. It's also to prevent regressions in program logic.
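            For what it's worth, Rust's standard library already exposes exactly that split between debug-only and always-on checks; a minimal sketch (the helper function is a made-up example):

            ```rust
            // `debug_assert!` compiles to nothing under `cargo build --release`,
            // so this internal-helper precondition costs nothing in production
            // while still tripping loudly when a debug-build test suite feeds
            // it invalid input.
            fn half_of_even(x: u32) -> u32 {
                debug_assert!(x % 2 == 0, "half_of_even called with odd value {x}");
                x / 2
            }

            fn main() {
                assert_eq!(half_of_even(8), 4);
            }
            ```

            Note this only removes the programmer-written asserts; the language's own slice bounds checks stay on in release builds unless the optimizer can prove them redundant.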

            Comment


            • #46
              Originally posted by Weasel View Post
              Such bounds checks should only be enabled in debug builds and all asserts should be removed in release builds, IMO. Then you should obviously run a test suite that stresses your code with invalid inputs on the debug build and see what happens. This isn't just about preventing bounds checks, obviously; it's standard quality control if you truly care about software. It's also to prevent regressions in program logic.
              Sounds not unlike "We don't need Rust. Just run sanitizers on your C or C++ code and test thoroughly"... something which Microsoft, Apple, Google, Mozilla, etc. failed to make sufficient after pouring a lot of time, energy, and money into it.

              Comment


              • #47
                Originally posted by Weasel View Post
                How about code review huh? How about, you know, having a proper test suite that reveals what happens with invalid inputs? That too alien concept for you? It's ok, you're one of the low quality outsourced programmers who just want to "get shit done asap".

                Software that's crudely coded just to get it out there asap is, by DEFINITION, low quality software. Some people code for the beauty of software rather than the end result, especially in open source software, which is why most closed source software is such shit quality to begin with: attitudes like yours.

                Oh I'm aware there's tons of CVEs on even popular open source libraries (fixed or not), but they only have themselves to blame for not having strict quality control. I mean you need that quality control regardless of bounds checks; bounds checks are just one of many issues with not validating input.
                There's a very simple answer to why we need something like Rust:

                Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence. -- Edsger W. Dijkstra, "The Humble Programmer" (1972)

                (An excerpt from an argument in favour of stronger type systems and other methods of formal proving.)

                Comment


                • #48
                  Originally posted by ssokolow View Post
                  Sounds not unlike "We don't need Rust. Just run sanitizers on your C or C++ code and test thoroughly"... something which Microsoft, Apple, Google, Mozilla, etc. failed to make sufficient after pouring a lot of time, energy, and money into it.
                  By "test" I meant automated unit tests, especially for invalid input.

                  You're missing the point completely. For example, Rust's borrow checker helps against invalid memory errors, i.e. mistakes by the programmer. Mistakes happen, but not validating input is a design flaw rather than a mistake. Validating external input, that is.

                  If you still don't understand the difference, consider this: we do runtime bounds checks for a mistake that shouldn't be there in the first place, by design (validating untrusted input is part of the contract and design, so it can't be "mistaken"). What happens if the input is invalid? The assertion triggers, crashing the app intentionally, or, even worse, if it's privileged (kernel mode, like a driver), it takes down your system.

                  Do you think this is fine by design? Sure, it's not supposed to happen, and the alternative is worse in some respects (a security vulnerability), but the point is that this ought to be fixed at the design stage, and the proper way is to add PROPER bounds checks and fail in a proper way, depending on what the API does (if it's a file format, for example, complain that it's an invalid file).

                  You need MANUAL code to handle this situation. And an automatic bounds check will keep the code in perpetual shit state, forever.
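                  The manual-handling approach being argued for could look like this in Rust; the file-format helper below is hypothetical, purely to illustrate the boundary check:

                  ```rust
                  // Read a little-endian u32 length field from an untrusted header.
                  // The explicit boundary check reports a malformed file to the
                  // caller instead of letting an out-of-bounds index panic (or,
                  // worse, take down a privileged component).
                  fn read_len(header: &[u8]) -> Result<usize, String> {
                      if header.len() < 4 {
                          return Err("invalid file: truncated header".to_string());
                      }
                      // After the check above, these four indexes cannot fail.
                      let bytes = [header[0], header[1], header[2], header[3]];
                      Ok(u32::from_le_bytes(bytes) as usize)
                  }

                  fn main() {
                      assert_eq!(read_len(&[7, 0, 0, 0]), Ok(7));
                      assert!(read_len(&[7]).is_err()); // rejected cleanly, no panic
                  }
                  ```

                  The automatic bounds checks remain underneath as a backstop; the manual check is what turns a would-be crash into a proper error for the caller.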

                  Comment


                  • #49
                    Originally posted by Weasel View Post
                    By "test" I meant automated unit tests, especially for invalid input.
                    I refer again to the Dijkstra quote. A type system can categorically rule out entire classes of bugs. No amount of testing short of spending billions of years feeding in every possible input can do the same. There will always be the possibility that testing one more input could have revealed a bug.
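                    One concrete instance of categorically ruling out a bug class, using the standard library's `std::num::NonZeroU32` (the `per_item` function is an illustrative sketch):

                    ```rust
                    use std::num::NonZeroU32;

                    // The only way to obtain a NonZeroU32 is to prove, once, that
                    // the value is non-zero; every later use is then safe by
                    // construction, and dividing a u32 by a NonZeroU32 can never
                    // panic, so no test for a zero divisor is ever needed.
                    fn per_item(total: u32, count: NonZeroU32) -> u32 {
                        total / count
                    }

                    fn main() {
                        let count = NonZeroU32::new(4).expect("checked once, at the boundary");
                        assert_eq!(per_item(100, count), 25);
                        assert!(NonZeroU32::new(0).is_none()); // zero is rejected at creation
                    }
                    ```

                    No test suite has to probe `per_item` with a zero count, because the compiler rejects any call site that could produce one.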

                    Originally posted by Weasel View Post
                    You're missing the point completely. For example, Rust's borrow checker helps against invalid memory errors, i.e. mistakes by the programmer. Mistakes happen, but not validating input is a design flaw rather than a mistake. Validating external input, that is.
                    Rust's borrow checker is part of a system for ensuring that invariants only need to be manually preserved locally: you can write code which guarantees that, if X passed validation, it stays valid as it flows through the program, because it becomes a compiler error for a refactoring in one place to break an invariant that code in another place relies on.

                    Hence my perspective that programmers can be very skilled in C and C++ in isolation, but they still can't write correct C or C++ code in groups.
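                    That "validated once, enforced everywhere" idea is commonly done with a newtype whose constructor is the only validation point; everything below is an illustrative sketch, not any real library's API:

                    ```rust
                    // A wrapper that can only be created through `parse`, so any
                    // function receiving one knows validation already ran upstream.
                    pub struct Username(String);

                    impl Username {
                        pub fn parse(raw: &str) -> Result<Username, String> {
                            if raw.is_empty() || raw.contains(char::is_whitespace) {
                                Err(format!("invalid username: {raw:?}"))
                            } else {
                                Ok(Username(raw.to_string()))
                            }
                        }
                        pub fn as_str(&self) -> &str {
                            &self.0
                        }
                    }

                    // Deep inside the program, the signature itself carries the
                    // invariant: passing an unvalidated &str here is a compile
                    // error, and a refactor that drops the validation step
                    // simply cannot type-check.
                    fn greet(user: &Username) -> String {
                        format!("hello, {}", user.as_str())
                    }

                    fn main() {
                        let u = Username::parse("weasel").unwrap();
                        assert_eq!(greet(&u), "hello, weasel");
                        assert!(Username::parse("no spaces allowed").is_err());
                    }
                    ```

                    This works in C++ too, but Rust additionally stops anyone from holding a reference into the wrapped data after it has been moved or mutated, which is the part that breaks down across a large team.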

                    Originally posted by Weasel View Post
                    If you still don't understand the difference, consider this: we do runtime bounds checks for a mistake that shouldn't be there in the first place, by design (validating untrusted input is part of the contract and design, so it can't be "mistaken"). What happens if the input is invalid? The assertion triggers, crashing the app intentionally, or, even worse, if it's privileged (kernel mode, like a driver), it takes down your system.

                    Do you think this is fine by design? Sure, it's not supposed to happen, and the alternative is worse in some respects (a security vulnerability), but the point is that this ought to be fixed at the design stage, and the proper way is to add PROPER bounds checks and fail in a proper way, depending on what the API does (if it's a file format, for example, complain that it's an invalid file).

                    You need MANUAL code to handle this situation. And an automatic bounds check will keep the code in perpetual shit state, forever.
                    What you just said can equally easily be used as an argument against protected mode operating systems. Rust's panics are a program-internal counterpart to segfaults which catch programmer error more reliably and in a way that's structured enough that it's safe to allow stack unwinding and RAII cleanup.

                    They're an acknowledgement that programmers are inherently fallible.

                    Otherwise, why wouldn't you just keep using MS-DOS and Windows 3.1x and Classic MacOS with their un-protected memory systems?

                    Comment


                    • #50
                      Originally posted by ssokolow View Post
                      Rust's borrow checker is part of a system for ensuring that invariants only need to be manually preserved locally: you can write code which guarantees that, if X passed validation, it stays valid as it flows through the program, because it becomes a compiler error for a refactoring in one place to break an invariant that code in another place relies on.

                      Hence my perspective that programmers can be very skilled in C and C++ in isolation, but they still can't write correct C or C++ code in groups.
                      Sounds like a skill issue. Having proper design and API documentation/contracts must be difficult, I guess. (and yes, I'm talking about APIs internal to the project, not ones exposed to others)

                      Originally posted by ssokolow View Post
                      They're an acknowledgement that programmers are inherently fallible.

                      Otherwise, why wouldn't you just keep using MS-DOS and Windows 3.1x and Classic MacOS with their un-protected memory systems?
                      Logical fallacy. You can execute code that has no relationship with the rest and crash your entire system. It's not just about bugs, it's about malice as well.

                      i.e. that's about "external APIs", or code that needs to be validated. How do you validate against a program deliberately crashing itself by accessing invalid memory? Using protected mode.

                      But protected mode doesn't protect a process from itself, and that's fine. It's quite literally using the "design" I advocate for. So idk what your point is; it proves mine.

                      Comment
