
OpenJDK Java's Native Wayland Support Progressing


  • ssokolow
    replied
    Originally posted by Weasel View Post
    Sounds like a skill issue. Having proper design and API documentation/contracts must be difficult, I guess. (and yes I'm talking about internal APIs to the project, not exposed to others)
    OK, it's a skill issue. That means the world's supply of sufficiently skilled programmers can't meet demand, even with Microsoft and Google and the like trying very hard to hire them.

    I'm reminded of this quote:

    Honestly, after more than 25 years of C (and C++), I’ve become very frustrated with the average C code I’ve seen in the wild. OpenSSL is fairly typical, in a lot of ways. So much C code has buffer overflows, numeric overflows, memory leaks, double frees, undefined behavior, and an endless number of bugs. There are exceptions—djb’s code is quite good, dovecot seems reasonable, OpenBSD audits aggressively—but when I dive into most C code, I expect problems… I’m tired. I don’t want to rely on programmers practicing constant, flawless vigilance.

    -- emk @ https://www.reddit.com/r/rust/commen...not_a/ds0u68p/
    Originally posted by Weasel View Post
    But protected mode doesn't protect a process from itself, and that's fine. It's quite literally using the "design" I advocate for. So idk what your point is, it proves mine.
    Which is where Rust comes in. It does protect a process from itself as long as you assign use of unsafe to the programmers who know how to use it correctly, design APIs such that no set of inputs causes your unsafe code to violate memory safety, and slap a #[forbid(unsafe_code)] onto the modules that less skilled programmers are allowed to touch.
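A minimal sketch of the pattern described above (all names are made up for illustration): unsafe code is confined to one audited module behind a safe API, and the modules other contributors touch are locked down with a lint attribute.

```rust
mod audited {
    /// Safe API over an unsafe operation: no caller-supplied input can
    /// make the unsafe block violate memory safety, because we check first.
    pub fn get_at(slice: &[u8], idx: usize) -> Option<u8> {
        if idx < slice.len() {
            // SAFETY: idx is bounds-checked above.
            Some(unsafe { *slice.get_unchecked(idx) })
        } else {
            None
        }
    }
}

// Modules that less experienced contributors work in can forbid
// unsafe outright; any `unsafe` block here is a compile error.
#[forbid(unsafe_code)]
mod app_logic {
    pub fn first_byte(data: &[u8]) -> Option<u8> {
        crate::audited::get_at(data, 0)
    }
}

fn main() {
    let data = [10u8, 20, 30];
    assert_eq!(app_logic::first_byte(&data), Some(10));
    assert_eq!(audited::get_at(&data, 9), None);
    println!("ok");
}
```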



  • Weasel
    replied
    Originally posted by ssokolow View Post
    Rust's borrow checker is part of a system for ensuring that invariants only need to be manually preserved locally... that you can write code which enforces that, if X passed validation, it will remain valid as it passes through the program by making it a compiler error for refactoring in one place to break the invariant a programmer in another place had learned to expect.

    Hence my perspective that programmers can be very skilled in C and C++ in isolation, but they still can't write correct C or C++ code in groups.
    Sounds like a skill issue. Having proper design and API documentation/contracts must be difficult, I guess. (and yes I'm talking about internal APIs to the project, not exposed to others)

    Originally posted by ssokolow View Post
    They're an acknowledgement that programmers are inherently fallible.

    Otherwise, why wouldn't you just keep using MS-DOS and Windows 3.1x and Classic MacOS with their un-protected memory systems?
    Logical fallacy. You can execute code that has no relationship with the rest and crash your entire system. It's not just about bugs, it's about malice as well.

    i.e. that's about "external APIs" or code that needs to be validated. How do you validate a program deliberately crashing itself accessing invalid memory? Using protected mode.

    But protected mode doesn't protect a process from itself, and that's fine. It's quite literally using the "design" I advocate for. So idk what your point is, it proves mine.



  • ssokolow
    replied
    Originally posted by Weasel View Post
    By "test" I meant automated unit tests, especially for invalid input.
    I refer again to the Dijkstra quote. A type system can categorically rule out entire classes of bugs. No amount of testing short of spending billions of years feeding in every possible input can do the same. There will always be the possibility that testing one more input could have revealed a bug.
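To make that concrete, here's a hedged sketch of one such class: with an exhaustive `match` over an enum, "forgot to handle a case" becomes a compile error rather than something a test suite has to happen to sample. (The `Event` type is invented for illustration.)

```rust
enum Event {
    Open,
    Close,
    Resize(u32, u32),
}

fn describe(e: &Event) -> String {
    // If a new Event variant is added later, this match stops
    // compiling until the new case is handled -- no test input
    // needs to be discovered to reveal the omission.
    match e {
        Event::Open => "open".to_string(),
        Event::Close => "close".to_string(),
        Event::Resize(w, h) => format!("resize {w}x{h}"),
    }
}

fn main() {
    assert_eq!(describe(&Event::Resize(800, 600)), "resize 800x600");
    println!("ok");
}
```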

    Originally posted by Weasel View Post
    You're missing the point completely. For example, Rust's borrow checker helps against invalid memory errors, i.e. mistakes by the programmer. Mistakes happen, but not validating input is a design flaw rather than a mistake. Validating external input, that is.
    Rust's borrow checker is part of a system for ensuring that invariants only need to be manually preserved locally... that you can write code which enforces that, if X passed validation, it will remain valid as it passes through the program by making it a compiler error for refactoring in one place to break the invariant a programmer in another place had learned to expect.

    Hence my perspective that programmers can be very skilled in C and C++ in isolation, but they still can't write correct C or C++ code in groups.
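A small sketch of the "validated once, stays valid" idea described above (the type and names are hypothetical): because the field is private, the only way to obtain the value is through `parse()`, so every downstream function can rely on the invariant without re-checking, and no refactoring can silently bypass it.

```rust
/// A name guaranteed non-empty. The private field means the only
/// constructor is parse(), so the invariant holds program-wide.
pub struct NonEmptyName(String);

impl NonEmptyName {
    pub fn parse(raw: &str) -> Result<Self, &'static str> {
        let trimmed = raw.trim();
        if trimmed.is_empty() {
            Err("name must not be empty")
        } else {
            Ok(NonEmptyName(trimmed.to_string()))
        }
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

// Deep inside the program: no validation code, and no way to forget
// it, since constructing NonEmptyName any other way won't compile.
fn greet(name: &NonEmptyName) -> String {
    format!("hello, {}", name.as_str())
}

fn main() {
    let n = NonEmptyName::parse("  ada  ").expect("valid name");
    assert_eq!(greet(&n), "hello, ada");
    assert!(NonEmptyName::parse("   ").is_err());
    println!("ok");
}
```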

    Originally posted by Weasel View Post
    If you still don't understand the difference, consider this: So, we do runtime bounds checks for a mistake that shouldn't be there in the first place, by design (validating untrusted input is part of the contract and design, this can't be "mistaken"). What happens if the input is invalid? Assertion triggers, crashing the app intentionally, or even worse if it's privileged (kernel mode, like a driver) takes down your system.

    Do you think this is fine by design? Sure, it's not supposed to happen, the alternative is worse in some aspects (security vulnerability), but the point is that this ought to be fixed at the design stage and the proper way is to add PROPER bounds checks and fail in proper way, depending on what the API does. (if it's a file format, for example, complain it's an invalid file)

    You need MANUAL code to handle this situation. And an automatic bounds check will keep the code in perpetual shit state, forever.
    What you just said can equally easily be used as an argument against protected mode operating systems. Rust's panics are a program-internal counterpart to segfaults which catch programmer error more reliably and in a way that's structured enough that it's safe to allow stack unwinding and RAII cleanup.

    They're an acknowledgement that programmers are inherently fallible.

    Otherwise, why wouldn't you just keep using MS-DOS and Windows 3.1x and Classic MacOS with their un-protected memory systems?
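A rough illustration of the panic-as-structured-segfault point (function names are invented): an out-of-bounds index triggers a panic rather than undefined behavior, the stack unwinds running `Drop` impls (RAII cleanup) on the way, and the failure can be contained at a boundary instead of taking the process down.

```rust
use std::panic;

struct Cleanup;
impl Drop for Cleanup {
    fn drop(&mut self) {
        // Runs even while the stack unwinds from a panic.
        println!("cleanup ran during unwinding");
    }
}

/// Returns true if the out-of-bounds access was contained at this
/// boundary instead of killing the process.
fn contained_oob_access() -> bool {
    panic::catch_unwind(|| {
        let _guard = Cleanup; // dropped during unwinding
        let v = vec![1, 2, 3];
        v[99] // bounds check fails: a structured panic, not UB
    })
    .is_err()
}

fn main() {
    assert!(contained_oob_access());
    println!("recovered");
}
```

(A segfault, by contrast, offers no such hook for orderly cleanup within the process.)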



  • Weasel
    replied
    Originally posted by ssokolow View Post
    Sounds not unlike "We don't need Rust. Just run sanitizers on your C or C++ code and test thoroughly"... something which Microsoft, Apple, Google, Mozilla, etc. failed to make sufficient after pouring a lot of time, energy, and money into it.
    By "test" I meant automated unit tests, especially for invalid input.

    You're missing the point completely. For example, Rust's borrow checker helps against invalid memory errors, i.e. mistakes by the programmer. Mistakes happen, but not validating input is a design flaw rather than a mistake. Validating external input, that is.

    If you still don't understand the difference, consider this: we do runtime bounds checks for a mistake that shouldn't be there in the first place, by design (validating untrusted input is part of the contract and design, this can't be "mistaken"). What happens if the input is invalid? Assertion triggers, crashing the app intentionally, or even worse if it's privileged (kernel mode, like a driver) takes down your system.

    Do you think this is fine by design? Sure, it's not supposed to happen, and the alternative is worse in some aspects (security vulnerability), but the point is that this ought to be fixed at the design stage, and the proper way is to add PROPER bounds checks and fail in a proper way, depending on what the API does. (if it's a file format, for example, complain it's an invalid file)

    You need MANUAL code to handle this situation. And an automatic bounds check will keep the code in perpetual shit state, forever.
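For what it's worth, the "fail the proper way" approach described here can be sketched like this (the format, magic bytes, and error names are all invented): the parser does its own bounds checks at the trust boundary and reports a domain error instead of letting an assertion or panic fire.

```rust
#[derive(Debug, PartialEq)]
enum ParseError {
    TooShort,
    BadMagic,
}

/// Manual validation of untrusted input: every length is checked
/// explicitly via get(), and a bad file yields an error value
/// ("complain it's an invalid file") rather than a crash.
fn parse_header(bytes: &[u8]) -> Result<u32, ParseError> {
    let magic = bytes.get(0..4).ok_or(ParseError::TooShort)?;
    if magic != b"WAYL" {
        return Err(ParseError::BadMagic);
    }
    let len = bytes.get(4..8).ok_or(ParseError::TooShort)?;
    Ok(u32::from_le_bytes(len.try_into().unwrap()))
}

fn main() {
    assert_eq!(parse_header(b"WAYL\x10\x00\x00\x00"), Ok(16));
    assert_eq!(parse_header(b"WAY"), Err(ParseError::TooShort));
    assert_eq!(parse_header(b"XXXXAAAA"), Err(ParseError::BadMagic));
    println!("ok");
}
```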



  • ssokolow
    replied
    Originally posted by Weasel View Post
    How about code review huh? How about, you know, having a proper test suite that reveals what happens with invalid inputs? That too alien concept for you? It's ok, you're one of the low quality outsourced programmers who just want to "get shit done asap".

    Software that's crudely coded just to get it out there asap is, by DEFINITION, low quality software. Some people code for the beauty of software rather than the end result, especially in open source software, which is why most closed source software is so shit quality to begin with, attitudes like yours.

    Oh I'm aware there's tons of CVEs on even popular open source libraries (fixed or not), but they only have themselves to blame for not having strict quality control. I mean you need that quality control regardless of bounds checks; bounds checks is just one out of many issues with not validating input.
    There's a very simple answer to why we need something like Rust:

    Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence. -- Edsger W. Dijkstra, "The Humble Programmer" (1972)

    (An excerpt from an argument in favour of stronger type systems and other methods of formal proving.)



  • ssokolow
    replied
    Originally posted by Weasel View Post
    Such bounds checks should only be enabled in debug builds, and all asserts should be removed in release builds, IMO. Then you should obviously run a test suite that stresses your code with invalid inputs on the debug build and see what happens. This isn't just about bounds checks, obviously; it's standard quality control if you truly care about software. It's also to prevent regressions in program logic.
    Sounds not unlike "We don't need Rust. Just run sanitizers on your C or C++ code and test thoroughly"... something which Microsoft, Apple, Google, Mozilla, etc. failed to make sufficient after pouring a lot of time, energy, and money into it.



  • Weasel
    replied
    Originally posted by ssokolow View Post
    If you use the iterator API, process the array in reverse, or stick something like assert!(array.len() > max_index_you_access); before the actual work (Rust eliminates debug_assert! from release builds, not assert!), then LLVM optimizers will collapse the bounds checks to just one initial lookup like what you'd get in C with the terminal condition on a for loop over a counted/non-sentinel-terminated/Pascal-style string/array.
    Such bounds checks should only be enabled in debug builds, and all asserts should be removed in release builds, IMO. Then you should obviously run a test suite that stresses your code with invalid inputs on the debug build and see what happens. This isn't just about bounds checks, obviously; it's standard quality control if you truly care about software. It's also to prevent regressions in program logic.



  • Weasel
    replied
    Originally posted by mdedetrich View Post
    The problem is that in cases 1) and 3) programmers almost always end up being wrong, which is why Rust and Java have a default of bounds checks.
    You're missing the point. I'm not using the term "good" or "real" programmer to imply someone who doesn't make mistakes. I'm using it to mean actual good practice: validating input (but not input to your own code helpers / internal functions, for example) should be standard practice, but only where it's needed.

    How about code review huh? How about, you know, having a proper test suite that reveals what happens with invalid inputs? That too alien concept for you? It's ok, you're one of the low quality outsourced programmers who just want to "get shit done asap".

    Software that's crudely coded just to get it out there asap is, by DEFINITION, low quality software. Some people code for the beauty of software rather than the end result, especially in open source software, which is why most closed source software is so shit quality to begin with, attitudes like yours.

    Oh I'm aware there's tons of CVEs on even popular open source libraries (fixed or not), but they only have themselves to blame for not having strict quality control. I mean you need that quality control regardless of bounds checks; bounds checks is just one out of many issues with not validating input.



  • ssokolow
    replied
    Originally posted by Weasel View Post
    I don't give a shit about your benchmarks. It's objectively worse, uses more power, has potential to be slower, and has increased code size (for compiled languages).
    If you use the iterator API, process the array in reverse, or stick something like assert!(array.len() > max_index_you_access); before the actual work (Rust eliminates debug_assert! from release builds, not assert!), then LLVM optimizers will collapse the bounds checks to just one initial lookup like what you'd get in C with the terminal condition on a for loop over a counted/non-sentinel-terminated/Pascal-style string/array.
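A sketch of the two patterns described above. Whether LLVM actually collapses the checks depends on optimization settings, but these are the shapes that make elision possible; the function names are made up.

```rust
// 1. Iterator API: no indexing at all, so there is no per-element
// bounds check to eliminate in the first place.
fn sum_iter(data: &[u32]) -> u32 {
    data.iter().sum()
}

// 2. Up-front assert!: after this single check the optimizer can
// prove each data[i] below is in bounds and drop the per-access
// checks. Note assert! (unlike debug_assert!) survives release builds.
fn sum_first_four(data: &[u32]) -> u32 {
    assert!(data.len() >= 4);
    data[0] + data[1] + data[2] + data[3]
}

fn main() {
    let v = [1u32, 2, 3, 4, 5];
    assert_eq!(sum_iter(&v), 15);
    assert_eq!(sum_first_four(&v), 10);
    println!("ok");
}
```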



  • Espionage724
    replied
    My only likely (and limited) use case for this would be Old School Runescape, but it would have to be implemented before they release their C++ client.

