Google Engineer Shows "SESES" For Mitigating LVI + Side-Channel Attacks - Code Runs ~7% Original Speed


  • #21
    Originally posted by kravemir View Post

    Golang is the way, seriously. It compiles to native code depending only on libc, and provides memory safety (i.e. no dangling pointers as in C/C++). If it had generics, it would rank above Java and C++ for desktop applications and for system services/daemons. Try deploying a hello-world web server in Docker for Java, JavaScript/Node, and Golang: all of these languages provide memory safety, but Golang would be the only one starting at 6MB memory usage in docker stats (and for a simple app using a DB, it won't even go over 20MB).

    Still, Java rocks regarding its development ecosystem. But Java is not HW-resource friendly, which doesn't matter to business, since a programmer's salary is higher than (their) hardware costs...
    Can't agree with you more about compiled languages like Golang being the way to go.

    I know people like me are in the minority by far, but I don't like garbage collectors.
    So I have hopes that Rust, Zig, or Jai will be "the one".
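
    The "hello world web server" the quoted post benchmarks really is only a handful of lines of Go. A minimal sketch (the ~6MB/20MB Docker figures above are the quoted poster's own claims, not verified here); it uses `httptest` so the example runs to completion instead of blocking on `ListenAndServe`:

    ```go
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"net/http/httptest"
    )

    func main() {
    	// The handler is the whole "application": one route, one response.
    	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		fmt.Fprintln(w, "hello, world")
    	})

    	// Spin it up on an ephemeral port and hit it once, so the
    	// example is self-contained and exits on its own.
    	srv := httptest.NewServer(h)
    	defer srv.Close()

    	resp, err := http.Get(srv.URL)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	body, _ := io.ReadAll(resp.Body)
    	fmt.Print(string(body))
    }
    ```

    A real deployment would just replace the `httptest` scaffolding with `http.ListenAndServe(":8080", h)` and `go build` into a single static binary.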



    • #22
      Originally posted by tuxd3v View Post

      But I believe that the future is VLIW..
      This is very interesting and I like the idea.
      So basically the compiler does the pipelining. It would solve this whole mess.
      Are there any working CPUs that one can buy? It would be great if there were something like an RPi with this technology.
      Last edited by Raka555; 03-21-2020, 05:33 AM.



      • #23
        I would not trust that, if only because we are now finding vulnerabilities in AMD hardware that stem from a substantial way back in the architecture's life-cycle. It's only a matter of time before something of real meat comes along (or a previous attack proves to be useful). Which was kind of expected: as much as AMD 'fanbois' crow about how the hardware skipped the whole Meltdown madness, they are not perfect.
        You are spreading FUD and trying to discredit AMD with possible future problems that may never come. This is trolling.

        So far, there is no real, exploitable vulnerability for AMD. Period.

        Intel is riddled with them.

        Practically, we shouldn't use Intel for anything at this point unless the workload is 100% trustworthy.



        • #24
          Originally posted by Raka555 View Post

          These are compiler changes. Linus has no say in it
          Maybe so.

          I would hope, since I have not looked into this GCC feature myself, that there will be a compiler flag that can be set/unset (enabled/disabled), so a user can choose whether to use these compiler-based mitigations.

          That way, if Linus doesn't want these changes when GCC compiles Linux, he can unset the flag and disable the feature.

          On the other hand, if there is no way for the compiling user to control GCC's implementation of these changes, then I would expect the semi-solid waste to impact the rotary oscillator on a few mailing lists.



          • #25
            Originally posted by Raka555 View Post

            Can't agree with you more about compilers like golang being the way to go.

            I know people like me are in the minority by far, but I don't like garbage collectors.
            So I have hopes that rust, zig or jai will be the "one".
            First time seeing the Zig and Jai languages... will take a better look later, in my free time.

            Well, I hated garbage collection almost ten years ago too. However, that was from a user's perspective, ignoring the costs of "faster" manual memory management. There are various ways to do automatic (language-guaranteed) memory management, and each has different advantages (performance) and costs (performance, plus the programmer's assistance and correctness it needs).

            Reference counting offers immediate freeing once the reference count drops to zero, but it is prone to memory leaks from cyclical references, which are no longer reachable from any thread, so it needs the programmer's assistance. Weak references can solve that issue, but the mechanism is a bit more complex and impacts performance a bit (a linked list of weak references to each "object"), can also lead to unwanted freeing of still-useful "objects", and still needs the programmer's assistance. Unique pointers are the most restricting. And so on.

            Garbage collection offers the easiest way to write safe code without memory leaks, but it probably has the highest HW-resource cost: memory usage goes higher than needed, since collection only runs at certain thresholds, and the collection itself consumes computing power while it runs.

            So... it's a matter of taste and mainly of use-case (type of software). Business will bet on the safest and easiest way (currently Java and similar languages are winning). System and desktop-application programming usually goes for stable, predictable performance (i.e. not garbage collection). So Rust-like languages might be best for common system/desktop applications, and Golang-like languages for customer-specific web-server applications.
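
            The cyclical-reference point above can be seen directly in Go, whose tracing collector reclaims cycles that naive reference counting would leak. A minimal sketch (the node padding size and loop count are arbitrary illustration values):

            ```go
            package main

            import (
            	"fmt"
            	"runtime"
            )

            // node is padded to 1 MiB so the allocations are clearly
            // visible in the heap statistics.
            type node struct {
            	next *node
            	pad  [1 << 20]byte
            }

            // heapMiB reports the current live-heap size in MiB.
            func heapMiB() uint64 {
            	var m runtime.MemStats
            	runtime.ReadMemStats(&m)
            	return m.HeapAlloc >> 20
            }

            func main() {
            	base := heapMiB()

            	// Build a<->b cycles and immediately drop them. Under pure
            	// reference counting these would leak: each node's count
            	// never reaches zero, even though no thread can reach them.
            	for i := 0; i < 32; i++ {
            		a, b := &node{}, &node{}
            		a.next, b.next = b, a
            	}

            	// Force a collection instead of waiting for the pacer's
            	// threshold (the "runs on some thresholds" cost above).
            	runtime.GC()

            	// The tracing collector found the unreachable cycles, so the
            	// live heap is back near its baseline.
            	fmt.Println("cycles reclaimed:", heapMiB() <= base+4)
            }
            ```

            The same cycle built with a reference-counted scheme (e.g. C++ `shared_ptr` or Rust `Rc`) stays alive until one link is demoted to a weak reference, which is exactly the programmer's-assistance cost described above.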



            • #26
              Originally posted by Raka555 View Post

              This is very interesting and I like the idea.
              So basically the compiler does the pipelining. It would solve this whole mess.
              Are there any working CPUs that one can buy? It would be great if there were something like an RPi with this technology.
              Not sure about general availability, but there is a Russian VLIW CPU series, Elbrus. These systems are mainly for the military and state sectors. The CPUs come in 1-, 4-, and 8-core variants and can execute up to 25 instructions per clock cycle. I think these systems are available to business too, but compared to an average x86 computer they would be more expensive. No surprise really; the production scale is just not the same.

              Correction: the 5th architecture revision allows up to 50 operations per clock cycle per core, including 8 integer and 24 floating point.
              Last edited by blacknova; 03-21-2020, 06:34 AM.



              • #27
                I read that as SNESES and was wondering how Super Nintendos were involved. Leaving the article with a sad, since they're not.



                • #28
                  Originally posted by Raka555 View Post

                  This is very interesting and I like the idea.
                  So basically the compiler does the pipelining. It would solve this whole mess.
                  Are there any working CPUs that one can buy? It would be great if there were something like an RPi with this technology.
                  From the VLIW Wikipedia page:

                  Outside embedded processing markets, Intel's Itanium IA-64 explicitly parallel instruction computing (EPIC) and Elbrus 2000 appear as the only examples of widely used VLIW CPU architectures.



                  • #29
                    Originally posted by tuxd3v View Post

                    It could be that ARM and MIPS have some designs that are not affected,
                    But I believe that the future is VLIW..

                    We cannot afford CPUs that have the performance of microcontrollers (when working correctly) nowadays..
                    VLIW has nothing to do with anything here. You're mixing up computer-architecture concepts.



                    • #30
                      Originally posted by Raka555 View Post

                      This is very interesting and I like the idea.
                      So basically the compiler does the pipelining. It would solve this whole mess.
                      Are there any working CPUs that one can buy? It would be great if there were something like an RPi with this technology.
                      No. VLIW has nothing to do with what you are thinking of.
                      A VLIW machine is probably still a dynamically scheduled machine.
                      VLIW was a bullshit word, a way of mixing the two worlds (RISC and CISC) into an even bigger mess.
                      Most if not all VLIW architectures have gone the way of the dodo by now. Large DSPs are mostly dead; micro-DSPs can still use VLIW.

                      What you're probably thinking of is a fully statically scheduled machine like Intel's EPIC (IA-64 Itanium).
                      There you have full explicit control over instruction scheduling. I definitely distinguish between EPIC and VLIW;
                      to me, Itanium is a VLIW/EPIC machine.

                      VLIW/EPIC does not change the fact that a CPU has side channels. It only changes where ILP is extracted.

