Originally posted by NobodyXu
View Post
https://docs.microsoft.com/en-us/arc...-native-c-code Yes, back in 2005 this was already embedding the .NET runtime in any application that supports a C/C++ interface.
But that is still nowhere near the support of compiling C++ to wasm.
Originally posted by NobodyXu
View Post
C and C++ do not have real code safety. Also, some code that builds with gcc does not build with clang: the 128-bit integer differences and others. Emscripten lets you use gcc-llvm and clang to get C and C++ to wasm, and yes, the clang-vs-gcc issues of using different 128-bit integer formats and others raise their heads there.
The reality is ssokolow's idea that wasm fixes the foreign function interface is wrong. These are the same problems as with the JVM, .NET, Parrot and others. Either you restrict the language by mandating a foreign function interface, leading to mismatch problems (.NET/Java), or you don't restrict it and get incompatibilities, and yes, WASM is having incompatibilities (wasm/Parrot and others).
Programming languages are not in agreement on lots of things, and implementations like clang vs gcc are not in agreement on lots of things either. Rust is having to get a gcc backend so that Rust is compatible with gcc-built C. I expect wasm to fracture in the same ways: when mixing languages and compiler implementations, you will have issues.
Originally posted by NobodyXu
View Post
The ARM MTE you are kind of referring to is not as bulletproof as one would hope. With ARM MTE it is still possible to buffer overflow, because allocations of memory next to each other can end up with the same tag.
Yes, ARM MTE costs 4 bits of extra memory for every 16 bytes of memory. With only 16 possible tag values, key collisions are absolutely likely.
We are not yet seeing a proper fix to these problems in silicon. MTE is an improvement over nothing, but it is not the fix.
No, ARM did not end up using 128-bit addresses. Different mainframes use 128-bit addresses for protection. In one scheme, the first 64 bits do not change once the memory is allocated and point to where the end of the allocation is stored. This is simple: when the CPU processes the 128-bit address it first checks that the second half is not less than the first half plus 8 bytes, then uses the 8-byte end value stored at the first-half location to validate that the address in the second half is not past it. Yes, this has a performance cost, but for mainframe users, where errors are really bad, that is not a problem. It also means metadata about a value can be placed before its start. The mainframe scheme prevents both buffer overflows and buffer overreads.
Next there is the pointer-and-offset scheme, where the pointer again points to the allocation size. Then there is the allocation-table-plus-pointer scheme. Yes, these all take a 128-bit address and split it in two, with the first half pointing to something that validates the pointer operation.
There are ways to make CPU instructions disallow buffer overflow problems. All the ones that absolutely work have a performance cost. MTE and other tagging schemes like it trade safety for performance: less protection from buffer overflows, but still better than nothing, with very low performance overhead because there is no add/subtract and no need to access other memory.
I have seen this problem enough times that it is a lot harder to sell me a magical fix. The way I see it, the only fix is for major CPUs to include instructions for pointers and storage that allow extra metadata enforceable by the CPU; then programming language writers will have to alter their languages to suit.