You don't. The new version has had all encryption abilities stripped out. Anyone downloading it now can only decrypt existing volumes, not create new ones, presumably to guarantee that existing bugs and vulnerabilities cannot be exploited against new volumes created by people who assumed it was still secure.
Originally Posted by batGnat
Last edited by deadite66; 05-30-2014 at 03:41 AM.
There are solutions to this problem: https://www.schneier.com/blog/archiv...ring_trus.html
Originally Posted by zanny
Additionally, if you're worried about your CPU containing bugs, building one out of individual gates is about as difficult as writing a compiler, or less. (Admittedly, it will be about as powerful as a calculator, but you only need it for the compiler-verification process.) Individual gates are too simple to backdoor without obvious side effects.
The reason there's so much fuss over what the NSA has been doing is that a lot of it wasn't authorized by court order.
Originally Posted by Luke
Given the recent revelations that they've been bugging Cisco's hardware, I'd say it's perfectly reasonable to question if they've been targeting compilers.
OpenCores.org has quite a few CPU cores under various licenses.
Originally Posted by rdnetto
(Again, in that case, you need to know VHDL and be able to audit any hardware description file, and you also need to trust, or test/audit, any hardware on which you're going to synthesize an implementation.)
Sadly, the NSA won't even need a backdoor: BitLocker has been reported NOT to properly clear the memory regions holding its keys. (It's susceptible to cold boot attacks, etc.)
Originally Posted by Vistaus
NSA saves "high value" exploits for "high value" targets
This link shows that by targeting broadly distributed open-source compilers, they would risk being caught. For GCC, or anything else with open source and pre-existing binaries predating an NSA attack, the attack could be proven to have taken place. Therefore, the more infected copies distributed, the higher the risk that some hacker will find the attack and force a rebuild of the entire compiler line from code predating the attack.
Originally Posted by rdnetto
There is evidence that "high value" but detectable attacks by both the NSA and the FBI are held back most of the time, reserved for high-value targets. Think of it this way: suppose I released a compiler designed to put a keylogger known only to me into cryptsetup, with an eye towards cracking encrypted neo-Nazi websites. If I released it to everyone, someone other than the Nazis might find it; then the Nazis read about it here or on Twitter, it's all over their own boards, they switch to another compiler, and I am out of the game. If I instead talk a personal friend (or a date) working at the distro the Nazis get their compiler from into signing it with the distro's key but sending it only to the Nazis, it works unless the Nazis themselves find it.
The NSA is also capable of thinking in this manner. Example: if they put keylogging chips into ALL keyboards, their Tailored Access Operations (TAO) division would not need to intercept keyboards shipped by distributors to known enemies of the US regime in order to install their custom RF-enabled keyloggers, as the chips would already be present and waiting for remote activation. The disadvantage would be that some hardware hacker somewhere would find the chips and blow the whistle. The same is true for malicious NSA-installed BIOS code: it gets installed by TAO into machines being delivered to known or suspected enemies of the NSA's bosses. That way it takes a crack at the Guardian's reporters without getting caught by someone reverse-engineering the original BIOS while working on Coreboot.
Also, if the NSA uses keyloggers as their main countermeasure to encryption, the need to screw with compilers is reduced. Still, I would consider closed-source compilers, for which the test you linked to is impossible, malicious until proven otherwise, along with closed crypto, closed kernels, etc. Even China's MSS uses hardware keyloggers as much as possible.
Two words: Flame virus. This is exactly what happened (although it is unclear whether it was the NSA or some other intelligence agency). The thing went undetected for at least 5 years because it was only infecting computers in a specific region. It was even able to exploit a hole in Microsoft's security to distribute itself via Windows Update.
Originally Posted by Luke
A virus can't distribute hardware keyloggers
That's another attack. A hardware keylogger cannot be distributed by software. Even a 3D printer cannot make a computer chip, much less covertly install it in an existing keyboard. One exception might be software sent to a factory that makes keyboards, but in that case all the keyboards would be modified and one or more would be found. Hence the TAO interceptions of hardware in shipment, and my advice to buy randomly, on the spot, with cash only.
Originally Posted by TheBlackCat
Malicious BIOS "updates" have been distributed by attack programs, but this requires first determining exactly which motherboard and chipset are to be attacked, as a failed BIOS flash that bricks the board makes the attack useless for surveillance. If this attack were easy, the NSA would not bother intercepting computers being shipped to state-level and equivalent opponents to install malicious BIOS code.