Call Depth Tracking Patches Updated For Better Mitigating Retbleed On Linux
As for what Peter has been working on with this Call Depth Tracking code, he explained with the v2 patch series:
This version is significantly different from the last in that it no longer makes use of external call thunks allocated from the module space. Instead every function gets aligned to 16 bytes and gets 16 bytes of (pre-symbol) padding. (This padding will also come in handy for other things, like the kCFI/FineIBT work.)
Prior to these patches function alignment is basically non-existent, as such any instruction fetch for the first instructions of a function will have (on average) half the fetch window filled with whatever comes before. By pushing the alignment up to 16 bytes this improves matters for chips that happen to have a 16 byte i-fetch window size (Intel) while not making matters worse for chips that have a larger 32 byte i-fetch window (AMD Zen). In fact, it improves the worst case for Zen from 31 bytes of garbage to 16 bytes of garbage.
As such, many of the first patches in the series fix up lots of alignment quirks.
The second big difference is the introduction of struct pcpu_hot. Because the compiler managed to place two adjacent (in code) DEFINE_PER_CPU() variables in random cachelines (it is absolutely free to do so) the introduction of the per-cpu x86_call_depth variable sometimes introduced significant additional cache pressure, while other times it would sit nicely in the same line with preempt_count and not show up at all.
In order to alleviate this problem, introduce struct pcpu_hot and collect a number of hot per-cpu variables in a way the compiler can't mess up.
As more background information on Call Depth Tracking for mitigating Retbleed:
Aside from these changes, the core of the depth tracking is still the same.
- objtool creates a list of (function) call sites.
- for every call; overwrite the padding of the target function with the accounting thunk (if not already done) and adjust the call site to target this thunk.
- the retbleed return thunk mechanism is used for a custom return thunk that includes return accounting and does RSB stuffing when required.
This ensures no new compiler is required and avoids almost all overhead for non-affected machines. This new option can still be selected on the kernel command line.
The Return-Stack-Buffer (RSB) is a 16 deep stack that is filled on every call. On the return path speculation will "pop" an entry and take that as the return target. Once the RSB is empty, the CPU falls back to other predictors, e.g. the Branch History Buffer, which can be mistrained by user space and misguides the (return) speculation path to a disclosure gadget of your choice -- as described in the retbleed paper.
Call depth tracking is designed to break this speculation path by stuffing speculation trap calls into the RSB whenever the RSB is running low. This way the speculation stalls and never falls back to other predictors.
The assumption is that stuffing at the 12th return is sufficient to break the speculation before it hits the underflow and the fallback to the other predictors. Testing confirms that it works. Johannes, one of the retbleed researchers, tried to attack this approach and confirmed that it brings the signal to noise ratio down to the crystal ball level.
The benchmark results are looking very promising.
All the details and the newest Call Depth Tracking v2 patches for the Linux kernel can be found via this mailing list thread.