So almost no security impact and no performance change?
You must be new here.
Joking. But seriously, on Linux you can disable mitigations and get a significant performance boost with basically no security impact if you’re not a cloud provider. It depends on your CPU and risk profile, of course.
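For context, the blanket switch on recent kernels is the `mitigations=` boot parameter. A minimal sketch, assuming a GRUB-based distro (the file path and the extra flags in the quoted string are illustrative; `mitigations=off` itself is a real kernel parameter):

```shell
# /etc/default/grub -- append mitigations=off to the kernel command line,
# then regenerate the config (update-grub or grub2-mkconfig) and reboot.
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash mitigations=off"

# Afterwards you can see what the kernel thinks its exposure is:
#   grep . /sys/devices/system/cpu/vulnerabilities/*
```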
Running untrusted JavaScript code from the internet without security mitigations is a bad idea. It’s maybe excusable for servers, but it still increases the risk of a container breakout if one of the 100 containers you’re running is attacked.
Yeah… I mean, I did hedge by saying “depends on your CPU and your risk profile”, but I take your point and will edit my comment to caution readers before they play with foot-seeking firearms.
From my understanding it’s a mixed bag. Some of those vulnerabilities were little more than theoretical, exploitable only from within high levels of trust, like this one. That matters if you’re running PaaS/IaaS workloads like AWS or GCP do, where you need to keep unknown workloads safe from each other, and your hypervisor safe from unknown workloads.
Others were super scary, direct-access-to-in-memory-processes type vulnerabilities. On Linux you can disable certain mitigations while keeping others, so in theory you could find your way to better performance at a near-zero threat increase. But yes, better safe than sorry.
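The selective version of the above is to leave `mitigations=auto` alone and turn off individual knobs instead. A sketch, again assuming x86 and a GRUB-style config; the three parameters named in the comments are real kernel command-line options, but which ones are safe to drop depends entirely on your CPU and workload:

```shell
# Keep most mitigations, switch off selected ones, e.g.:
#   nopti                          - disable KPTI (the Meltdown page-table fix)
#   spectre_v2=off                 - disable the Spectre v2 mitigation
#   spec_store_bypass_disable=off  - disable the SSBD (Spectre v4) mitigation
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nopti spectre_v2=off"
```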
I don’t think this will affect performance unless you depend on updating the CPU microcode multiple times a second.
I apologize for being glib.
Agreed, it shouldn’t affect performance. But it also depends on how they decide to patch the vulnerability. The microcode-update mechanism is the currently understood vector, but it might not be the only way to exploit the actual underlying vulnerability.
I remember the early days of Spectre, when the mitigation was “disable branch prediction”; later they patched in a more targeted, performant solution.