I was testing out automatic disk decryption with the TPM and Secure Boot in a QEMU/KVM VM. I had Arch set up with a UKI, and I followed this process to enable Secure Boot and enroll the keys, which all worked fine. I then used systemd-cryptenroll to unlock the drive automatically, which again worked great.
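For reference, the TPM enrollment was the stock systemd-cryptenroll approach, roughly this (the device path is just an example from my setup):

    # seal the LUKS key against PCR 7, which measures the Secure Boot state
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/vda2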
This is the part where I messed up, and I’m not quite sure how or why. I wanted to check that disabling Secure Boot prevented the drive from unlocking, so I enabled the boot menu in the VM settings, entered it, and reset the Secure Boot settings. As expected, I then needed to enter the password again. To re-enable it, I re-ran sbctl enroll-keys -m to re-enroll the keys and rebooted, since my UKI was already signed. And that was that: the VM was completely dead. Whether I try to boot from the virtual disk, the virtual CD drive, or even the virtual network adapter, all I get is a black screen with “Display output is not active”. I can’t even enter the firmware menu again, because I no longer get that prompt.
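Reconstructing from memory, the re-enable step was literally just this, with nothing regenerated or deleted in between:

    # re-enroll the existing sbctl keys; -m also enrolls Microsoft's vendor certificates
    sbctl enroll-keys -m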
It doesn’t really matter that this happened, and I don’t need to fix it, since it was just a throwaway VM which I’ve now deleted. But I would like to know what caused it, so I can avoid potentially bricking real hardware in the future.


Could be a UEFI bug in the VM itself:
https://wiki.archlinux.org/title/Unified_Extensible_Firmware_Interface/Secure_Boot#Using_your_own_keys
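If it was the variable store getting into a bad state, one thing worth knowing is that on QEMU/libvirt the store is just a per-VM copy of the OVMF template, so a reset would have been something like this (the paths vary by distro and the VM name here is a placeholder):

    # with the VM shut off, overwrite its NVRAM with a pristine OVMF template;
    # this wipes all enrolled keys and Secure Boot state for that VM
    cp /usr/share/OVMF/OVMF_VARS.fd /var/lib/libvirt/qemu/nvram/myvm_VARS.fd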
Could also be that you didn’t sign your boot image, since that command seems to load the Secure Boot signing keys into the UEFI firmware. If you cleared the other signing keys in the process, then potentially no code can load at all. You would have to enroll the keys for whatever UEFI firmware vendor is used (presumably those of the VM software maker) or sign everything yourself, etc.
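If that is what happened, it should be visible before rebooting; something like this (the UKI path is just an example, adjust for your ESP layout):

    # check setup mode / Secure Boot state and whether vendor keys are present
    sbctl status

    # list every file sbctl tracks and whether its signature is currently valid
    sbctl verify

    # sign the UKI (and keep it in sbctl's database) if it isn't signed
    sbctl sign -s /efi/EFI/Linux/arch-linux.efi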
I’d have thought that would have happened the first time I enrolled the keys though, since I used the exact same command both times?