I have a virtualized environment (Armv8) where I have a virtual machine (EL1 and EL0) alongside some "native" applications running at EL0. I am using stage 2 translation to isolate both the VM and the native apps, and the VM controls the EL1&0 translation regime. However, one native application requires a larger address space than the maximum intermediate physical address size allows. I would like to run this application with only one translation stage, so it can use address sizes (48 bits) larger than the maximum IPA size (44 bits).
Is it possible to disable stage 2 translation from the hypervisor before executing this application and re-enable it before executing the VM? Does anyone see any potential problems with this approach, and is anyone aware of a different way to achieve this?
Thanks in advance!
The "native" application is executing directly under the hypervisor, not in a VM, correct?
If so, what you've described is effectively what hypervisors such as KVM do. There is a host OS+hypervisor which runs at EL0/EL2, using stage 1 translation only. Then there are VMs which run at EL0/EL1 with stage 1 + stage 2 translation, where the stage 2 translation provides isolation between VMs, and between the host and the VMs.
I'm not a KVM expert, but I believe KVM also works on Armv8.0-A (i.e. pre-VHE) devices.
You could treat the native application as a pseudo-VM which just isn't subject to stage 2 (you should still give it a unique VMID). Or, when running the native app, you could set HCR_EL2.TGE, which disables entry to EL1 and allows EL2 to host an application at EL0. There are subtleties to both approaches, so do read up on the precise effects of TGE in particular.
Thanks for the quick reply and the explanation, I will look into it further.