

If I’m reading your example right, I don’t think that would satisfy the “three” part either. Three copies of the data on the same filesystem, or even on the same system, don’t satisfy the “three backups” rule, because the only thing you’re really protecting against is user error, i.e. accidental deletion or modification. You’re not protecting against filesystem corruption or system failure.
For a (slightly hyperbolic) example: if you put the system that holds your live data through a wood chipper, could you use one of the other copies to recover your critical data? If yes, it counts. If no, it doesn’t.
Snapshots have the same issue, because at its root a snapshot is just another copy of the data living on the same system. There’s automation, deduplication, and other features baked into the snapshot process, but it’s basically a fancy copy function.
Edit: all of the above is also why the saying “RAID is not a backup” holds true.
Might be a bit late on this, but Proxmox doesn’t really handle assigning threads to the E/P cores itself. That’s handled by the Linux kernel, and as long as you’re running kernel 6.1 or newer you should be good on that front.
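If you want to sanity-check that on your node, a couple of read-only commands will tell you. The sysfs paths are how recent kernels expose the hybrid core layout on Intel chips as far as I know, so treat them as an assumption if your hardware is different:

# what kernel the host is actually running
uname -r

# on hybrid Intel CPUs, which logical CPUs are P-cores vs E-cores
cat /sys/devices/cpu_core/cpus
cat /sys/devices/cpu_atom/cpus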
If you really need to, you can also pin specific VMs to specific cores, so that a guest that always needs the performance always runs on the P-cores and less demanding guests always run on the E-cores.
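In Proxmox that’s the VM’s CPU affinity setting (added around PVE 7.3, if I remember right). A rough sketch of what that looks like with qm, where the VM IDs and core ranges are placeholders for whatever your own layout actually is:

# keep the demanding VM on the P-core threads (say 0-7 on your chip)
qm set 100 --affinity 0-7

# keep a background VM on the E-cores (say 8-15)
qm set 101 --affinity 8-15

The affinity value is a normal CPU list (things like 0,2,8-11), same style you’d feed to taskset.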
That said, especially if you’re overprovisioning, it’s probably better to let the kernel’s scheduler handle thread assignment.