I currently have two machines: a 24/7 Linux server (an old office PC) for self-hosting, and a desktop mostly for programming and playing games (Linux host plus a Windows VM with a passed-through GPU). The server’s i5-3330 usually sits at ~10-15% usage.

Here’s the actual idea: what if, instead of having a separate server and desktop, I had one beefy computer that ran 24/7 as a server and just spun up a Linux or Windows VM whenever I needed a desktop? GPUs and USB devices would be passed through, and I could buy a PCIe SATA or NVMe controller to pass through as well, so I wouldn’t have to worry about virtualized disk overhead.
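For the controller part, the kernel side would look roughly like this (a sketch, not a guide: the vendor:device ID below is just an example for a Samsung NVMe controller, find yours with lspci -nn; also, intel_iommu=on is the Intel-specific bit, since on AMD the IOMMU is usually enabled by default):

```bash
# /etc/default/grub -- enable the IOMMU and hand the NVMe controller to
# vfio-pci at boot so the host never touches it. 144d:a808 is an example ID
# (a Samsung controller); find yours with: lspci -nn | grep -i nvme
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt vfio-pci.ids=144d:a808"

# then regenerate the grub config and reboot (Debian/Ubuntu style)
sudo update-grub
```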

I’m almost certain I could make this work, but I wonder if it’s even worth it: would it actually consume less power? What about wear on the components from staying powered on 24/7? Accessing the NAS would certainly be faster without the whole “Network-Attached” part, and powering on the desktop for remote access could just be a command over SSH instead of some convoluted remote WoL setup that I haven’t bothered with yet.
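(For concreteness, that SSH command really is just a one-liner; "homeserver" and "desktop" are placeholder names here:)

```bash
# start the desktop VM from anywhere instead of fiddling with WoL
ssh homeserver 'virsh -c qemu:///system start desktop'

# and a clean ACPI shutdown when done
ssh homeserver 'virsh -c qemu:///system shutdown desktop'
```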

I’d love to hear your thoughts on this.

Edit 2 months later: Just bought a 7950X3D; the 3D V-cache half of it runs a virtualized desktop, while the other cores run the host and the remaining VMs. It works perfectly when passing through a dedicated GPU, but iGPU passthrough seems very difficult if not impossible; I couldn’t manage it.
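Roughly the shape of the pinning, as a sketch rather than my literal config (it assumes the V-cache CCD shows up as host cpus 0-7 with SMT siblings 16-23; verify yours with lscpu -e before copying anything):

```bash
# pin a 16-vCPU desktop VM ("desktop" is a placeholder name) onto the V-cache
# CCD, leaving the other CCD for the host and the remaining VMs
for v in $(seq 0 7); do
  virsh vcpupin desktop "$v" "$v" --config                  # vCPUs 0-7 -> first threads
  virsh vcpupin desktop "$((v + 8))" "$((v + 16))" --config # vCPUs 8-15 -> SMT siblings
done
```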

Edit even later-er: iGPU passthrough is possible on Ryzen 7000 after all; everything works great now.
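For anyone finding this later, the recipe floating around the community is roughly the following (the IDs are examples, and the audio function ID in particular varies, so check lspci -nn on your own box):

```bash
# bind the Raphael iGPU and its HDMI audio function to vfio-pci at boot;
# 1002:164e is the commonly reported iGPU ID -- confirm yours with:
#   lspci -nn | grep -iE 'vga|audio'
echo 'options vfio-pci ids=1002:164e,1002:1640' | sudo tee /etc/modprobe.d/vfio.conf
sudo update-initramfs -u   # Debian/Ubuntu; rebuild so vfio-pci binds early

# most guides also pass an extracted vBIOS ROM to the guest via the
# <rom file='...'/> element on the hostdev in the libvirt XML -- that tends
# to be the fiddly part
```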

  • BombOmOm@lemmy.world · 1 year ago
    I personally do something similar and have several VMs on my main computer that perform various functions. As they are not particularly resource intensive, I have never had an issue with it. I also went the lazier route and run games directly on the hypervisor, not in a VM.

    For you, GPU passthrough is the main hurdle. It is surmountable, but it isn’t as simple as other parts of VM setups. If you can get that part working well, everything else should fall into place.
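    The first sanity check is whether your IOMMU groups are clean, i.e. the GPU (and anything else you want to hand over) sits in its own group. The usual snippet for that:

    ```bash
    # list every IOMMU group and the devices in it; a device can only be
    # passed through cleanly if its whole group goes with it
    shopt -s nullglob
    for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU Group ${g##*/}:"
      for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
      done
    done
    ```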

    Also, for the sake of your own sanity, do not try to ‘share’ the GPU between the hypervisor and a VM. Use the onboard GPU for the hypervisor (or a baby add-in GPU if you don’t have onboard).
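    One way to double-check the separation (the PCI address is an example; find yours with lspci | grep -i vga):

    ```bash
    # the guest GPU should report "Kernel driver in use: vfio-pci", while the
    # onboard/baby GPU the hypervisor renders on stays on amdgpu/i915/nouveau
    lspci -nnk -s 01:00.0
    ```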