✨ We got a bunch of Steam games to run on Asahi Linux!!! ✨
Most of them run at a solid 60FPS and all of them are playable on my M2 Pro~ 🚀
All running on a krun microVM with FEX and full TSO support 💪
I was not expecting Party Animals to run! That's a DX11 game, running with the classic WineD3D on our OpenGL 4.6 driver! 🤯
Watch the stream:
▶️ https://youtube.com/live/JT9a_MrFV18
So how does that work given that most Steam games are x86/x64 and the M2 is an ARM processor? Does it emulate an x86 CPU? Isn’t that slow, given that it’s an entirely different architecture, or is there some kind of secret sauce?
Emulation.
Definitely going to incur a performance hit relative to native code, but in principle it could be perfectly good. It’s not like the GPU is running x86 code in the first place. On macOS, Apple provides Rosetta to run x86 Mac apps, and it’s very, very good. Not sure how FEX compares.
Virtualization, actually. Don’t know why, though.
Why not click the link and find out? It’s literally a Mastodon post; you don’t even have to read much.
The post doesn’t answer the questions; that’s why I asked.
It says:
> All running on a krun microVM with FEX and full TSO support 💪
Now I know some of these words, but it does not answer my question.
Man, idk why everyone is being a dick to you. A microVM is a lightweight virtual machine that only provides a minimal set of virtual hardware instead of a full emulated machine (closer in spirit to a container). FEX is an x86 emulator. So yes, this is emulation.
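For a concrete sense of what "emulation" means here, below is a minimal sketch of an interpreter for a tiny, made-up subset of x86. This is not FEX's actual design (FEX JIT-compiles blocks of x86 code into native ARM64 code instead of interpreting one instruction at a time), but the basic idea is the same: the guest CPU state lives in host memory, and host code carries out each guest instruction.

```c
/* Minimal sketch of CPU emulation: an interpreter for a tiny, made-up
 * subset of x86. FEX itself is far more sophisticated (it JIT-compiles
 * blocks of x86 code into native ARM64 code), but the core idea is the
 * same: guest registers live in a struct, and host code executes each
 * guest instruction. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t rax, rbx;  /* a couple of guest registers */
    uint64_t rip;       /* guest "instruction pointer" into the program */
} GuestCpu;

/* Fake, simplified opcodes for illustration only. */
enum { OP_HALT = 0, OP_MOV_RAX_IMM = 1, OP_ADD_RAX_RBX = 2 };

static void run(GuestCpu *cpu, const uint64_t *code) {
    for (;;) {
        switch (code[cpu->rip]) {
        case OP_MOV_RAX_IMM:              /* mov rax, imm */
            cpu->rax = code[cpu->rip + 1];
            cpu->rip += 2;
            break;
        case OP_ADD_RAX_RBX:              /* add rax, rbx */
            cpu->rax += cpu->rbx;
            cpu->rip += 1;
            break;
        case OP_HALT:
        default:
            return;
        }
    }
}

int main(void) {
    GuestCpu cpu = { .rbx = 2 };
    const uint64_t program[] = { OP_MOV_RAX_IMM, 40, OP_ADD_RAX_RBX, OP_HALT };
    run(&cpu, program);
    printf("guest rax = %llu\n", (unsigned long long)cpu.rax);  /* prints 42 */
    return 0;
}
```

A pure interpreter like this is slow; the reason FEX (and Rosetta) get much closer to native speed is that they translate each block of guest code once and then run the result directly as native code.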
Thanks.
So my real question is this: common wisdom is that emulating a whole CPU architecture is a performance killer. Does that apply here, and are they just running games that can take the hit? Or, phrased differently: given that it’s emulated, could this ever have near-native (CPU) performance, or nah?
Probably not, but it might be good enough. I’m not an expert in architecture emulation by any stretch, but it might work best for older games where modern performance is a non-issue either way.
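One concrete reason the hit can stay manageable is the "full TSO support" mentioned in the post. x86 guarantees total store order (TSO), while ARM's memory model is weaker, so an emulator normally has to insert memory barriers around guest loads and stores to preserve x86 semantics, which is costly for multithreaded code. Apple's cores have a hardware TSO mode (Rosetta relies on it), and the post's "full TSO support" presumably means that mode is available inside the microVM too, letting FEX skip those barriers. The classic message-passing pattern below, written with relaxed atomics to model what plain x86 mov instructions give you at the hardware level (compiler reordering set aside), shows the gap that would otherwise have to be bridged; it is a standalone illustration, not code from FEX or Asahi.

```c
/* Message-passing litmus test, illustrating the memory-model gap an
 * x86-on-ARM emulator has to bridge. Standalone illustration, not code
 * from FEX or Asahi. Build with: cc -pthread mp.c */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static atomic_int data  = 0;
static atomic_int ready = 0;

static void *producer(void *arg) {
    (void)arg;
    atomic_store_explicit(&data, 42, memory_order_relaxed);
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_relaxed))
        ;  /* spin until the flag is observed */
    /* On x86 (TSO) the hardware never reorders store/store or load/load,
     * so once ready == 1 is seen, data == 42 is guaranteed. ARM's weaker
     * model allows either reordering, so without added barriers this can
     * print 0. An emulator therefore has to emit barriers around guest
     * memory accesses, unless the core runs in its hardware TSO mode,
     * which is presumably what the post's "full TSO support" refers to. */
    printf("data = %d\n", atomic_load_explicit(&data, memory_order_relaxed));
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```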
To answer your question, I’m sure for the x86_64-specific code (x86 and x64 aren’t separate things, btw) it uses some sort of emulator or translation layer. Idk WTF “microVM with FEX” is, maybe that’s it?
But for the DX11 part, that’s just the normal DirectX to Vulkan/OpenGL translation layer, e.g. WineD3D.
There’s actually nothing that special about DirectX on ARM; it’s the same API. The translation layer just takes those API calls from DirectX 11 and translates them to the equivalent in OpenGL, and then the Asahi Linux OpenGL driver takes care of actually executing those commands on the GPU.
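As a toy illustration of what such a layer does (a hypothetical, heavily simplified sketch, not WineD3D's actual code): when the game calls ID3D11DeviceContext::Draw, the layer maps the bound D3D state and arguments onto the equivalent OpenGL call, and the Asahi driver handles it from there like any other GL workload.

```c
/* Hypothetical, heavily simplified sketch of a D3D11-to-OpenGL translation
 * for a single draw call. Real WineD3D tracks and translates far more state
 * (shaders, buffers, render targets, formats); the point here is only that
 * each D3D11 call has an OpenGL counterpart. */
#include <GL/gl.h>

/* Simplified stand-ins for the D3D11 state involved (not the real types). */
typedef enum {
    TOPOLOGY_TRIANGLELIST,
    TOPOLOGY_TRIANGLESTRIP,
    TOPOLOGY_LINELIST,
} PrimitiveTopology;

struct device_context {
    PrimitiveTopology topology;  /* set earlier via IASetPrimitiveTopology */
};

static GLenum gl_primitive_mode(PrimitiveTopology t) {
    switch (t) {
    case TOPOLOGY_TRIANGLESTRIP: return GL_TRIANGLE_STRIP;
    case TOPOLOGY_LINELIST:      return GL_LINES;
    case TOPOLOGY_TRIANGLELIST:
    default:                     return GL_TRIANGLES;
    }
}

/* The D3D11 call ID3D11DeviceContext::Draw(VertexCount, StartVertexLocation)
 * ends up, after state translation, as a plain OpenGL draw call, which the
 * Asahi OpenGL driver then turns into commands for the GPU. */
void translated_draw(struct device_context *ctx,
                     unsigned vertex_count, unsigned start_vertex) {
    glDrawArrays(gl_primitive_mode(ctx->topology),
                 (GLint)start_vertex, (GLsizei)vertex_count);
}
```

Most of the real work (and overhead) in a layer like WineD3D is elsewhere: translating shaders and resource formats and keeping the two APIs' state models in sync.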
You asked how it works; the post states how it works. You also asked if it’s slow, which is clearly answered in the post (though you didn’t quote that part). You also asked if there’s some “secret sauce” allowing it to be fast, which is also a weird question since everything used is listed in the post.
If something wasn’t clear to you, why not specifically ask about it? Even in this comment, you still don’t specify what you don’t understand. What kind of answer are you expecting to get?
deleted by creator
You can Google the words you don’t know, and find out that it does in fact answer your question.
Yeah god forbid people have some interesting discussion on this platform, right?
Just to offer some support: you’re right, and those are good questions.