- cross-posted to:
- amd@lemmy.world
- aicompanions@lemmy.world
- selfhosted@lemmit.online
But in all fairness, it’s really llama.cpp that supports AMD.
Now looking forward to the Vulkan support!
I’ve been using it with an RX 6800 for a few months now; all it needs is a few env vars.
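For anyone curious what "a few env vars" typically looks like, here is a sketch of the kind of setup commonly reported for llama.cpp's HIP/ROCm backend on an RX 6800. The variable names are real ROCm knobs, but the exact values (and whether you need them at all) depend on your card and ROCm version; the invocation at the end is illustrative, not a required command.

```shell
# Tell ROCm to treat the card as gfx1030 (the RX 6800's ISA). Mostly needed
# for cards missing from ROCm's official support list; harmless on a 6800.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Pin ROCm/HIP to the first GPU if the machine has more than one.
export HIP_VISIBLE_DEVICES=0

# Hypothetical invocation: -ngl offloads model layers to the GPU.
# ./llama-cli -m model.gguf -ngl 99 -p "Hello"
```

With a supported card this is often the whole setup; the overrides matter most for GPUs (like some RDNA2 laptop/mid-range parts) that ROCm doesn't officially enumerate.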