But in all fairness, it’s really llama.cpp that supports AMD.

Now looking forward to Vulkan support!