🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf,...
Hm, so it downloads fixed models and works without an internet connection? Interesting.
Right, you can download any publicly available model and run it without using the internet. The caveat is that you need a relatively fast machine for it to perform well.
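If you want to poke at it programmatically: since it's a drop-in replacement for OpenAI, the stock openai Python client works once you point it at the local server. A minimal sketch, assuming LocalAI's default port 8080 and a model name you've configured yourself:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # LocalAI listens on 8080 by default
    api_key="not-needed",                 # local server; no real key required
)

resp = client.chat.completions.create(
    model="qwen2.5-coder-14b",  # assumed: whatever model name you set up locally
    messages=[{"role": "user", "content": "Say hello without using the network."}],
)
print(resp.choices[0].message.content)
```

Nothing in that exchange touches the internet; the client just speaks the OpenAI wire format to the process running on your own machine.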
For reference, the oldest card I have that Vulkan supports is an RX 560 I bought in 2017 (I'm on GNU/Linux with amdgpu and the RADV Mesa driver, a.k.a. "the default"). Most medium-sized models run at around 6-10 tokens/s on it. Some crawl to below 6 tokens/s, though, and get slower the longer their output grows, probably because parts of the model sit in system RAM since that card has "only" 4 GB of VRAM. Models that fit entirely in VRAM are a lot faster.
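The spillover is easy to sanity-check with rough arithmetic: weight footprint is roughly parameter count times bits per weight. A quick sketch, using a ballpark bits-per-weight figure for Q4 GGUF quants (an assumption; real numbers vary by quant, and this ignores KV cache and runtime overhead):

```python
# Rough estimate of why a model spills out of a 4 GB card into system RAM.
def est_gib(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GiB (weights only, no KV cache)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

print(f"7B  @ ~4.5 bpw: {est_gib(7, 4.5):.1f} GiB")   # ~3.7 GiB: just squeezes into 4 GB
print(f"14B @ ~4.5 bpw: {est_gib(14, 4.5):.1f} GiB")  # ~7.3 GiB: has to spill into RAM
```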
I can run Qwen 2.5 Coder 14B Q4_K_M on the CPU at only a little above 1 t/s, but it's worth it when I just want it to look at whatever code I have without disclosing it to corporations that don't have my best interests in mind.
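That code-review workflow is just a chat completion with the file contents in the prompt, so everything stays on the machine. A sketch along the same lines as above, with hypothetical file and model names:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

code = Path("my_module.py").read_text()  # hypothetical file you want reviewed
resp = client.chat.completions.create(
    model="qwen2.5-coder-14b",  # assumed local model name
    messages=[{
        "role": "user",
        "content": "Review the following code for bugs:\n\n" + code,
    }],
)
print(resp.choices[0].message.content)
```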