My office computer has a Ryzen 7 5700, an RX 580X, and 32 GB of RAM. Running Ollama with DeepSeek-V2 or Llama 3 is much slower than ChatGPT in the browser. The same is true on my newer, more powerful home computer.
What kind of hardware do you need to get responsiveness comparable to ChatGPT? How much does it cost? And presuming such hardware is commercially available, where do you find it?
You’d basically need a small server rack filled with datacenter GPUs. Expect costs in the mid-to-high five figures (USD).
But: running smaller models on a typical gaming GPU is quite doable.
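For example, here’s a minimal sketch using the official `ollama` Python client (`pip install ollama`) to stream a reply from a small quantized model. The specific model tag and the VRAM figure in the comment are assumptions for illustration; any small quant that fits your card works the same way:

```python
# Minimal sketch: chat with a small local model via the Ollama Python client.
# Assumes the Ollama server is running locally and a small quantized model has
# already been pulled, e.g. `ollama pull llama3:8b-instruct-q4_K_M`.
import ollama

MODEL = "llama3:8b-instruct-q4_K_M"  # example tag; a 4-bit 8B quant fits in roughly 6 GB of VRAM

# Stream tokens as they are generated, so you see responsiveness immediately.
stream = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Explain VRAM vs system RAM briefly."}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()
```

The main thing is keeping the entire quantized model in VRAM: once layers spill over into system RAM, generation speed drops off a cliff, which is likely what you’re seeing on the RX 580X.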