The MI25 is a great deal for hobbyists even if the power draw is high, but would it work with local models like Falcon or LLaMA?
I know these cards have a different memory bus width, but I'm unsure whether that would fundamentally cause problems for open-source models.
I guess it depends on what you mean by usable. People have had success with ROCm; it's not as solid as CUDA, of course, but it's been more than usable.
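For anyone wondering whether those models would even fit in the MI25's 16 GB of HBM2, here's a back-of-the-envelope sketch. It only counts the model weights (KV cache and activation overhead are ignored, so treat the "fits" verdict loosely), and the parameter counts are just the commonly quoted sizes:

```python
# Rough VRAM estimate for model weights only, ignoring KV cache,
# activations, and framework overhead.
def weight_gb(params_billions: float, bits: int) -> float:
    """Size of the weights in GB at the given precision."""
    return params_billions * bits / 8  # e.g. 7B at 16-bit -> 14.0 GB

MI25_VRAM_GB = 16  # the MI25 ships with 16 GB of HBM2

for name, params in [("LLaMA 7B", 7), ("Falcon 7B", 7), ("LLaMA 13B", 13)]:
    for bits in (16, 8, 4):
        gb = weight_gb(params, bits)
        verdict = "fits" if gb < MI25_VRAM_GB else "too big"
        print(f"{name} @ {bits}-bit: ~{gb:.1f} GB ({verdict})")
```

So a 7B model in fp16 just barely squeezes into 16 GB with little room for the KV cache, while quantized (8-bit or 4-bit) variants leave plenty of headroom, and even 13B works quantized.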