After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs. One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it. Which brings us to today.
I’m really curious who at AMD thought it to be a great idea to develop a CUDA compatibility layer but not to release it. As stated, the release was only made because AMD ended financial support.
Probably a way to save face and not have AMD directly do it.
The problem is that if we make CUDA the standard, then we put nVidia in control of that standard. nVidia could try to manipulate the situation in future versions of CUDA by reworking it to fuck with this implementation, giving AMD a shaky name in the space.
We saw this happen with Wine, where although probably not deliberately, MS made Windows compatibility a moving and very unstable target.
That is something tolerable by open source communities, but isn’t something that will fly for official support.
I get that, but why would they fund development of ZLUDA for two years?
Reverse engineering CUDA can bring other benefits. It allows AMD to see what nVidia is doing right and potentially implement it in their own tech. Having not just documentation but a working implementation can work wonders in this regard.
Or maybe they did want to use it but were scared of getting SLAPPed by Nvidia, so instead they let the dev open source it.
Basically it means that AMD is now a possible contender for the rather large market of scientific researchers and private industry who have CUDA-based/oriented software for 'AI'-driven development or research on huge banks of GPUs.
This initial implementation probably still has some kinks to iron out, but it could eventually result in Nvidia no longer having a functional monopoly in that market.
Also, it's neat from a hobbyist perspective if you're looking to do some kind of small-scale CUDA-based work along the same lines.
Another common AMD W. So glad I got away from Nvidia. This will help my local work with LLMs nicely.
I’d say it’s more like they’re failing upwards. It’s certainly good for AMD, but it seems like it happened in spite of their involvement, not because of it:
For reasons unknown to me, AMD decided this year to discontinue funding the effort and not release it as any software product. But the good news was that there was a clause in case of this eventuality: Janik could open-source the work if/when the contract ended.
AMD didn’t want this advertised or released, and even canned this project despite it reaching better performance than the OpenCL alternative. I really don’t get their thought process. It’s surreal. Do they not want to support AI? Do they not like selling GPUs?
I really don’t get their thought process. It’s surreal.
Maybe they see it as something that would undermine their efforts to increase ROCm/HIP adoption? (But why fund its development for two years then? I agree with you: it all seems so weird!)
Can someone please explain like I’m five what the meaning and impact of this will be? Past posts and comments don’t seem to be very clear. As someone who uses both Linux and macOS professionally for design, this could be a massive game changer for me.
If you already have a CUDA workflow and want to use an AMD card, this library lets you do that.
That includes stuff like Stable Diffusion that recommended nvidia cards because it uses CUDA to accelerate image generation?
So does it work with off-the-shelf software, or is it something the developer has to patch in?
The point of a drop-in replacement is that no patching is required, but in reality the software was released in an incomplete state.
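To sketch what "drop-in" means here: on Linux, the usual approach for a replacement library like this is to point the dynamic linker at it, so an unmodified CUDA binary loads the substitute instead of NVIDIA's driver library. Something roughly like the following, with a purely illustrative install path:

```shell
# Hypothetical path - adjust to wherever the ZLUDA release was unpacked.
# The application still asks for libcuda.so, but the dynamic linker finds
# the replacement implementation first, which routes the CUDA calls to an
# AMD GPU instead of NVIDIA's driver. No recompile or patch of the app.
LD_LIBRARY_PATH="/opt/zluda:$LD_LIBRARY_PATH" ./my_cuda_app
```

Check the project's own README for the exact, current invocation; the path and mechanism above are an assumption based on how `LD_LIBRARY_PATH` substitution generally works.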
ok, I get that much. What I'd like to know, if you're willing to explain: what's it going to be like deploying that on, say, a Mac workstation? A Pop!_OS workstation? (edit: e.g., how can I use it on macOS, and will it work with After Effects, etc.?)
thanks for your time
Your question is legitimate, but chances are that you will need to find the answers yourself by reading the docs.
CUDA is a way for a program to use an NVIDIA GPU in addition to the CPU for some complicated calculations. This AMD-funded work now makes it possible to use AMD cards for that too.
I know what CUDA does (as someone who likes rendering stuff, but with AMD cards, I've missed it). I'm trying to figure out, realistically, how I can easily deploy and make use of it on my Linux and Mac workstations.
the details I've come across lately have been a bit… vague.
edit: back when I was in design school, I heard, “when Adobe loves a video card very much, it will hardware accelerate. We call this ‘CUDA’."
You can’t use it with programs that aren’t specifically coded to use it. Outside of hash cracking, AI training and crypto mining, few programs are.
If you mean from a developer perspective, you need to download the CUDA libraries and read through the documentation.