Would it be possible to run AI on an AMD GPU instead of Nvidia?

  • glibg10b · 9 months ago

    Yes, it’s about as easy as on Nvidia

    • remotelove@lemmy.ca · 9 months ago

      Not nearly as flexible though.

      I have a 7900 XTX I would like to use for AI. I tried to get ollama working on my Windows gaming PC last weekend through Docker and WSL, but that was a pain.

      It seems that PyTorch might have worked, but I still need to try out TensorFlow. Both of those ecosystems seem a hair fractured when it comes to AMD.

      While I am sure it can be done, it's nowhere near the point-and-click experience it was on my laptop with an Nvidia card.
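
      For what it's worth, ollama publishes a ROCm-tagged Docker image, which may be less painful than building the stack by hand. A minimal sketch, assuming the image tag and flags from ollama's Docker docs and a Linux host with ROCm drivers installed:

      ```shell
      # Pass the AMD GPU device nodes through to the container and use
      # the ROCm build of the ollama image.
      docker run -d \
        --device /dev/kfd \
        --device /dev/dri \
        -v ollama:/root/.ollama \
        -p 11434:11434 \
        --name ollama \
        ollama/ollama:rocm

      # Then pull and chat with a model inside the container:
      docker exec -it ollama ollama run llama3
      ```

      Under WSL2 the `/dev/kfd` passthrough is the part most likely to fail, which matches the pain described above.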

      • cm0002@lemmy.world · 9 months ago

        Mostly because of Nvidia's dominance with CUDA. Don't stop trying to make it work, though; don't reward Nvidia for their BS lol

        • remotelove@lemmy.ca · 9 months ago

          I’ll keep trying, but I don’t know how far I’ll get.

          I am still going to build a dedicated machine for AI work, so I am not sure how much I want to break my gaming rig. All of that mess would be so much easier on a Linux rig with none of the Windows fluff getting in the way.

    • keepthepace@slrpnk.net · 9 months ago

      That's the opposite of the feedback I got: AMD claims to support all of the transformers library, but many people report this to be a lie.

      I am no fan of companies that establish de facto monopolies, but that is indeed what Nvidia has right now. Everything is built on CUDA, so AMD has a lot of catching up to do.

      I have the impression that Apple chips support more things than AMD does.

      There are some people making things work on AMD, and I cheer for them, but let's not pretend it is as easy as with Nvidia. Most packages depend on CUDA for GPU acceleration.
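
      To be fair, the ROCm builds of PyTorch deliberately reuse the `cuda` device name, so code written against CUDA can sometimes run unchanged on AMD. A sketch of the usual portable device pick, assuming a working PyTorch install (the GPU branch only triggers on CUDA or ROCm builds):

      ```python
      import torch

      def pick_device() -> torch.device:
          # ROCm builds of PyTorch reuse the "cuda" device name, so this
          # one check covers both Nvidia (CUDA) and AMD (ROCm) GPUs.
          if torch.cuda.is_available():
              return torch.device("cuda")
          return torch.device("cpu")

      x = torch.ones(3, device=pick_device())
      print(x.device.type)  # "cuda" on a working GPU build, otherwise "cpu"
      ```

      The catch is exactly what's described above: this only helps when the package sticks to plain PyTorch ops. Anything shipping hand-written CUDA kernels still needs porting.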

      • genie@lemmy.world · 9 months ago

        This is especially true in the research space (where 90% of this stuff is being built :)

        • keepthepace@slrpnk.net · 9 months ago

          Can't wait! But really, this type of thing is what makes it hard for me to cheer for AMD:

          For reasons unknown to me, AMD decided this year to discontinue funding the effort and not release it as any software product. But the good news was that there was a clause in case of this eventuality: Janik could open-source the work if/when the contract ended.

          I wish we had a champion of openness, but in that respect AMD just looks like a worse version of Nvidia. Hell, even Intel has been a better player!

          • remotelove@lemmy.ca · 9 months ago

            I just got DirectML working with torch in WSL2, which was fairly painless.

            I am wondering if that isn't a better option than trying to emulate CUDA directly. Love it or hate it, Microsoft did a fairly good job wrangling different platforms with DirectX.
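
            For anyone wanting to try the same route: the `torch-directml` package exposes DirectML as its own device rather than pretending to be CUDA, so tensors are moved to it explicitly. A rough sketch, assuming the package and device API as published on PyPI (Windows/WSL2 only):

            ```python
            import torch
            import torch_directml  # pip install torch-directml

            # DirectML is a separate device, not a CUDA impostor, so
            # code that hardcodes torch.device("cuda") won't find it.
            dml = torch_directml.device()
            a = torch.tensor([1.0, 2.0], device=dml)
            b = torch.tensor([3.0, 4.0], device=dml)
            print((a + b).to("cpu"))  # the add runs on the DirectML device
            ```

            The flip side of that design is the same fragmentation problem as ROCm: packages that assume the `cuda` device name need patching before they'll use it.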