• Tak · 4 months ago (edited)

      deleted by creator

      • MrScottyTay@sh.itjust.works · 1 year ago

        I’m not on Linux and I have the same sentiment. Fuck Nvidia. I’d rather give my money to some other company. I am using an Nvidia GPU now, but it’s the 1050 Ti I got an age ago. I’ll run it into the ground before I upgrade, and I won’t be getting an Nvidia one when I do.

      • Gabu@lemmy.world · 1 year ago

        AMD’s support for AI is just fine; you just have to choose a path. If you’re on Linux, use their CUDA translation layer (ROCm); if you’re on Windows, use DirectML.
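        For illustration, a minimal sketch of what those two paths look like in PyTorch, assuming a ROCm build of PyTorch on Linux (it exposes AMD GPUs through the regular torch.cuda API) and Microsoft’s separate torch-directml package on Windows; both are the stock package names, nothing custom:

        ```python
        # A minimal sketch, not a definitive setup. Assumes:
        #  - Linux: a ROCm build of PyTorch (AMD GPUs show up via torch.cuda)
        #  - Windows: the separate torch-directml package from Microsoft
        import sys
        import torch

        if sys.platform.startswith("linux"):
            # ROCm builds reuse the "cuda" device string for AMD GPUs.
            device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        else:
            import torch_directml  # pip install torch-directml
            device = torch_directml.device()

        x = torch.randn(4, 4).to(device)  # tensor lands on the chosen backend
        print(x.device)
        ```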

        • ylai · 1 year ago (edited)

          > AMD’s support for AI is just fine

          This is quite untrue, especially if you do actual research and not just run other people’s models. For example, ROCm support is missing from many sparse autograd frameworks, e.g. pytorch_sparse, and there is no viable ROCm alternative to Nvidia’s MinkowskiEngine. Both are needed if you work on state-of-the-art convnets with attention-like sparsity.
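          To make the gap concrete, here is a minimal sketch of the kind of fused kernel pytorch_sparse provides (this is the stock torch_sparse API; the point is that these kernels ship as compiled CPU/CUDA extensions, and a ROCm build of them is the missing piece described above):

          ```python
          # Minimal sketch of the stock torch_sparse API (pip install torch-sparse).
          # spmm is backed by compiled CPU/CUDA extension kernels; a ROCm build
          # of these extensions is what AMD users are missing.
          import torch
          from torch_sparse import spmm

          # A 2x3 sparse matrix in COO form: (0,0)=1, (0,2)=2, (1,1)=3.
          index = torch.tensor([[0, 0, 1],
                                [0, 2, 1]])
          value = torch.tensor([1.0, 2.0, 3.0])

          dense = torch.randn(3, 4)

          # Sparse @ dense: (2x3) @ (3x4) -> dense (2x4).
          out = spmm(index, value, 2, 3, dense)
          print(out.shape)  # torch.Size([2, 4])
          ```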