I’m new to the field of large language models (LLMs) and I’m really interested in learning how to train and use my own models for qualitative analysis. However, I’m not sure where to start or what resources would be most helpful for a complete beginner. Could anyone provide some guidance and advice on the best way to get started with LLM training and usage? Specifically, I’d appreciate insights on learning resources or tutorials, tips on preparing datasets, common pitfalls or challenges, and any other general advice or words of wisdom for someone just embarking on this journey.

Thanks!

  • BaroqueInMind@lemmy.one · 8 months ago

    Ollama is so fucking slow. Even with a 16-core overclocked Intel CPU, 64 GB of RAM, and an Nvidia 3080 with 10 GB of VRAM, using a 22B parameter model, generating a simple haiku takes 20 minutes.

    • xcjs@programming.dev · 8 months ago

      No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.

      On my RTX 3060, I generally get responses in seconds.
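
      If you want a quick way to check, the sketch below times a short generation through Ollama's local HTTP API and reports tokens per second (the endpoint and model name here are assumptions; point it at whatever model you actually pulled). Single-digit tokens per second usually means CPU-only; a GPU-accelerated 8B-class model should be in the tens.

      ```python
      # Rough sketch: time a short Ollama generation and report tokens/sec.
      # Assumes the Ollama server is running on its default local port and
      # that the model named below has already been pulled (adjust as needed).
      import json
      import urllib.request

      OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint
      MODEL = "llama3:8b"  # placeholder - use whatever model you're testing

      payload = json.dumps({
          "model": MODEL,
          "prompt": "Write a haiku about GPUs.",
          "stream": False,
      }).encode()

      req = urllib.request.Request(OLLAMA_URL, data=payload,
                                   headers={"Content-Type": "application/json"})
      with urllib.request.urlopen(req) as resp:
          result = json.load(resp)

      # Ollama reports eval_count / eval_duration (duration is in nanoseconds).
      tokens = result.get("eval_count", 0)
      seconds = result.get("eval_duration", 1) / 1e9
      print(f"{tokens} tokens in {seconds:.1f}s -> {tokens / seconds:.1f} tok/s")
      ```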

      • kiku123@feddit.de · 8 months ago

        I agree. My 3070 runs the 8B Llama 3 model and returns short responses in about 250 ms.

    • Zworf@beehaw.org · 8 months ago

      Hmmm, weird. I have a 4090 / Ryzen 5800X3D and 64 GB of RAM and it runs really well. Admittedly it’s the 8B model, because the intermediate sizes aren’t out yet and 70B simply won’t fly on a single GPU.

      But it really screams. Much faster than I can read. PS: Ollama is just llama.cpp under the hood.

      Edit: Ah, wait, I know what’s going wrong here. The 22B parameter model is probably too big for your VRAM, and once it spills over it gets extremely slow, yes.
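
      Back-of-envelope (assuming a typical ~4-bit GGUF quant at roughly 0.56 bytes per parameter, an approximation rather than an exact figure), the weights alone of a 22B model already overflow a 10 GB card:

      ```python
      # Rough weight-size estimate for a quantized GGUF model. The bytes-per-
      # parameter figure is an assumed average for a ~4-bit quant, not exact.
      def weight_gib(params_billion: float, bytes_per_param: float = 0.56) -> float:
          return params_billion * 1e9 * bytes_per_param / 1024**3

      for size in (8, 22, 70):
          print(f"{size}B model: ~{weight_gib(size):.1f} GiB of weights")

      # The 22B weights land around 11-12 GiB, more than a 10 GiB 3080 can hold
      # before the KV cache and CUDA overhead are even counted, so layers spill
      # into system RAM and generation crawls.
      ```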

        • Zworf@beehaw.org · 7 months ago

          It depends on your prompt/context size too: the more context you use, the more memory you need. Try checking your GPU’s memory usage with GPU-Z across different models and scenarios.
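
          For a rough sense of the context cost, here’s a sketch using Llama 3 8B’s published architecture (32 layers, 8 KV heads, head dim 128) and assuming an fp16 KV cache; other models will differ:

          ```python
          # Rough KV-cache estimate: memory grows linearly with context length.
          # Architecture numbers below are for Llama 3 8B; adjust for other models.
          LAYERS, KV_HEADS, HEAD_DIM, BYTES_PER_VALUE = 32, 8, 128, 2  # 2 bytes = fp16

          def kv_cache_gib(context_tokens: int) -> float:
              per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_VALUE  # keys + values
              return context_tokens * per_token / 1024**3

          for ctx in (2048, 8192, 32768):
              print(f"{ctx:>5} tokens of context -> ~{kv_cache_gib(ctx):.2f} GiB of KV cache")
          ```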

      • xcjs@programming.dev · 8 months ago

        It should be split between VRAM and regular RAM, at least if it’s a GGUF model. Maybe it’s not, and that’s what’s wrong?
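
        If it is splitting, you can also pin the offload yourself and tune it until the model fits. A sketch, assuming Ollama’s num_gpu option (the name mirrors llama.cpp’s GPU layer count; treat it as an assumption and check the docs for your version):

        ```python
        # Sketch: ask Ollama to offload only a fixed number of layers to the GPU.
        # num_gpu is assumed to map to llama.cpp's n_gpu_layers; lower it until
        # the model (plus KV cache) fits inside the card's VRAM.
        import json
        import urllib.request

        payload = json.dumps({
            "model": "llama3:8b",        # placeholder model name
            "prompt": "Write a haiku about VRAM.",
            "stream": False,
            "options": {"num_gpu": 24},  # layers to keep on the GPU
        }).encode()

        req = urllib.request.Request("http://localhost:11434/api/generate", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["response"])
        ```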

    • xcjs@programming.dev · 8 months ago

      Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)

      I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?

        • xcjs@programming.dev · 8 months ago

          Good luck! I’m definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)

          • BaroqueInMind@lemmy.one · 8 months ago

            My setup is Win 11 Pro ➡️ WSL2 / Debian ➡️ Docker Desktop (for Windows)

            Should I install the nvidia drivers within Debian even though the host OS already has drivers?

            • xcjs@programming.dev · 8 months ago

              I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)

              https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but according to this, it looks like you don’t want to install the Nvidia drivers, and only want to install the cuda-toolkit metapackage. I’d follow the instructions from that link closely.
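
              Once that’s done, a quick sanity check from inside the WSL Debian shell is to see whether nvidia-smi (which the Windows driver exposes to WSL) can see the card at all; this little sketch just shells out to it:

              ```python
              # Quick sanity check inside WSL: if nvidia-smi runs and lists the 3080,
              # the Windows driver is being passed through and a CUDA build of
              # Ollama should be able to see the GPU.
              import shutil
              import subprocess

              if shutil.which("nvidia-smi") is None:
                  print("nvidia-smi not found - WSL GPU passthrough isn't set up.")
              else:
                  out = subprocess.run(
                      ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
                       "--format=csv,noheader"],
                      capture_output=True, text=True, check=True,
                  )
                  print(out.stdout.strip())
              ```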

              You may also run into performance issues within WSL due to the virtual machine overhead.

              • BaroqueInMind@lemmy.one · 8 months ago

                I did indeed follow that guide already, thank you for the respect; I am an idiot and installed the Nvidia WSL driver on top of the host OS driver, as well as the CUDA driver. So I’ll try again with only that guide and see what breaks.