I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI and/or Ollama together with LibreChat.

Some questions to get a nice discussion going:

  • Any of you have experience with this?
  • What are your motivations?
  • What are you using in terms of hardware?
  • Considerations regarding energy efficiency and associated costs?
  • What about renting a GPU? Privacy implications?
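
For concreteness, the kind of setup I mean: a model server like Ollama runs locally and everything talks to it over a plain HTTP API. A minimal sketch of a query, assuming Ollama’s default port (11434) and a model you’ve already pulled (“llama3” below is just a placeholder):

```python
# Minimal sketch: query a locally hosted Ollama instance over its HTTP API.
# Assumes Ollama is running on its default port (11434) and that the model
# named below has already been pulled; swap in whatever you actually use.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",           # placeholder model name
        "prompt": "Why self-host an LLM?",
        "stream": False,             # one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

As I understand it, LibreChat can then be pointed at that same local endpoint as a custom backend, so the chat UI never has to leave your network.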
  • robber (OP) · 6 months ago

    Sounds like a rather frustrating journey for you.

    • snekerpimp@lemmy.world · 6 months ago

      It has been. I got into this because I liked picking up kick-ass enterprise hardware really cheap and playing around with what it could do. Used enterprise hardware is so damn expensive now that it’s cheaper and easier to do everything with consumer products and use the RX 6700 in my gaming rig. I just don’t want that rig running LLMs and always on.
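
      On OP’s energy question: back of the envelope, an always-on card is cheap at idle and expensive under sustained load. All numbers below are assumptions (idle/load wattage and electricity price); plug in your own:

      ```python
      # Rough yearly electricity cost of an always-on GPU.
      # All numbers are assumptions; substitute your own.
      PRICE_PER_KWH = 0.30    # $/kWh, assumed local rate
      HOURS_PER_YEAR = 24 * 365

      def yearly_cost(watts: float) -> float:
          """Dollars per year for a constant power draw of `watts`."""
          return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH

      print(f"~15 W idle, 24/7:   ${yearly_cost(15):.0f}/yr")    # ≈ $39
      print(f"~175 W load, 24/7:  ${yearly_cost(175):.0f}/yr")   # ≈ $460
      ```

      The gap between idle and load is most of the story: a box that mostly sits idle costs little, while anything that keeps the GPU busy around the clock shows up on the bill.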