There was a post asking people their opinions about Edge, and many people seemed to like the idea of Edge and seemed to be okay with having it on Linux (blasphemy).

Also, can we all agree on how fast Edge went from a joke to a threat? I mean, it’s good now, alright! It was good back then, but it’s better now. Money, man!!! Money! Personally I hate MS, but I can’t help but see that there is no alternative on Linux to Bing GPT and many of the features Bing offers.

If there were an open-source ChatGPT, what would it look like? Who would bear the costs? How would we solve the server problem, i.e., the ton of server space and bandwidth it would take? Just wondering.

I am pretty sure MS products will improve greatly due to their integration with GPT. What do us poor folks on Linux do?

Just want to know the answers; I don’t want to discuss (aka I can’t comment, I need to study), I’m just curious!

  • lloram239@feddit.de · 32 points · edited · 1 year ago

    what do us poor folks on Linux do?

    Run llama.cpp with any of the models listed here; that stuff has been around for months.

    TheBloke has a lot of models converted to the GGUF format, which is what llama.cpp needs.

    Quick Start Guide (requires Nix; otherwise compile llama.cpp manually, a sketch follows the transcript):

    # clone the repo without downloading the large LFS model files yet
    $ GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/TheBloke/guanaco-7B-GGUF
    $ cd guanaco-7B-GGUF
    # fetch only the quantized model file you actually need
    $ git lfs pull --include=Guanaco-7B.Q4_0.gguf
    # build and run llama.cpp via Nix, loading the model in instruct mode
    $ nix run github:ggerganov/llama.cpp -- -m Guanaco-7B.Q4_0.gguf --instruct
    > Write haiku about a penguin
     A penguin walks on ice,
     Takes a plunge in the sea,
     Hides his feet from me!
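
    If you don’t want to use Nix, a manual build is roughly just a clone and a make. This is only a sketch of the upstream build steps at the time; newer llama.cpp releases rename the binary and prefer CMake, so check the project README:

    # build llama.cpp from source; assumes git and a C/C++ toolchain are installed
    $ git clone https://github.com/ggerganov/llama.cpp
    $ cd llama.cpp
    $ make
    # run the model downloaded above with the locally built binary
    $ ./main -m ../guanaco-7B-GGUF/Guanaco-7B.Q4_0.gguf --instruct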
    
    • RickyRigatoni · 4 points · 1 year ago

      A package manager that can pull, build, and run from git with one command is pretty neat.

    • 257m · 1 point · 1 year ago

      I ran it on my PC with a GTX 1070, with CUDA enabled and compiled with the CUDA compile hint, but it ran really slowly. How do you get it to run fast?

      • lloram239@feddit.de · 1 point · 1 year ago

        To make use of GPU acceleration you have to compile it with the proper support (CUDA, OpenCL, ROCm) and add --gpu-layers 16 (or a larger number, however much your VRAM can handle); a rough example is sketched at the end of this comment. If that’s not enough, then the GPU/CPU is probably too slow.

        You can also try a smaller model; those run faster but give worse results.
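
        For example, on an NVIDIA card the CUDA build and run looks roughly like this (sketch only; the exact make flag has changed across llama.cpp versions):

        # rebuild with CUDA support (the flag was LLAMA_CUBLAS=1 around this time; newer versions use a different one)
        $ make clean && make LLAMA_CUBLAS=1
        # offload 16 layers to VRAM; raise the number until the card runs out of memory
        $ ./main -m Guanaco-7B.Q4_0.gguf --instruct --gpu-layers 16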

        • 257m · 1 point · 1 year ago

          Thanks, I might try that out later.