• sturlabragason@lemmy.world
    edited 3 months ago

    You can download multiple LLM models yourself and run them locally. It’s relatively straightforward:

    https://ollama.com/

    Then you can switch off your network after the download, wireshark the shit out of it, run it behind a proxy, etc.
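
    A typical session looks something like this (the model tag here is just an example; pick whatever fits your hardware):

    ```shell
    # Pull a model once while you're still online
    ollama pull llama3

    # Then chat with it entirely on your machine
    ollama run llama3

    # Ollama also serves a local HTTP API on port 11434, which is the
    # traffic you'd point Wireshark or a proxy at:
    curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Hello"}'
    ```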