wuphysics87 to Privacy · 2 months ago
Can you trust locally run LLMs?
I’ve been playing around with Ollama. Given that you download the model yourself, can you trust that it isn’t sending telemetry?
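One way to move from trust to verification is to cut off or watch the process’s network access. Ollama itself is a native binary, so for it you’d reach for an OS-level tool (a firewall rule, or a network namespace such as `unshare --net` on Linux). For Python-based tooling, though, here is a minimal sketch of a crude in-process “kill switch”: it monkeypatches the socket layer so any library code that tries to phone home fails loudly instead of silently. This only guards Python code in the same process; it is an illustration of the idea, not a complete sandbox.

```python
import socket

# Sketch: block outbound connections made from this Python process.
# Install the patch BEFORE importing any model-loading library, so any
# telemetry attempt raises instead of silently reaching the network.

_real_socket = socket.socket

class _NoNetworkSocket(_real_socket):
    def connect(self, address):
        # Refuse every outbound connection attempt and report the target.
        raise RuntimeError(f"blocked outbound connection to {address}")

socket.socket = _NoNetworkSocket  # activate the kill switch

# Demonstration: any attempt to reach the network now fails loudly.
try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 443))
except RuntimeError as exc:
    print("telemetry attempt blocked:", exc)
```

For a native binary, the equivalent check is observational rather than preventive, e.g. watching its open sockets with `lsof -i -a -p <pid>` while you chat with the model.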
Emberleaf · 2 months ago
I found this and actually like it better than Alpaca. Your mileage may vary, but they claim it’s 100% private, no internet necessary: GPT4All
stink@lemmygrad.ml · 2 months ago
It’s nice, but sadly it’s hard to load unsupported models. I really wish you could sideload easily; still, it’s nice unless you have a niche use case.
Emberleaf · 2 months ago
I’m not sure what you mean by ‘hard to load’. You find the model you want, download it, then load it up to chat. What’s the issue you’re having?
stink@lemmygrad.ml · 2 months ago
I’m at my parents’ for the week, but IIRC a model I downloaded off Hugging Face had an incompatible file extension.