wuphysics87 to Privacy · 6 hours ago
Can you trust locally run LLMs?
I’ve been playing around with ollama. Given that you download the model, can you trust that it isn’t sending telemetry?
Jack@slrpnk.net · 5 hours ago
Can’t you run it from a container? I guess that will slow it down, but it will deny access to your files.
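For example, with Docker, something like this should work (a rough sketch; assumes the official ollama/ollama image, and llama3 just stands in for whatever model you actually use):

```sh
# Pull the image and fetch a model while networking is still enabled:
docker run -d --name ollama -v ollama:/root/.ollama ollama/ollama
docker exec -it ollama ollama pull llama3
docker rm -f ollama

# Recreate the container with no network stack at all. The model
# persists in the named volume, and the client and server still
# talk over the container's internal loopback:
docker run -d --name ollama --network=none -v ollama:/root/.ollama ollama/ollama
docker exec -it ollama ollama run llama3
```

With `--network=none` the container only has a loopback interface, so even if the software wanted to phone home, there’s no route out.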
marcie (she/her) · 5 hours ago
Yeah, you could, though I don’t see any evidence that the big open source LLM programs like jan.ai or ollama are doing anything shady with their binaries or your files. Chucking it in a sandbox would solve the problem for good, though.
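One lightweight way to sandbox an existing install on Linux is firejail (a sketch; assumes ollama is installed natively and the model was pulled beforehand):

```sh
# Run server and client inside one sandbox with no network namespace
# (--net=none leaves only a private loopback), so they can talk to
# each other but nothing can reach the internet:
firejail --net=none bash -c 'ollama serve & sleep 2; ollama run llama3'
```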
SeekPie@lemm.ee · edited 3 hours ago
You could use the “Alpaca” flatpak and remove its internet access with Flatseal after downloading the model. (Linux)
Or deny the app’s access to the internet in the app settings. (Android)
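For reference, what Flatseal does there can also be done from the command line (assuming Alpaca’s Flatpak ID is com.jeffser.Alpaca; check with `flatpak list`):

```sh
# Deny the app network access entirely:
flatpak override --user --unshare=network com.jeffser.Alpaca

# Undo the override later, e.g. to download another model:
flatpak override --user --reset com.jeffser.Alpaca
```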