☆ Yσɠƚԋσʂ ☆ to Programmer Humor • 1 year ago

Deploying LLMs to production be like [image]
𝒍𝒆𝒎𝒂𝒏𝒏 • 11 months ago

Yeah, that's a no from me 😂 What causes this, anyway? A badly thought-out fine-tuning dataset? I haven't had a response that out of touch from the few LLaMA variants I've messed around with in chat mode.
☆ Yσɠƚԋσʂ ☆ (OP) • 11 months ago

Probably just poor tuning, but in general it's pretty hard to guarantee that the model won't do something unexpected. Hence why it's a terrible idea to use LLMs for something like this.