suoko@feddit.it to AI · 1 year ago
👾 LM Studio - Discover and run local LLMs - Linux beta version now available 🐧 (lmstudio.ai)
cross-posted to: hackernews@lemmy.smeargle.fans, hackernews@derp.foo
ylai · 1 year ago
The question is not support. It is clear that LM Studio has nothing custom in terms of actual inference code. The curious thing is that they are still stuck at LLaMA and Falcon, etc., given Mistral's performance.
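The comment's point, that the heavy lifting is stock llama.cpp rather than anything custom, can be illustrated by driving that same backend directly. Below is a minimal sketch using the llama-cpp-python bindings; the model path, quant name, and sampling parameters are placeholder assumptions, not anything LM Studio itself ships:

```python
# Minimal sketch: running a local GGUF model through llama.cpp via the
# llama-cpp-python bindings, i.e. the same inference stack LM Studio wraps.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm(
    "Q: What is a GGUF file? A:",
    max_tokens=64,
    stop=["Q:", "\n"],  # stop generation at the next question or newline
    echo=False,         # do not repeat the prompt in the output
)
print(out["choices"][0]["text"])
```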