☆ Yσɠƚԋσʂ ☆ to technology@hexbear.net · English · 6 months ago
Llama 3-V: Matching GPT4-V with a 100x smaller model and 500 dollars (aksh-garg.medium.com)
cross-posted to: hackernews@lemmy.smeargle.fans, technology@lemmygrad.ml
umbrella · English · 6 months ago
Honestly, if I don't have access to the training data, i.e. can't compile it myself, I can't fully call it open. But yeah, I feel Zucc knows he wouldn't be able to compete otherwise.
amphibian [she/her]@hexbear.net · English · 6 months ago
That's true, but then by those standards is there any competitive open-source AI? I don't think Mistral, Phi, Qwen, or Gemma offer easy access to that data either, but I could be wrong.