Microsoft LongNet: One BILLION Tokens LLM — David Shapiro ~ AI (06.07.2023) (youtube.com)
Posted by InternetPirate@lemmy.fmhy.ml to Singularity | Artificial Intelligence (ai), Technology & Futurology@lemmy.fmhy.ml · 1 year ago · cross-posted to: models@lemmy.intai.tech
Behohippy@lemmy.world · 1 year ago: Also not sure how that would be helpful. If every prompt needs to rip through those tokens first before predicting a response, it'll be painfully slow. Even now with llama.cpp, it's annoying when it pauses to do the context-window shuffle.
Martineski@lemmy.fmhy.ml · 1 year ago: Yeah, long-term memory where the AI can access only what it needs or wants is the way to go.
Luovahulluus@lemmy.world · 1 year ago: For now, I'd be happy with an AI that had access to, and remembered, the beginning of our conversation.
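The selective-memory idea from the thread (store past turns, pull back only what's relevant to the current prompt instead of re-reading the whole context) can be sketched as a toy example. Everything here is hypothetical: real systems use learned embeddings and a vector store, not word-count vectors, and `ConversationMemory` is an invented name for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": a bag-of-words count vector.
    # Real long-term memory would use a learned embedding model.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ConversationMemory:
    """Stores past turns; retrieves only the most relevant ones for a new prompt,
    so the model never has to rip through the full history every time."""

    def __init__(self):
        self.turns = []  # list of (text, embedding) pairs

    def add(self, text):
        self.turns.append((text, embed(text)))

    def recall(self, query, k=2):
        # Rank stored turns by similarity to the query; return the top k.
        q = embed(query)
        ranked = sorted(self.turns, key=lambda t: cosine(q, t[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

memory = ConversationMemory()
memory.add("User's name is Alice and she likes hiking.")
memory.add("We discussed the weather in Oslo.")
memory.add("Alice asked about fine-tuning llama models.")

# Only the single most relevant turn is fed back in, not the whole history.
print(memory.recall("what is the user's name?", k=1))
```

The point of the sketch is the trade-off the commenters describe: instead of an ever-growing context the model must reprocess on every prompt, the AI queries a store and injects just the few snippets it needs.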