• bamboo@lemm.ee
    5 months ago

    We’re already seeing a slight leveling off compared to the earlier pace. Right now there is a strong focus on optimization: getting models that can run on-device without losing too much quality. That will help make LLMs sustainable both financially and energy-wise, and it will mitigate the privacy and security concerns inherent in the first wave of cloud-based LLMs.
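
    As a rough illustration of the kind of optimization I mean (the model name and settings here are just examples, not anyone’s official recipe), 4-bit quantization via Hugging Face Transformers + bitsandbytes is one common way to shrink a model enough for local hardware:

    ```python
    # Sketch: load an LLM in 4-bit quantized form so it fits on consumer hardware.
    # Model ID and generation settings are illustrative assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model

    # 4-bit weights cut memory use roughly 4x versus fp16, with a modest quality hit,
    # which is what makes on-device inference feasible in the first place.
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant_config,
        device_map="auto",  # place layers on whatever local device is available
    )

    prompt = "Explain why quantization reduces memory use."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
    ```

    Everything stays on the local machine, which is exactly why this direction also addresses the privacy side, not just the cost side.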