• actually-a-cat@sh.itjust.works · 1 year ago

    That’s what llama.cpp and kobold.cpp do: the KV cache is the last thing to get offloaded, so you can offload the weights to the GPU and keep the cache in RAM. Neither supports SuperHOT right now, though.
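    For what it’s worth, here’s roughly what that partial offload looks like through the llama-cpp-python bindings. This is a sketch, not either project’s documented usage: the model path and layer count are placeholders, and kwarg names can shift between versions.

    ```python
    from llama_cpp import Llama

    # Offload a chunk of the weight layers to the GPU; per the behavior
    # above, the KV cache stays in system RAM until everything else
    # has already been offloaded.
    llm = Llama(
        model_path="models/llama-65b.ggmlv3.q4_0.bin",  # placeholder path
        n_gpu_layers=40,  # how many layers' weights go to VRAM
        n_ctx=2048,       # context length, which sizes the KV cache
    )

    out = llm("Q: Why keep the KV cache in RAM? A:", max_tokens=64)
    print(out["choices"][0]["text"])
    ```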

    Models with multi-query attention (MQA), like Falcon-40B or MPT, are going to be a better fit for large context lengths: they share the K/V projections across the query heads, so the KV cache is tiny, and even blown up 16x it’s not a problem.
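    To put numbers on “tiny”, here’s a back-of-the-envelope KV cache calculation at fp16. The layer/head counts are from memory and may be slightly off, so treat them as illustrative rather than exact.

    ```python
    # KV cache size: K and V each store n_layers * n_kv_heads * head_dim
    # values per token of context.
    def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, dtype_bytes=2):
        return 2 * n_layers * n_kv_heads * head_dim * ctx_len * dtype_bytes

    GIB = 1024 ** 3

    # LLaMA-65B-style: standard multi-head attention, one KV head per query head.
    print(kv_cache_bytes(80, 64, 128, 2048) / GIB)       # ~5.0 GiB at 2k ctx
    print(kv_cache_bytes(80, 64, 128, 16 * 2048) / GIB)  # ~80 GiB blown up 16x

    # Falcon-40B-style: multi-query attention with only a few KV head groups.
    print(kv_cache_bytes(60, 8, 64, 2048) / GIB)         # ~0.23 GiB at 2k ctx
    print(kv_cache_bytes(60, 8, 64, 16 * 2048) / GIB)    # ~3.8 GiB blown up 16x
    ```

    Even blown up 16x, the MQA-style cache comes out smaller than the full multi-head cache at the stock 2k context.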