I’ve been playing with both the Thumb and the Unexpected keyboards. I like 'em both but, man, I have to admit I’d like them more if they had that top bar that predicts what you might be typing. Is that just a no-go from a privacy perspective? Can that functionality be local?

(I also wouldn’t mind a good voice typing feature)

  • Tja@programming.dev · 10 months ago

    It can and it will. That is one of the uses of “NPUs” I’m most excited about.

    Basically you can run a (potentially open-source) small LLM on the phone using whatever context the keyboard has access to (at a minimum, what you’ve typed so far) and have the keyboard generate the next token(s).
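
    To make that concrete, here’s a rough sketch of what “generate the next token(s)” could look like, using distilgpt2 via Hugging Face Transformers purely as a stand-in for whatever small model a keyboard would actually ship:

    ```python
    # Hedged sketch: next-word suggestions from the text typed so far.
    # distilgpt2 is just an example model, not a recommendation.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
    model = AutoModelForCausalLM.from_pretrained("distilgpt2")
    model.eval()

    def suggest_next(text: str, num_suggestions: int = 3) -> list[str]:
        """Return short candidate continuations for the text typed so far."""
        inputs = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            outputs = model.generate(
                **inputs,
                max_new_tokens=3,              # only a word or two per suggestion
                num_beams=num_suggestions,
                num_return_sequences=num_suggestions,
                pad_token_id=tokenizer.eos_token_id,
            )
        prompt_len = inputs["input_ids"].shape[1]
        return [
            tokenizer.decode(seq[prompt_len:], skip_special_tokens=True).strip()
            for seq in outputs
        ]

    print(suggest_next("I'll be there in"))
    ```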

    Since this is computationally intensive, the model has to be small and you need dedicated hardware to run it efficiently; otherwise you’d need a 500W GPU like the big players use. Locally you can do it at around 0.5W. Of course, adjust your expectations accordingly.
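
    For scale, “small” here means something like a ~1B-parameter model quantized to 4 bits. A hedged sketch with llama-cpp-python; the GGUF file name below is just a placeholder for whatever quantized model you’d actually use:

    ```python
    # Hedged sketch: load a small 4-bit quantized model so it fits a
    # phone-class memory and power budget. The file name is hypothetical.
    from llama_cpp import Llama

    llm = Llama(
        model_path="tinyllama-1.1b-chat.Q4_K_M.gguf",  # placeholder GGUF file
        n_ctx=256,      # a keyboard only needs a short context window
        n_threads=4,    # a few mobile cores, not a desktop CPU
    )

    out = llm("Running late, see you", max_tokens=4, temperature=0.0)
    print(out["choices"][0]["text"])
    ```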

    I don’t know of any project doing it right now, but I imagine Microsoft will integrate it into SwiftKey soon, with open-source projects to follow.

    • kevincox · 10 months ago

      I think you hugely overestimate what it takes to complete and correct a few words. Maybe you would want some sort of accelerator for fine-tuning, but 1. you probably don’t even need fine-tuning and 2. you could probably just run that on the CPU while your device is charging. For inference, modern CPUs are more than powerful enough.
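
      That’s easy to sanity-check: time a two-token completion on CPU only (distilgpt2 here, purely as an example of a keyboard-sized model):

      ```python
      # Hedged sketch: rough CPU-only latency for completing a couple of tokens.
      import time

      import torch
      from transformers import AutoModelForCausalLM, AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
      model = AutoModelForCausalLM.from_pretrained("distilgpt2")
      model.eval()

      inputs = tokenizer("See you at the", return_tensors="pt")
      start = time.perf_counter()
      with torch.no_grad():
          model.generate(**inputs, max_new_tokens=2,
                         pad_token_id=tokenizer.eos_token_id)
      elapsed_ms = (time.perf_counter() - start) * 1000
      print(f"2 extra tokens on CPU in {elapsed_ms:.0f} ms")
      ```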

      • Tja@programming.dev · 10 months ago

        Yeah, modern ARM CPUs can run at 3 GHz and play PS4-level games, but I don’t want my phone to become a hand warmer every time I want to type a quick email…

        And of course, I’m not talking about correcting “fuck” to “duck”; I’m talking about ChatGPT-level prediction. Or Llama 2, or Gemini Nano, or whatever…