With local models and inference engines like llama.cpp available, I wish the modder had instead spent his energy on models that run locally, and are possibly even fine-tuned to the in-game world. Instead, this mod requires a metered API with billing and an always-on network connection, while serving only a generic language model with little in-game knowledge.
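To make the local-model alternative concrete: llama.cpp's bundled server exposes an OpenAI-compatible chat endpoint, so a mod could talk to a model running on the player's machine instead of a metered API. The sketch below is a minimal, hypothetical example; the endpoint URL, the NPC name, and the lore string are assumptions, and it presumes a llama-server instance is already running locally with a model loaded.

```python
import json
import urllib.request

# Assumption: llama.cpp's llama-server is running locally and exposes an
# OpenAI-compatible chat endpoint (default port shown here is hypothetical).
LLAMA_SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_npc_prompt(npc_name, lore, player_line):
    """Assemble a chat payload that grounds the model in in-game lore,
    addressing the 'generic model with little in-game knowledge' concern."""
    return {
        "messages": [
            {"role": "system",
             "content": (f"You are {npc_name}, an NPC in Skyrim. "
                         f"Stay in character. World lore: {lore}")},
            {"role": "user", "content": player_line},
        ],
        "temperature": 0.7,
        "max_tokens": 128,
    }

def ask_npc(payload, url=LLAMA_SERVER_URL):
    """POST the payload to the local server and return the model's reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Build and inspect the payload; the actual request needs a live server.
    payload = build_npc_prompt(
        "Lydia",
        "Skyrim is torn by civil war; dragons have returned.",
        "Any news from Whiterun?",
    )
    print(json.dumps(payload, indent=2))
    # reply = ask_npc(payload)  # uncomment with llama-server running
```

The same payload shape works against hosted OpenAI-style APIs, so switching between a local model and a remote one is mostly a matter of changing the URL.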
Probably better to get it working first and then optimize. Most users probably won't have the performance headroom to run both the game and a local model.
Excited about the possibilities of a true radiant quest system, especially combining this with VR and voice input.
-
ChatGPT-driven NPC Experiment 2 - YouTube https://www.youtube.com/watch?v=UVNZ3_FwqJE
-
ChatGPT in Skyrim VR | Mantella - Lip Sync & In-Game Awareness Update - YouTube https://www.youtube.com/watch?v=Gz6mAX41fs0
-