Imagine a game like “the sims” where you can adjust how autonomous the sims you control are. I could see Ai being used to control that.
Or having an elder scroll game were you just respond however you want and the npc adapts to it.
Imagine a game like “the sims” where you can adjust how autonomous the sims you control are. I could see Ai being used to control that.
Or having an elder scroll game were you just respond however you want and the npc adapts to it.
AI in games (using code for entities to make non-player decisions) is about being good enough, cheap enough. It’s just like how games determine their physics. The existence of large scale “black box” AI like OpenAI does not reflect what’s good or cheap. It can’t play chess. You think it’s going to understand The Sims and make reasonable choices in that system?
They’ve already created well tuned system to give your Sims in obtaining their needs. It leads to you having to manage the chaos, and that’s what the fun is. To better hone that is to have the AI play the game for you. And even that, if efficiency of play is the goal, is better done by TASbot and machine learning.
That generic black box style of AI like popular LLMS is like creating a hammer. Now everyone is treating every problem like nails. AI decisions making in games is like washing windows; don’t use a hammer.
The problem is that “AI” is a poorly defined, very vague, and widely used term. Most people here have assumed you meant LLMs because everyone pitches those as ways to solve everything. “Oh, irer up an agent, give it instructions, and let it make requests that are context dependent”. Then, like everyone says here, that usually turns into people testing boundaries and breaking your game. So that makes it both “not good enough” and “not cheap enough”.
Now, look at AI with the term “machine learning” in mind and it’s different. Games like ARC Raiders use machine learning to teach NPCs movement behavior, and to train AI voices like Siri so they can’t add things without further paying people. They think that up-front investment is worthwhile. But those are both far cries from “uploading it to Claude or ChatGPT and see what happens”. Especially when you would have to teach that black box AI your system anyway, for it to use it. And you’re already doing that with current “good enough, cheap enough” bespoke methods, for much cheaper, and they’re good enough.