Tuesday, July 10, 2012

Thoughts about game entity AI

I've been thinking a lot about how to implement the behavior of entities (people and creatures).  Many mainstream strategy games involve the player giving specific orders to everybody under their control, and the entities subsequently use an A* path-finding algorithm to navigate toward the area indicated until they are in range to do their assigned task.

One important aspect of A* is that it requires a specific destination.  That's fine for giving specific orders to entities, but I want to avoid making a micro-management game.  I'd rather have general commands like "I need people to hunt over here", "cut down lumber here", "build a wall around this area", "plough these plots and then plant cabbages"...  Then each entity decides what to do, given several potential jobs to do, hazards to avoid, and their own set of skills, likes, dislikes, fears, etc.

From what I understand of AI, this kind of behavior boils down to the following:
  1. Determine what options there are to choose from.
  2. Calculate a "score" for each option.
  3. Choose the option with the best score.
If I were to use A* path-finding, the entities would have to first choose a destination.  This could be done by looking up choices in close proximity, but that would exhibit what I call the "great wall" problem - An entity may find something interesting to do within a distance of just a few steps, and decide that it is the most compelling goal.  However, if there is a large barrier in between (a wall, a deep river, etc.), it will potentially take the entity much longer to get there and finish the task than something else that is farther away.
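The three-step loop above is easy to sketch. Here's a minimal version in Python, with a made-up job list and an arbitrary scoring function (value minus a travel-cost penalty) just for illustration:

```python
def choose_option(options, score):
    """Return the option with the best (highest) score, or None if empty."""
    best, best_score = None, float("-inf")
    for option in options:
        s = score(option)
        if s > best_score:
            best, best_score = option, s
    return best

# Hypothetical jobs an entity might consider; values and distances are made up.
jobs = [
    {"task": "hunt",  "value": 8, "distance": 12},
    {"task": "mine",  "value": 5, "distance": 2},
    {"task": "build", "value": 6, "distance": 30},
]

# Score = intrinsic value minus a penalty for travel distance.
best = choose_option(jobs, lambda j: j["value"] - 0.5 * j["distance"])
```

With these particular weights the nearby mining job wins out over the more valuable but distant hunt.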

One possible solution would be to divide the map into "sectors" of some sort, so that opposite sides of a wall are in different sectors, but then there's the problem of splitting, merging, and otherwise updating the sector table as the entities modify the environment (mine through the mountains, build walls, and so on).

The other possible solution is to have the decide-what-to-do function do a traversal of the area around the entity to find things of interest (probably using Dijkstra's algorithm), but that defeats the purpose of A*, since its whole point is to do an optimized traversal toward a specific destination (A* is really just a modification of Dijkstra's).

So at that point in my thinking, A* was out, and it seemed like Dijkstra's might work.  Dijkstra's algorithm does an even breadth-first traversal.  It could be implemented to take a number representing how much depth to explore, then collect information about things found within the explored space.  The entity could then take the list of potential things to do, along with the path to each thing.  It would make a decision based on some math to score each option.  If the chosen destination is adjacent to the entity's current location, do the task (attack, build, mine, sow, whatever), otherwise move along the calculated path.
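A depth-bounded Dijkstra traversal like that might look something like this sketch (the grid representation, uniform 4-way movement, and cost model are just assumptions for illustration):

```python
import heapq

def explore(grid, start, max_cost):
    """Bounded Dijkstra from `start`.

    `grid` maps (x, y) cells to a movement cost; missing cells are
    impassable.  Returns {cell: (cost, path)} for every cell reachable
    within `max_cost`, so the caller gets both the things found and the
    path to each one."""
    found = {start: (0, [start])}
    frontier = [(0, start, [start])]
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cost > found[cell][0]:
            continue  # stale queue entry; a cheaper path was already found
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            step = grid.get(nxt)
            if step is None:
                continue  # off the map or impassable
            ncost = cost + step
            if ncost <= max_cost and (nxt not in found or ncost < found[nxt][0]):
                found[nxt] = (ncost, path + [nxt])
                heapq.heappush(frontier, (ncost, nxt, path + [nxt]))
    return found

# 3x3 grid, cost 1 per step, with an impassable cell in the middle.
grid = {(x, y): 1 for x in range(3) for y in range(3)}
del grid[(1, 1)]
found = explore(grid, (0, 0), max_cost=4)
```

The entity would then run its scoring math over `found` and either act (if the winner is adjacent) or take the first step of the stored path.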

Let's say there are multiple things about an entity's environment that will influence its behavior.  For example, assume that an entity can have a "hunter" skill.  Further assume that there exists a hostile creature (say a dragon) that can be successfully hunted by a group, but is likely to defeat a lone hunter.  How does a hunter know whether it's a good idea to pursue the dragon?  There's a risk/reward balance to be struck here - certainly it would be a good thing to fill the storehouse with cured dragon steaks for the winter, but it would be a bad thing to be roasted by the dragon.

Using the traversal algorithm, the decision algorithm invoked for a hunter could determine that there are multiple other hunters in the area.  It could do some math to determine how likely the hunt is to succeed, and could make a decision to advance or retreat (maybe throwing in the specific hunter's fear of dragons and taste for dragon steaks - fight or flight syndrome).  The advance or retreat would further influence the other hunters' behavior.
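As a rough sketch, the advance-or-retreat math might look like this - all the weights, threat numbers, and the fear/appetite knobs are hypothetical placeholders, not a real design:

```python
def hunt_score(dragon_threat, hunters_nearby, hunter_strength,
               fear=1.0, appetite=1.0):
    """Positive score -> advance, negative -> retreat.

    Risk only kicks in when the dragon's threat exceeds the combined
    strength of the hunters in the area.  `fear` and `appetite` are the
    per-hunter personality knobs (fight or flight)."""
    group_strength = hunters_nearby * hunter_strength
    reward = appetite * 10.0  # cured dragon steaks for the winter
    risk = fear * max(0.0, dragon_threat - group_strength)
    return reward - risk

# A lone hunter should back off; a group of five should press the attack.
lone  = hunt_score(dragon_threat=20, hunters_nearby=1, hunter_strength=4)
group = hunt_score(dragon_threat=20, hunters_nearby=5, hunter_strength=4)
```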

Ok, so far it's feasible.  The advance just means taking a step toward the dragon, as previously determined by the path-finding algorithm.

But what about the retreat?  The entity has a clear indication of how to advance, but not necessarily how to retreat.  It would be nice to have an obvious way to determine how to get away.

Somewhere in this thought process, two things came to mind, both related to emergent behavior.  Years ago I did an implementation of Boids in Python.  Also, at some point in the past I read a blog post about how the pac-man ghosts' behavior was implemented (sorry, no link, I don't remember where it was, but it's been discussed quite a bit, google around for more info).

For more info about Boids, read the wiki article, but basically it is an algorithm that simulates a flock of birds by emergent behavior - Each bird's movement is influenced by three factors:  try to maintain general proximity to the center of the flock, try to match the average velocity of nearby birds, and try to avoid immediate proximity of other birds (avoid collisions).  Giving each entity a few simple rules results in some really interesting overall behavior of the flock.
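A bare-bones sketch of those three rules, in 2-D with arbitrary weights and a crude "too close" test (nothing here is tuned; it just shows the shape of the update):

```python
def boid_step(me, neighbors, w_cohere=0.01, w_align=0.1, w_avoid=0.05):
    """One Boids update.  `me` is (pos, vel); `neighbors` is a list of
    (pos, vel) pairs; positions and velocities are (x, y) tuples."""
    (x, y), (vx, vy) = me
    if not neighbors:
        return (x + vx, y + vy), (vx, vy)
    n = len(neighbors)
    # Rule 1: steer toward the center of the flock.
    cx = sum(p[0] for p, _ in neighbors) / n
    cy = sum(p[1] for p, _ in neighbors) / n
    # Rule 2: match the average velocity of nearby birds.
    ax = sum(v[0] for _, v in neighbors) / n
    ay = sum(v[1] for _, v in neighbors) / n
    # Rule 3: push away from birds that are too close (arbitrary cutoff of 5).
    sx = sum(x - p[0] for p, _ in neighbors if abs(x - p[0]) + abs(y - p[1]) < 5)
    sy = sum(y - p[1] for p, _ in neighbors if abs(x - p[0]) + abs(y - p[1]) < 5)
    vx += w_cohere * (cx - x) + w_align * (ax - vx) + w_avoid * sx
    vy += w_cohere * (cy - y) + w_align * (ay - vy) + w_avoid * sy
    return (x + vx, y + vy), (vx, vy)

# A boid sitting still to the left of the flock should start drifting right.
pos, vel = boid_step(((0.0, 0.0), (0.0, 0.0)),
                     [((10.0, 0.0), (0.0, 0.0)), ((10.0, 2.0), (0.0, 0.0))])
```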

The discussion about implementing ghost behavior in pac-man centered around the concept of pac-man leaving behind a "scent" that dissipates over time.  In pursuit mode, the ghosts would simply move to whatever adjacent location had the highest "scent", or make a random decision at intersections if there was no scent to follow.  In retreat mode, they would do the opposite.
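That scent-following move is only a few lines. Here's a sketch assuming the scent field is a dict from cells to values, with missing cells treated as walls (the scent values in the example are made up):

```python
def ghost_move(pos, scent, retreat=False):
    """Step to the adjacent cell with the highest scent, or the lowest
    when retreating.  Cells absent from `scent` are walls."""
    x, y = pos
    options = [(x + dx, y + dy)
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if (x + dx, y + dy) in scent]
    if not options:
        return pos  # boxed in; stay put
    pick = min if retreat else max
    return pick(options, key=lambda c: scent[c])

# Made-up scent values around the ghost at (0, 0); (0, -1) is a wall.
scent = {(1, 0): 3, (0, 1): 7, (-1, 0): 1}
```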

So this led me to think: what if every entity in the game had a "scent"?  I think I would do it differently than the pac-man example - the scent would be calculated by traversing a certain distance away from each entity, periodically updated to account for changes in entity locations and how the environment can be traversed.  Then, each entity can make immediate decisions based upon calculations made to the scents around them.  In the hunter example, each hunter would observe the scents for each location adjacent to their own.  Assume that a hunter finds that the dragon scent is strongest in front of him.  If there is insufficient scent from other hunters, the math that scores the location will determine that it is not a compelling choice.  Further, the most compelling adjacent location will be the one that leads further away from the dragon.
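A first cut at computing such a scent field might be a bounded breadth-first spread from each entity over the passable cells, with the scent decaying by path distance - so scent naturally flows around walls rather than through them. The linear decay here is just a placeholder:

```python
from collections import deque

def scent_map(grid, sources, radius):
    """Spread scent from each entity over passable cells.

    `grid` is a set of passable (x, y) cells; `sources` maps an entity's
    cell to its scent strength.  Scent decays by 1 per step of path
    distance (a placeholder decay model) and sums where fields overlap."""
    total = {}
    for origin, strength in sources.items():
        seen = {origin: 0}
        queue = deque([origin])
        while queue:
            cell = queue.popleft()
            d = seen[cell]
            total[cell] = total.get(cell, 0) + (strength - d)
            if d == radius:
                continue  # reached the traversal bound
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt in grid and nxt not in seen:
                    seen[nxt] = d + 1
                    queue.append(nxt)
    return total

# A 5-cell corridor with one entity of strength 10 at the west end.
grid = {(x, 0) for x in range(5)}
smells = scent_map(grid, {(0, 0): 10}, radius=2)
```

Because the traversal respects the map, a dragon on the far side of a wall contributes no scent to the near side - which is exactly the "great wall" fix.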

I think that the "scent" concept will work for a variety of things.  People could enjoy being around each other, but avoid crowds.  Somebody with skills as a miner and a blacksmith might have to decide between working on a project at the anvil or hollowing out a newfound coal vein.  I imagine that there could be a situation in which one entity really doesn't like bunnies, but bunnies really like him, so in the absence of hunger or other such pressing needs, the bunnies end up chasing him around.

There is still plenty that needs to be worked out.  The concepts of "likes", "fears", etc., and how those attributes influence the scoring logic will be interesting to work on.

7 comments:

  1. Have you seen this?

    http://roguebasin.roguelikedevelopment.org/index.php/The_Incredible_Power_of_Dijkstra_Maps

    ReplyDelete
    Replies
    1. Thanks for sending that link. No, I hadn't read that before, but it's exactly what I'm thinking about.

      Delete
  2. If I remember correctly, the "scent" idea is how one component of The Sims works but I forget the details. Check out "collaborative diffusion".

    Another approach is "affordances" where an NPC has a need and the objects that satisfy it broadcast a message that they can meet the need. (This is also in The Sims). So, you might have lumberjacks who have a "chop wood" need and trees that broadcast "choppable wood". Having a current location and a goal, you can revert to A*.

    ReplyDelete
    Replies
    1. Ah ha! That's where I read the pac-man example: http://scalablegamedesign.cs.colorado.edu/wiki/Collaborative_Diffusion

      Thanks a bunch, it's been years since I read that, and I'd forgotten the term.

      I'm not sure I follow the "affordance" approach. If there are multiple choppable trees, how does the lumberjack decide upon which one is the goal in order to run A*?

      Delete
    2. How do you decide which tree to chop in real life? You see a forest and decide, "I will go to whichever tree is nearest to me in a straight line." On your way (for example, while avoiding some kind of obstacle), you may see another tree that is now much nearer than the previous one, and you decide, "OK, I will take that one."

      Delete
    3. * Seeing a forest - what does this entail? How does the entity differentiate between a small stand of trees in one direction and a large group of trees in the opposite? How does the entity know that there's a reasonable path to get to the forest?

      * Go to the nearest tree - sure, once you have a list of potential target trees, you can just pick the one with the lowest linear distance, but there may not be a reasonable way to get to it.

      * Changing destination to another tree that ends up being closer - It's possible that the path-finding algorithm will lead nearer to another tree, but it's also possible that it will take you farther away from other trees. Imagine a situation in which the person sees a tree across the river - just a few meters away. There are several trees just 20 meters north. The bridge is 100 meters to the south, so the pathfinding algorithm takes him that direction, away from a potentially better choice.

      I think that the collaborative diffusion approach solves these problems.

      Delete
  3. A* is often used to path to a specific destination, but it doesn't require that. It only needs a lower bound and an end-test. In the case of searching for a tree to cut, the lower bound is the straight-line distance to the nearest tree, and the end-test is reaching *any* tree. For fleeing from danger, the lower bound is fleeing in a straight line, i.e. (- safe-distance (min-key #(distance % self) enemies)), and the end-test is being a safe distance from every enemy.

    A* also easily handles unusual moves like attacking an obstacle, and multiple end conditions like being either close enough to attack *or* far enough away to be safe. (The pathing algorithm can choose between fight and flight!)

    If you have a threatmap, you can add that to the cost function to make pathing automatically avoid danger.

    A threatmap also provides an easy way to take the presence of allies into account when deciding whether to attack: attack a target only if the total friendly threat is greater than the total enemy threat (possibly times a caution factor). This makes reasonably intelligent behavior without much work. (You can handle courage/fear as an adjustment to the caution factor, e.g. the frightened hunter won't attack without a 3:1 advantage.)

    ReplyDelete