One important aspect of A* is that it requires a specific destination. That's fine for giving specific orders to entities, but I want to avoid making a micro-management game. I'd rather have general commands like "I need people to hunt over here", "cut down lumber here", "build a wall around this area", "plough these plots and then plant cabbages"... Then each entity decides what to do, given several potential jobs to do, hazards to avoid, and their own set of skills, likes, dislikes, fears, etc.
From what I understand of AI, this kind of behavior boils down to the following:
- Determine what options there are to choose from.
- Calculate a "score" for each option.
- Choose the option with the best score.
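The steps above can be sketched in a few lines of Python. The `Option` fields and the scoring weights here are hypothetical placeholders for whatever attributes and math the game ends up using:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    reward: float   # how valuable the outcome is
    risk: float     # how dangerous the attempt is
    distance: int   # path length to the job site

def score(option, entity_skill=1.0):
    # Higher reward and skill raise the score; risk and distance lower it.
    # The 0.1 distance penalty is an arbitrary illustrative weight.
    return option.reward * entity_skill - option.risk - 0.1 * option.distance

def choose(options):
    return max(options, key=score)

jobs = [
    Option("hunt dragon", reward=10.0, risk=9.0, distance=12),
    Option("cut lumber", reward=3.0, risk=0.5, distance=4),
]
best = choose(jobs)  # the lone hunter picks lumber over the dragon
```

The interesting part, of course, is everything hiding inside `score` - that's where skills, likes, and fears would come in.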
One possible solution would be to divide the map into "sectors" of some sort, so that opposite sides of a wall are in different sectors, but then there's the problem of splitting, merging, and otherwise updating the sector table as the entities modify the environment (mine through the mountains, build walls, and so on).
The other possible solution is to have the decide-what-to-do function do a traversal of the area around the entity to find things of interest (probably using Dijkstra's algorithm), but that defeats the purpose of A*, since its whole point is to do an optimized traversal toward a specific destination (A* is really just a modification of Dijkstra's).
So at that point in my thinking, A* was out, and it seemed like Dijkstra's might work. Dijkstra's algorithm expands evenly outward from the start (effectively a breadth-first traversal when every step costs the same). It could be implemented to take a number representing how much depth to explore, then collect information about things found within the explored space. The entity could then take the list of potential things to do, along with the path to each thing. It would make a decision based on some math to score each option. If the chosen destination is adjacent to the entity's current location, do the task (attack, build, mine, sow, whatever), otherwise move along the calculated path.
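Here's a rough sketch of that depth-limited Dijkstra idea. The function names (`neighbors`, `cost`, `interesting`) are my own placeholders for whatever the map representation provides:

```python
import heapq

def bounded_dijkstra(start, neighbors, cost, max_depth, interesting):
    """Explore outward from `start` up to total cost `max_depth`, and
    return {label: (cost, path)} for every interesting thing found.
    `neighbors(node)` yields adjacent nodes, `cost(a, b)` is the move
    cost, and `interesting(node)` returns a label or None."""
    frontier = [(0, start, [start])]
    best = {start: 0}
    found = {}
    while frontier:
        dist, node, path = heapq.heappop(frontier)
        if dist > best.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was found already
        label = interesting(node)
        if label is not None and label not in found:
            found[label] = (dist, path)
        for nxt in neighbors(node):
            ndist = dist + cost(node, nxt)
            if ndist <= max_depth and ndist < best.get(nxt, float("inf")):
                best[nxt] = ndist
                heapq.heappush(frontier, (ndist, nxt, path + [nxt]))
    return found
```

Each entry in `found` carries both the cost and the path, which is exactly what the scoring step and the subsequent movement need.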
Let's say there are multiple things about an entity's environment that will influence its behavior. For example, assume that an entity can have a "hunter" skill. Further assume that there exists a hostile creature (say a dragon) that can be successfully hunted by a group, but is likely to defeat a lone hunter. How does a hunter know whether it's a good idea to pursue the dragon? There's a risk/reward balance to be struck here - certainly it would be a good thing to fill the storehouse with cured dragon steaks for the winter, but it would be a bad thing to be roasted by the dragon.
Using the traversal algorithm, the decision algorithm invoked for a hunter could determine that there are multiple other hunters in the area. It could do some math to determine how likely the hunt is to succeed, and could make a decision to advance or retreat (maybe throwing in the specific hunter's fear of dragons and taste for dragon steaks - fight or flight syndrome). The advance or retreat would further influence the other hunters' behavior.
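That "some math" could be almost anything; as one hypothetical shape for it, a logistic curve gives a success chance that rises with the size of the hunting party, and the hunter's personal traits shift the advance/retreat threshold. None of these numbers come from an actual design - they're just here to show the structure:

```python
import math

def hunt_success_chance(num_hunters, dragon_strength=5.0):
    # Logistic curve: roughly 50% when the party size matches the
    # dragon's strength, approaching 0 or 1 on either side.
    return 1.0 / (1.0 + math.exp(dragon_strength - num_hunters))

def decide(num_hunters, fear=0.5, appetite=1.0, threshold=0.5):
    # Fear of dragons lowers confidence; taste for dragon steaks raises it.
    confidence = hunt_success_chance(num_hunters) * appetite * (1.0 - fear)
    return "advance" if confidence >= threshold else "retreat"
```

A lone hunter retreats; a party of eight with a brave member advances - and, as noted above, each of those choices then feeds back into the other hunters' decisions.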
Ok, so far it's feasible. The advance just means taking a step toward the dragon, as previously determined by the path-finding algorithm.
But what about the retreat? The entity has a clear indication of how to advance, but not necessarily how to retreat. It would be nice to have an obvious way to determine how to get away.
Somewhere in this thought process, two things came to mind, both related to emergent behavior. Years ago I did an implementation of Boids in Python. Also, at some point in the past I read a blog post about how the pac-man ghosts' behavior was implemented (sorry, no link, I don't remember where it was, but it's been discussed quite a bit, google around for more info).
For more info about Boids, read the wiki article, but basically it is an algorithm that simulates a flock of birds through emergent behavior - each bird's movement is influenced by three factors: try to maintain general proximity to the center of the flock, try to match the average velocity of nearby birds, and try to avoid immediate proximity of other birds (avoid collisions). Giving each entity a few simple rules results in some really interesting overall behavior of the flock.
The discussion about implementing ghost behavior in pac-man centered around the concept of pac-man leaving behind a "scent" that dissipates over time. In pursuit mode, the ghosts would simply move to whatever adjacent location had the highest "scent", or make a random decision at intersections if there was no scent to follow. In retreat mode, they would do the opposite.
So this led me to think, what if every entity in the game has a "scent"? I think I would do it differently than the pac-man example - the scent would be calculated by traversing a certain distance away from each entity, periodically updated to account for changes in entity locations and how the environment can be traversed. Then, each entity can make immediate decisions based upon calculations made to the scents around them. In the hunter example, each hunter would observe the scents for each location adjacent to their own. Assume that a hunter finds that the dragon scent is strongest in front of him. If there is insufficient scent from other hunters, the math that scores the location will determine that it is not a compelling choice. Further, the most compelling adjacent location will be the one that leads further away from the dragon.
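A minimal sketch of that scent idea, assuming a simple linear falloff with path distance (the real falloff function, like everything else here, is up for grabs). Spreading via breadth-first traversal rather than straight-line distance means walls block scent naturally, which is the whole point of doing it this way:

```python
from collections import deque

def scent_field(sources, neighbors, max_range):
    """Spread 'scent' outward from each source by breadth-first traversal.
    `sources` maps each entity's location to its scent strength; strength
    falls off linearly with path distance. Contributions from multiple
    sources simply add together."""
    field = {}
    for src, strength in sources.items():
        seen = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            d = seen[node]
            field[node] = field.get(node, 0) + max(strength - d, 0)
            if d < max_range:
                for nxt in neighbors(node):
                    if nxt not in seen:
                        seen[nxt] = d + 1
                        queue.append(nxt)
    return field

def step_toward(pos, neighbors, field):
    # Pursue: move to the adjacent tile with the strongest scent.
    # Negate the key (or take min) to retreat instead - which answers
    # the "how do I get away?" question for free.
    return max(neighbors(pos), key=lambda n: field.get(n, 0))
```

In the full game each scent type (dragon, hunter, bunny, ...) would presumably be its own field, and the scoring math would combine them per entity.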
I think that the "scent" concept will work for a variety of things. People could enjoy being around each other, but avoid crowds. Somebody with skills as a miner and a blacksmith might have to decide between working on a project at the anvil or hollowing out a newly found coal vein. I imagine that there could be a situation in which one entity really doesn't like bunnies, but bunnies really like him, so in the absence of hunger or other such pressing needs, the bunnies end up chasing him around.
There is still plenty that needs to be worked out. The concepts of "likes", "fears", etc., and how those attributes influence the scoring logic will be interesting to work on.