I recall someone saying that a game designer has fourteen game ideas per day, and maybe one of them is any good. This is the story of one of the other thirteen. I made a partial prototype and an incomplete GUI mockup, but that's as far as I got before pulling the plug.
The premise was more or less inspired by Alien: one enemy, multiple player characters waiting to be ambushed. The view would've been a bit more like Space Hulk (Electronic Arts, 1993): first-person maze views for each player, but instead of walls occupying whole tiles, the walls sat on the borders between the tiles.
The Space Hulk incarnation I was referring to; my plans similarly had multiple characters' views open at once.
The player characters would've first scouted the maze in an attempt to find equipment (weapons) to attack the enemy before the enemy was set free in the maze. Then the players would've had to set up an ambush to defeat the enemy.
The player might not even have been told the enemy type before the round.
I don't like writing AI for enemies. I'm not good at it, particularly not when coding in assembler. For that reason, I wanted to try doing it differently.
Let's have one enemy versus several player characters in a maze. A bit like a team of adventurers (at first without weapons) going against the Minotaur. How should the enemy appear to behave? How would an enemy that is good at ambushing appear to behave?
To me, they would hide from the player, flank them, and pick them off one at a time. If the player plays well, they ambush the enemy before the enemy can ambush them.
Doing that would be something I'm not going to try implementing in ASM.
But then I thought about using probability distributions for this, basically using partially observable Markov models. Simultaneously simulate all possible enemies, weed out the worst-performing ones (the ones that wander into the player's sight), and the remainder would appear to be the best enemies. Whether thru guile or luck, it makes no difference. (Maybe?)
I'll get into the nitty-gritty details a bit later, but in short: the idea is to have a probability distribution over where the enemy can be, have the player's observations affect this distribution, and determine "who is better at ambushing" by how these distributions change: this would determine which side spots their opponent first.
Once the enemy was observed, their AI would change to a simpler one, such as moving randomly around until spotting the player, and then rushing at the player in an attempt to trample them.
Or if the enemy wasn't a minotaur but something else, the "simple" AI would do something else.
Remember, I'm doing this on an 8-bit computer, so there were many simplifications. I originally came up with this idea for modern computers, but I don't write games for them, do I?
Basically, I had a probability distribution defined over a three-dimensional space: X-coordinate, Y-coordinate, and direction (N/E/S/W); each of these triplets was called a state. The probability of each state was the probability of the enemy being in that tile, facing that direction. If the maze was an 8x8 grid and the cardinal directions were the available directions, the total state space would be 8*8*4=256 elements in size.
The enemy wouldn't be able to walk through walls or teleport around the maze. It wouldn't be likely to make a U-turn in a corridor either, and would prefer going straight ahead to making a turn. These preferences would define the transition probabilities: the probability of the enemy moving from state s1 to state s2 (assuming the enemy was in state s1).
Now, to update the idea of the enemy's position for the next "turn", the probability of the enemy being in state s2 would be the sum of the probabilities of arriving there from each other state s1, weighted by the probability of the enemy being in s1.
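That update step can be sketched in modern Python (purely an illustration, not the 8-bit code; the wall layout and the preference weights below are made up for the example):

```python
import numpy as np

W, H, DIRS = 8, 8, 4          # 8x8 maze, four facing directions (N/E/S/W)
DX = [0, 1, 0, -1]            # movement delta per direction
DY = [-1, 0, 1, 0]

# Hypothetical wall layout: walls[y, x, d] is True if a wall blocks
# leaving tile (x, y) in direction d. Here: only the outer border.
walls = np.zeros((H, W, DIRS), dtype=bool)
walls[0, :, 0] = True         # north border
walls[:, W - 1, 1] = True     # east border
walls[H - 1, :, 2] = True     # south border
walls[:, 0, 3] = True         # west border

def transitions(x, y, d):
    """Yield (state, weight) pairs for moves out of state (x, y, d).
    Going straight ahead is preferred, turning is less likely, and a
    U-turn is least likely; blocked moves are dropped entirely."""
    prefs = {d: 4.0, (d + 1) % 4: 1.0, (d + 3) % 4: 1.0, (d + 2) % 4: 0.25}
    moves = [((x + DX[nd], y + DY[nd], nd), w)
             for nd, w in prefs.items() if not walls[y, x, nd]]
    if not moves:                           # boxed in: turn in place
        moves = [((x, y, (d + 2) % 4), 1.0)]
    total = sum(w for _, w in moves)
    return [(s, w / total) for s, w in moves]

def predict(belief):
    """One 'turn' of the enemy: spread the belief along the transitions."""
    new = np.zeros_like(belief)
    for (y, x, d), p in np.ndenumerate(belief):
        if p > 0.0:
            for (nx, ny, nd), w in transitions(x, y, d):
                new[ny, nx, nd] += p * w
    return new

# Start with a uniform belief over all 8*8*4 = 256 states.
belief = np.full((H, W, DIRS), 1.0 / (W * H * DIRS))
belief = predict(belief)
print(belief.sum())    # probability mass is conserved (sums to ~1)
```

Since each state's outgoing weights sum to one, the total probability mass stays at one after every update, which is the invariant the dye-in-water analogy below depends on.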
Over time, the belief about the enemy's location would disperse. Imagine dropping a drop of dye into a glass of water: it spreads and gets diluted, but doesn't go past the walls of the glass. That's not unlike what would happen here.
And now we get to the partially observable part. I first heard of these models in relation to robot navigation. A robot keeps a probability distribution over its location, and if it spots a landmark, it can conclude it most likely isn't in places from which it couldn't see that landmark. If you see something like the Eiffel Tower, you're more likely to be in Paris or Las Vegas than in Rotherham.
Here, the model would be fed observations, and these would affect the belief of where the enemy was. If a player watches a corridor and the game determines it's too early for the player to see the enemy, the states in those corridors would have their probabilities set to 0. If the game determined it's too early for the enemy to ambush the player, the states from which the enemy would see the player would have their probabilities set to 0.
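A minimal sketch of that negative-observation update (the `visible` mask and the toy four-state belief are assumptions for the sake of the example):

```python
import numpy as np

def observe_negative(belief, visible):
    """The player looked and saw nothing: zero out every state the
    player can currently see, then renormalize the remaining mass.
    `visible` is a boolean array of the same shape as `belief`,
    True for states within the player's line of sight."""
    belief = np.where(visible, 0.0, belief)
    total = belief.sum()
    if total == 0.0:
        raise ValueError("belief collapsed: the enemy must be visible")
    return belief / total

# Toy example: four states, player sees the first two and nothing is there.
belief = np.array([0.25, 0.25, 0.25, 0.25])
seen = np.array([True, True, False, False])
print(observe_negative(belief, seen))   # [0.  0.  0.5 0.5]
```

The renormalization is what makes the remaining, unseen states grow more probable: the enemy "must be" somewhere the player hasn't looked yet.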
One way to sample a random element from a finite collection is inverse transform sampling. Draw a random number between 0 and 1 from the uniform distribution, then subtract from it the probability of each element one at a time; when the result dips below zero, the element that caused the dip is the sampled value from whatever distribution you have defined over the elements.
My plan was to keep such a random value between 0 and 1, and whenever the player could see a state with a nonzero probability of the enemy being in it, I would decrease that running value by the probability that was "removed" from the distribution; when it dipped below zero, the enemy was spotted in that tile, going in that direction. If the player was watching the only way out of a cul-de-sac where the enemy had spawned, this would happen sooner than if the player was staring at a wall.
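A toy sketch of that detection mechanic (the state names and removed probabilities here are invented, and the uniform draw stands in for whatever distribution the finished game would've used):

```python
import random

def make_detector(seed=None):
    """Draw the detection threshold once; each time probability mass
    is removed from the belief because the player saw those states,
    subtract it. When the running value dips below zero, the enemy is
    'spotted' in the state whose mass tipped it over."""
    rng = random.Random(seed)
    threshold = rng.random()       # uniform draw in [0, 1)
    def remove_mass(state, p):
        nonlocal threshold
        threshold -= p
        return state if threshold < 0.0 else None
    return remove_mass

# Toy run: states with their removed probabilities, in viewing order.
detect = make_detector(seed=1)
for state, p in [("A", 0.3), ("B", 0.3), ("C", 0.4)]:
    hit = detect(state, p)
    if hit is not None:
        print("enemy spotted in state", hit)
        break
```

An enemy spawned behind a watched chokepoint bleeds probability mass fast, so the threshold is crossed sooner, which is exactly the cul-de-sac behaviour described above.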
If the threshold for detecting the enemy was 70%, then the 70% of all possible enemies that were worst at hiding from the player were considered "too dumb or unlucky".
Obviously, the distribution from which the value between 0 and 1 was sampled (for both the player and the enemy) wouldn't have been uniform, but I ditched this plan before I got to think that through more deeply.
Once the enemy's exact position was determined this way, the game would shift to a very simple AI. Once the enemy was out of sight and the AI didn't have a still-ongoing plan (like ramming forward until hitting the next wall), the game would shift back to the probability distribution, re-initialized from the enemy's last tracked location.
While I never got the prototype to a playable state, I can infer something about how the game would've played.
I think you could argue that now the enemy's skill is nothing but luck, and I would agree that is a valid point.
I actually made a prototype for updating the probabilities. It had 256 states, each probability stored as a multiple of 1/256, with an optimized multiplication routine. And it worked fast enough that it would've been usable in a game.
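The fixed-point representation can be illustrated in Python (the original was assembly, and the repair below is my own assumption for the example, not necessarily the actual "quick and dirty fix" mentioned later):

```python
def redistribute(counts):
    """Fixed-point probabilities stored as multiples of 1/256: the
    whole distribution is a list of integer counts that must sum to
    exactly 256. Integer arithmetic loses counts to rounding; one
    crude repair (an assumption for illustration) is to hand the
    leftover mass back to the largest entry."""
    lost = 256 - sum(counts)
    if lost:
        counts[counts.index(max(counts))] += lost
    return counts

# Example: halving 85 counts with integer division drops one count.
counts = [85 // 2, 85 // 2, 171]    # 42 + 42 + 171 = 255, one count lost
counts = redistribute(counts)
print(sum(counts))                   # 256 again
```

The point of the fix is the invariant: if the counts drift away from summing to 256, probability mass leaks out of the model with every update.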
But that's all it did: it beeped when it was finished with updating the distributions, and kept doing that in an eternal loop.
I don't think this would've been a fun game. So much modern game design is about giving the player an edge so the game doesn't "seem unfair" to them: coyote time (the player can still jump even if they've run over the edge of a cliff), the enemy's first shot never hitting the player, and so on. This concept would've made it difficult to apply such crutches. At best, it would've allowed the player to hear when the enemy spotted them (like the alerted-guard effect from Metal Gear).
The second problem is that an 8x8 maze doesn't have much room for set pieces. The game would also devolve into the player spotting the ideal location for ambushing the enemy, which would make predesigned levels pointless unless there were many of them, because a maze with 64 tiles is a small one. With randomly generated mazes, there'd be little point in playing the game repeatedly either.
A maze I quickly drew in LibreOffice Calc as an example.
This idea wouldn't work well outside tile-based systems either, where characters don't move one tile at a time or turn at right angles. Or maybe the rules for modifying the enemy's state distribution could be adjusted to suit "partial" observations. Still, I doubt the idea could work in "modern" games.
I suppose this might also be applicable to modelling multiple enemies at the same time. At minimum, the memory used by the states would need to double with two enemies, and the update rules for the distribution would change if the enemies couldn't occupy the same location at the same time (i.e., if the distributions weren't independent).
And finally -- when the players finally encountered the enemy, I wasn't sure how to implement that encounter. The player has spent time finding gear, mapping out the maze, positioning themselves, and now that they're ready to take on the enemy... reducing it to a single launched arrow would be an anticlimax. There should also be some room for the player to fail, but fumbling with the commands in real time at a critical moment would make it punishingly difficult, while freezing time to give commands to the characters would, by contrast, make it too easy.
This last part was the final straw.
At first, I had an idea of the location actually being a spaceship, and instead of just attacking the players, the enemy could also sabotage ship systems. But since I ended up trying to make this an 8-bit game, I gave up on "arbitrary" maps that weren't 8x8x4=256 states in size.
I loved planning and prototyping this. The code I found for multiplying 16-bit and 8-bit numbers had a bug, so I had to fix it. When the multiplier used at most 3 bits, I could optimize the routine even further. To avoid losing probability mass to rounding errors, I made a quick and dirty fix, and even that brief moment of triumph felt great.
This is pretty much how my projects often go, actually. I'm more interested in overcoming the technical hurdles than in actually creating the content to finish the game.
Probably the best time I had with games that year, actually. And I don't remember what year that was!