LearnBot and Code Reflection

I've started on the AI that will learn to play a game running in TankEngine.  I've cleverly named it LearnBot, and through the magic of code reflection I've made some progress toward letting LearnBot scan the object it is supposed to monitor and learn what it can do and what variables it has.  The code itself took a couple of iterations to get working, but here is the discover() function that gathers and compiles the data.  Note that the monitor object referenced in the code is the object that LearnBot is supposed to learn about.

	/**
	 * Discovers variables and functions of the attached object.
	 */
	public function discover() {
		//Array of strings to hold all the fields we discover.
		discoveries = Type.getInstanceFields(Type.getClass(monitor));
		//Output the name of the class we are examining.
		trace(Type.getClassName(Type.getClass(monitor)));
		//Create some temporary arrays for functions, variables, and objects (f, v, and os).
		var f:Array<String> = new Array<String>();
		var v:Array<String> = new Array<String>();
		var os:Array<String> = new Array<String>();
		//Loop through each of our discoveries.
		for (i in 0...discoveries.length) {
			//Is it a function?  If so, add it to the function array.
			if (Reflect.isFunction(Reflect.field(monitor, discoveries[i]))) {
				f.push(discoveries[i]);
			//Is it an object?  If so, add it to the object array.
			} else if (Reflect.isObject(Reflect.field(monitor, discoveries[i]))) {
				os.push(discoveries[i]);
			//Otherwise, it is a variable.  Add it to the variable array.
			} else {
				v.push(discoveries[i]);
			}
		}
		//Print some output.
		for (i in 0...f.length) {
			trace(f[i] + " is a function.");
		}
		for (i in 0...os.length) {
			trace(os[i] + " is an object.");
		}
		for (i in 0...v.length) {
			trace(v[i] + " is a variable of type " + Type.typeof(Reflect.field(monitor, v[i])));
		}
	}

Can the computer learn the rules to a game?

Now that my cleverly named TankEngine is in a state where I can start to mess around with it, I have to approach the tricky topic of how I am going to write an AI that can learn the rules of the game it is playing.  I'm working off the top of my head here and I'm not sure what will work and what won't, but I think I have an approach that might.  

Rewards-based training

My wife is a licensed dog trainer, and one of the big pushes during her courses was a rewards-based training system.  The thinking goes that positive reinforcement and encouragement of good traits will lead to a better-behaved and more balanced dog in the long run than only telling the dog what not to do.  My thought is that if I can assign rewards to certain behaviors (destroying another tank gets a reward, finishing a game without being destroyed gets a reward, etc.), I can encourage those behaviors.  I also need the AI to have aversions to certain actions (getting hit by a bullet, shooting at nothing, etc.).  I don't want to outright forbid anything, though, because maybe it makes sense for a tank to take a bullet if it means destroying the opponent.  
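As a rough sketch of what that might look like, here is a hypothetical reward table.  The event names and values are placeholders I made up for illustration, not anything LearnBot actually uses yet:

```haxe
//Hypothetical reward table: positive values encourage a behavior,
//negative values create an aversion without forbidding it outright.
class Rewards {
	static var table:Map<String, Float> = [
		"destroyedTank"  => 10.0, //big reward for a kill
		"survivedGame"   =>  5.0, //reward for finishing alive
		"hitByBullet"    => -3.0, //an aversion, but sometimes worth it
		"firedAtNothing" => -0.5  //mild aversion to wasted shots
	];

	//Total reward for a game is just the sum over recorded events.
	public static function score(events:Array<String>):Float {
		var total = 0.0;
		for (e in events) {
			if (table.exists(e)) total += table[e];
		}
		return total;
	}
}
```

Because the penalty for taking a bullet is smaller than the reward for a kill, trading a hit for a destroyed opponent still comes out positive, which is exactly the behavior I don't want to rule out.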

The next thing the AI will need to do is figure out how to play the game.  My AI will have a few different states it can run in.  First is Discovery mode, where the AI just experiments and gathers data: it runs the functions it is allowed to and records all the variables that are visible.  Next is the Analyze state, which happens after the game is over.  It looks over the collected data and tries to find relationships, which should let the AI learn things like "when I press the up key, the x and y locations change based on the angle."  The result of the Analyze phase should be a list of rules.  After that comes the Test phase, in which the AI checks the rules it has learned to see how accurate they are.  

The final state of the AI will be Run, which will use the rules that it has created to try to achieve the most rewards.  
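Sketched as code, the four phases might look something like this.  This is only a sketch of the idea; the names are my working labels and the real implementation may split things up differently:

```haxe
//The four phases LearnBot cycles through.
enum LearnState {
	Discovery; //experiment, run allowed functions, record variables
	Analyze;   //after the game: mine the recorded data for rules
	Test;      //check the learned rules for accuracy
	Run;       //use the rules to chase the most rewards
}

class LearnBot {
	public var state:LearnState = Discovery;

	public function new() {}

	//Advance to the next phase once the current one finishes.
	public function nextPhase():Void {
		state = switch (state) {
			case Discovery: Analyze;
			case Analyze:   Test;
			case Test:      Run;
			case Run:       Discovery; //keep learning between runs
		}
	}
}
```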

I have no idea if this approach is going to work, but I think the theory is sound.  

How will the AI know what it can do?

My goal is to train the AI with the smallest amount of human interaction possible, so I am going to use code reflection to let the AI discover what it can and cannot do.  Code reflection is a powerful tool that allows an object to examine its own code.  Haxe has a Reflect class that can be used to learn about field names and object types.  My plan is to have the AI first look at all the variables and functions it has access to.  Then it will present that list to a human, who tells the AI what it is allowed to use and modify.  Without this step we would get strange results, like the AI simply changing its own score variable to collect the rewards.  
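A minimal sketch of that approval step, assuming discover() has already filled the discoveries array; the allowed list here is hypothetical and would come from the human reviewer:

```haxe
//Hypothetical allow-list supplied by a human after reviewing the
//AI's discoveries.  Note that "score" is deliberately absent.
var allowed = ["moveForward", "rotateCW", "rotateCCW", "fire", "x", "y"];

//Keep only the discovered fields the human has approved.
var usable = discoveries.filter(function(field) return allowed.indexOf(field) != -1);
```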


Started TankEngine


I've put in some work on the new engine I'm writing for testing and training AIs.  So far it isn't much to look at, but the important thing is that I met most of my design goals.  

This is a display of the data that I gathered.  To mess around with it, use the following keys:

W - Play

S - Stop

A - Rewind

D - Forward

Also not shown is the ability for all the tanks to shoot bullets and destroy one another.  If you're interested in the poorly documented source, it is here.

How it works

The actual heavy lifting is done by the game class.  The MainMenuState just creates a game, creates some tanks with locations and facings, and then starts it running.  Each time step() is called on the game class, it sees what each tank wants to do that round and then calculates the new positions of all the objects.  Then it resolves contacts, destroys anything that needs destroying, and checks whether the victory condition has been met (in this case, all but one tank destroyed).  After the round is completed it creates an object called RoundInfo that stores all the information for the round.  I repeatedly call game.step() until either I hit my max rounds (contained in Reg.ROUND_LIMIT) or the game reports it is over.  
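Put together, that loop looks roughly like this.  isOver() and lastRoundInfo() are my guesses at the accessors for this sketch, not necessarily the real method names:

```haxe
//Run the game headlessly until someone wins or we hit the round cap.
var game = new Game();
var history:Array<RoundInfo> = [];
var round = 0;
while (round < Reg.ROUND_LIMIT && !game.isOver()) {
	//Each tank acts, positions update, contacts resolve,
	//and the victory condition is checked inside step().
	game.step();
	history.push(game.lastRoundInfo());
	round++;
}
```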

Now I have a game object containing an array of RoundInfo objects with the positions and game state for each round.  It is all just numerical data and not very interesting, so I needed a DisplayGame object to take my raw data and turn it into something graphical that I can look at.  My DisplayGame object has a single static function called displayGame().  If you supply it with the game and the round number, it returns an FlxSprite with a graphical representation of that round.  A couple of simple lines of code in the MainMenuState let you loop through these returned objects, and we are good to go.
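Using it looks something like this from inside a state class; a sketch, assuming displayGame() works as described above:

```haxe
//Ask DisplayGame for a graphical snapshot of one round and draw it.
var currentRound = 0;
var sprite:FlxSprite = DisplayGame.displayGame(game, currentRound);
add(sprite); //add() puts the sprite on screen inside a FlxState
```

Stepping currentRound up or down and swapping the sprite is what the W/S/A/D playback keys boil down to.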

AI Test engine

My new question to myself is: can I write an AI that can play a game without actually knowing the rules or how to play?  The first step will be to write a simple game for the computer to try to learn.  To do that, I need a game engine...

The engine that I need has some special requirements.  I looked at some existing engines but most of them are more complicated than I need (or actually want), or are in a style that I don't want to use.  Here's my short list of requirements that I'm sure will change as I think about it more and run into problems...  


  • Simple game with minimal calculations needed - I may need to run thousands of games if I want to train with a genetic algorithm or something, so small savings will add up to large amounts of time over thousands of games
  • Run without visuals - I don't want to watch every game as it is being played, because there might be thousands.  Plus, visuals take processing power, so this goes along with the first requirement
  • Ability to record and watch games at a later date - I do want to have the ability to watch a game that has already been calculated so I can see how the AI is performing
  • Ability to play a game against the AI - I want to be able to play the game myself against some AI opponents that I have written
  • The inputs for the game entities should be abstracted so there is a unified way for each AI to interact with the objects.  This will allow me to plug a different AI module in and have the tank still work.  So a game could have multiple AIs and human players at the same time

Nice to haves

  • The engine should be able to randomly generate new maps the game will be played on
  • The engine should be easily expandable so new rules and modes can be added
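The abstraction requirement above suggests something like a controller interface that AI modules and human input both implement.  The names here are hypothetical, and TankAction stands in for whatever type ends up listing the tank's abilities:

```haxe
//Any brain for a tank -- scripted AI, LearnBot, or keyboard input --
//implements this interface, so they are interchangeable in a game.
interface TankController {
	//Called once per round; returns the action the tank takes.
	public function decide(info:RoundInfo):TankAction;
}
```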

A couple of good examples of the kind of engine I'm looking for are MarioAI (a great site I'd highly recommend checking out) and MIT's Battlecode.  I don't want to use these engines for a variety of reasons: I don't want to do a side-scroller, and the Battlecode engine isn't as easy to customize as I'd like.  So I've decided to write my own.

The game I've settled on is a simple game of Tanks.  Multiple tanks will move around a map and shoot at each other until they blow all the other tanks up.  Maybe later on I'll give them swivel turrets and have teams, but to start each tank will just fire in the direction it is facing.

Fear the tank!

The main game entity will be a tank, which will have a location and a facing.  The tank will have 5 main abilities: it can rotate clockwise, rotate counterclockwise, move forward, move backward at a reduced rate, or fire.  This way, whatever AI I end up writing or using will output one or more of these 5 possible behaviors.  Later on I might have to add more (a swivel turret, for instance), but to see if this is even feasible I'll start with just these 5.
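Those 5 abilities map naturally onto an enum, which could serve as the unified output type for any AI.  A sketch:

```haxe
//The five things a tank can do in a round.
enum TankAction {
	RotateCW;     //rotate clockwise
	RotateCCW;    //rotate counterclockwise
	MoveForward;
	MoveBackward; //at a reduced rate
	Fire;
}
```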

Tanks can also shoot bullets.  These will travel in a straight line for some duration of time before disappearing.  If they hit a tank, it dies (the tank, I mean.  But actually, the bullet also).

Actually, don't fear the tank.  Fear the bullet!

Pretty basic, but it should get the job done.