Making a good algo

While I am asking a question here, I hope this becomes a topic that everyone can use as a point of reference.

It seems, as multiple people have explained, that machine learning is off the table due to heavy time constraints and a lack of situations to learn from. So I was wondering what people recommend for producing a good algo. In my recent experience, any change I make results in an algo worse than the starter, so what would you recommend?


Even though it is a very long shot, I will still give a machine learning approach a try. We will see how it goes, but if they manage to produce a faster game engine one day, it would be very nice.

For your last question, I didn't do much: I made my first algo following my intuition. Then all I did was watch its matches and patch the flaws. It is true that sometimes it got worse, but in the end the last version I published was much better than the first one.
But I also feel that a better approach should exist, so if someone has a good idea, I will gladly hear it :wink:

I was wondering, did you create a defensive structure or have your algo automatically make one?

I designed it, if that's your question. The algo simply follows my instructions on which parts to construct and when.

I personally found tinkering in the browser play-by-hand version way too slow for analyzing strategies. I recommend coding it out and then testing designs that way.

The current starter-algo has a static mapping that it is always trying to complete (the C1 diagram) and then expands beyond that if it can. You could do something similar by trying out different designs that are effective and then getting your bot to match whatever design you created (I believe this is what @arnby's solution does).

The above strategy has several problems:

  1. The algo is static and does not react to changes in the opponent's attack strategy
  2. The potential to improve one's defense after destruction is limited - rather than saying 'oh, this was destroyed and I can get a better defense by adding over here', it is constantly trying to reach the same state. In other words, a destroyed defense is almost always treated as objectively bad, rather than as an opening for new strategies
  3. It is limited to the number of strategies you can test, which is time-consuming and not conclusive at all

These are just a few issues off the top of my head. All that being said, a static algo is probably the best first step for understanding the game and getting a feel for possible strategies.

Personally, I imagine a smart algo would constantly be analyzing two main things about the enemy: where the enemy is targeting and where the enemy's weakest points are. Then, build a defense that tries to defend the targeted areas while placing reinforcing units on the path to where the enemy is weakest. I'll leave it up to the reader to decide the best way to determine this information :smiley:.
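To make the "weakest points" half of that concrete, here is a rough sketch of how one might rank enemy edge locations by how many destructors cover them. It assumes the starter-kit gamelib helpers get_edge_locations() and get_attackers() behave the way I remember; check your gamelib version before relying on the exact names.

def least_defended_targets(game_state, num_targets=3):
    """Return the enemy edge locations covered by the fewest destructors."""
    game_map = game_state.game_map
    edges = (game_map.get_edge_locations(game_map.TOP_LEFT)
             + game_map.get_edge_locations(game_map.TOP_RIGHT))
    # Fewer destructors able to hit one of your units standing there
    # means a weaker point in the enemy's defense.
    scored = [(len(game_state.get_attackers(loc, 0)), loc) for loc in edges]
    scored.sort(key=lambda pair: pair[0])
    return [loc for _, loc in scored[:num_targets]]

You could call this from on_turn and aim your attacks toward the returned locations; tracking where the enemy is targeting you (the other half) is harder, since you have to record that information yourself across turns.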

Regarding machine learning, you should also remember that machine learning does not have to be applied to everything all at once. I agree that running a training set on the current game engine is probably too slow (maybe offset by parallelism, haven't tried it yet), but one solution could be to create mini-games yourself to train mini AIs for different parts of the game. No idea if this would work in practice, but it may be worth a shot.
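On the parallelism point, each local match is an independent process, so a plain process pool already lets several run at once. This is only a sketch: run_match is a placeholder for however you launch the engine locally (for example the script shipped with the starter kit), not a real command.

import concurrent.futures
import subprocess

def run_match(engine_cmd):
    # Placeholder: launch one local match with whatever command your
    # setup uses, and hand back its output for later analysis.
    return subprocess.run(engine_cmd, capture_output=True, text=True).stdout

def run_matches(engine_cmd, num_matches=8, workers=4):
    # Matches do not depend on each other, so run several at a time.
    with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_match, engine_cmd) for _ in range(num_matches)]
        return [f.result() for f in futures]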


I don't think it is correct to say that "machine learning is off the table". Most of the computational burden associated with machine learning comes at the learning stage, not during evaluation, so a server limit on your computing resources isn't necessarily that restrictive.

You probably aren't going to be able to train and serialize a giant neural network that takes as input the entire raw game state and outputs an optimal action, but there are more mundane applications.
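To illustrate the "train offline, evaluate cheaply online" split, here is a hedged sketch: the feature matrix, target, and file name are all hypothetical, and the model is just ordinary least squares, but the in-game part reduces to loading a handful of coefficients and taking a dot product.

import json
import numpy as np

# Offline, in your sandbox: fit a tiny model on features you logged
# from replays (X) against whatever you want to predict (y).
def train_offline(X, y, path="weights.json"):
    X = np.column_stack([np.ones(len(X)), X])   # prepend a bias column
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares
    with open(path, "w") as f:
        json.dump(w.tolist(), f)

# Online, inside the algo: evaluation is just a dot product.
def load_weights(path="weights.json"):
    with open(path) as f:
        return json.load(f)

def predict(weights, features):
    return weights[0] + sum(w * x for w, x in zip(weights[1:], features))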

Here is one idea applied to deciding whether or not to sell a tower: for each tower you control, store the history of its stability, and on each turn fit a simple curve to this history. Extrapolate the curve to determine how many turns away the tower is likely to be destroyed. If it is likely to be destroyed within 2 turns, sell the tower. This is a simple "machine learning" application that can be applied at each turn of the game, and can also be tuned in an offline sandbox.
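As a deliberately simple version of that curve fit, one could do a straight-line fit over the recent history and solve for where it crosses zero. This is my own sketch of such a helper, not the original poster's code; it could also serve as the extrapolate_stability function used in the skeleton further down.

import numpy as np

def extrapolate_stability(history, fit_window=5):
    """Estimate how many turns until a tower's stability reaches zero.

    history is a list of stability values, one per turn, most recent last.
    """
    if len(history) < 2:
        return float("inf")                 # not enough data for a trend

    recent = history[-fit_window:]
    slope, intercept = np.polyfit(np.arange(len(recent)), recent, 1)

    if slope >= 0:
        return float("inf")                 # holding steady or recovering

    # Solve slope * t + intercept = 0, then count turns past the last sample.
    return -intercept / slope - (len(recent) - 1)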


How would one go about storing the stability data each turn? I attempted to make something that would store each enemy EMP's movements in order to predict an EMP's movement later in the game, but I was only able to store the last turn's worth of movement before it reset itself.

You can save individual pieces of data between turns:

def __init__(self):
	super().__init__()
	random.seed()

	self.prevHP = 20

def on_turn(self, turn_state):
	# Rebuild the game state object from the serialized turn string
	game_state = gamelib.GameState(self.config, turn_state)

	print("Last round's HP was:")
	print(self.prevHP)

	# Your code here

	# Save the current HP so it is available next round
	self.prevHP = game_state.my_health

I use this in my code, and it works.


You can also theoretically store complete game states similarly:

def __init__(self):
	super().__init__()
	random.seed()
	
	self.prevGame_State = None

def on_turn(self, turn_state):
	game_state = gamelib.GameState(self.config, turn_state)

	if self.prevGame_State is not None:  # make sure this variable already contains a game state
		print("Last round's HP was:")
		print(self.prevGame_State.my_health)
	else:
		print("There is no previous game state yet")
	
	#Your code here
	
	#Pass the current reference of game_state into the previous gamestate var
	self.prevGame_State = game_state

You can store each turn's game state in an array if you want, but it will get big fast.
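If full game states turn out to be too heavy, a middle ground (a sketch in the same style as the snippets above, assuming the same gamelib attributes) is to keep only a bounded history of small per-turn summaries:

from collections import deque

def __init__(self):
	super().__init__()
	# Keep only the last few turns, and only the numbers you care about.
	self.history = deque(maxlen=10)

def on_turn(self, turn_state):
	game_state = gamelib.GameState(self.config, turn_state)

	self.history.append({
		"turn": game_state.turn_number,
		"my_health": game_state.my_health,
		"enemy_health": game_state.enemy_health,
	})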


Indeed, any time you want to store persistent state you should think of objects and classes. Here is a skeleton implementing the strategy I described above:

class Destructor:
    def __init__(self, location):
        self.location = location
        self.stability = [30.]  # stability history, one entry per turn

    def update(self, stability):
        """Call this on every destructor every turn."""
        self.stability.append(stability)

    def turns_until_death(self):
        # extrapolate_stability is the curve-fitting helper you supply,
        # e.g. a line fit that finds where stability would hit zero.
        return extrapolate_stability(self.stability)

Then, every time you place a destructor, instantiate one of these objects and save it in a list. On each turn, run through the list and update each destructor's stability (removing the object if the tower no longer exists), then query turns_until_death, and if it returns less than 2, sell the tower.
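For completeness, a per-turn loop along those lines might look like the sketch below. It assumes the destructor objects live in self.destructors and that your gamelib version exposes contains_stationary_unit() and attempt_remove() plus a stability attribute on stationary units; adjust the names if yours differs.

def manage_destructors(self, game_state):
    survivors = []
    for d in self.destructors:
        unit = game_state.contains_stationary_unit(d.location)
        if not unit:
            continue                      # already destroyed: forget it
        d.update(unit.stability)          # record this turn's stability
        if d.turns_until_death() < 2:
            game_state.attempt_remove(d.location)   # sell it before it dies
            continue
        survivors.append(d)
    self.destructors = survivors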


Thanks for the idea @RJTK ! :slight_smile:

I just noticed: the stability of a Destructor is 75, not 30.