I wanted to make this a public place to both ask for and give advice on strategy.
To start the thread off: I have some ideas for strategies, all of which revolve around bulk-buying ‘troops’.
Does anyone know how to spawn defensive troops periodically while predicting the best place to spawn them?
I have had some ideas about implementing this, but as a result I haven’t really produced any algos yet, since I am essentially building a library to evaluate this very thing. I don’t want to share too much, but I will say that for me it involves trying to predict where the “important” pieces of the map are, and then deciding how to turn those to my advantage. I’ll leave off with the question of what makes one part of the map more important than another; how you define that will determine the type of strategy you take.
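To make the “importance” idea concrete, here is a minimal sketch of one way to score map cells with hand-tuned heuristics. The grid representation, blocking rules, and weights are all invented for illustration; the real game engine’s map API will look different.

```python
# Hypothetical sketch: score a map cell by "importance".
# Two toy heuristics: cells near the board edge score higher (common
# attack lanes), and chokepoints (few open neighbours) get a bonus.

def importance(x, y, grid):
    """Return a heuristic importance score for cell (x, y).

    grid is a list of rows; 0 means open, 1 means blocked.
    """
    h, w = len(grid), len(grid[0])
    dist_to_edge = min(x, y, w - 1 - x, h - 1 - y)
    score = 1.0 / (1 + dist_to_edge)  # closer to an edge -> higher score
    open_neighbours = sum(
        1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
        if 0 <= x + dx < w and 0 <= y + dy < h and grid[y + dy][x + dx] == 0
    )
    if open_neighbours <= 2:  # chokepoint bonus
        score += 2.0
    return score

# Example: a 5x5 grid where 1 marks a blocked cell.
grid = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]
```

The point isn’t these particular weights; it’s that once “importance” is a number, you can compare candidate strategies against it.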
I hope this wasn’t too vague and provides seeds for ideas :).
I too want to keep some secrecy. I’m at the point where I’ve begun writing the move set to a separate variable, which I hope I can then feed to my machine learning algorithm, but I fear it will take too long to process and make predictions.
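For anyone trying something similar, a sketch of recording each turn’s move set so it can be used as training data later. The move structure here is invented; the real engine’s representation will differ.

```python
# Hypothetical sketch: log each turn's moves in memory, then dump the
# whole game to JSON for offline machine-learning experiments.
import json

move_log = []

def record_moves(turn, moves):
    """Append one turn's move set to the in-memory log."""
    move_log.append({"turn": turn, "moves": moves})

def save_log(path):
    """Write the full game's move log to disk for later training."""
    with open(path, "w") as f:
        json.dump(move_log, f)
```

Writing to disk once per game (rather than per turn) keeps the logging overhead out of the turn timer.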
Yeah, after I built the process to run multiple games at once, it confirmed for me that machine learning on top of the current game engine simply isn’t an option (at least from what I can see). There are ways around this, specifically something that has been mentioned before: building mini-game engines that each replicate one aspect of the game, for example a machine learning model that only evaluates whether a spawn point is a good or bad one.
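For reference, “running multiple games at once” can be as simple as a standard-library process pool. `run_one_game` below is a stand-in for however you actually launch the engine; the result dict is invented for illustration.

```python
# Hypothetical sketch: play several self-play games in parallel using
# multiprocessing.Pool from the standard library.
from multiprocessing import Pool

def run_one_game(seed):
    """Placeholder for launching one real engine game.

    Here it just returns a fake, deterministic result keyed by seed.
    """
    return {"seed": seed, "winner": seed % 2}

def run_batch(n_games, n_workers=4):
    """Play n_games across n_workers processes and collect the results."""
    with Pool(n_workers) as pool:
        return pool.map(run_one_game, range(n_games))
```

Even with parallelism, though, full games are slow enough that the mini-game idea above still looks necessary for any data-hungry approach.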
I think a really good fit for this is a point-choosing AI. The fitness function is fairly clear, and you can run a ‘mini-game’ simply by generating a single game_state and then running the navigation with it. This (I think/hope) would eliminate the problem of not having enough data.
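A minimal sketch of that fitness idea: take one game state, run navigation from a candidate spawn point, and score the resulting path. Everything here is a hypothetical stand-in (the `navigate` callable, the path format, the weights); only the shape of the idea comes from the discussion above.

```python
# Hypothetical sketch: fitness of a spawn point in a single-state mini-game.
# navigate(game_state, spawn_point) is assumed to return the path a unit
# would take as a list of (x, y) tuples, empty if the unit is stuck.

def spawn_fitness(spawn_point, game_state, navigate):
    """Higher is better: reward paths that get further up the board,
    with a small bonus for longer survival."""
    path = navigate(game_state, spawn_point)
    if not path:
        return 0.0
    progress = path[-1][1]          # furthest y-coordinate reached
    return progress + 0.1 * len(path)
```

With a scalar fitness like this, you can rank every legal spawn point for a given state and feed the (state, best point) pairs to whatever learner you like.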
The challenge that arises from this method is generating a valid mini-game. You could run a particular state, but the whole point of the ML algorithm is to solve a much higher-dimensional system than we could solve alone, and by running a state without the other variables you remove much of what makes the ML approach worth it. In addition, as you run through mini-games you have to define which states to run, so generalizing the algorithm becomes very difficult and overfitting becomes very easy.
Essentially, while I think one can overcome the challenge of not running enough games, the solutions I have thought of so far require removing relevant data and fabricating data, both of which make it hard to build an AI that will succeed.
I would love to hear if people have solutions, either to some of the problems I talked about here or just other things they have thought about trying :).