ML game engine

I have been looking at implementing ML for Terminal, but at ~3.5 s per game it is impossible to make it happen.
We need an engine capable of running hundreds of games a second to be able to make this happen in a few hours of training.

The current engine makes things really difficult for those of us who want to use machine learning to create new strategies: the first task would be to recreate the engine just to make it run faster, which would deter me and many others from attempting ML on this interesting task.

I know there were several attempts at ML early in the first season, but many of them found that this training-time issue made it impossible. I hope the C1 team can look into this and offer the community a new engine that is much quicker than the current one.

Thank you

If you want to make that happen you might have to make an approximated copy of the game (with some large simplifications), so that the data used to perform the simulation is also used by the machine learning algorithm. If you don’t allow the destruction of firewalls during the action phase, for example (resolving it in an extra phase afterwards instead), you can guarantee an efficient path finding algorithm. I am currently stuck with work, but I have some interest in looking into this further at a later point.
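To illustrate the point about freezing the map: if nothing is destroyed mid-phase, one breadth-first search per destination gives a distance field that every unit can reuse for the whole action phase. This is only a sketch of that idea on a toy grid, not Terminal's actual pathfinding (which has its own tie-breaking rules).

```python
from collections import deque

def bfs_distances(walls, width, height, target):
    """Distance field from `target` over a static grid.
    With no mid-phase destruction, this can be computed once
    per action phase and reused for every moving unit."""
    dist = {target: 0}
    queue = deque([target])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height \
                    and (nx, ny) not in walls and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

# Toy 5x5 map with a wall column that has one gap at (2, 2)
walls = {(2, 0), (2, 1), (2, 3), (2, 4)}
dist = bfs_distances(walls, 5, 5, (4, 2))
print(dist[(0, 2)])  # shortest path length from (0, 2): 4
```

The per-phase cost then becomes one BFS per target instead of one search per unit per step, which is exactly where the speed-up would come from.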

For me the most interesting point is what data, and how much of it, should be pre-processed and fed into the ML algo. This is a similar consideration to how machine learning is combined with image processing for pattern recognition in videos.

Anyway, I’m interested to see some different ideas/opinions on this. I think the topic has been discussed several times before, but I’m happy to continue the conversation on this.

1 Like

Thanks for your reply
Yes, I agree there need to be simplifications to the current engine to make it run faster, but if everyone who wanted to create an ML algo needed to create their own version of the game, it would seriously reduce the number of participants willing to go down that deep rabbit hole.

Plus, I think it’s the algo design that is really interesting in this competition, so it would make sense for us to request a faster engine so that participants can focus on the ML side and trust that the engine is optimised, maybe one based on C++.

So, whilst it’s possible that I could spend weeks trying to replicate this game, the path finding algorithms and all the rules, I think it would be beneficial to the whole community if we could get a cut-down “for ML” kind of engine that runs much quicker.

Same, I am also really interested to see what people come up with and what can be achieved, especially with something like reinforcement learning, like a mini version of what Google did with AlphaGo Zero. But as far as I can tell, it took around 4M games before anything reasonable started to happen, so that’s why I was hoping we could get a standardised community engine that is much quicker than the current one.

Perhaps you can convince someone to open source their C++ engine. I’ve been working on mine and it’s almost complete (very buggy, but not crashing). I’m not convinced I’d want to open source it though…

1 Like

Yes, that could be one of the options, but I do think there is a case for asking C1 for one that comes from them directly, so that they can verify it works like the original but with boosted performance (maybe by cutting some of the features ML doesn’t require?).

Now, I don’t know if that is possible, or how much extra workload this would add to the dev team, but if implemented it would most likely allow more people to join in and hopefully bring in new algos with more variety, making a positive contribution to the competition.

I am not sure whether it is necessary for C1Games to provide the speed-up. I actually quite like the fact that they provided sub-optimal code, so that when players need more speed, they can put in the effort to optimize the game themselves. Some participants put in a stupid amount of effort and time to optimize their action-phase simulators so they could completely analyze the game in real time. I am pretty sure that if the developers wanted to, they could easily speed up the engine themselves, but they didn’t.

Machine learning doesn’t need to be just deep learning, by the way. There are principles from the field of machine learning research that could be applicable here, where you only use ML for some specific problems rather than the entire game. To some extent this is actually a more interesting programming challenge. If you make a bot like AlphaGo, the programmer doesn’t need to know anything about the game. Essentially the AlphaGo guys could clone the game and, with enough computation, immediately become completely unbeatable. It had better be really difficult to pull off a strategy like that :wink:.

If, however, you want to just figure out what the best methodology is to find good attacks, or what the best overall defensive structures are, and you use machine learning loops within your algorithm to achieve this, that sounds super interesting. For example, you could make a neural network that tells you whether you should attack, with which and how many units, and from which general direction. The actual implementation of that decision would then need to be covered by a different loop that doesn’t use machine learning. Now that I write that up, this actually sounds pretty cool. I might look into this :slight_smile:
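The network interface described above could look something like the sketch below. Everything here is an assumption for illustration: the weights are random (untrained), and the feature layout and output encoding are made up, not part of Terminal's actual API. The point is only the shape of the idea, namely a small network that emits a high-level decision for a non-ML loop to execute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vector: [my bits, my cores, my health,
# enemy health, fraction of enemy defence on the left side].
# Output slots: [attack?, 3 unit-type scores, left-vs-right preference].
W1 = rng.normal(size=(5, 16))  # untrained, random weights
W2 = rng.normal(size=(16, 5))

def decide(features):
    """One forward pass: features -> coarse attack decision."""
    hidden = np.tanh(features @ W1)
    out = hidden @ W2
    return {
        "attack": bool(out[0] > 0),
        "unit_type": int(np.argmax(out[1:4])),  # 0, 1 or 2
        "from_left": bool(out[4] > 0),
    }

decision = decide(np.array([9.0, 4.0, 28.0, 25.0, 0.5]))
print(decision)
```

A separate, hand-written loop would then translate `unit_type` and `from_left` into concrete spawn locations and counts, which is the split between learned policy and scripted execution being proposed here.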


You have been developing a C++ version of the game? Did you already use traces of this in the season 1 battles, or should I now expect a large power spike from one of the most threatening teams on the server? :thinking: :scream: :cold_sweat:

Not me :stuck_out_tongue:
Even though it would feel rewarding to release my game engine, it would basically render weeks of work useless, because anyone could then start at an advanced stage.

So I totally agree with @kkroep

Good luck with that, the state-action space of Terminal is bigger than that of Go. But it sure is interesting to try :grin:


Nice ideas from some of the best players in this competition, but here is my argument:
Yes, many people who worked really hard in the first season to gain an advantage by developing their own optimised engine might feel that some of their work has been slightly devalued. But the idea here is that this is a new season and we should push the game to the next level, and this is one of the reasons I think it’s more suitable for C1 to make this.

Now, as to using ML only in certain sections of the game: that would be possible, but again it would limit the AI’s ability, and instead of developing its own tactics it would merely be enhancing user-generated strategies, which is not the point here.

Now, the state-action space is massive here on Terminal: 3 defence and 3 attack units on a 28 x 28 diamond map, with all the resource management required. But this is why providing a “for ML” optimised engine doesn’t really make the “AI” side of this problem any easier; it’s still going to be very difficult, but then we can at least focus on those issues rather than writing C++ code to optimise a game.

So whilst I understand @arnby’s comment that anyone could start at an advanced stage, I think this is a good thing for the challenge: it encourages more creativity, more variety, and improves the competition in general.

Looking forward to seeing what you think of this, and interested to hear C1’s position on the matter.

1 Like

I would just like to clarify that I have zero simulations or effort put into speeding up the code. I simply played a lot of games by hand and then programmed a simple state machine based on strategies I found to work well (on average I use about 100 ms per turn). So I would actually benefit a lot from optimization solutions supplied by C1Games… However, I do think there is currently a decently nice balance between how much effort it takes for an algo to figure out a strategy and execute it, versus the player figuring out a strategy and programming an algo to execute it. Compared with when we all started out, there is a lot more help on the forum, and access to games being played that can kickstart new players. I am currently writing a small strategy guide to kickstart new players from a game-knowledge perspective. But removing a certain class of strategy (optimizing for performance) is not the way to go, I think. It doesn’t just devalue past work; more importantly, it devalues a certain type of skill set.

However, the issue of new players being far behind is quite the problem. I imagine it to be quite intimidating for a new contender to try and fight their way through players with months of experience. It will be interesting to see how that plays out.

[edit]: the Terminal developers have also added more difficult bots, which I also like as a tool to give newer players access to stronger algorithms to learn from. Just try and beat that aelgoo bot by hand within 20 turns :slight_smile:


You are so right; if computing time wasn’t such an issue we could focus on the actual game instead of investing so much time in optimization.

1 Like

Building a good ML algo also requires a certain type of skill set, doesn’t it?

1 Like

If computing time wasn’t an issue, everybody could brute force the game. No ML needed if you can brute force the action phase like a chess engine. I understand your point that it would be nice to be able to focus on machine learning, but that doesn’t necessarily make the proposed solution a suitable one for the given situation.

Yes… it is really hard. Those guys at Google get really good pay for developing it. But just because something is hard isn’t enough reason for me to make something else a lot simpler. I already tried to share some suggestions in previous comments where you can use ideas from machine learning without the need for extreme efficiency or large compute infrastructure. It might be unrealistic to perform a full deep-learning ML solution on this game with the available resources. Though that is not necessarily bad, right?

@codegame it does indeed. I don’t have the experience or skill set for it so I’ve been hesitant to explore it.

Feeding into that, @kkroep, I ported my Python simulator to C++ this past weekend. I don’t have a full game engine to run games, just one game map at a time. Nothing is implemented for using more simulations at this time, so there is no power spike to be had if I can’t think of more ways to use simulations. There wasn’t any point in doing this in Season 1 while I had other things to work on.

Edit: Also my arm hurts from typing { } and ; so many times. I’m not porting over the other 12,000 lines of code if I don’t absolutely have to.

1 Like

It would be impossible to brute force the game; this game is way more complex than chess. While chess has 20 possibilities for the first turn and 400 for the second, this game has thousands of possibilities on the first turn alone (maybe even millions, if you think about every combination of possible placements).
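A quick back-of-the-envelope check of that claim. The numbers below are illustrative assumptions, not Terminal's real counts, but even a modest number of legal positions and affordable placements makes the first-turn branching factor explode past chess's 20:

```python
from math import comb

# Hypothetical illustrative numbers: suppose ~400 legal firewall
# positions and enough cores for 8 placements on turn one.
positions, placements = 400, 8

# Unordered sets of 8 placements out of 400 positions
# (ignores unit types and partial spends, so it undercounts).
first_turn_options = comb(positions, placements)
print(first_turn_options)  # well beyond "millions"
```

Even this simplified count dwarfs chess's opening branching factor, which is the core of the argument against naive brute force.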

1 Like

Simulating the entire solution space quickly blows up to infinity, so you don’t go that deep in steps. Unlike chess, you don’t actually need much more than a single turn to determine whether an approach is good, and I would imagine that with a brute-force strategy you already get very far with a turn-by-turn assessment. Then you can put in a few restrictions or assumptions, like spawning only units of a single type, and all at the same spot. Then consider that in a lot of cases there aren’t that many cores available to either player, so not that much needs to be built. What is left over is a pretty decent brute-force approach. It does not go through the entire solution space, but that would be overkill and indeed impossible.
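Under those restrictions (one unit type, one spawn spot, one turn of lookahead), the search collapses to a small loop. This is a sketch with made-up spots, costs, and a toy stand-in for `simulate`; in practice that function would be a player's optimized action-phase simulator returning an expected-damage score.

```python
from itertools import product

# Hypothetical unit types, costs, and sample edge spawn tiles.
UNIT_TYPES = ["PING", "EMP", "SCRAMBLER"]
SPAWN_SPOTS = [(13, 0), (14, 0), (20, 6), (7, 6)]
COSTS = {"PING": 1, "EMP": 3, "SCRAMBLER": 1}

def simulate(unit, count, spot):
    """Toy stand-in for a real action-phase simulator.
    Scores more units higher and central spawns higher."""
    return count * (2 if unit == "EMP" else 1) - abs(spot[0] - 14)

def best_attack(bits):
    """One-turn brute force: one unit type, one spot, all bits spent."""
    best = None
    for unit, spot in product(UNIT_TYPES, SPAWN_SPOTS):
        count = bits // COSTS[unit]
        if count == 0:
            continue
        score = simulate(unit, count, spot)
        if best is None or score > best[0]:
            best = (score, unit, count, spot)
    return best

result = best_attack(9)
print(result)  # (9, 'PING', 9, (14, 0)) with this toy scorer
```

With 3 unit types and on the order of 28 edge tiles this is only ~84 simulator calls per turn, which is why the restricted version is tractable while the full space is not.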

Actually, to some extent I would expect the aelgoo series to operate this way, and that is one of the most impressive algorithms on the ladder. It is especially impressive because of all the optimizations required to run such a strategy.

I am just giving an example of different strategies that could benefit from optimized code: strategies that are specifically not ML and easier to implement. These strategies would be a lot simpler if no optimizations were required to pull them off, even more so than ML. I do hope you see the point I am trying to make here: I understand that you would like to make machine learning easier, but that doesn’t mean that C1Games providing a more optimized engine is the solution we are looking for. And I think such a measure would have a large impact on the rest of the algorithms, which needs to be considered.

1 Like

So as I understand it, these are the issues people have with this idea:

  1. Some people who have spent the time to do this themselves might feel that some of that work is somewhat devalued

  2. This could take away a type of strategy based on optimising performance

  3. The current balance between “automated” strategy and manual strategy is good

So here are my suggestions

  1. This might be the case for a small number of people who are truly highly skilled in this area and have done the work, but stopping the rest from having access to such an engine (which doesn’t have to be as fully featured as the current one) will harm more users than it helps.

  2. Following on from the last point, this does disadvantage a major type of strategy, namely the fully automated ML strategy.

  3. I disagree here; there have been many improvements made to help hand-crafted solutions, like the feature allowing you to branch off a game at certain points, which is good to see. So why not have something similar for people who want to work on ML?

Finally, I would like to add that I think a middle-ground solution can be found: something along the lines of a very fast engine that works offline, with the speed-up that ML techniques require, while online we keep the same engine we use now. That way brute-force methods will not work online, while ML methods can still exist.

I think these discussions are great, and hopefully we can get some engagement from C1Games to see their thoughts on the subject.


That is an interesting suggestion; I support that idea. Maybe an additional middle-ground solution would be to provide a framework or extra help specific to ML, like support for certain languages or something. Let’s see what happens.

1 Like

The main problem in my eyes is the similarity between using a simulator and having an open-source (faster) engine for ML. I’m all for open-sourcing an ML engine, but the problem is that it could then very easily be modified into a simulator. This is where the “devaluing” happens. I definitely do not think anyone should open source their simulator, since it is an extremely significant tool and this is ultimately a competition :). Again though, I do support creating something for ML and have been thinking about this for a while (look at past posts to see where I am coming from).

Addressing a couple of the points mentioned above by @xZeko:

  1. I disagree with the statement that preventing access to an engine harms more users than it helps. I believe this is the case with ML, but as I mention above, a fast ML engine and a simulator are effectively the same thing. Giving people access to a fully functional simulator really does devalue the work that I and others have done in creating our simulators. This is a competition where you are expected to create your own code. In my mind, this goes beyond a helpful community tool and gives people access to extremely powerful code they did not create.
  2. I definitely agree here; having worked a bit on creating a 100% ML algo, it is a major disadvantage not to have a fast engine, to the detriment of all users. I agree that ML needs to become more accessible and feasible for players; I’m just not sure what the best way to do this is.

In my mind, the key is separating an ML engine from a simulator. The simulator is the power I feel people should not get access to without putting in the effort and the work, and the simulator and ML engine are just so similar. I’m trying to think of ways to separate the two, but even a compiled (not open source) version would enable people to use the ML engine as a simulator. I definitely want to see more ML algos though :thinking:. I feel there should be another way to achieve this, but I can’t think of what it might be. Thoughts?


What if it was separated, so that an open-source ML framework (probably in C++) had an “insert simulator here” portion, making users insert their own simulator to “complete” the engine? The engine would handle running the game and managing health/game states/talking to algos/reporting the outcome of the match, but it would make calls to a generic “simulator” that each person is left to implement. (Along with the simulator they’d need to make their own pathfinder, assuming a game map is included and standard to the framework. A “standard” pathfinder that is a C++ version of the Python one could be included.)

I wouldn’t know what the best “engine” for a machine learning approach would be, but my best guess is the steps are:

  1. Initialize the game and algos
  2. Send the game state to each algo
  3. Gather moves from each algo
  4. Simulate the map to get to the next turn (with the user’s inserted simulator)
  5. Handle resources/lost health
  6. Loop back to step 2 until an end-game state is reached

Making an engine with everything except step 4 (the simulator) would preserve “optimized simulators” while reducing the overhead necessary to start an ML approach. Granted, most of the work is in making a simulator, but I think it would still be a helpful tool.
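The steps above can be sketched as a small game loop with a pluggable simulator slot. Everything here is hypothetical (names, state layout, the toy simulator); the only point is the shape: the framework owns the loop, the user supplies the one piece that resolves combat.

```python
# Sketch of the proposed framework: the engine owns the game loop and
# delegates the simulation step to a user-supplied simulator.
# All class/function names here are hypothetical, not an existing API.

class TrivialSimulator:
    """Placeholder for the 'insert simulator here' slot."""
    def resolve(self, state, moves):
        # A real simulator would path units and resolve combat;
        # this toy version just deals damage equal to units sent.
        for player, move in enumerate(moves):
            state["health"][1 - player] -= move.get("units_sent", 0)
        return state

def run_game(algos, simulator, max_turns=100):
    state = {"health": [30, 30], "turn": 0}       # initialize
    while state["turn"] < max_turns:
        moves = [algo(state) for algo in algos]   # send state, gather moves
        state = simulator.resolve(state, moves)   # user-supplied simulation
        state["turn"] += 1                        # resources/health bookkeeping
        if min(state["health"]) <= 0:             # end-game check, then loop
            break
    return state

# Toy algos: one attacks every turn, one never does.
aggressive = lambda s: {"units_sent": 3}
passive = lambda s: {"units_sent": 0}
final = run_game([aggressive, passive], TrivialSimulator())
print(final["health"])  # [30, 0] after 10 turns of 3 damage each
```

Swapping `TrivialSimulator` for an optimized C++ simulator behind the same `resolve` interface is the “complete the engine” step being proposed, so the hard-won simulators stay private while the loop itself is shared.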