Hello Terminal community, after vanishing from the face of the earth for the entirety of season 2 to focus on school (still wasn’t enough to save my 4.0, though) and after being rejected from a gazillion co-ops and finding my summer more free than I expected, I’ve dusted off my browser shortcuts and fallen back into the world of Terminal.
…and oh boy, some things have changed! It’s nice to see some familiar top ranking faces still around, and quite a few new power-players on the leaderboard as well. Slight tweaks to the game rules seem set to open up some fresh static base designs, simulators are still super important, and machine learning is still very hard (but still irresistible to play with).
Improving dynamic vs dynamic matches is so hard; I usually end up getting frustrated and shotgunning out random parameter changes because I’ve lost track of the roots of the underlying emergent behavior.
Haha yeah I’m pretty happy with the result of my latest upload
And indeed, it is always quite hard to find good parameters, since we can’t really try a lot of variations.
Taking another crack at ML. Downloaded 35 GB of replays. Hoping to train to predict attacks and unit placement from current base layout and bit/core reserves. Making the model by hand because I don’t have enough sanity left to get any form of keras or numpy working locally in the algo file. Training time looks to be about… oh, I dunno… two weeks?
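In case it helps anyone picture what "making the model by hand" can look like, here’s a minimal sketch of a one-hidden-layer network in pure Python (no numpy or keras). The 4-bit "board features" and labels below are made-up placeholders, not real replay data:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """One-hidden-layer MLP written by hand (no numpy/keras)."""
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        rnd = lambda: random.uniform(-0.5, 0.5)
        self.w1 = [[rnd() for _ in range(n_in)] for _ in range(n_hid)]
        self.b1 = [rnd() for _ in range(n_hid)]
        self.w2 = [[rnd() for _ in range(n_hid)] for _ in range(n_out)]
        self.b2 = [rnd() for _ in range(n_out)]
        self.lr = lr

    def forward(self, x):
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.o = [sigmoid(sum(w * hi for w, hi in zip(row, self.h)) + b)
                  for row, b in zip(self.w2, self.b2)]
        return self.o

    def train(self, x, target):
        o = self.forward(x)
        # output-layer deltas (squared error; sigmoid derivative is o*(1-o))
        d_o = [(oi - t) * oi * (1 - oi) for oi, t in zip(o, target)]
        # hidden-layer deltas, backpropagated through w2
        d_h = [hi * (1 - hi) * sum(d * self.w2[k][j] for k, d in enumerate(d_o))
               for j, hi in enumerate(self.h)]
        for k, d in enumerate(d_o):
            for j in range(len(self.h)):
                self.w2[k][j] -= self.lr * d * self.h[j]
            self.b2[k] -= self.lr * d
        for j, d in enumerate(d_h):
            for i in range(len(x)):
                self.w1[j][i] -= self.lr * d * x[i]
            self.b1[j] -= self.lr * d
        return sum((oi - t) ** 2 for oi, t in zip(o, target))

# toy stand-in data: 4-bit "board features" -> 1 "attack likely?" label
data = [([0, 0, 1, 1], [1]), ([1, 1, 0, 0], [0]),
        ([0, 1, 1, 0], [1]), ([1, 0, 0, 1], [0])]
net = TinyNet(4, 6, 1)
for epoch in range(3000):
    loss = sum(net.train(x, y) for x, y in data)
print(round(loss, 4))
```

Nothing clever in there, just SGD with by-hand backprop, but it shows the whole model fits in plain lists and loops, which is the point when you can’t get libraries into the algo file.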
I am not an ML specialist… but this does not look like a good input/output pairing to me.
Players’ moves are often not the best moves for the current arena; more than that, they are often bad moves, and sometimes they are not related to the current arena at all.
And you are collecting data from possibly all players? There are major differences in play style… the mixed output of that will be a useless monstrosity.
In my opinion, good targets for ML are:
calculating the best possible attack for a given defence style
calculating the best possible general defence
calculating stable defence structures
calculating safe/efficient attack patterns
analyzing the current config and playstyle changes: what works and what does not
analyzing the most efficient spawn pattern: when to stack, when to overbuild, when to encrypt
Most of this can be done with locally generated data
Those are all pretty good candidates for genetic algorithms or NEAT approaches. Months ago I had hoped to make progress in those areas using generated data, but as far as I could figure, the feedback loops require a very accurate and fast recreation of the game engine to evaluate candidates, which is what I have always struggled with anyway, so I dead-ended a little bit.
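For anyone curious what that generated-data loop looks like, here’s a bare-bones genetic algorithm sketch. The `evaluate` function here is just a dummy quadratic; it stands in for the slow, accurate engine recreation that is the real bottleneck:

```python
import random

random.seed(1)

# Placeholder fitness target: in practice evaluate() is where you would
# need the fast, accurate engine recreation; here it's a dummy quadratic.
TARGET = [0.2, 0.8, 0.5, 0.9]

def evaluate(params):
    # stand-in for "run this defence config through a simulator"
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, rate=0.2, scale=0.1):
    return [p + random.gauss(0, scale) if random.random() < rate else p
            for p in params]

def crossover(a, b):
    return [ai if random.random() < 0.5 else bi for ai, bi in zip(a, b)]

def run_ga(pop_size=30, generations=60, n_params=4):
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        elite = pop[:pop_size // 3]          # keep the top third
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=evaluate)

best = run_ga()
print([round(p, 2) for p in best])
```

The loop itself is trivial; the whole difficulty is that every `evaluate` call in a real setup means simulating actual game turns, which is why the engine recreation dominates the cost.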
what I am hoping to do is not teach my program to play using everyone’s games… that would be messy indeed. I am hoping to use everyone’s games to teach the game’s meta to my program. In theory, with enough training (possibly over-fitting is desirable?), it could look at a board state and go, "Oh, I’ve seen similar shapes before, and I’ve associated this board shape with future wall placement at {these locations} and pings/emp/scramblers fired from {these locations}."
Despite slight variations, several of the main meta’s characteristics are pretty distinctive and consistent in terms of build and firing patterns (maze, demux, raptor, ping cannon, oracle’s hodgepodge, etc.), and I’m experimenting to see whether ML can identify and predict opponent moves in a more robust way than huge if-then trees over user-defined features, which was my previous best method.
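As a toy illustration of the "I’ve seen similar shapes before" idea, here’s a nearest-neighbour sketch that matches a flattened occupancy vector against canonical archetype patterns by Hamming distance. The 8-cell "boards" and the patterns are invented for illustration; a real version would use the full arena grid:

```python
# Toy nearest-neighbour "meta recogniser": board states flattened to binary
# occupancy vectors, matched by Hamming distance against labelled examples.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# pretend each archetype has one canonical occupancy pattern (made up)
ARCHETYPES = {
    "maze":        (1, 0, 1, 0, 1, 0, 1, 0),
    "demux":       (1, 1, 0, 0, 1, 1, 0, 0),
    "ping_cannon": (0, 0, 0, 1, 1, 1, 1, 1),
}

def classify(board):
    return min(ARCHETYPES, key=lambda name: hamming(ARCHETYPES[name], board))

# a noisy observation: one cell differs from the canonical "demux"
print(classify((1, 1, 0, 1, 1, 1, 0, 0)))  # -> demux
```

Trained models would presumably smooth over more variation than a hard distance lookup, but even this captures the "associate this shape with that meta" behaviour the post describes.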
Making the model by hand because I don’t have enough sanity left to get any form of keras or numpy working locally in the algo file
If you change your mind, some of the old threads like this one got some ML updates in Season 2. Not sure if you were following the forums, but I was experimenting with a basic copy-cat-like overfit of the early Demux structures. There’s also this (slightly incomplete) library that I tried to update to the latest version of Keras; it should allow you to use a cpp port of forward-feeding a Keras model and pipe the result back into Python. Some parts of it may not be perfectly updated, but if you go that route and have issues you can message me.
Another thing I was kinda itching to try on the replays that wouldn’t need libraries as much or require lots of training time was some sort of Bayesian network approach. Naive Bayesian inference is especially straightforward, since it only needs to iterate once over the replays, counting how many times it saw feature(s) X in the presence of output category Y.
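A minimal sketch of that counting pass plus a Laplace-smoothed naive Bayes scorer. The feature names and outcome labels below are made up for illustration; in practice they would come from replay parsing:

```python
import math
from collections import defaultdict

# One pass over (features, outcome) pairs, just counting co-occurrences,
# then score a new observation with naive Bayes (Laplace-smoothed).

def train(examples):
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for feats, label in examples:
        label_counts[label] += 1
        for f in feats:
            feat_counts[label][f] += 1
    return label_counts, feat_counts

def predict(label_counts, feat_counts, feats, alpha=1.0):
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label, n in label_counts.items():
        score = math.log(n / total)          # log prior
        for f in feats:
            # P(feature | label) with Laplace smoothing for unseen features
            p = (feat_counts[label][f] + alpha) / (n + 2 * alpha)
            score += math.log(p)
        if score > best_score:
            best, best_score = label, score
    return best

# invented replay-derived examples: observed features -> following attack
replays = [
    ({"left_walled", "many_encryptors"}, "ping_rush"),
    ({"left_walled"}, "ping_rush"),
    ({"center_open", "emp_line"}, "emp_attack"),
    ({"center_open"}, "emp_attack"),
]
lc, fc = train(replays)
print(predict(lc, fc, {"left_walled"}))  # -> ping_rush
```

The appeal is exactly what the post says: training is a single counting pass, the "model" is two dictionaries, and prediction is a handful of log-additions, so it drops into an algo file with no libraries at all.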