Machine Learning

Say I were to code a replica of the game and develop some sort of machine learning algorithm for it that would learn to get good at the game. Then I'd implement the results of the training into my algorithm for the real thing. That would be within the rules, yes?
Or is it not, because it would likely favor those with beefy computers?
Thank you in advance.

Machine learning appears to be allowed:

However, you will not (currently) have access to the machine learning libraries in the online arena. Additionally, the online arena does not allow you to download any matches or other competitors’ bots, so you’d have to train the bot on itself, a human, or bots from other sources.

You’d need real CS skill to pull this off, so it’s legal.

Good luck in the arena!

I’d need skill, or, since I would be coding a replica, I could use TensorFlow, lol. I’ll probably do that. The rules and restrictions here seem very unclear; I have a feeling I’m going to accidentally cross some sort of line. Haha

I’ve checked out machine learning for this game, and it definitely doesn’t just magically produce top-rank algos in direct proportion to how many numbers your training computer can crunch.

For one thing, possibly by design, getting good training data is really hard. Other people’s algos are (obviously) inaccessible locally, and even the replay files from ELO matches are unavailable for study.

Trying to create a large enough and diverse enough set of training data all on your own on a local machine is a chore of epic proportions, and I quickly decided that it was not the approach for me.

Another problem I’ve encountered: this game takes a non-trivial amount of time to run. Even if you faithfully rewrote the entire game engine so that it behaved exactly like the online engine, and used the results from local matches as feedback for learning, this has the potential to take eons. Machine learning typically needs at minimum a few hundred to a few thousand trials to converge, and it turns out that running a few thousand local matches takes days and days.

If you can cleverly write a version of the game engine that is both true in behavior to the original, so that the learning is accurate, and significantly faster at executing matches, so that training is time-feasible, and on top of it all you find a way to benefit from the trained model in an environment with no machine learning libraries, then you deserve any winnings resulting from that approach.


Machine learning is totally legal so don’t worry about that.

As for libraries, you can always just include the library files in your uploaded folder. I believe pyenv has some functionality that lets you make a folder of all your imported libraries, but I’m not 100% sure; other languages have things like .jar files that let you include libraries as a single file you can use. Eventually we’ll have a longer-term solution for importing libraries without having to include them in your algo directly.
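If you go the bundling route for Python, one common pattern (a sketch, not official guidance; the `libs` folder name and the install command are assumptions about your setup) is to install dependencies into a subfolder of your algo and put that folder on `sys.path` before importing:

```python
# Install packages into a folder inside your algo, e.g.:
#   pip install --target=my-algo/libs some_package
import os
import sys

# "libs" is a hypothetical folder name; match whatever you created above.
_here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(_here, "libs"))

# Imports after this point can see the vendored packages.
```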

You can actually download the replay file from ranked matches using a URL:
Get the replay number from your watch URL:
https://terminal.c1games.com/watch/94183
Then use the URL below to download the replay:
https://terminal.c1games.com/api/game/replay/94183
Replace the 94183 with the number of the replay from your watch URL.
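For example, here’s a minimal Python sketch that fetches a replay by its ID (just an illustration, not an official client; it assumes the endpoint stays reachable without a logged-in session):

```python
import urllib.request

def download_replay(match_id, dest=None):
    """Download the replay for a ranked match, given the ID from its watch URL."""
    url = f"https://terminal.c1games.com/api/game/replay/{match_id}"
    dest = dest or f"{match_id}.replay"
    urllib.request.urlretrieve(url, dest)
    return dest

if __name__ == "__main__":
    # 94183 is the example ID from the watch URL above.
    print(download_replay(94183))
```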

We’ll leave automating the downloading and analysis to the player. We may one day allow mass downloading, but other features are higher priority.


Oh also, some of the time to run the game is just waiting: 3 seconds to allow algos to set up, and 3 seconds after the game is over to allow time for things to close up. So a way to get a ton of games completed quickly would be to parallelize running games locally, since compute-wise they spend a lot of their time waiting rather than computing anything.
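As a rough illustration, something like this would fan matches out across a few worker threads (a sketch with assumptions: the `run_match.py` script name, its arguments, and the algo paths are placeholders for however your local engine launches a match):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pairings; point these at real algo folders on your machine.
MATCHUPS = [
    ("algos/my-algo", "algos/starter-algo"),
    ("algos/my-algo", "algos/old-version"),
    ("algos/my-algo", "algos/rush-bot"),
]

def run_match(pair):
    algo1, algo2 = pair
    # Placeholder launch command; substitute your local engine's equivalent.
    result = subprocess.run(
        ["python", "scripts/run_match.py", algo1, algo2],
        capture_output=True,
        text=True,
    )
    return pair, result.returncode

# Threads are fine here: each match is its own engine process, and much of
# its wall-clock time is the fixed setup/teardown waiting described above.
with ThreadPoolExecutor(max_workers=3) as pool:
    for pair, code in pool.map(run_match, MATCHUPS):
        print(pair, "exited with", code)
```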

Although, I do believe the error messages an algo prints out might get jumbled because it saves a temp file in the directory of the algo where it temporarily saves the errors. So if you have two separate games running with the same algo folders they will clobber the same error file. But the replays generated would be fine so this is only a concern if you want to train on things you print with debug_write. If you do need the error prints you could also just duplicate the folder of your algo a bunch to get around that so that every game is using a different algo folder. I’ll try to make it so the name of the temp error file generated has some random characters so this isn’t an issue in the future too.