16daystocode: How our bots work, plus other things

Hello! Steve here from 16daystocode to discuss how our bots worked, some of the issues we ran into, and some points we would be interested in hearing the community's opinions on. Our bots were indeed purely mimic bots. Each of our bots consisted of two components: the actual bot that we uploaded, and the database of matches that it pulled from. If you would like to learn more about our development process, my partner Chris wrote a development timeline, which is linked below. Basically, we implemented a small-scale version of agile.

First let’s discuss the bot itself:

Our bot (the program we upload to C1's servers) is essentially an interpreter: it analyzes the first move of its opponent and then tries to find that same layout in its database to identify the opponent. If it can't find a perfect match, it selects the opponent with the most similar starting layout. Next, our bot executes a winning strategy against that opponent from the database, making a few adjustments along the way. Each turn, our bot first attempts to build anything that was built that turn in the replay. It then attempts to "heal itself" by replacing any destroyed pieces that are still standing at that point in the replay. If our bot runs out of replay to emulate, it loops back to an arbitrary spot in the replay and continues from there. We didn't spend much time on a better system for this because we rarely needed it. Additionally, our bots use a standard Python module (smtplib) to send us an email containing various information that helps us determine how effectively they are operating. This gave us valuable feedback on how our bots worked and exposed any holes in our database. Anyone struggling to get feedback from the live servers should consider this route; we've included a brief tutorial at the end of this post on how to do it.
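As a rough sketch of the identification step (the data shapes and names here are illustrative, not our actual code), matching an observed turn-0 layout against the database might look like:

```python
def identify_opponent(observed_layout, database):
    """Find the database entry whose turn-0 layout best matches what we saw.

    observed_layout: set of (x, y, unit_type) tuples placed by the opponent
    database: dict mapping opponent id -> entry holding a 'layout' set
    """
    best_id, best_score = None, None
    for opponent_id, entry in database.items():
        stored = entry['layout']
        # Similarity = size of the overlap, penalized by pieces that
        # appear in only one of the two layouts
        score = len(observed_layout & stored) - len(observed_layout ^ stored)
        if best_score is None or score > best_score:
            best_id, best_score = opponent_id, score
    return best_id

# Example: two candidate opponents in the database
db = {
    'algo_1': {'layout': {(0, 13, 'FILTER'), (27, 13, 'FILTER')}},
    'algo_2': {'layout': {(13, 0, 'DESTRUCTOR')}},
}
seen = {(0, 13, 'FILTER'), (27, 13, 'FILTER')}
print(identify_opponent(seen, db))  # perfect match -> 'algo_1'
```

A perfect match scores highest; otherwise the entry with the most similar starting layout wins, which mirrors the fallback described above.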

Next is the database:

Our database is a remotely hosted, updatable database that we create locally and then push to GitHub for our bot to download and use at run-time. Using Git and GitHub allowed us to update the database without the URL changing, so the bots only have to request from a single location. We use the Requests module in our bot to download the database at run-time; we've included a brief tutorial on Requests at the end. Our database pipeline consists of five main functions, each operating independently of the others: Construct, Refactor, Update, Eliminate, and Prune/Push.


Construct:

Our construct function is effectively a drag-net for bot IDs. It iterates through all bot IDs (since they are sequential) and checks whether each has any losses. It pulls only one file from the API per bot and parses that file for all of that bot's recent losses. The API requests are by far the most time-consuming part of this strategy, and most of our development time was spent eliminating as many redundant or extraneous API requests as possible. A few quick definitions of terms I'll be using: the target algo is the algo that gets defeated in the replay we are using; the emulated algo is the algo we are trying to emulate to defeat the opponent in the live game. The target algo and the live opponent should, in theory, be the same algo if we match them correctly.
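A minimal sketch of the Construct sweep, with the one-request-per-bot API call stubbed out as an injected function (the match-record shape here is an assumption for illustration, not the real API):

```python
def construct(fetch_matches, max_id, seen_losses):
    """Sweep sequential bot IDs and collect each bot's recent losses.

    fetch_matches: callable bot_id -> list of match dicts (one request per bot)
    seen_losses: set of match IDs already collected, so repeat runs skip work
    Returns new (match_id, loser_id, winner_id) tuples.
    """
    collected = []
    for bot_id in range(1, max_id + 1):
        for match in fetch_matches(bot_id):  # the single API request per bot
            if match['loser'] == bot_id and match['id'] not in seen_losses:
                seen_losses.add(match['id'])
                collected.append((match['id'], match['loser'], match['winner']))
    return collected

# Example with a stubbed API: the same match shows up under both players,
# but is only recorded once, under the loser
fake_api = {
    1: [{'id': 10, 'winner': 2, 'loser': 1}],
    2: [{'id': 10, 'winner': 2, 'loser': 1}],
}
new = construct(lambda b: fake_api.get(b, []), max_id=2, seen_losses=set())
print(new)  # -> [(10, 1, 2)]
```

Persisting `seen_losses` between runs is one way to cut the redundant requests mentioned above.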


Refactor:

This step converts the various IDs that Construct collects into a usable database. First, the program checks whether we've already refactored a match; if we have, we skip it. This allows the program to get faster each time it runs, since it never has to check a match twice. For each new game collected in Construct, Refactor looks up the replay file in the API. This gives us access to many different factors that influence how likely we are to choose a specific match over another. We take those factors into account and bundle them into a "score" for each match we consider: the higher the score, the more likely we are to successfully recreate that strategy. Refactor then attaches the score, the starting layout, and a few other pieces of data to the data we already have.
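The exact factors behind the score aren't spelled out above, so here is a hedged sketch with assumed factors (recency, game length, winning margin) and made-up weightings:

```python
def score_match(match):
    """Bundle match features into one score; higher = easier to recreate.

    The factors and weights here are illustrative assumptions -- the post
    does not list the actual ones.
    """
    recency = 1.0 / (1 + match['days_old'])        # newer replays age better
    brevity = 1.0 / (1 + match['turns'] / 100)     # shorter games drift less
    margin = match['winner_health'] / 30.0         # decisive wins are robust
    return round(10 * (recency + brevity + margin), 2)

recent_rout = {'days_old': 0, 'turns': 40, 'winner_health': 30}
old_slog = {'days_old': 30, 'turns': 100, 'winner_health': 5}
print(score_match(recent_rout) > score_match(old_slog))  # True
```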


Update:

The update function serves a few purposes. It adds the current elo for each target algo to the data set, then picks the best game from each target for use by the bot. This ensures that we have the most up-to-date elo for each bot and only ever choose the best game for each. Additionally, having a shorter list of possible games to send to the bot speeds up the run-time of the following process, Prune/Push, which I will cover later.
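Update's best-game-per-target selection could be sketched like this (the record fields and the `elo_lookup` callable are illustrative assumptions; the real code reads elo from the API):

```python
def update(matches, elo_lookup):
    """Keep only the best-scoring game per target algo, with fresh elo.

    matches: list of dicts with at least 'target_id' and 'score'
    elo_lookup: callable target_id -> current elo (stands in for the API)
    """
    best = {}
    for m in matches:
        held = best.get(m['target_id'])
        if held is None or m['score'] > held['score']:
            best[m['target_id']] = m
    for target_id, m in best.items():
        m['elo'] = elo_lookup(target_id)  # refresh elo on the survivors only
    return list(best.values())

# Example: two games against target 7; only the higher-scoring one survives
matches = [
    {'target_id': 7, 'score': 3},
    {'target_id': 7, 'score': 9},
    {'target_id': 8, 'score': 5},
]
refreshed = update(matches, lambda t: 1500 + t)
```

Shrinking the list here is what makes the later Prune/Push pass cheap.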


Eliminate:

This function is pretty straightforward. It maintains a list of opponent algos that our bot has already played. After playing an algo, we exclude its data from the final database sent to each bot, so that our bot has a higher chance of making a correct identification. This works because algos never play each other twice; any reduction in false data increases our chance of an accurate identification.
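A sketch of the Eliminate filter (the entry shapes are illustrative):

```python
def eliminate(database, played_ids):
    """Drop entries for opponents we've already faced -- algos never play
    each other twice, so their layouts are only noise for identification."""
    return {oid: entry for oid, entry in database.items()
            if oid not in played_ids}

db = {
    'algo_1': {'layout': 'A'},
    'algo_2': {'layout': 'B'},
    'algo_3': {'layout': 'A'},
}
filtered = eliminate(db, played_ids={'algo_2'})
print(sorted(filtered))  # -> ['algo_1', 'algo_3']
```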


Prune/Push:

This function is the last step and is bot-specific. Each of our bots has a pruner that takes the bot's current status into account and generates a database tuned for that bot. Prune looks up our bot's elo first. Then, it cycles through the output of the update function, finding games whose target algo elo falls within a defined elo "bubble" around our bot. For example, we might only take games whose target elo is within ±50 of our own. This helps eliminate some misidentifications by only considering algos that we might actually play at our current elo. We can also use several differently sized bubbles to collect layouts that aren't represented in the smaller bubbles. Prune then cycles through all the selected games and, where there are duplicates, picks the best game for each layout. Once complete, it pushes the final database to GitHub for our bot to use.
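The bubble selection could be sketched like this (field names and bubble sizes are illustrative assumptions):

```python
def prune(games, our_elo, bubbles=(50, 150)):
    """Build a bot-specific database using elo 'bubbles' around our rating."""
    chosen = {}  # starting layout -> chosen game
    for radius in sorted(bubbles):  # tightest bubble first
        # Best-scoring game per layout within this bubble
        candidates = {}
        for g in games:
            if abs(g['target_elo'] - our_elo) <= radius:
                held = candidates.get(g['layout'])
                if held is None or g['score'] > held['score']:
                    candidates[g['layout']] = g
        # A wider bubble only contributes layouts the tighter ones missed
        for layout, g in candidates.items():
            chosen.setdefault(layout, g)
    return list(chosen.values())

# Example: layout 'B' only exists outside the +/-50 bubble, so the
# wider +/-150 bubble picks it up without disturbing layout 'A'
games = [
    {'layout': 'A', 'target_elo': 1520, 'score': 5},
    {'layout': 'A', 'target_elo': 1480, 'score': 8},
    {'layout': 'B', 'target_elo': 1650, 'score': 9},
]
db = prune(games, our_elo=1500)
```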

So that’s pretty much how our bots worked. We’re really proud of the work we put in and the results it garnered. A detailed account of our 16 days to code and some of the rationale behind our decisions can be found here in our development timeline write-up: https://pastebin.com/75YGC2c8

We look forward to improving this process for Season 2 and are eager to compete again.

-Steve and Chris

Send email tutorial:


import smtplib
from email.message import EmailMessage

Call the function like this:

text = "whatever you'd like your bot to say to you"
self.send_email(text)

Create a def called "send_email":

def send_email(self, text):
    msg = EmailMessage()  # create the email message
    msg.set_content(text)  # set the email body
    msg['Subject'] = "whatever you'd like the subject line to be"  # set the subject
    msg['From'] = 'yourbotdummyemail@gmail.com'  # sender address
    msg['To'] = 'youremailhere@gmail.com'  # receiver address
    server = smtplib.SMTP('smtp.gmail.com', 587)  # change this for other email providers
    server.starttls()  # upgrade the connection to TLS
    server.login('yourbotdummyemail@gmail.com', 'password')  # your bot's email login info
    server.send_message(msg)  # send the email
    server.quit()  # close the connection to the server

Comments on the send email function:

The ability to send information back from your live bots has many interesting consequences. Firstly, it gives you feedback on the strategy and decisions your bot makes when playing live matches. It also allows for online debugging, which is something many users have posted about in the forums. It can be used in countless other ways, which I'm sure you can think of. One downside is that you have to store your login information in plain text, so definitely create a separate email account for your bot that doesn't matter and can be easily recovered. There are certainly other ways of sending information back from your bot using Python and other languages, so hopefully this starts some community discussion on the topic.

Pulling an external file tutorial:

Install requests:

Using command prompt or linux terminal

pip install requests

As of posting, we are on version 2.21.0
You can check your version in command prompt or terminal with

pip freeze


import requests

Call the function like this:

url = 'http://raw.githubusercontent.com/<githubuser>/<repo>/<branch>/<file>'
my_external_file = self.get_external_file(url)

Create the def "get_external_file":

def get_external_file(self, url):
    # verify=False skips SSL certificate verification
    external_file = requests.get(url, verify=False).text
    return external_file

Comments on pulling an external file:

Pulling an external file was essential to our bot's success and offers many interesting opportunities. For example, it could be used to pull the weights for a neural network: the network trains on a user's computer, and the bot simply pulls the constantly re-trained weights from online. You could also write a small six-line program to pull your entire bot file, making the deadline to upload your zip file less meaningful. We held off on posting this because we were discussing it with C1, but as far as we can tell, they are okay with it.


What if your opponent didn't place anything in round 0?

Also, will you be participating in Season 2?

Heh, this is very interesting. If external file access is implemented, there's no more "one algo ID, one behavior" expectation. :smile: :thinking:

If our opponent didn't place anything on round 0, we referred to that as a null layout and treated it the same as any other layout. We explored other possibilities, such as treating the second turn as the first if they didn't play anything on turn 0, but we found this put our bot at too much of a disadvantage. We simply ran the same procedure for null layouts as for a regular layout, picking the best game against the opponent we thought we were facing. The bubble method helps here, since a lot of algos play nothing on turn 0.

We currently don’t have any bots uploaded for season two but we plan to in the future.

Indeed! It should make things a lot more interesting for top bots.

Everything worked locally; however, when I uploaded it I got this:

Player 1 Errors:

Traceback (most recent call last):
  File "/tmp/algo8223732250481045043/algo_strategy.py", line 7, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'

I was uncertain whether they had the package installed on their server, but wanted to give it a go anyway. Did you somehow request that the 'requests' package be installed?

The algo crashes when playing by hand; however, it plays ranked matches without crashing. :thinking:

Edit: this was due to the fact that the external file was only used for debug output, essentially reading text from an external file and printing it out. Because debug output is disabled in ranked matches, there is no issue.

Ok, had to spend a little time.
So here is how I did that.

class AlgoStrategy(gamelib.AlgoCore):

    def __init__(self):
        # added code
        self.install_and_import('requests')
        url = 'https://raw.githubusercontent.com/janis-s/external-terminal/master/say_hello.text'
        gamelib.debug_write('Hello, I can speak with the World. The World says: {}'.format(
            requests.get(url, verify=False).text))

    def install_and_import(self, package):
        import importlib
        try:
            globals()[package] = importlib.import_module(package)
        except ImportError:
            self.install(package)
            globals()[package] = importlib.import_module(package)

    def install(self, package):
        import subprocess, sys
        subprocess.call([sys.executable, "-m", "pip", "install", package])

This resulted in:
/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings

Hello, I can speak with the World. The World says: Hello from outside :slight_smile:


Is it allowed to send requests to the outside? Like, could we communicate the state to a server that sends back the actions to take each step?

I guess that as long as you can do it within the 5s limit, it would not hurt you.

About C1’s attitude towards this, I do not know. @C1Ryan

We are planning on disabling network requests. Until we do, users will not be penalized for making use of them.


Thanks, I was afraid that it would lead to abuse (like using another, more powerful server to do all the calculation and send back the instructions).

However, it would be great if we were allowed one communication at the end. The algo could then send an automatic report of the game to its user. This would greatly help algo improvement by allowing after-match analysis of the decisions made.
What do you think?


Maybe the server could automatically send an email of the debug_write() output collected during the game when an algo crashes, or perhaps, when some debug setting is toggled on, emails could be sent to the user.
