Hello! Steve here from 16daystocode to discuss how our bots worked, some of the issues we ran into, and a few points we'd like the community's opinion on. Our bots were indeed purely mimic bots. Each of our bots consisted of 2 components: the actual bot that we uploaded, and the database of matches that it pulled from. If you would like to learn more about our development process, my partner Chris wrote a development timeline, which is linked below. Basically, we implemented a small-scale version of agile.
First let’s discuss the bot itself:
Our bot (the program we upload to C1’s servers) is basically an interpreter: it analyzes the first move of its opponent and then tries to find that same layout in its database to identify the opponent. If it can’t find a perfect match, it selects the opponent with the most similar starting layout. Next, our bot executes the winning strategy stored in the database against that opponent, making a few considerations along the way. Each turn, our bot first attempts to build anything that was built that turn in the replay. Next, it attempts to “heal itself” by rebuilding any pieces that were destroyed but should still be standing at that point in the replay. If our bot runs out of replay to emulate, it loops back to an arbitrary spot in the replay and continues from there. We didn’t spend much time on a better system for this because we rarely needed it. Additionally, our bots use a default Python module to send us an email containing various information that helps us determine how effectively they are operating. This gave us valuable feedback on how our bots worked and exposed holes in our database. Anyone struggling to get feedback from the live servers should consider this route; we’ve included a brief tutorial at the end of this post on how to do it.
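To make the identification step concrete, here is a minimal sketch of the lookup, assuming layouts are stored as sets of placement tuples and using simple overlap as the similarity measure; our actual data format and similarity logic differ:

def identify_opponent(first_move, database):
    # first_move and each stored layout are sets of (x, y, unit_type)
    # tuples -- an illustrative format, not our exact one
    best_entry, best_overlap = None, -1
    for entry in database:
        layout = entry['starting_layout']
        if layout == first_move:
            return entry            # perfect match: this is our opponent
        overlap = len(layout & first_move)
        if overlap > best_overlap:  # otherwise keep the closest layout
            best_entry, best_overlap = entry, overlap
    return best_entry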
Next is the database:
Our database is a remotely hosted, updatable database that we build locally and then push to GitHub for our bot to download and use at run-time. Using Git and GitHub allowed us to update the database without the URL changing, so the bots only have to request from a single location. Our bot uses the Requests module to download the database at run-time; we’ve included a brief tutorial on Requests at the end. The database is built by 5 main functions, each operating independently of the others: Construct, Refactor, Update, Eliminate, and Prune/Push.
Construct:
Our Construct function is effectively a drag-net for bot ids. It iterates through all bot ids (since they are sequential) and checks whether each has any losses. It pulls only one file from the API per bot and parses that file for all of that bot’s recent losses. The API requests are by far the most time-consuming part of this strategy, and most of our development time was spent eliminating as many redundant or extraneous API requests as possible. Some quick definitions of terms I’ll be using: the target algo is the algo that gets defeated in the replay we are using; the emulated algo is the algo we are trying to emulate to defeat the opponent in the live game. The target algo and the live opponent should, in theory, be the same algo if we match them correctly.
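As a rough illustration of the drag-net, the loop has roughly this shape (the endpoint and response field names below are placeholders, not the real API):

import requests

API_BASE = 'https://example.com/api'  # placeholder, not the real endpoint

def construct(first_id, last_id):
    # one request per bot: fetch its match summary and record any losses
    loss_match_ids = []
    for algo_id in range(first_id, last_id + 1):
        resp = requests.get(f'{API_BASE}/algo/{algo_id}/matches')
        if resp.status_code != 200:
            continue                           # id not in use, or request failed
        for match in resp.json()['matches']:   # field names are illustrative
            if match['loser_id'] == algo_id:
                loss_match_ids.append(match['id'])
    return loss_match_ids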
Refactor:
This step converts the various ids that Construct collects into a usable database. First, the program checks whether we’ve already refactored a match; if we have, we skip it. This lets the program speed up over repeated runs, since it never has to process a match twice. For each new game collected in Construct, Refactor looks up the replay file in the API. This gives us access to many different factors that influence how likely we are to choose one match over another. We take those factors into account and bundle them into a “score” for each match we consider: the higher the score, the more likely we are to be able to successfully recreate that strategy. Refactor then attaches the score, the starting layout, and a few other pieces of data to the data we already have.
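The scoring itself is just a weighted combination of replay features. Here is the shape of it, with made-up factors and weights (we’re not listing our real ones):

def score_match(replay_summary):
    # factors and weights below are illustrative examples only
    score = 0.0
    score += 2.0 * replay_summary['winner_health_remaining']  # decisive wins replay well
    score -= 1.0 * replay_summary['turns_played']             # shorter games drift less
    score += 0.1 * replay_summary['winner_elo']               # prefer strong strategies
    return score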
Update:
The update function serves a few purposes. It writes the current elo for each target algo into the data set, then picks the best game from each target for use by the bot. This ensures that we have the most up-to-date elo for each bot and that only the best game for each can be chosen. Additionally, a shorter list of candidate games speeds up the run-time of the following step, Prune/Push, which I will cover later.
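Picking the best game per target is essentially a group-by with a max; a minimal sketch, assuming each match record carries a 'target_id' and the 'score' computed during Refactor:

def update(matches):
    best = {}
    for match in matches:
        tid = match['target_id']
        # keep only the highest-scoring game for each target algo
        if tid not in best or match['score'] > best[tid]['score']:
            best[tid] = match
    return list(best.values())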
Eliminate:
This function is pretty straightforward. It maintains a list of opponent algos that our bot has already played. After we play an algo, we exclude its data from the final database sent to each bot, giving our bot a higher chance of making a correct identification. This works because algos never play each other twice, so any reduction in false data increases our chance of an accurate identification.
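The filter itself is tiny once you track the opponents you have played (field names assumed):

def eliminate(best_games, played_ids):
    # drop games whose target we've already faced -- no pairing repeats,
    # so those entries could only cause misidentifications
    return [g for g in best_games if g['target_id'] not in played_ids]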
Prune/Push:
This function is the last step and is bot-specific. Each of our bots has a pruner that takes that bot’s current status into account and generates a database tuned for it. Prune looks up our bot’s elo first. Then it cycles through the output of the Update function, finding games whose target algo elo falls within a defined elo “bubble” around our bot; for example, we might only take games whose target elo is within ±50 of our own. This eliminates some misidentifications by only considering algos that we might actually play at our current elo. We also use several different bubble sizes to collect layouts that aren’t represented in the smaller bubbles. Prune then cycles through all the selected games and, where a layout appears more than once, keeps the best game for it. Once complete, it pushes the final database to GitHub for our bot to use.
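Putting the bubble logic together, a simplified pruner might look like the following. The bubble radii and field names are illustrative, and the push itself (not shown) amounts to committing the generated file and pushing it to the GitHub repo:

def prune(best_games, our_elo, bubbles=(50, 150, 400)):
    chosen = {}
    for radius in bubbles:                    # tightest bubble claims layouts first
        this_pass = {}
        for game in best_games:
            if abs(game['target_elo'] - our_elo) > radius:
                continue
            key = frozenset(game['starting_layout'])
            if key in chosen:
                continue                      # a smaller bubble already covered this layout
            # among duplicates of a layout, keep the best-scoring game
            if key not in this_pass or game['score'] > this_pass[key]['score']:
                this_pass[key] = game
        chosen.update(this_pass)
    return list(chosen.values())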
So that’s pretty much how our bots worked. We’re really proud of the work we put in and the results it garnered. A detailed account of our 16 days to code and some of the rationale behind our decisions can be found here in our development timeline write-up: https://pastebin.com/75YGC2c8
We look forward to improving this process for Season 2 and are eager to compete again.
-Steve and Chris
Send email tutorial:
Imports:
import smtplib
from email.message import EmailMessage
Call the function like this:
text = "whatever you'd like your bot to say to you"
self.send_email(text)
Create a def called “send_email”:
def send_email(self, text):
    msg = EmailMessage()                          # create the email message
    msg['Subject'] = "whatever you'd like the subject line to be"  # set subject
    msg['From'] = 'yourbotdummyemail@gmail.com'   # sender email address
    msg['To'] = 'youremailhere@gmail.com'         # receiver email address
    msg.set_content(text)                         # set the email body
    server = smtplib.SMTP('smtp.gmail.com', 587)  # change this for other email providers
    server.starttls()                             # upgrade the connection to TLS
    server.login('yourbotdummyemail@gmail.com', 'password')  # your bot's email login info
    server.send_message(msg)                      # send the email
    server.quit()                                 # close the connection to the server
Comments on the send email function:
The ability to send information back from your live bots has many interesting consequences. First, it gives you feedback on the strategy and decisions your bot makes in live matches. It also allows for online debugging, something many users have posted about in the forums. It can be used in countless other ways which I’m sure you guys can think of. One downside is that you have to store your login information in plain text, so definitely create a separate, throwaway email account for your bot that doesn’t matter and can be easily recovered. There are definitely other ways of sending information back from your bot using Python and other languages, so hopefully this starts some community discussion on the topic.
Pulling an external file tutorial:
Install Requests:
Using Command Prompt or a Linux terminal:
pip install requests
As of posting, we are on version 2.21.0. You can check your installed version with:
pip freeze
Imports:
import requests
Call the function like this:
url = 'https://raw.githubusercontent.com/<githubuser>/<repo>/<branch>/<file>'
my_external_file = self.get_external_file(url)
Create the def “get_external_file”:
def get_external_file(self, url):
    # verify=False skips SSL certificate verification; drop it if your
    # environment validates certificates normally
    external_file = requests.get(url, verify=False).text
    return external_file
Comments on pulling an external file:
Pulling an external file was essential to our bot’s success and offers many interesting opportunities. For example, it could be used to pull the weights for a neural network: the network trains on a user’s computer, and the bot simply pulls the constantly re-trained weights from online. You could also write a small 6-line program to pull your entire bot file, making the deadline to upload your zip file less meaningful. We held off on posting this while we discussed it with C1, but as far as we can tell, they are okay with it.