Running Multiple Games

I found that after improving an algo strategy to the point where I thought it was better than a previous version, I wanted to make sure. This meant running the game over and over and checking who won each time. To make this process faster and easier, I wrote a couple of scripts.

This is only for Windows (except for the Python script, obviously), since it is written in PowerShell.

There are two files, a PowerShell script and a very basic python script.
Both files must be put in the main game directory (e.g. the one with README, engine.jar, etc.).

The PowerShell script runs as many games as you want, while only running so many at a time (what I call the batch size).
So to run 20 games with only 5 running at a time, you would enter:

.\[NAME OF FILE] 20 5

Which in my case is:

.\run_multiple 20 5

You should not set the batch size to more than about 15, depending on your system, because it slows WAY down. I tried 100 when testing because I’m greedy :smile: and it crashed after a long time (granted, I don’t have a super powerful system).

After saving the code you will need to change the algos that the script runs, which should just mean changing the names to your algos. The comments have information about changing that, as well as the line number.

The Python script simply looks at all the replay files, checks who won each game (player 1 or 2), and then outputs the totals. You should save this file as ‘’ or you can edit the call in the PowerShell script.

Here is the code for the PowerShell:

# This is a PowerShell script to run a bunch of games with a single command.
# This is intended for when you have worked out all bugs and are testing to see if a new strategy really is better than an old one.

# This program assumes this file is in the main engine directory (eg dir with README and engine.jar)
# You can change this of course, but that is on you :).

# Inside $runprog you can see that there is the execution "scripts/run_match.ps1 algos/my-bot algos/starter-algo".
# This should be the same command you use to start your game locally. So change "my-bot" to be whatever the name is of your algo directory.

# The script at the end is a separate python file that looks at all the replay files and tells you how many games each player won.

# You call this script in PowerShell by running .\[THIS_FILE_NAME] [NUM_OF_GAMES] [BATCH_SIZE]

# An example would be: .\run_multiple 20 5
# This would run the game 20 times with only 5 games running at a time.

# You can leave out a batch size and the default is set to 5 (of course you can edit this below)
# so this command would do the same thing: .\run_multiple 20

# DO NOT RUN WITH A LARGE BATCH SIZE (like >15, depending on your computer) or else it will take forever and crash

# If you have questions just ask me on the forums - @Isaac

$global:completed = 0		# Number of games that have finished
$global:running = 0			# Number of games currently running
$dir = [System.IO.Path]::GetDirectoryName($MyInvocation.MyCommand.Path)		#Get the current directory of this script

# Set batch size default - sets how many games can run at one time
If (!$args[1]) {
	$global:batch = 5
} ElseIf ($args[1] -lt 0) {
	$global:batch = 5
} Else {
	$global:batch = $args[1]
}
# Main loop that runs each game - each loop starts a game in a new powershell (not visible)
For ($i = 1; $i -lt $args[0]+1; $i++) {
	Write-Host "started game #$i"
	# This is the program that is run, so essentially just starts the game
	$runprog = {
		param($num, $path)		# Matches the -ArgumentList below: game number, then script directory
		cd $path
		scripts/run_match.ps1 algos/my-bot algos/starter-algo | Out-Null				# Edit this to change the algos you are running
	}
	$job = Start-Job $runprog -ArgumentList $i,$dir 		# Start the new powershell
	$global:running++										# Add 1 to the number of running programs
	# This is triggered when the associated game ends
	$jobEvent = Register-ObjectEvent $job StateChanged -MessageData $i -Action {
		Write-Host ("finished game #$($event.MessageData)")
		$global:completed++				# Add to our completed counter
		$global:running--				# Reduce the number of running games by 1
		$jobEvent | Unregister-Event	# Remove the job
	}
	# If the number of games running equals the batch size - wait here
	While (($global:batch - $global:running) -eq 0) {}
}

# Wait for everything to complete
While ($global:completed -lt $args[0]) {}


# This script looks at all the replay files and prints out the number of winners

Here is the code for the Python file (save it as ‘’ or edit the PowerShell command):


'''
This is a python script to check all replay files in the /replays folder.
It looks at each file and simply looks for a string to see which player won.

This program assumes this file is in the main engine directory (eg dir with README and engine.jar)
It also assumes the "replays" folder exists.
You can change this of course, but that is on you :).

If you have questions just ask me on the forums - @Isaac
'''

try:
	import os

	replayDir = os.path.join(os.getcwd(), 'replays')	# Path to the "replays" directory

	# Initialize counters
	p1WinCnt = 0
	p2WinCnt = 0
	unknown = 0

	# Loop through every file in "replays" directory
	for filename in os.listdir(replayDir):
		if filename.endswith(".replay"):
			with open(os.path.join(replayDir, filename), 'r') as file:
				data = file.read()
				if (data.find('"winner":1') != -1): p1WinCnt += 1
				elif (data.find('"winner":2') != -1): p2WinCnt += 1
				else: unknown += 1

	# Print results
	print ('Player 1 Wins: {}'.format(p1WinCnt))
	print ('Player 2 Wins: {}'.format(p2WinCnt))

	if (unknown > 0): print ('Could not find winner: {}'.format(unknown))

	print ()
except Exception as e:
	print (e)

Hope this helps some people :).


Wow, this is awesome. If you want, the C1 starter kit is open source. You can make a PR, and we might review it and add it to the starter kit.


Awesome! Will do :).

Fun stuff!

@Isaac - does this run the same algos for every game?

I’d estimate all my own algos to be deterministic - they play the same way every game. So repeating 20x will just give the same result as playing it once.

If I get around to it I may try to tweak the PowerShell to run through a list of algos for player 2 and see if my new algo can beat each iteration of my other attempts that I still have lying around.

Yes, it runs the same algos every time, since I am trying to design my algo to be very much non-deterministic.

To change this you could simply edit the for loop to go through all directories in your /algos folder (or just make an array)

So you could get all the directories (algos) with this as long as your current dir is /algos (or whatever directory contains all the algos you want to compete with):

dir -Directory | Select-Object FullName


And then run against each of them. Alternatively, you could pit every algo against every algo and determine which did best overall (you would need to edit the

You would edit the line inside of $runprog from:

scripts/run.ps1 algos/starter-algo algos/starter-algo | Out-Null

to be:

scripts/run.ps1 algos/testing-algo [DIR] | Out-Null

where [DIR] is obtained from dir -Directory | Select-Object FullName.

And the For loop would no longer loop through your input, instead it would loop through directories (algos).

Good luck! Let me know how it goes :).


I was able to modify the script to run every algo in my algos folder.

@Isaac - did you make a PR into the starter kit? Maybe it’d be worth merging this into a single script that you can pick which way to run. But mine is just a straight edit of yours currently.

My version gets invoked with algo-name and batch size:

PS :> .\run_brawl.ps1 my-starter-algo 5

I ended up going with a relative path instead of the full path.

$algos = dir algos -Directory | Select-Object Name
$algoCount = $algos.count

The main loop was changed to iterate the whole list.

# Main loop that runs each game - each loop starts a game in a new powershell (not visible)
For ($i = 1; $i -lt $algoCount+1; $i++) {
    $p1Name = $args[0]
    $p2Name = $algos[$i - 1].Name
	Write-Host "started game #$i. $p1Name vs $p2Name"
	# This is the program that is run, so essentially just starts the game
	$runprog = {
		param($path, $p1Name, $p2Name)
		cd $path
		scripts/run_match.ps1 algos/$p1Name algos/$p2Name | Out-Null				# Runs player 1 against the current algo from the list
	}
	# (the rest of the loop body is unchanged, except Start-Job now passes $dir, $p1Name, $p2Name via -ArgumentList)
}

It ended up being pretty simple once I realized the importance of the runprog params!

I left the Python script alone, so it still prints out the number of wins for player 1 and 2. Since your algo always gets invoked as player 1, you’re hoping for a 100% win rate. Ideally any p1 loss would show the p2 algo that beat it, but I didn’t get that far.
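For anyone who wants that "which algo beat me" report, here is a minimal Python sketch in the spirit of the results script. It makes two unverified assumptions: that a replay mentions the string `"winner":2` when player 2 wins, and that the final non-empty line of a `.replay` file is JSON with an `endStats` block holding per-player `name` fields (the `endStats` key name is a guess).

```python
import json
import os

def losing_matchups(replay_dir):
    """List (replay file, player-2 algo name) for every game player 1 lost.

    Assumptions (not verified against the engine): the replay contains
    '"winner":2' when player 2 wins, and the final non-empty line is JSON
    with an "endStats" block holding per-player "name" fields.
    """
    losses = []
    for filename in os.listdir(replay_dir):
        if not filename.endswith('.replay'):
            continue
        with open(os.path.join(replay_dir, filename)) as f:
            lines = [ln for ln in f.read().splitlines() if ln.strip()]
        if not any('"winner":2' in ln for ln in lines):
            continue  # player 1 won (or no winner recorded) - skip
        try:
            stats = json.loads(lines[-1])
            p2name = stats.get('endStats', {}).get('player2', {}).get('name', '?')
        except ValueError:
            p2name = '?'  # final line was not parseable JSON
        losses.append((filename, p2name))
    return losses
```

Printing `losing_matchups('replays')` after a brawl run would then name the opponents that got through.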


This is awesome! I have submitted the PR and it has been approved, but C1 just has to do some stuff on their end before it is actually part of the starter kit.

I like your idea of merging the scripts into one, but for now, I’m not too concerned about implementing it :), maybe I’ll work on it later or if you want to that’d be awesome as well.

Nice solution to not have to edit the file. I plan on making a python script to run basic statistics that will more fully analyze replay files individually and as a whole (maybe within the next week? Depends on my schedule). This script would keep track of algos and hopefully look at things like bits earned/spent, cores earned/spent, the time between spawning units, etc compared to wins to try and get helpful information from all your games. So if you have any ideas for useful statistics let me know. (I would add this file to the contributions starter-kit as well).

Thanks! At some point I’ll probably play around with merging the scripts, but I’m trying to balance that with actually working on my algos!

If you haven’t already seen it, the final line of the replay includes some pretty interesting stats:

       "player1": {
			"stationary_resource_spent": 6.0,
			"dynamic_resource_spoiled": 10.0,
			"crashed": false,
			"name": "my-algo",
			"dynamic_resource_destroyed": 9.0,
			"dynamic_resource_spent": 46.0,
			"stationary_resource_left_on_board": 6.0,
			"points_scored": 31.0,
			"total_computation_time": 12220
		}

That might save some manual tracking work for you. I definitely think the destroyed stats are useful.
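Building on that, here is a hedged sketch of the kind of aggregation a statistics script could do: averaging a few of those end-of-game fields for player 1 across every replay. It assumes (unverified) that each `.replay`'s final non-empty line is JSON shaped like the excerpt above, with a top-level `endStats` key wrapping `player1` (that key name is a guess).

```python
import json
import os

# Field names copied from the end-of-replay stats excerpt above.
FIELDS = ('points_scored', 'dynamic_resource_spent', 'total_computation_time')

def average_player1_stats(replay_dir):
    """Average a few player-1 endgame stats over all replays.

    Assumes (unverified) that each .replay's final non-empty line is JSON
    with an "endStats" -> "player1" block like the one quoted above.
    """
    totals = {f: 0.0 for f in FIELDS}
    games = 0
    for filename in os.listdir(replay_dir):
        if not filename.endswith('.replay'):
            continue
        with open(os.path.join(replay_dir, filename)) as fh:
            lines = [ln for ln in fh.read().splitlines() if ln.strip()]
        try:
            p1 = json.loads(lines[-1])['endStats']['player1']
        except (IndexError, KeyError, ValueError):
            continue  # empty or malformed replay - skip it
        for f in FIELDS:
            totals[f] += p1.get(f, 0)
        games += 1
    return {f: totals[f] / games for f in FIELDS} if games else {}
```

Comparing these averages between algo versions is one way to spot regressions that a raw win count hides.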


I hadn’t seen that, thanks!

I’m intrigued by the notion of a non-deterministic algo.
Is it non-deterministic in the sense that it tries to read the opponent and make moves accordingly, and thus is different for different opponents?
Or in the more literal sense that if the same opponent makes identical moves during repeated games, your algo potentially responds to their identical moves in a different way each time?


You’re right, I should have been clearer. At the time I meant non-deterministic in the sense that the algo reads the situation and adapts its strategy dynamically. In other words, it would not just map to a single structure, or switch between several; rather, it would analyze each situation and try to win that one. So more like your first definition. I misused the word, since this would produce the same output every time.

So I guess a better word to describe it would be ‘adaptive’ or maybe ‘dynamic’.

However, since you bring up the option of true non-determinism (different outcomes from the same inputs), it is actually extremely easy to implement. In fact, I could change the algo I am currently working on from non-deterministic to deterministic (while still following the same logic/strategy) by adding a single line of code. The distinction here is that I’m not just randomizing where I spawn units; the strategy itself stays the same, but it is non-deterministic because the algorithm it is based on is non-deterministic.
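The "single line" switch can be illustrated with a hypothetical sketch (not the author's actual code): if the randomness comes from a standard PRNG, fixing the seed makes the same random-driven strategy reproducible.

```python
import random

def pick_spawn(candidates):
    """Hypothetical spawn choice driven by randomness: the strategy
    ("pick one of the candidate edge locations") stays the same, but
    the concrete output can vary from run to run."""
    return random.choice(candidates)

# The single line that flips the algo from non-deterministic to
# deterministic: with a fixed seed, identical inputs now produce
# identical outputs every game.
random.seed(42)
```

Remove (or comment out) the `random.seed(42)` line and the same logic becomes non-deterministic again across runs.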

The question, then, is not how to make an algorithm non-deterministic, since you can just use a random number generator (ignoring the fact that they are pseudo-random) and get non-deterministic results. Instead, the question I find interesting is: how could non-deterministic behavior result in a better overall algo?

I’m having difficulty thinking of advantages :slight_smile: and there are certainly many disadvantages. One possible good thing could be unpredictability, but this depends on your opponent. For example, if you are competing against an algo that is making decisions by trying to predict where you will develop your strategy on the next turn, randomness would make it very hard for this algo to beat yours. However, I don’t think this would help overall since most algos (currently) are not thinking this far ahead.

Anyways, those are just some thoughts. Would love to hear what other people think :thinking:.

Your assessment is pretty similar to mine.

Kudos for going for an algo that comes up with a base (from scratch?) on the fly. It seemed like something of a holy grail if it works, but all my attempts so far to produce that have failed.
At the moment I’ve settled for the “rock-paper-scissors” approach, with several pre-designed building options, and a finite state machine and fitness function that just take their best guess at what counters the opponent. At the moment very few algos are adaptive so this works well, but I’m sure that won’t be the case for too long.

Fun stuff! My main algo is adaptive in the same sense you guys are discussing, which is why I was interested in playing as many different games locally as I could. I’ve happened to keep an archive of each version I’ve uploaded, so that’s let me inflate my # of algos locally to a couple dozen.

One of my algos builds a base completely on the fly, and it’s having mixed success. The best performing version went 12-5 today. I keep iterating on it, but I’m trying to balance adding new versions and letting it get enough time in the global competition to really understand how it does.

It’s building in a reactive manner right now - if an enemy breaches me, I’ll defend the breached area with destructors. If it detects the enemy is building a horizontal wall, it mirrors it at a distance to allow max EMP damage. Some games it performs beautifully, but others it never quite catches up since it plays from behind (it literally does nothing the first turn).


Yeah, I have a similar problem when it comes to being one step behind. I personally view it as the largest problem facing adaptive algos, since their primary input is what the enemy is doing; as we know, that data is always one turn behind reality. I have a couple of ideas for how to minimize this risk, but all untested so far. I’ve mainly been building a library to calculate a whole bunch of data, and then I’ll worry about testing which data matters :crossed_fingers:.

My best iteration of my adaptive algo managed to hit #11 on the global leaderboard for a short time. But, as is often the case, it’s not even my newest iteration. It seems what I think is best doesn’t always translate to better performance.

But the recent analysis of the dumb-dumb algo inspired me to consider that what matters isn’t so much where the enemy breaches, but where their units spawn. It seems most algos tend to have a pretty static spawn point. So the goal now is to abuse that … we’ll see if it translates to an actual advantage though!


Lol yeah, the dumb-dumb algo gave me a lot of ideas for exploitation :grin:.

I think you’re right about most algos having a static spawning point. I’m super curious to see how the game changes when people start changing things up. I think it will get much harder to say “this is good” and “this is bad”, even with a change as simple as, say, switching spawn sides every once in a while.

Switching spawn sides can be surprisingly effective, even at basic levels of coding. One of my first algos had me switch sides for deploying attackers once my health went below a certain level (this was before I figured out how to analyse action frames), and it worked pretty well against static algos.


For anyone interested, this code is now part of the C1StarterKit in python (it can also be run by linux users) so you can get it from there.

(It has been slightly modified with @n-sanders’s comments)


@Isaac how do I get this thing to save replays?
EDIT: it’s just an error with Java. I don’t know Java, but I assume my Java version and the starter kit’s don’t match. It says this: Exception in thread “main” java.lang.UnsupportedClassVersionError: com/c1games/terminal/Terminal has been compiled by a more recent version of the Java Runtime (class file version 54.0), this version of the Java Runtime only recognizes class file versions up to 52.0

Thanks for pointing this out. I’ll consider bumping our Java versions soon. For now, you may have to install an older version of Java.

Update: We will be bumping versions for various dependencies after the current competition season.