Getting Replay Results

Hi everyone,

I have another file I’d like to share :). This one is the fledglings of a replay file analyzer and is in its infancy right now, but I wanted to share it to get feedback and see what people would like added.

The header contains most of the information on how to run it, but a brief overview:

  • The file should be saved in the /scripts/contributions/ directory
  • Default run is

py scripts/contributions/[FILE_NAME].py

but this can be modified in any of the following ways to change the data displayed (right now not a whole lot):

py scripts/contributions/[FILE_NAME].py -f [REPLAY_FILE].replay [REPLAY_FILE].replay
py scripts/contributions/[FILE_NAME].py -n 3
py scripts/contributions/[FILE_NAME].py -avg health
py scripts/contributions/[FILE_NAME].py -g bits

The last one will graph the bits of each algo in a match, but you need to have matplotlib installed for it to work. You should be able to install it just by executing pip3 install matplotlib in your PowerShell or Terminal. You can still run everything without it if you don’t want the graphs (but why on earth wouldn’t you! :smile: )

The header has more information about each of these and what parameters are valid.

I will submit a pull request this to be added to the starter-kit once C1 is ready, but until then I figured I’d post it here so people could use it (and give feedback).

Right now it is quite limited (only displays end stats and can graph health, bits, and cores) but I wanted to know what ideas for what information people are most interested in getting. This code already has most of the functionality to compare multiple games against each other so I’d quite like to add that as well.

Here’s the code:

'''

This is a file to help display data about Terminal matches.
This file should be saved in the scripts/contributions/ directory.

You can call this by opening Powershell or Terminal in the main starter kit
file (where engine.java is located) the same way you would start a game.
Then, you can run it by executing:
>py scripts/contributions/[FILE_NAME].py
where FILE_NAME is the name of this file.

Just running this should output that looks something like this:
Getting Results:

-----------------------------------------------------------------------------------
Showing replays\p1-18-10-2018-13-51-45-1539892305554--547739390.replay
-----------------------------------------------------------------------------------
my-bot:
|
|      End Stats:
|      |               stationary_resource_spent : 343.0
|      |                dynamic_resource_spoiled : 381.1
|      |                                 crashed : False
|      |              dynamic_resource_destroyed : 25.0
|      |                  dynamic_resource_spent : 29.0
|      |       stationary_resource_left_on_board : 0.0
|      |                           points_scored : 4.0
|      |                  total_computation_time : 58238

starter-algo-ZIPME:
|
|      End Stats:
|      |               stationary_resource_spent : 101.0
|      |                dynamic_resource_spoiled : 125.1
|      |                                 crashed : False
|      |              dynamic_resource_destroyed : 222.0
|      |                  dynamic_resource_spent : 312.0
|      |       stationary_resource_left_on_board : 98.0
|      |                           points_scored : 30.0
|      |                  total_computation_time : 103

By default, this will run the replay file that was created the most recently.

----------------------------------------------------------------------------------------

You can specify which file you would like to run by:
>py scripts/contributions/[FILE_NAME].py -f [REPLAY_FILE].replay
where REPLAY_FILE is the file you'd like to look at. You can list as many files as you would like and it will run on each file. For example:
>py scripts/contributions/[FILE_NAME].py -f [REPLAY_FILE].replay [REPLAY_FILE].replay [REPLAY_FILE].replay

You can also specify how many replays back you would like to run (by date) using the -n parameter. Example:
>py scripts/contributions/[FILE_NAME].py -n 3
would run the last 3 games you ran

----------------------------------------------------------------------------------------

You can output the averages for health, bits, and cores for the match by using the following:
>py scripts/contributions/[FILE_NAME].py -avg health
The only (currently) accepted parameters for -avg are:
	- health
	- bits
	- cores
You can include 1, 2, or all 3 in your output. For example:
>py scripts/contributions/[FILE_NAME].py -avg health bits cores

----------------------------------------------------------------------------------------

Lastly, if you install matplotlib you can graph (currently) health, bits, and cores for matches.

Simply do:
>py scripts/contributions/[FILE_NAME].py -g [PARAMETERS]
Where PARAMETERS can be health, bits, or cores (you can do 2, or all 3 as well on one graph)

For example:
>py scripts/contributions/[FILE_NAME].py -g health


All of the commands above can be combined in any order and way. For example, if I wanted to run the last 2 replay files and only output the average heath and graph the number of bits, I would run:
>py scripts/contributions/[FILE_NAME].py -n 2 -avg health -g bits

If you forget you can also see the possible commands by:
>py scripts/contributions/[FILE_NAME].py -h

Also, all outputs use sys.stderr.write(), so you can call this directly from the
run_match.py safely without worrying about messing up communication from the engine.

I plan on adding more to this file, specifically the ability to graph more
data and get more broad statistics. I also plan on making it possible to get
data from across multiple matches (you'll actually notice a lot of the functionality is there, but just needs some finishing up).

If you have any suggestions let me know :) - @Isaac

'''

pltInstalled = False

try:
	import os
	import sys
	import json
	import glob
	import argparse
except ImportError as e:
	sys.stderr.write("WARNING: Module not found, full error:\n\n")
	sys.stderr.write(e)

try:
	import matplotlib.pyplot as plt
	pltInstalled = True
except ImportError:
	pass

# handles all the arguments
def ParseArgs():
	ap = argparse.ArgumentParser(add_help=False, formatter_class=argparse.RawTextHelpFormatter)
	ap.add_argument('-h', '--help', action='help', help='show this help message and exit\n\n')
	ap.add_argument(
		"-n", "--num",
		default=1,
		help="number of files (in order of date created) to analyze\n\n")
	ap.add_argument(
		"-avg", "--averages",
		nargs="*",
		default=[],
		help="data you would like the average of (not very useful right now)\nValid Options:\n\t- health\n\t- bits\n\t- cores\n\n")
	ap.add_argument(
		"-f", "--file",
		nargs="*",
		default=[],
		help="specify a replay file (or multiple) you'd like to analyze\n\n")
	ap.add_argument(
		"-g", "--graph",
		nargs="*",
		default=[],
		help="specify what data you would like to be graphed - you must have matplotlib installed\nValid Options:\n\t- health\n\t- bits\n\t- cores\n\n")
	return vars(ap.parse_args())


# Stores data pertaining to an individual Algo
class Algo:
	def __init__(self, name):
		self.name = name
		self.replays = {} # this effectively holds all information

	def __eq__(self, other):
		return self.name == other.name
	def __toString(self):
		return self.name
	def __str__(self):
		return self.__toString()
	def __repr__(self):
		return self.__toString()

	def getAverage(self, arg, replay):
		avg = 0.0
		div = 0.0

		for replay in self.replays:
			div += len(self.replays[replay])
			for turn in self.replays[replay]:
				if turn == 'endStats': continue
				avg += float(self.replays[replay][turn][arg])

		try:
			return avg / div
		except ZeroDivisionError:
			sys.stderr.write("Error: Dividing by zero")
			return -1

	def addData(self, replay, turn, arg, data):
		if replay in self.replays:
			if turn in self.replays[replay]:
				pass
			else:
				self.replays[replay][turn] = {}
		else:
			self.replays[replay] = {}
			self.replays[replay][turn] = {}

		self.replays[replay][turn][arg] = data

	def addEndStats(self, replay, endStats):
		self.replays[replay]['endStats'] = endStats;

	def printBlock(self, header, data):
		hLen = 7

		sys.stderr.write('|\n|{: >6}{}:\n'.format('', header))
		for arg in data:
			val = round(data[arg], 1) if type(data[arg]) == int or type(data[arg]) == float else data[arg]
			sys.stderr.write('|{: >{fill}}{: >40} : {}\n'.format('|', arg, val, fill=hLen))

	def printAvgs(self, options, arg, replay):
		data = {}
		if len(options[arg]) > 0:
			for lbl in options[arg]:
				try:
					data[lbl] = self.getAverage(lbl, replay)
				except KeyError:
					sys.stderr.write('Invalid parameter \'{}\'\n'.format(lbl))

			self.printBlock('Averages', data)

	def printEndStats(self, replay):
		del self.replays[replay]['endStats']['name']
		self.printBlock('End Stats', self.replays[replay]['endStats'])

	def dispData(self, options, replay):
		sys.stderr.write('{}:\n'.format(self))
		for arg in options:
			if arg == 'avg':
				self.printAvgs(options, arg, replay)
			elif arg == 'endStats':
				self.printEndStats(replay)
		sys.stderr.write('\n')

	def addPlot(self, options, replay):
		for lbl in options:
			data = [self.replays[replay][turn][lbl] for turn in self.replays[replay] if turn != 'endStats']
			plt.plot(data, label='{}\'s {}'.format(self, lbl))


# Stores data from a single replay and creates the Algo classes
class Replay:
	def __init__(self, fName):
		self.fname = fName;
		self.ref = None
		self.turns = {}
		self.validTurns = []

		self.loadData()
		self.unpackData()

	def __eq__(self, other):
		return self.fname == other.fname
	def __toString(self):
		return self.fname
	def __str__(self):
		return self.__toString()
	def __repr__(self):
		return self.__toString()

	def loadData(self):
		with open(self.fname) as f:
			for line in f:
				line = line.replace("\n", "")
				line = line.replace("\t", "")

				if (line != ''):
					data = json.loads(line)

					try:
						data['debug']
						self.ref = data
					except:
						turnNum = data['turnInfo'][1]
						frameNum = data['turnInfo'][2]
						self.turns[(turnNum, frameNum)] = data
						self.validTurns.append((turnNum, frameNum))

	def unpackData(self):
		try:
			self.algo1, self.algo2 = self.createAlgos()

			for t, f in self.getValidTurns():
				turn = self.getTurn(t)
				turnInfo = turn['turnInfo']
				p1Stats = turn['p1Stats']
				p2Stats = turn['p2Stats']

				self.algo1.addData(self.fname, t, 'health', p1Stats[0])
				self.algo1.addData(self.fname, t, 'cores', p1Stats[1])
				self.algo1.addData(self.fname, t, 'bits', p1Stats[2])
				self.algo1.addEndStats(self.fname, self.turns[self.validTurns[-1]]['endStats']['player1'])

				self.algo2.addData(self.fname, t, 'health', p2Stats[0])
				self.algo2.addData(self.fname, t, 'cores', p2Stats[1])
				self.algo2.addData(self.fname, t, 'bits', p2Stats[2])
				self.algo2.addEndStats(self.fname, self.turns[self.validTurns[-1]]['endStats']['player2'])
		except Exception as e:
			pass

	def createAlgos(self):
		endStats = self.turns[self.validTurns[-1]]['endStats']
		return Algo(endStats['player1']['name']), Algo(endStats['player2']['name'])

	def getAlgos(self):
		return [self.algo1, self.algo2]

	def getValidTurns(self):
		return self.validTurns
	def getTurns(self):
		return self.turns
	def getTurn(self, turn, frame=-1):
		return self.turns[(turn, frame)]

# handles opening multiple games (replays)
class FileHandler:
	def __init__(self):
		self.replays = []

	def getReplays(self):
		return self.replays

	def getLastReplay(self):
		return self.replays[0] if len(self.replays) > 0 else None

	def getReplay(self, i=0):
		if i >= len(self.replays):
			sys.stderr.write("Invalid replay")
			return None
		return self.replays[i]

	def __latestReplays(self, num=1):
		files = glob.glob('replays/*.replay')
		files = sorted(files, key=os.path.getctime, reverse=True)
		return files[:num]

	def loadFiles(self, num=1, fNames=[]):
		if len(fNames) > 0:
			for fName in fNames:
				if fName.find('replays') == -1:
					self.replays.append(Replay('replays/'+fName))
				else:
					self.replays.append(Replay(fName))
		else:
			for fName in self.__latestReplays(num):
				self.replays.append(Replay(fName))



if __name__ == '__main__':
	args = ParseArgs() # get command line arguments

	fh = FileHandler()
	fh.loadFiles(int(args['num']), args['file']) #loads the files - all JSON reading is here

	# check to see if matplotlib is installed
	graphingEnabled = True if len(args['graph']) > 0 else False
	if graphingEnabled and not pltInstalled:
		sys.stderr.write("WARNING: matplotlib not installed - no graphs will be shown\n\n")
		graphingEnabled = False


	# these options are passed to let the algo know what to display and add to the plots
	options = {
				'avg':		args['averages'],
				'endStats':	None,
				'graph':	args['graph']
			  }

	# loop through all replays
	for replay in fh.getReplays():
		sys.stderr.write('{:->90}\n'.format(''))
		sys.stderr.write('Showing {}\n'.format(replay.fname.replace('replays/', '')))
		sys.stderr.write('{:->90}\n'.format(''))

		try:
			for algo in replay.getAlgos():
				algo.dispData(options, replay.fname)

				if graphingEnabled:
					algo.addPlot(options['graph'], replay.fname)
		except Exception as e:
			sys.stderr.write('Error parsing file\n')
			sys.stderr.write(str(e)+'\n')

		# show the graph, if enabled
		if graphingEnabled:
			plt.ylabel('Value')
			plt.xlabel('Turn #')
			plt.legend(loc='best')
			plt.show()

		sys.stderr.write('\n')
3 Likes

For anyone interested, this code is now part of the C1StarterKit in python so you can get it from there.