I dunno if this will ever catch on considering the nature of the topic, but I was reminiscing about my Terminal history and my Terminal to-do list for Season 2 and found a few lighthearted gaffes, design decisions that are retroactively pretty silly, and best practices that I’ve elected to ignore.
On a whim, I thought it might be fun if there was a place for players to vent (about themselves) and share any (strictly harmless) guilty secrets they might be harboring about their time in Terminal. Anecdotes, replays, or code snippets are all encouraged.
Naturally, I’ll go first:
When Maze algos first appeared on the scene, I tried detecting them by checking whether there were no structures left standing in the front-center on my side of the board.
Many of my algos are un-commented spaghetti monsters with too few objects or functions, which has historically led to the occasional short-lived algo that executed things in the wrong order and built bases that were practically maximized to let enemy shots through, or some such nonsense.
I still have not re-written the starter kit path-finding function. Yeah, it’s Season 2, folks, and I’m still pathing with good ol’ navigate_multiple_endpoints in navigation.py.
I did not know Python refuses to throw an error when you nest “for x in y” loops and reuse the name x… Several perfectly defined but thoroughly unexpected behaviors took ages to debug because of this. The simulator was the worst occurrence by far (since I did everything in one function with very deep nesting).
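For anyone who hasn’t been bitten by this, here’s a minimal illustration of the shadowing trap (toy data, nothing from the actual simulator):

```python
# Python happily lets a nested loop reuse its outer loop's variable name.
rows = [[1, 2], [3, 4]]
cells = []
for x in rows:       # x starts out bound to a whole row...
    for x in x:      # ...then is silently rebound to each cell
        cells.append(x)
    # By this point, x is no longer the row: it's the row's last cell.
print(cells)  # [1, 2, 3, 4]
print(x)      # 4, not [3, 4]
```

The outer loop keeps working only because the iterator, not `x`, drives it, so nothing ever crashes; you just quietly read the wrong value after the inner loop.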
We realized with less than a week left in the competition (after the emoji series held the top spot for a while) that our action frame parsing wouldn’t do anything if neither side placed information units. Our predictor would just… skip the turn… We weren’t sure what kind of undefined behavior we were looking at so we were prepared to reintroduce the bug if fixing it somehow made our algo worse.
The algo we submitted for the UMich live event featured a simulator we had spent most of our time working on. However, aside from the numerous bugs that wouldn’t be discovered for months, we didn’t realize that we were indexing into only the left edge of the map when picking which simulation was best, causing us to almost never pick the right simulation. Actually, it was never the “right” simulation, since we only attacked from the left side.
I realised this morning that every algo I uploaded this past week had a major flaw:
When simulating the attack, the attacking units were actually marked as the opponent’s. So instead of finding the best way to destroy the opponent’s base, it was trying to find the best way to destroy its own base, from its own side.
The funny thing is that this barely altered the algos’ performance, since they still reached the top-10 elo range.
This isn’t so much an embarrassing thing as it is a side bot I wanted to build when I was bored and waiting for my current bots to build elo.
My current bots had a really solid way to predict attack patterns so I thought it’d be fun to add on to that and predict attacking unit placements and copy defences.
The end product was a hilarious bot that, in a perfect game, could copy everything its opponent was doing, the turn it was doing it.
I promptly named this bot DITTO, released it into the world, and watched a lot of players simply lose on time on turn 100. It was easily defeated by less predictable bots, but man, it was a lot of fun to watch.
Oh geez… That’s like a nightmare I would have, lol.
I recently found a bunch of bugs in my simulator where, if two units of different types were on the same position, they would both move with the fastest unit, even when it wasn’t their turn to move. I’d literally never noticed, since most algos just spawn one type of unit, but it’s been bugged for months :).
I had trouble vs one specific maze algo (starfish-inspired).
To make progress faster, I recreated the maze algo locally and used it as a sparring bot…
I was unable to beat the simple bot for two weeks!!
I convinced myself that this was just because my personal algo is way more optimized for real matches…
At some point, I uploaded the bot to do a specific test in the online simulator…
30 min later, the bot was sitting 150 elo higher than my “top algo”.
When I first coded my A* pathfinding, I initially set up a test to run the new pathfinding algorithm in parallel with the default one and crash if there was a difference in their results. After a few small bugs, all was stable.
path = self.shortest_path_finder.navigate_multiple_endpoints(...)
path2 = self.astar.navigate_multiple_endpoints(...)
if len(path) != len(path2) or path[-1] != path[-1]:
Around one month later, I noticed some incorrect pathing predictions. I re-enabled the parallel comparison… but it did not detect a problem.
It took me some time to realise my comparison test was wrong… it had been ignoring several pathing errors.
@arnby, @paprikadobi, @clinch, y’all need to stahp making algos that do different things every time I replay them even if my moves are identical. It’s filling my development process with double-takes and paranoia and making me crazy
( The problem is totally not that I go down rabbit holes and end up playing whack-a-mole with one-off bandaid solutions and ever-larger decision trees because I’m still not quite sure how to take full advantage of my new c++ high-accuracy simulator )