Low-Key Embarrassing Admissions (Terminal Blooper Reel)

I dunno if this will ever catch on considering the nature of the topic, but I was reminiscing about my Terminal history and my Terminal to-do list for Season 2 and found a few lighthearted gaffes, design decisions that in retrospect are clearly pretty silly, and best practices that I’ve elected to ignore.

On a whim, I thought it might be fun if there was a place for players to vent (about themselves :stuck_out_tongue:) and share any (strictly harmless) guilty secrets they might be harboring about their time in Terminal. Anecdotes, replays, or code snippets are all encouraged.

Naturally, I’ll go first: :blush:

  • When Maze algos first appeared on the scene, I tried detecting them by checking whether there were no structures left standing in the front-center on my side of the board.

  • Many of my algos are un-commented spaghetti monsters with too few objects or functions, which has historically led to the occasional short-lived algo that executed things in the wrong order and built bases that were practically optimized to let enemy shots through, or some such nonsense.

  • I still have not re-written the starter kit path-finding function. Yeah, it’s Season 2, folks, and I’m still pathing with good ol’ navigate_multiple_endpoints in navigation.py

4 Likes

I remember feeling the need to vent when I realized I had a break-y boi logic error in my simulator that caused many strange behaviors. Cringey meme shared below.

A couple other gems we found during season 1:

  • I did not know Python refuses to throw an error when you nest “for x in y” loops and give both loops the same variable name… There were several instances of unintended behavior that took ages to debug because of this. The simulator was the worst occurrence of this by far (since I did everything in one function with very deep nesting).
  • We realized with less than a week left in the competition (after the emoji series held the top spot for a while) that our action frame parsing wouldn’t do anything if neither side placed information units. Our predictor would just… skip the turn… We weren’t sure what kind of undefined behavior we were looking at so we were prepared to reintroduce the bug if fixing it somehow made our algo worse.
  • Our algo we submitted for the UMich live event featured a simulator we had spent most of our time working on. However, aside from the numerous bugs that wouldn’t be discovered for months, we didn’t realize that we were indexing into only the left edge of the map when picking what simulation was best, causing us to almost never pick the right simulation. Actually, it was never the “right” simulation since we only attacked from the left side :frowning:
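For anyone curious, the loop-shadowing gotcha above is easy to reproduce; Python silently rebinds the outer loop variable instead of complaining:

```python
matrix = [[1, 2], [3, 4]]
flat = []
for row in matrix:
    for row in row:  # inner loop reuses (shadows) the outer name "row"
        flat.append(row)

# No error is raised. After each inner loop finishes, "row" holds the last
# inner element rather than the outer list -- exactly the kind of bug that
# hides in one big function with very deep nesting.
print(flat)  # [1, 2, 3, 4]
```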
3 Likes

I’ll just leave it at those. Hits home hard… :sweat_smile:

1 Like

When Maze algos first appeared on the scene, I tried detecting them by checking whether there were no structures left standing in the front-center on my side of the board.

Funnily enough, my Track algos still use this method for detecting maze algos. For some reason it was more effective than other methods I tried to use, so I just stuck with it :stuck_out_tongue:
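For what it’s worth, that heuristic can be sketched roughly like this. Everything concrete here is an assumption for illustration: the coordinates are made up, and what counts as “front-center” on Terminal’s diamond board is a judgment call.

```python
# Invented example region: the "front-center" tiles on my side of the board.
FRONT_CENTER = [(x, y) for x in range(11, 17) for y in range(11, 14)]

def looks_like_maze(my_structures):
    """my_structures: set of (x, y) locations my structures occupy.

    If nothing of mine is left standing in the front-center, suspect a
    maze algo has been chewing through it.
    """
    return not any(loc in my_structures for loc in FRONT_CENTER)

print(looks_like_maze(set()))       # True: nothing standing up front
print(looks_like_maze({(13, 12)}))  # False: a structure survives there
```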

I still have not re-written the starter kit path-finding function. Yeah, it’s Season 2, folks, and I’m still pathing with good ol’ navigate_multiple_endpoints in navigation.py

Same here. To be fair, I only really started doing proper path analysis since season 2 began, but many many hours were wasted trying to get a proper pathing algorithm working (still no success yet :cry:)

I still have not re-written the starter kit path-finding function. Yeah, it’s Season 2, folks, and I’m still pathing with good ol’ navigate_multiple_endpoints in navigation.py

jup same :joy:

But the worst thing about my algorithms is probably that they still just count the attackers along every path to find the best possible route (no action-phase simulation, not even checking for encryptors).

I also have not rewritten the starter pathfinder, although the Rust one is faster.

In season 1, for a while my algos ran lots of simulations and picked the worst one. But the really embarrassing part is that the algo didn’t really do any better after I fixed it.

8 Likes

I realised this morning that every algo I uploaded over the past week had a major flaw:

When simulating the attack, the attacking units were actually marked as the opponent’s. So instead of finding the best way to destroy the opponent’s base, it was trying to find the best way to destroy its own base from its own side :sweat_smile:
The funny thing is that it barely altered the performance of the algos, since they still reached the top-10 elo range :joy:

3 Likes

This isn’t so much an embarrassing thing as it is a side bot that I wanted to build when I was bored and waiting for my current bots to build elo.
My current bots had a really solid way to predict attack patterns, so I thought it’d be fun to add on to that: predict attacking unit placements and copy defences.
The end product was a hilarious bot that, in a perfect game, could copy everything its opponent was doing, the turn it was doing it.
I promptly named this bot DITTO and released it into the world, and watched a lot of players simply lose on time on turn 100. It was easily defeated by less predictable bots, but man, it was a lot of fun to watch :joy:

4 Likes

Could you post that algo’s ID? I would love to see some of those games

1 Like

I’ll have to repost it. I had to take it down to make room while I was incrementing versions of my main bot, but I wouldn’t mind reposting it at all

1 Like

If that’s alright, that would be nice, thanks

Here’s DITTO’s id: 56193. He’s not a perfect ditto since he can’t predict defense patterns but he’s still not bad and is a good source of entertainment for me lol

1 Like

Let’s bring back this fun thread with my contribution for today:

I’ve just programmed for 8 hours without compiling or testing and now I’m too afraid to run anything :scream:

Edit: ( Spoiler: now I ran it and everything is on fire )

Edit #2: ( My new simulator runs now, but each turn takes 14 seconds :dizzy_face: )

2 Likes

Oh geez… That’s like a nightmare I would have, lol.

I recently found a bunch of bugs in my simulator, including one where, if two units of different types were on the same position, they would both move at the fastest unit’s speed, even when it wasn’t their turn to move. I’d literally never noticed, since most algos just spawn one type of unit, but it’s been bugged for months :).

I had trouble vs one specific maze algo (starfish-inspired).
To make progress faster, I recreated the maze algo locally and used it as a sparring bot…
I was unable to beat the simple bot for 2 weeks!!
I convinced myself that this was just because my personal algo is way more optimized for real matches.

At some point, I uploaded the bot to do a specific test in the online simulator…
30 min later, the bot was 150 elo higher than my “top algo”

3 Likes

When I first coded my A* pathfinding, I initially set up a test to run the new pathfinding algorithm in parallel with the default one, and crash if there was a difference in their results. After a few small bugs, all was stable.

    path = self.shortest_path_finder.navigate_multiple_endpoints(...)
    path2 = self.astar.navigate_multiple_endpoints(...)
    if len(path) != len(path2) or path[-1] != path[-1]:
        exit(1)

Around one month later, I noticed some incorrect pathing predictions. I re-enabled the parallel comparison … but it did not detect a problem.
It took me some time to realise my comparison test was wrong … and had been ignoring several pathing errors.
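For the record, here is a sketch of a check that actually compares the two paths. The tell is that `path[-1] != path[-1]` compares an endpoint to itself, which can never differ; presumably the intended fix was along these lines:

```python
def paths_differ(path, path2):
    # The broken test compared path[-1] to itself, so endpoint mismatches
    # always slipped through. Comparing endpoints across the two paths
    # (plus the length check) catches them.
    return len(path) != len(path2) or path[-1] != path2[-1]

same = [(0, 13), (1, 13)]
other = [(0, 13), (1, 12)]  # same length, different endpoint
print(paths_differ(same, same))   # False
print(paths_differ(same, other))  # True
```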

1 Like

@arnby, @paprikadobi, @clinch, y’all need to stahp making algos that do different things every time I replay them even if my moves are identical. It’s filling my development process with double-takes and paranoia and making me crazy :crazy_face:

( The problem is totally not that I go down rabbit holes and end up playing whack-a-mole with one-off bandaid solutions and ever-larger decision trees because I’m still not quite sure how to take full advantage of my new c++ high-accuracy simulator :rofl: )

3 Likes