Four of the top 8 algos in the CB challenge were “maze” algos. And AdrianMargel’s sawtoothV2, the originator of the 9-11-13 maze, won it all with a non-adaptive approach.
Which leaves me to eat my words:
sawtoothV2 builds the same structures in the same order every game and never spends more than 92 cores. Each turn he spawns as many EMPs on [24, 10] as he can. And he just won $2000.
Having hit the top8 myself with a maze algo, I wanted to share some thoughts on why I think it’s dominating the meta right now (some of it is probably pretty obvious):
- It fights a war of attrition:
a. Spending as many bits each turn as it can limits the bits lost due to the decay effect
b. The maze path is structured to make the most of the EMP’s long range
c. As long as it can deal more than 8 cores of damage every two turns (since it’s fine with “chipping away” at enemy stationary units) the opponent’s base will fall further and further behind.
- The maze is cheap to build, only around 10 destructors with maximized kill zones when fully built
- SawtoothV2’s static strategy minimizes mistakes
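The attrition arithmetic in point (c) can be sketched as a toy calculation. The 4-cores-per-turn income figure below is my assumption for illustration, not a quote from the rules; the point is only that damage above the break-even rate drags the opponent’s budget down turn over turn:

```python
def opponent_core_trend(turns, income_per_turn=4.0, damage_per_two_turns=8.0):
    """Track the opponent's effective core budget under steady chip damage.

    Assumes (hypothetically) ~4 cores of income per turn. If the maze deals
    more than 8 cores of structural damage every two turns, the opponent's
    budget shrinks over time; at exactly 8 it only breaks even.
    """
    budget = 0.0
    history = []
    for t in range(turns):
        budget += income_per_turn
        if t % 2 == 1:                      # damage lands every other turn
            budget -= damage_per_two_turns
        history.append(budget)
    return history

# Break-even at 8 cores / 2 turns; anything above it wins the attrition war.
print(opponent_core_trend(6)[-1])                            # → 0.0
print(opponent_core_trend(6, damage_per_two_turns=10.0)[-1]) # → -6.0
```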
I still believe adaptive algos will find long-term success, but regarding the third point - it’s easy to get too clever and shoot yourself in the foot in a number of ways.
Any other observations about the maze meta? I’ll admit I’m ready to focus on how to overcome it!
I agree wholeheartedly with the “it’s easy to get too clever and shoot yourself in the foot”. My algos that attempt to be adaptive have, without fail, run into a whole legion of bizarrely specific situations against static algos in which whatever static template they choose falls into the cracks of my algo’s decision-making engine, and I watch in horror as my algo makes edits to its base that let enemy shots walk right in. Against other adaptives the situation can be even worse, since at the moment my algo attempts to be predictive, which falls apart if the opponent is using a complex decision-making process.
That being said, adaptives are still in their infancy as a whole, and I agree that with more time and refinement they will become the meta.
I suppose I should add - I never actually gave my thoughts on the ‘maze meta’. My personal approach to adaptiveness attempts to bypass lots of R&D (hey, I’m a busy student) by beginning with a lot of hard-coded “meta knowledge”; all the adaptive algo has to do is identify what class the opponent is and transform into a “static” algo that counters it. There are, of course, about 10 flaws with this approach, but I still can’t write a fast enough pathing algorithm to do anything better at the moment.
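A minimal sketch of that “classify, then counter” idea. Everything here is hypothetical - the coordinate convention (treating y = 14 as the enemy row nearest our side of a 28-wide arena), the thresholds, and the counter names are my illustrative assumptions, not anyone’s actual algo:

```python
def classify_opponent(enemy_walls):
    """Guess the opponent's archetype from their stationary-unit layout.

    enemy_walls: set of (x, y) coordinates of enemy stationary units.
    Coordinates are hypothetical: y == 14 is taken as the enemy row
    closest to our side of the arena.
    """
    front_row = [xy for xy in enemy_walls if xy[1] == 14]
    # A near-solid front row with only a gap or two smells like a maze.
    if len(front_row) >= 24:
        return "maze"
    if enemy_walls:
        left = sum(1 for x, _ in enemy_walls if x < 7)
        right = sum(1 for x, _ in enemy_walls if x > 20)
        # Most units piled into one corner suggests a corner gun.
        if max(left, right) > 2 * len(enemy_walls) / 3:
            return "corner_gun"
    return "unknown"

# Each class maps to a hard-coded "static" counter template.
COUNTERS = {
    "maze": "pre-built anti-maze template",
    "corner_gun": "reinforce that corner, pressure the middle",
    "unknown": "default balanced build",
}

walls = {(x, 14) for x in range(28) if x != 24}   # front wall, one gap
print(COUNTERS[classify_opponent(walls)])         # → pre-built anti-maze template
```

The obvious flaw (one of the "about 10") is that a misclassification commits you to the wrong template, which is exactly the failure mode described above.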
My current thoughts on the “maze” are that if you let it get set up, you have probably already lost. The surest key to victory that I have found is early aggression with a heavily encrypted “corner gun” that just blasts health-buffed pings mercilessly to try to race the assembly of the maze. The problem with this is that it loses to enough other algos that it wouldn’t see any real leaderboard success as a strictly static algo, and if an adaptive algo attempts to deploy this, it might be too late by the time it identifies the opponent as a “maze” type.
While some of the “corner gun” strategies aren’t completely consistent, Aktualnyv5, the second-place algo, used a corner gun that could fire at either side or at the middle. Judging from how it lost to sawtooth, it seems it failed to properly stop sawtooth from creating its maze (which, as you said, is when you have already lost).
As of this post, Aktualnyv3 is ranked 12th in the world, with an Elo of 1880, so it appears that this approach is fairly effective (assuming that v3 performs similarly to v5).
I am always tempted to share my thoughts in threads like this, but sadly we want our players to define the meta themselves.
However, I can say that I will be highlighting some of the weaknesses of the EMP line strategies and providing an example of how to create an extremely effective dynamic algo in the C1Ryan Challenge.
More thoughts on the “maze meta”: having considered it some more, I think there is actually a good way to come back after the maze has been built, as long as the opponent is static. Spamming scramblers down their maze, timed to arrive close to when the EMPs do, will negate their potential to lower your life while you slowly rebuild a corner gun out of range of their EMPs.
That is a good point. I feel like adaptive algos can do very well with this. The maze limits where their units can move, so you can save a lot of cores by only building defenses where their EMPs will be exiting. They will likely exit near a corner, so if you build a corner gun with a lot of encryptors at that corner, you can rely on them always exiting there. You can also possibly mess up their EMPs’ routing by destroying the pathing filters. If you eliminate the line-EMP-sniping capability of the maze algo, you can probably handle it without too much trouble.
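The “build only where their EMPs exit” idea hinges on predicting the path. Terminal’s real pathing has its own tie-breaking rules, so this plain BFS on a toy grid is only a rough, assumption-laden approximation of the idea - a single gap in a full-width wall funnels every unit to one predictable exit cell:

```python
from collections import deque

def predict_exit(walls, start, exit_row, width=28):
    """Shortest-path BFS from start to any cell in exit_row, avoiding walls.

    Returns the (x, y) cell where the unit first reaches exit_row, or None
    if the wall seals the board off entirely.
    walls: set of blocked (x, y) cells on a width x (exit_row + 1) grid.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if y == exit_row:
            return (x, y)
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny <= exit_row
                    and (nx, ny) not in walls and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return None

# A wall across row 5 with one gap at x = 24 funnels everything there,
# so the defender knows exactly where to stack destructors and encryptors.
maze = {(x, 5) for x in range(28) if x != 24}
print(predict_exit(maze, start=(0, 0), exit_row=6))  # → (24, 6)
```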
I think your idea is good; let me add some thoughts of my own:
First, we don’t have much time to react, because if the maze algo starts alternating between pings and EMPs it will be very hard to correctly time the scramblers.
Secondly, I’m not sure targeting the corner is always best: the maze algos we have seen dominating the CB challenge don’t have a very good center, and 5+ over-encrypted EMPs would probably rip open their maze. That should be so hard to rebuild that you could win just by firing pings down the cannon every turn after that.
I have also noticed the difficulty of attacking corners. Even against a very simple defense, there is no EMP range advantage at the corners unless they get through the wall, and then you’ve got bigger issues. This is because you can’t spawn units in the corner without them immediately being destroyed, and you can’t really target the corners by spawning at (13, 0) and (14, 0) because the units don’t have enough time to do effective damage before they get in range of destructors (if there are filters, which there always are).
I think the two best options are what has already been mentioned: timing scramblers and targeting the middle. I have done some preliminary testing with targeting the middle, and it can be quite effective if you only try to disrupt the pathing. This could be combined quite nicely with a well-timed scrambler to protect the new opening as well. I think scrambler timing actually wouldn’t be too difficult since it doesn’t have to be perfect.
I think the meta just moves slowly because of the difficulty of testing your own algos. Today at the UMich competition, the sawtooth algo went up as Umich-special and 6 of the 25 teams had algos beating it in a few hours. Considering that most of the people in the room probably hadn’t played Terminal before, I think that’s a lot.
My team had an adaptive maze algo designed to beat static mazes by out-EMP-wiping them. It smashed through the other maze algos, including sawtooth, and ended up winning (yes, let me brag). It has a lot of weaknesses, and we would have lost if we had matched against a ping rush strategy (or a lot of other strategies), but I think that’s an issue with the coder and not the approach.
I think adaptive algos just take more time to code, but eventually they will always beat static algos.
The maze generally, though, may be a permanent meta feature for any non-rush algo. In Terminal you can’t directly control unit pathing. However, building a maze (or any wall with a small gap in it) lets you control both your own and your enemy’s units’ pathing. Combined with the fact that EMPs outrange destructors, with enough pathing control you can get an enormous amount of “free” damage. So using full-length walls with small gaps basically gives you an entire extra dimension of control. Whoever can get a wall up in their front two rows can control a large percentage of the board (critically, the corners) and unit pathing. But that doesn’t mean they’ll be static or necessarily look like Sawtooth or Cthaeh.
Congrats on the win, @Destrolas.
Interestingly enough, I agree that sawtooth’s algo is easy to beat (when you’re training directly against it). And given its static nature, it’s easy to clone and train against offline. I have a feeling many people running dynamic mazes have been using sawtooth as the benchmark. It just so happened that my first-round matchup in the top 8 of the CB challenge was another adaptive maze that knocked out my adaptive maze. My algo had previously beaten sawtoothV2 in a random global competition matchup, but that’s the reality of the single-elim bracket. And to be clear, I think it’s great that sawtooth took the top prize - he was the originator of the whole strat after all.
What I think is more interesting about the maze meta is that it appears to be such a favorable matchup against the bulk of the field (as noted by there being 4 in the top 8 of CB). The maze-snipers I’ve been working on easily defeat mazes but lose to different strats too often.
And I agree that it’s the control aspect of mazes, making the most out of EMPs, that gives it such a high win percentage. Perhaps it’s accurate to think of them as being a “conservative” play style. Low risk, but high consistency.
A few team members agree with that conclusion, that it is too difficult to test your algos. We think the matchmaking improvements slotted for next week, plus the ability to challenge specific algos on the leaderboard (as if they were bosses) or spectate their games, will allow the ecosystem as a whole to evolve much faster, with players having more feedback and more ability to learn.
Matchmaking improvements are coming in next week. We also want to improve players’ ability to get feedback from top algos, and will be speccing that out; more details to come.
@Destrolas Great insights!
I love that I came to that exact same conclusion a week ago.
Also, I think that the meta for each competition is fairly easy to predict at the moment, because you might not want to select a concept that has not proven to be viable.
That sounds interesting to me. I assume that you have thought about how to implement something like this, because balancing its freedom and limitations sounds like quite a challenge to me.
Somehow, I feel like I should get some credit for naming it “Maze Algos”
We have a few ideas about what we want to do, nothing finalized yet; that was more an example. I’ll provide specific details when they become available.
@C1Ryan Great, you all really come out with a lot of new features and are responsive to the community. I’m excited for the matchmaking changes.
On testing algos, I’m not sure if a slow-moving meta due to difficulty to test is a bad thing. I think there’s a sweet spot in strategy games where the meta has time to settle and people can copy or innovate within it before it shifts in a major manner. Also, if a meta moves too fast, it can converge onto a solved state, and then the devs have to constantly change the game balance to keep the meta from stagnating, which usually isn’t fun for the devs or the players. I think the gradual ranked matches already give players a good data set, unless you’re using machine learning.
I really like the idea of spectating leaderboard matches. I learned a lot from watching some of those matches through threads by 8 (fun fact: did you know that 8 was behind the term “Maze Algos”?) and n-sanders, and they are technically already accessible through the leaderboard API. I think making that a fully supported feature would be awesome.
I could see some issues with being able to challenge the leaderboard algos. Since the game is basically deterministic, this would really punish people for getting onto the leaderboard (except maybe @Aeldrexan).
I at least partially agree that slow-moving metas can be healthy, but I personally think that the reason behind the slow-moving-ness cannot simply be “you are forced to physically wait as feedback on your design is fed to you drop-by-drop”, because that just frustrates the players. I am 100% in support of the proposed matchmaking changes and can’t wait for next week.
Ability to challenge is very interesting, and I can think of about 50 situations in the past week when I would have liked to see the results against some top strategies without waiting 10 days for Elo to climb and hoping I get matched against them. However, I also agree that this has some potential for abuse and should be approached carefully.
I have a feeling that changes to the matchmaking will make a significant improvement in being able to refine your algos without needing to challenge specific ones. I know C1 has talked before about letting you rematch against algos that caused yours to crash in a global match, but that has a very different use case. Maybe I could see a feature where the winner of a competition gets put in as a “boss”, but I’d want to see some pretty tight controls on challenges in general because of the potential for abuse.
With the current state of matchmaking, it’s down to luck to get good feedback. A few days before the CB challenge I had a final version of my algo I wanted to put through the wringer to try to expose weak spots, so I used 5 of my upload slots for that one algo just to try to get it to lose. And, as matchmaking luck had it, it had over 150 straight global victories (each slot had 30+ games played and Elos all over 1775). Literally the only losses I saw with it were mirror matches against my own algo. I took those 5 down and put up the same algo, renamed MazeRunner, for the CB challenge and still haven’t seen a global competition loss (35-0 since 11 AM EST on 10/25).
Given that daedalus-0.5 knocked me out of CB, I know this algo isn’t perfect (and have an offline kryptonite algo for it), but until something changes with matchmaking, getting feedback to improve on those final few weak spots on high level algos is really hard.
Yeah, these are my feelings as well. I was pretty nervous about the single-elim brackets because I was 100% certain there were weaknesses that I just hadn’t gotten to see in the global leaderboard because most of my matches were against trivial opponents that taught me nothing new about my algo.
I think that being able to figure out where the opening in the maze is, then blocking it off with destructors/filters and sending your own EMPs in, could be a viable tactic.
Of course then you’ve got to figure out if they’re using a maze or not and where the opening is, but these strike me as solvable problems.
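Finding the opening itself is the easier half. Under the simplifying (and hypothetical) assumption that the maze wall sits in one known row of a 28-wide board, a straight scan of the observed stationary units recovers the gap:

```python
def find_wall_gaps(walls, row, width=28):
    """Return the x-coordinates in `row` NOT occupied by a stationary unit.

    walls: set of (x, y) coordinates observed this turn. A classic maze
    shows up as a near-full row with exactly one gap; that gap is where
    to drop blocking destructors/filters.
    """
    occupied = {x for x, y in walls if y == row}
    return [x for x in range(width) if x not in occupied]

maze_wall = {(x, 14) for x in range(28) if x != 9}  # hypothetical layout
print(find_wall_gaps(maze_wall, row=14))            # → [9]
```

If the scan returns exactly one gap in a mostly full row, you have both your “is it a maze?” signal and the target square in one pass; a row that is mostly empty just means this heuristic doesn’t apply yet.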
The thing is, most of the damage maze algos do is done not after their units leave the maze, but while they are inside it. Blocking off the entrance to the maze won’t do much unless you manage to break a hole in an earlier part of the maze. Blocking off the maze exit could work if you have a ping rush line targeting the other side of the maze, but then there may be problems with the maze detection, since ping rush lines take a few turns to build and are quite fragile.