Throttle insight in top algos

Hey guys,

I gave this a fair bit of thought and decided to make a post about it. It regards the ability to view top-level matches in detail. Let me pick an example from a game called Hearthstone. It is supposed to be a game where you draft your own decks and strategies, and then try them out on the ladder. The problem is that only very few people actually participate in drafting the decks. You can just find the best decks online, and people copy them. Being the originator of a deck doesn’t yield a substantial advantage after that.

Now in this game of Terminal I like to think that people are rewarded for their own creative strategies, and so far I think this has been decently true. However, people are discussing a fast-evolving meta and being able to see what the top spots do so they can adapt. Imo this can go wrong really quickly and would completely remove my interest in the game. I feel like a challenge like this game shouldn’t allow someone new to directly observe the best algo in detail across many match-ups, more or less copy it, make a small modification, and get a better version of it. For the majority of the player base this encourages good copying instead of fresh ideas, especially for those who are mainly interested in the prize at the end.

This is different from chess, where you can watch grandmasters but never really know what they are thinking. I don’t expect the most successful algorithms to be so counter-intuitive that they can’t be figured out by good programmers, at least to a decent level.

Currently the painful algos to observe are the attempted direct copies of the sawtooth algorithm, with firewalls placed exactly the same way.

What I would propose is that people can only view their own matches against other algorithms, unless a match is shared specifically in a discussion or something (not in bulk for everybody). Then, as they make good strategies, their ELO rises, allowing them to see more elegant strategies. This forces players to think more about the puzzle, which might result in more interesting algorithms.
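To make the proposal concrete, here is a minimal sketch of what such a visibility rule could look like. This is purely illustrative: the function name, the `ELO_MARGIN` threshold, and the parameters are all made up for this post, not part of any real Terminal API.

```python
# Hypothetical sketch of the proposed ELO-gated replay visibility rule.
# All names and thresholds here are illustrative assumptions.

ELO_MARGIN = 100  # how far "up" the ladder you may peek (arbitrary value)

def can_view_replay(viewer_elo: int, match_top_elo: int,
                    viewer_in_match: bool, shared_publicly: bool) -> bool:
    """A viewer may watch a replay if they played in it, if the players
    explicitly shared it, or if their own rating is close enough to the
    rating of the strongest algo in the match."""
    if viewer_in_match or shared_publicly:
        return True
    return viewer_elo + ELO_MARGIN >= match_top_elo

# A brand-new player could not watch a top match unless it was shared,
# but a player who has climbed near that level could.
print(can_view_replay(1200, 1900, False, False))  # False
print(can_view_replay(1850, 1900, False, False))  # True
print(can_view_replay(1200, 1900, False, True))   # True
```

Under this rule, climbing the ladder is itself the unlock mechanism: you earn visibility into stronger strategies by producing stronger strategies of your own.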

What do you guys think?


While I also think we will go through a phase with a lot of algos just copying other top algos, I don’t think hiding the matches of other players is a good idea. It would mostly slow the evolution of the meta (because we couldn’t see what works), which would delay the appearance of better algos.

I think right now every single one of our algos is at most mediocre: they are either (mostly) static, or adaptive but stupid. I bet every one of us could easily beat our own algo, but that could change in a few months.

Actually our algos are so bad that @C1Ryan is confident that his algo would beat any of ours in his ‘Challenge C1 Ryan’. So let’s design better algos to prove him wrong! :stuck_out_tongue_winking_eye:


I agree with @Aeldrexan. Sooner or later algos will become good enough that directly duplicating the logic won’t be possible.


Haha, totally agree. The latest top algos are becoming more adaptive, and static algos will eventually not be able to keep up. We would not have been able to conclude this without seeing the matches. I am for keeping this as open as possible.

I’m not sure if I’m the right person or wrong person to comment on this thread given NOT_MY_FINAL_FORM’s position on the leaderboard at the moment, but here goes anyway.

I see copying as a valuable (and even inevitable) part of the learning process for this game. Even before Maze Algos entered the scene (credit to @876584635678890 for naming them :wink:), you could find many variations of the “Champion” boss algo running around the leaderboard. I actually think that has less to do with someone trying to find the best “net deck” to run with, and more to do with a desire to learn why that algo has such a high win rate.

Sawtooth is an interesting phenomenon, being both a truly static algo and an extremely effective one. On top of that, one way you could make it “better” is immediately obvious - attack with pings at the right time. But does deciding when to attack with pings actually result in a better win rate? Or does it just speed up games EMPs would have won anyway? If you can learn why an algo wins and why it loses, you can improve upon it or figure out how to consistently beat it. And that’s the benefit of being able to see top algos play. It helps all of us get better.

I agree that creative strategies should be rewarded, so the question becomes can you end up with a creative innovation from an iterative improvement on someone else’s idea?

Anyone watching NMFF can see that Sawtooth was the basis for its firewall placement, but given that it has a 100+ elo lead over Sawtooth, the learning comes from figuring out which modifications resulted in that improvement. A few should be obvious from seeing a handful of matches - I’ll attack with pings given certain board states, I don’t have a fixed IU deployment location, and the maze will open either left or right depending on which position I determine to be better for a given match. Which of my changes resulted in a “better” maze? I certainly have my ideas about what’s working (and the mistakes it still makes), but if someone can do any of those things better, I think they deserve credit for it.

So in the end how do we decide when a solution can be called creative?


Hey @Aeldrexan, I actually think our top players’ algos are really good; a few of them look like they might even be able to score on me once or twice.


Just to be 100% transparent, I really am just talking myself up for fun / to build hype. I would estimate my chances of getting first place are realistically around 20%; my algo is probably going to be pretty good, but I’m not expecting to sweep everyone by a mile.


My problem with making adaptive algos is this: you think you can conquer anything, then there is some issue and it doesn’t work, and you think you overdid it, but you don’t want to give it up. So what I do is try to find out why and how to fix it. But I don’t know how - the bugs are too complex for me to figure out why my algo did what it did, so now I’m sitting here asking “how the hell do I fix that?”. I guess that’s what machine learning is for.

Okay, I read through the comments, and the prevalent opinion is that current/future algorithms cannot properly be copied just by watching them perform. If this turns out to be the case, the issue I am raising is invalid.

So this was kind of my initial point. Someone made a new algorithm that got all the way to the top, and some other programmers are like: “I wonder why it’s so high, let’s duplicate it. Oh look, I’m top 20! Apparently it is effective”. I am saying this because initially this is exactly what I did, and it is much less fun/challenging than pondering over new ideas.

I am willing to believe that the current landscape will change a lot in the (near) future. All the top algos have to deal with effective static strategies. The new matchmaking already helps a ton with iterating on algorithms, which is especially important for designing dynamic algorithms.

My main concern was that I believe there are many more effective strategies than those currently in the top spots, and if everyone is focusing on the top algos, the room for experimenting with wild ideas might shrink.