Kickoff competition results are now available

Check out all of the matches and the final leaderboard on the competition page
Read more about the competition format used here

Prize distribution (details here):
1st - Team: Earth afire (Acshi H), $100
2nd - Max K, $50
3rd - HOSO, $50
4th - hamidwat, $50


Ggs everybody


Lol, what happened to the 38 algos? It somehow got reduced to 19. And I think Round Robin didn’t work out as expected, because my team’s Elo was the highest in the competition (2442) and we got knocked out early lol. We would’ve benefited from being matched with more algos (less randomized matchmaking) rather than being matched with only a select few.

There were 38 unique players/teams that clicked the join button on the competition. Of these, only 19 players/teams submitted an algo.

I think Round Robin [Groups] didn’t work out as expected, because my team’s Elo was the highest in the competition (2442) and we got knocked out early lol

Your team made it through the first round and was knocked out in the second, alongside 8 of the 12 players in that round. Within your group of 6 players, your team won 2/5 games, while another entrant won 4/5 and two entrants won 3/5. Your final placement seems reasonable to me given the performance of your algos in the competition, and I don’t feel that you were knocked out particularly early or unfairly.

The intention behind RRG is to ensure that algos are not eliminated early due to a single bad matchup in an early round, a problem that plagued the old “fully placed” and “single elim” formats. I feel that the format is working as expected.

Is this because of the “Elo island” problem, where high-rated algos can’t match against many other algos because they’re rated too high? I’ll consider expanding the matchmaking range again; this seems to be causing issues consistently.

Have you considered not using a hard range, but rather some sort of Gaussian probability weighting (rough sketch below)? It might also be nice (hypothetically) if you were typically matched with 1 much better, 3 similar, and 1 much worse opponent, and if matchmaking were done by rank and not rating.

Also, there was something a bit strange in the groups: we were in the Ring Road Bets group in both of the first two rounds, which I think should be avoided as much as possible (albeit, in this case it wouldn’t have mattered, as we tied with HOSO for 2nd place and lost on a tiebreaker, so it’s not like the duplication took a spot away). I see you mentioned player rematches in the post just below this, so I should add that I agree it’s not a massive issue, and it’s obviously not a bug so nothing is wrong. But aren’t there round robin group schemes where rematches are less likely? (Albeit, this was a tiny tournament, so it may have been inevitable.)
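Something like this, for example (purely illustrative; `weighted_opponent` and the sigma value are made up, not anything that exists today):

```python
import math
import random

def weighted_opponent(player_rating, candidates, sigma=150):
    """Sample one opponent with probability weighted by a Gaussian of the
    rating gap: close ratings are most likely, but nobody is ever fully
    out of range, so top-rated algos can't get stranded.

    candidates: list of (name, rating) pairs, excluding the player.
    sigma is an arbitrary placeholder value.
    """
    weights = [math.exp(-((rating - player_rating) ** 2) / (2 * sigma ** 2))
               for _, rating in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```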

Could I ask what criteria put us in 5th place? It seems like other teams tied with us for it (I was surprised to see us do so well given how quickly we developed our code, but it makes much more sense given ties).


For the purposes of Glicko, it is traditionally preferable to match with someone close to your own rating, so the current system picks a random algo within a certain rating range of you, and then expands this range if none can be found.
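Roughly, the idea looks like this (a sketch only, assuming a simple widening window; the function name and numbers are placeholders, not the real implementation):

```python
import random

def find_opponent(player_rating, candidates, initial_range=100, step=100, max_range=1000):
    """Pick a random opponent within a rating window of the player,
    widening the window until someone is eligible.

    candidates: list of (name, rating) pairs, excluding the player.
    All numeric values here are made-up placeholders.
    """
    window = initial_range
    while window <= max_range:
        eligible = [c for c in candidates if abs(c[1] - player_rating) <= window]
        if eligible:
            return random.choice(eligible)
        window += step  # nobody in range yet: widen the window and retry
    return None  # no opponent found even at the widest window
```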

I get the idea here, but I don’t think it’s going to make a tangible difference to the user experience. The root problem seems to be that top players get isolated at very high ratings and have a hard time getting matches against most of the field. I think expanding the max rating difference for a match will go a long way toward solving the root issue.

I am unsure what you mean by this. Rank is only determined after a competition is run.

The only way to fully prevent rematches is to make it so that a single player promotes from each group, which in turn removes much of the benefit of RRG (since a single very strong player in your group early on could knock you out).

I agree that there are ways rematches could be reduced, but we intentionally decided against them because they would make it impossible for a human to “trace” their progress through a competition and understand what logic was used to form each group. This is an important feature: we have found in the past that non-transparent mechanics in competitions draw a lot of complaints when they resolve in ways that negatively impact a user. For example, if group seeding were opaque, people would feel annoyed if they got “randomly” placed into a very strong group and knocked out.

When the matches in a group are finished, the seeds are redistributed based on the players’ performance. For example, if you are in a group with seeds 5, 11, and 17, and the seed-17 team won, they would take seed 5. If their next group had seeds 1, 3, 5, 7, 9, and 11 in it, the third-placed player in that group would take the 5th seed, and therefore 5th place.
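As a toy sketch of that reseeding step (the function and team names are made up for illustration):

```python
def redistribute_seeds(group_seeds, standings):
    """Hand the group's seeds back out by finish order: the winner takes
    the best (lowest) seed, second place the next best, and so on.

    group_seeds: seeds held going into the group, e.g. [5, 11, 17]
    standings:   players ordered best-to-worst by group performance
    """
    return dict(zip(standings, sorted(group_seeds)))

# The example from above: the seed-17 team wins the group and takes seed 5.
print(redistribute_seeds([5, 11, 17], ["team_17", "team_5", "team_11"]))
# -> {'team_17': 5, 'team_5': 11, 'team_11': 17}
```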

It’s not the most precise way to “fully place” people, but we don’t plan on using the specific placements for prizing outside the final round, so they are only for prestige/aesthetics, and I think they are meaningful/precise enough for that purpose.

We could consider adding recursive loser brackets to resolve ambiguity… It would require a ton of matches if the loser brackets were round robin, but I think an SRE loser bracket could be ok… hmmm…
