Match of the Week - Revisited

I have tried to train a perceptron (i.e. a single neuron) taking, for each player, the following values as inputs (see the sketch after the list):

  • health
  • saved cores
  • saved bits
  • number of filters
  • number of encryptors
  • number of destructors
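
Here is a rough sketch of what I mean (not my actual code; I'm assuming the six inputs are the per-feature differences between the two players, which matches the six weights reported below, with a sigmoid output trained by gradient descent):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# x: (n_samples, 6) feature differences (player 1 - player 2) for
# health, saved cores, saved bits, filters, encryptors, destructors.
# y: (n_samples,) labels, 1.0 if player 1 won, else 0.0.
def train_perceptron(x, y, lr=0.01, epochs=1000):
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(x @ w + b)              # predicted win probability
        w -= lr * (x.T @ (p - y)) / len(y)  # logistic-loss gradient step
        b -= lr * np.mean(p - y)
    return w, b
```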

I trained this perceptron on 800+ replays of games between top algos (originally I had downloaded them to build a better prediction phase).
Training it only on top algos was a mistake, as I will explain later.

I then used this perceptron in an algorithm similar to the one I described in my previous post (I summed the squared distances between prediction extrema) to rate the “interestingness” of replays #2209300 to #2209400.
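
To make that concrete, here is roughly how I read that scoring rule (a sketch, not my exact code): keep the local extrema of the per-turn win-probability curve and sum the squared jumps between consecutive extrema, so games whose predicted outcome swings back and forth score higher.

```python
def interestingness(probs):
    # Keep the turning points (plus endpoints) of the per-turn
    # win-probability predictions for one replay.
    extrema = [probs[0]]
    for prev, cur, nxt in zip(probs, probs[1:], probs[2:]):
        if (cur - prev) * (nxt - cur) < 0:  # slope changes sign
            extrema.append(cur)
    extrema.append(probs[-1])
    # Sum of squared distances between consecutive extrema: big
    # swings in the predicted outcome dominate the score.
    return sum((b - a) ** 2 for a, b in zip(extrema, extrema[1:]))
```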

Here is the top 5:
#1: 2209381
#2: 2209355
#3: 2209399
#4: 2209321
#5: 2209380

While the 3rd one could indeed match my definition of “interesting” (the early game going back and forth, then player 2 gaining a huge core advantage around turn 20, but finally losing), the others are mostly matches between very low-level algos and don’t seem that interesting.

To understand why, let’s look at the weights I obtained after training:

  • health: 0.212429
  • cores saved: 0.124875
  • bits saved: 0.450969
  • filters: 0.12725
  • encryptors: 0.794231
  • destructors: 0.539732

The weights for the numbers of encryptors and destructors are high, but once you divide each by its unit cost, they are only about 1.5 times as important as filters.
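
Concretely (assuming the standard unit costs of 1 core per filter, 4 per encryptor, and 3 per destructor):

```python
weights = {"filters": 0.12725, "encryptors": 0.794231, "destructors": 0.539732}
costs = {"filters": 1, "encryptors": 4, "destructors": 3}  # cores per unit

for name, w in weights.items():
    print(f"{name}: {w / costs[name]:.4f} per core spent")
# filters: 0.1272, encryptors: 0.1986, destructors: 0.1799
# -> per core spent, encryptors/destructors are only ~1.5x filters
```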

The most striking weight, in my opinion, is the one for bits saved.
And I don’t think it is a mistake by my perceptron when evaluating win probability in high-level games:
at that level, a large number of saved bits means that a devastating attack is coming soon, one that could literally end the game, or at least destroy a lot of firewalls.

But at low level this is not the case: starter-algo saves bits to launch very ineffective attacks, and some even weaker algos simply fail to attack at all.
Even cores on the board are not very relevant at low level; only health really matters.

So the predictions from my perceptron are useless for evaluating the outcome probability of low-level matches.

So I think that whatever we use as an advantage measurement should be made relative to the level range of the match.
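
For example (purely a sketch, assuming each replay comes with a rating, and reusing the `train_perceptron` sketch from above), one could bucket replays by rating range and train one set of weights per bucket:

```python
from collections import defaultdict
import numpy as np

def train_per_level(replays, bucket_size=500):
    # replays: list of (rating, features, label) tuples.
    buckets = defaultdict(list)
    for rating, features, label in replays:
        buckets[rating // bucket_size].append((features, label))
    # One perceptron per rating bucket, so the advantage measurement
    # is calibrated to the level range of the match being rated.
    return {
        bucket: train_perceptron(np.array([f for f, _ in samples]),
                                 np.array([l for _, l in samples]))
        for bucket, samples in buckets.items()
    }
```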
