hmm… but a benchmark with a given algorithm wouldn't really be ideal, I think. I was more thinking of something like:
- Prepare a set of N board states (examples in the list below)
- Hand each board state to your algorithm and measure the time the algorithm takes to simulate that specific state K times
- Compile the data into a nice graph to show the strengths and weaknesses of the algorithm (see the harness sketch below)
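Roughly what I have in mind as a harness (just a sketch; `BoardState`, `Simulator` and `load_scene` are made-up placeholders for whatever the shared library would actually expose). It prints CSV so the results are easy to graph:

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical types: placeholders for whatever the benchmark library defines.
struct BoardState { /* units, firewalls, filters, ... */ };

struct Simulator {
    void simulate(const BoardState&) { /* one full simulation pass */ }
};

// Placeholder: would deserialize one of the N prepared scenes.
BoardState load_scene(int /*id*/) { return {}; }

int main() {
    const int N = 40;   // number of prepared scenes
    const int K = 100;  // simulation passes per scene
    Simulator sim;

    std::puts("scene,total_us,per_pass_us");
    for (int scene = 1; scene <= N; ++scene) {
        BoardState state = load_scene(scene);
        auto t0 = std::chrono::steady_clock::now();
        for (int pass = 0; pass < K; ++pass)
            sim.simulate(state);
        auto t1 = std::chrono::steady_clock::now();
        long long us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        std::printf("%d,%lld,%.2f\n", scene, us, us / double(K));
    }
}
```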
Example scene sets:
Scenes 1-10 have a single information unit on the field, sometimes one for each player; sometimes they can reach each other, sometimes they can't (measures pathing speed + unit targeting)
Scenes 11-20 have a single information unit and a dozen firewalls on each player's side (measures unit pathing + firewall interaction)
Scenes 21-30 have 2 information units in different positions and the same number of firewalls (checks how efficiently pathing is done for separate groups of units)
Scenes 31-40 consist of a big maze of filters through which the information units have to move (checks how well the simulator handles long pathing distances)
etc…
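The scene sets could also live in a small data table so the report knows what each range measures (again just a sketch; `SceneInfo` is invented and the entries simply mirror the list above):

```cpp
#include <string>
#include <vector>

// Hypothetical descriptor: one entry per scene range, tagging which aspect
// of the simulator that range is meant to stress.
struct SceneInfo {
    int first_id;
    int last_id;
    std::string measures;
};

const std::vector<SceneInfo> scene_sets = {
    {1,  10, "pathing speed + unit targeting"},
    {11, 20, "unit pathing + firewall interaction"},
    {21, 30, "pathing for separate groups of units"},
    {31, 40, "long pathing distances (filter maze)"},
};
```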
One would have to hand-make these prepared game states and maybe build a library that makes it easy to attach your simulator. This way we can ensure the same environment for each player.
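For the attaching part, a tiny interface that everyone implements against would probably be enough (sketch; the names are invented). Splitting `prepare` from `simulate_pass` also makes the repeated-calculation point below measurable:

```cpp
// Hypothetical adapter: every participant wraps their own simulator in this,
// and the benchmark only ever talks to the base class.
struct BoardState;  // provided by the shared benchmark library

class SimulatorAdapter {
public:
    virtual ~SimulatorAdapter() = default;
    // Called when a scene is (re)loaded: do any one-time preprocessing here.
    virtual void prepare(const BoardState& state) = 0;
    // One full simulation pass over the prepared state.
    virtual void simulate_pass() = 0;
};
```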
Since some algorithms (like my own) are better at calculating a game state repeatedly, maybe one could take this into account too and make a separate benchmark for that. In my case it takes about 60 microseconds to prepare the game state each time a new building is placed, so if I can amortize that cost over 100 passes, I get a performance increase of about 30% (the total time per pass is on the order of 0.2 ms, and 60/200 ≈ 30%).
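That separate benchmark could simply time both modes, reusing the hypothetical adapter from above:

```cpp
#include <chrono>
#include <ratio>

// Two benchmark modes (sketch, built on the hypothetical SimulatorAdapter):
//   cold: call prepare() before every pass, as if a building was just placed
//   warm: call prepare() once, then reuse the prepared state for all K passes
// With ~60 us of preparation inside a ~0.2 ms pass, amortizing it over
// 100 passes removes roughly 60/200 = 30% of the per-pass time.
template <typename Adapter, typename State>
double mean_pass_us(Adapter& sim, const State& state, int K, bool cold) {
    using clock = std::chrono::steady_clock;
    if (!cold) sim.prepare(state);     // warm: one-time preparation
    auto t0 = clock::now();
    for (int i = 0; i < K; ++i) {
        if (cold) sim.prepare(state);  // cold: pay the preparation every pass
        sim.simulate_pass();
    }
    auto t1 = clock::now();
    return std::chrono::duration<double, std::micro>(t1 - t0).count() / K;
}
```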
I think I can best him