State of the art: Action Phase Simulation

Hi All!

I see more and more adaptive algos on the leaderboard, which I think is very exciting for the overall level of the competition. From the discussions around here, it seems a lot of people are trying the idea of an action-phase simulator to make their algos adaptive.
So I thought it would be interesting to take stock of something like a state of the art, just to know how many people have worked on this and how far they have gone.

  • I have a complete simulator (it can simulate exactly any match)
  • I have a partially complete simulator (it can simulate exactly a good part of the ranked matches)
  • I have a partial simulator (it focuses only on some aspects of the action phase: pathing, attack, etc.)
  • I don't have a simulator yet but plan on building one
  • I don't have a simulator and don't plan on building one

0 voters

If the different categories seem unclear or insufficient to you, don't hesitate to say so.

I also wanted to have a poll on simulator performance, to see how deep players have gone into optimization, but without any proper benchmarking test I fear it would not be very accurate (does anyone have an idea for that?).

1 Like

Well, I have a very inefficient way of choosing where my units should be placed, but it's the best I could do, so… xD
Basically, for every possible spawn location I get the path a unit would take as a list, and store those paths in another list. Then I walk each path and count the attackers (destructors) that can hit each location, sum them up per path, and choose the location with the lowest score.
So I guess this would be most accurate: I have a partial simulator…
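
For anyone curious, here is a minimal sketch of that least-damage spawn search, assuming the Python starter kit's game_state helpers (find_path_to_edge and get_attackers) behave the way I remember; treat the exact signatures as approximate:

def least_damage_spawn(game_state, candidates):
    # candidates: friendly edge locations we could deploy from
    best_location, best_damage = None, float("inf")
    for location in candidates:
        # starter-kit pathing; some versions also want a target_edge argument
        path = game_state.find_path_to_edge(location)
        # sum the enemy destructors threatening each step of the path
        damage = sum(len(game_state.get_attackers(step, 0)) for step in path)
        if damage < best_damage:
            best_location, best_damage = location, damage
    return best_location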

4 Likes

Don't worry, my first simulation work was exactly the same thing :wink:

My simulator should hopefully be complete soon; currently it simulates everything as long as my units can score. If there is no way to score, my algo crashes (times out) because of my current pathing method. The fix should be fairly easy, but I don't have any time because of how many assignments I have for school right now :frowning:.

As for polling for optimization: first, assuming that everybody is using Python (they aren't, but I think most people are), I would recommend timeit for measurement. It is extremely easy to use. However, it misses some aspects of optimization. My simulator, for example, takes roughly twice as long for the first simulation: the way I do pathfinding means initialization takes some time, but the result can be reused for many simulations, as long as nothing major happens to the map between them (if I were to test what would happen if I made a maze in one turn, for example, it could not be reused).
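
For reference, this is the kind of timeit call I mean; simulate_turn and saved_state are stand-ins for whatever your entry point and copied board state are:

import timeit

def simulate_turn(state):  # stand-in for your real simulation entry point
    pass

saved_state = None  # stand-in for a copied board state

total = timeit.timeit(lambda: simulate_turn(saved_state), number=100)
print(total / 100, "seconds per simulation")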

Because of this, I would say that a couple of options are:

  • Time that it takes for one simulation
  • Time that it takes for 100 simulations (optimizations, such as memoization, allowed)
  • Number of simulations that can be completed in 5 seconds (optimizations allowed)

If the time measured is the time for one simulation, one possibility is to check the time for 100 simulations anyway, but from scratch each time (no memoization, saved info, etc.). I have found that when I am measuring an extremely short span with timeit, the measurement itself sometimes takes non-negligible time. To be fair, this is usually when I place it in get_target, because that is the limiting factor. In any case, to ensure that this isn't an issue, something like this could be done:

import time

start_time = time.perf_counter()  # time.clock() is deprecated/removed in Python 3
for _ in range(100):
    simulate()  # run one full simulation; ensure no information is reused
end_time = time.perf_counter()

Another consideration is the match being played: how long units survive and how dense the firewalls are (for pathfinding) both affect timing. I believe the easiest way to control for this would be to use the starter algo for the strategy, and play it against the starter algo as well. A related issue is which turn the simulation is run on. A starter-algo-vs-starter-algo match takes 17 turns, so one could time the first 16 turns (in case the end is strange), or take the cumulative time for those rounds. However, in 17 rounds I don't think the firewalls become very dense, so maybe another method is better? Maybe playing the starter algo against a specific boss, or something like that?

Something else to consider is the placement of information units for the simulation (my simulator is very fast if it assumes no information units are placed :grinning:). A suggestion, for the sake of simplicity, is to assume the placement that the algo will make next turn, which can be known for certain thanks to the use of the starter algo on both sides (hopefully).

Another consideration is where the game is run. I think the best way would be to upload the algo and play it on the playground, assuming the playground runs on the server and not the local computer; I don't actually know whether it does. Does anybody else? Otherwise it might be necessary to check times in ranked matches, which is another issue, as the opponent could not be controlled.

Did I miss any other factors to consider?

Thanks for your answer! I think you covered the main factors.

But you kind of confirmed what I had in mind: it seems way too complicated for just a few insights into what people do :sweat_smile:

If you want to see what a simulation is useful for: this match here is visually one of the best examples I have seen; it's on turn 20. (The rest of the match is not very interesting.)

I was trying to create a fast, pixel-perfect simulator (ready for some machine-learning thing that will probably just fail). In the process I discovered that this is very difficult, and I also discovered some general issues in official ranked games. I'll include an example of each kind of problem I had; apologies if some are already fixed or if I'm just being a derp:

  1. Game 900000 Frame 1168, float precision mistake:
    One of the three pings should have died on this frame. However, its health is 1.335144E-5 because of floating-point precision error, so it survives another complete round, getting an extra attack turn and absorbing another attack before dying. This happens in many matches involving encryptors because 0.15 (the shield decay rate) cannot be represented exactly in binary; a tiny repro is sketched after this list. I've spot-checked that this happens 6439 times in 10K games from 900000-909999 and 629 times in 1K games from 1027000-1027999.

  2. Game 900058 Frame 980, targeting mistake?
    This might just be my wrong interpretation of targeting. Here a bunch of pings come up and make a choice between two filters (left and right). I claim the filters are equal on all targeting criteria except the last ("5. Choose the target closest to an edge"). I then claim R is closer to an edge and should be targeted, yet the pings choose L. (Edit: this was actually fixed yesterday: https://forum.c1games.com/t/firewalls-targetting/754/3?u=yujinwunz)

  3. Game 900072, Game 900097, attacking self-destructing units:
    As an example, these 2 games feature units that simply self-destruct because they were trapped by their own side from the start. But for the life of me, I cannot find a consistent set of rules that explains their behavior across all games. For example, I find that sometimes these units manage to get an attack in before being destroyed, and other times not. Sometimes these units are targeted and draw fire from enemies before being destroyed, but other times not. I may be wrong about all of that, but it takes so much effort to figure out what's happening here that I've decided to stop investigating this particular rare-ish quirk. But I will mention that the order of "self-destruct" relative to movement and attack is not mentioned in the spec.
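
To make item 1 concrete, here is a tiny standalone repro of the decay residue (plain Python floats, not engine code):

shield = 0.45        # three stacked 0.15 decays "should" cancel this exactly
for _ in range(3):
    shield -= 0.15   # shield decay rate from the spec
print(shield)        # tiny positive residue (about 5.6e-17), not 0.0
print(shield == 0.0) # False, so a unit left at this "health" survives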

All but the first are probably too minor to have any effect on the official tournaments. The other reason I post this here is that if anyone is trying to make a simulator, I'll just say it's not worth the monumental effort of matching C1's implementation 100%: there are so many implementation idiosyncrasies that it's better to put the effort elsewhere once you've got it 95% of the way there.

4 Likes

But I will mention that the order of "self-destruct" relative to movement and attack is not mentioned in the spec.

This has actually come up for me as well; I never know where to put a self-destruct check in my logic. It would be nice if this were clarified. The same goes for breaching, although I believe that just occurs when a unit moves off the board.

1 Like

Yeah, in my experience too, breaching is immediate: the unit no longer attacks that frame, and it is no longer a target.

The docs are actually pretty clear about this:

During each frame, actions will occur in this order:

  1. Each unit takes a step, if it is time for them to take a step.
  2. Each shield decays. See 'Encryptor' for information about shields.
  3. New shields are applied, if a unit has entered the range of a friendly encryptor.
  4. All units attack. See 'Targeting'.
  5. Units that were reduced below 0 stability are removed.

So if a unit's next step is to breach, then that occurs first and they are no longer part of the map, as @yujinwunz said.

Yes, it's mostly just when self-destruct happens that's unclear.

It would happen at the same time as all other movement/breaches. Think of it this way: the first bullet is to take the next step, but if there is nowhere to go, the unit has to self-destruct. Thus, self-destruct effectively is its next step.

You would imagine that's how self-destruction works, but it actually isn't. It seems units are only "marked" for self-destruction, and after that I have no idea when it actually happens. E.g., some rounds in game 900097:

  • Round 60, frame 5241:
    2 pings self-destruct at (2, 15). But in the same round the tower at (3, 12) attacks one of them, and both of those pings attack their normal target (the filter at (2, 13)).

  • But in the same game, on round 72, frame 6371:
    The scrambler at (25, 11) ought to self-destruct, but because it didn't take 5 steps, there's no self-destruct event and it just dies instead (makes sense so far). However, if you look at the attacks, it has two enemies that can see it, and it's also the only information unit in their range. Yet it draws fire only from the firewall at (25, 14), while the EMP at (24, 26) attacked something else.

Now that I'm thinking about it, "self-destruct" happening in the attack loop (so in the same order as attacks themselves, by unit creation order) might make sense, but I haven't gone deep enough into the rabbit hole to confirm that.

I'll investigate these points today. Issue #1 seems especially important.

Based on what has been described here, and making assumptions based on my knowledge of the code, I am currently assuming that self-destructs flag the unit for death, so that the unit is not actually removed until:

  5. Units that were reduced below 0 stability are removed

Regardless, I'll investigate today and figure out what is actually happening.

In general, I have noticed a number of edge cases in targeting that make creating 'perfect' simulators far more difficult than I would like. It's something we will be working to improve with some rule changes in season 2.

2 Likes

Ok, so it turns out self-destructs were 'flagged' in the step phase but 'resolved' in the combat phase:

  4. All units attack.

A self-destructing unit will always fire a shot on the frame it self-destructs if able, and it can be shot by enemies whose attacks resolve before the self-destructing unit's combat phase. I'm going to split the combat phase into 'resolve self-destructs' and then 'resolve combat', so all SDs will be resolved before combat starts.
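
In rough Python terms, the new per-frame ordering would look something like this (a heavily elided sketch with illustrative names, not the engine's actual code):

class Unit:
    def __init__(self, health, path):
        self.health, self.path = health, path
        self.flagged = False

def resolve_frame(units):
    for u in units:            # 1. movement (breaching omitted for brevity)
        if u.path:
            u.path.pop(0)      # take the next step
        else:
            u.flagged = True   # nowhere to go: flag self-destruct, resolve later
    # 2./3. shield decay and new encryptor shields would go here
    for u in units:            # new sub-phase: resolve all self-destructs
        if u.flagged:
            u.health = 0       # splash damage to nearby enemies omitted
    # 4. attacks would resolve here, in unit creation order
    units[:] = [u for u in units if u.health > 0]  # 5. remove dead units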

We will also be fixing the floating-point issue with pings, as upon investigation the impact will be minuscule.

We aren't going to be deploying an engine change this weekend, and it's already a bit too late to do so today, but I will try to make sure these are fixed Monday morning.

1 Like

+1 for the simulator performance poll.
As a benchmark, I'd say each player has to compute a few hundred different turns (from random matches) repeatedly (100 times each). You could then differentiate the turns to be simulated by their complexity:
-> 25 low-, 50 medium-, and 25 high-density turns (where density refers to the number of firewalls and information units on the field).
I'm really interested in how fast it can go and how much effort other people have put into this.

Edit: You could maybe also make a separate category of performance tests with ~5-10 turns in which tricky stuff happens (like a lot of firewalls getting destroyed, opening new paths; units self-destructing consecutively, effectively breaching a wall; or just a LOT of shields being applied to a single unit at the same time).

1 Like

Yeah, that's the idea, but honestly I am quite overwhelmed by the implementation of my algos, so I will just put it like this:

According to a match on the playground against the boss algoo54, my recent algo did about 585k simulations in 37 turns, which works out to about 16k simulations per turn (I do nothing on the first turn).

One simulation is the complete resolution of a turn. I change the board every dozen simulations.
75% of these simulations have 2 squads of information units.
25% of these simulations have 1 squad of information units.

On average it comes out to something like 0.3 ms per simulation.

Edit: one clarification: this average time includes all the time used to create the various boards, as well as all the computation to choose them. (I absolutely don't know what fraction of the total that represents, but as I said, I am short on time.)

2 Likes

I feel like I'm doing something wrong if you can run 10 simulations before I finish copying the game state. I'd be blown away if you didn't have the best benchmarks.

6 Likes

Hmm… but a benchmark with a given algo wouldn't really be ideal, I think. I was thinking more of something like:

  1. Prepare a set of N board states (examples in the list below)
  2. Hand each board state to your simulator and measure the time it takes to simulate that specific state K times
  3. Compile the data into a nice graph to display the strengths and weaknesses of each simulator.

Example:
Scenes 1-10 have a single information unit on the field: sometimes one for each player, sometimes they can reach each other, sometimes not (measures speed of pathing + unit targeting).
Scenes 11-20 have a single information unit and a dozen firewalls on each player's side (measures unit pathing + firewall interaction).
Scenes 21-30 have 2 information units at different positions and the same number of firewalls (checks how efficiently pathing is done for different groups of units).
Scenes 31-40 consist of a big maze of filters through which information units have to move (checks how well the simulator handles long pathing distances).
etc.
One would have to hand-make these prepared game states, and maybe build a small library that your simulator can easily attach to; a possible shape is sketched below. This way we can ensure the same environment for each player.
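
Something like this, with every name hypothetical (each player's simulator only needs to expose a callable that takes a board state):

import time

def benchmark(simulate, scenes, passes=100):
    # scenes: dict mapping a scene name to a prepared board state
    results = {}
    for name, state in scenes.items():
        start = time.perf_counter()
        for _ in range(passes):
            simulate(state)  # the player's simulator entry point
        results[name] = (time.perf_counter() - start) / passes
    return results  # average seconds per simulation, per scene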

Since some simulators (like my own) are better at calculating a game state repeatedly, maybe one could take this into account too and make a separate benchmark for it (I'm taking about 60 microseconds to prepare the game state each time a new building is placed, so if I can amortize that time over 100 passes, I get a performance increase of about 30%, since the total time is on the order of 0.2 ms).

I think I can best him :wink:

1 Like

…I'm guessing it's C++? :stuck_out_tongue: As a rule of thumb, I assume anything I hand-write in Python is 10x slower than a C++ equivalent.