I agree, wow. My newest algo runs 35 simulations per turn, and sometimes takes 4500 ms, so I probably need some more optimization. I’m doing more adaptive defense now, for example simulating the next turn depending on which side of the board I spend my cores on, but my algos seem to be getting worse, judging by the leaderboard performance.
Right now my version simulates everything except interactions between enemy and friendly information units, and takes about 30 ms for a whole turn (note: this is for a completely random starting board, so it will likely take slightly longer in an actual game). There are plenty of optimisations left to make, as I was in a rush to finish it for the C1Ryan competition.
Using this, my algorithm quite often finds ways to place units so that they break through thin walls of filters and the like, placements my old algorithm would never have considered. So I do believe this kind of logic will be important in the future, especially if it can be optimised enough to also simulate different firewall placements and how they fare against the attacks the enemy could make.
For example, when an algo’s base gets completely destroyed, it will often place a small number of firewalls which instantly get destroyed the next turn, and so on until it loses the game. But if you could simulate a turn, you might realise that placing those firewalls would be a waste, and that you should instead save up to create a stronger defence, from which you may be able to recover and go on to gain the upper hand.
My simulator, which now simulates everything pretty accurately, runs 5x faster after memoizing get_locations_in_range. It uses at most about 500 KB more memory, and it only gets faster as the game goes on. This would probably help your simulators too if that function is a bottleneck, which it is for me.
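If get_locations_in_range is pure (the same inputs always give the same locations), memoization can be a one-decorator change. A minimal sketch, assuming a signature like get_locations_in_range(location, radius) with a tuple location; the real starter-kit function differs in its details:

```python
from functools import lru_cache

# Hypothetical stand-in for the starter kit's get_locations_in_range:
# returns every grid location within Euclidean `radius` of `location`.
@lru_cache(maxsize=None)
def get_locations_in_range(location, radius):
    x, y = location
    r2 = radius * radius
    return tuple(
        (i, j)
        for i in range(x - int(radius), x + int(radius) + 1)
        for j in range(y - int(radius), y + int(radius) + 1)
        if (i - x) ** 2 + (j - y) ** 2 <= r2
    )

# Repeated calls with the same arguments are now served from the cache.
first = get_locations_in_range((10, 10), 1.5)
second = get_locations_in_range((10, 10), 1.5)
```

functools.lru_cache needs hashable arguments (hence the tuple location), and returning a tuple keeps callers from mutating the cached result; that cache-hit behaviour is what makes repeat calls grow cheaper as the game goes on.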
I have a question about pathfinding when paths are recalculated during the action phase. When a path is recalculated because some firewalls were destroyed, is the previous location a unit walked over a factor in deciding which path to choose?
Yes, the previous move direction is a factor; the unit will continue to obey the zig-zag rule.
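To make that concrete, here is a tiny illustrative sketch (not the engine's actual code; the function and argument names are made up) of how a perpendicular-preferring tie-break might look:

```python
# Illustrative sketch: when several next steps are equally good, the
# zig-zag rule prefers a move perpendicular to the previous move.
def pick_next_step(candidates, previous_direction):
    """candidates: list of (dx, dy) steps, all tied on path length.
    previous_direction: the (dx, dy) of the unit's last move, or None."""
    if previous_direction is None or len(candidates) == 1:
        return candidates[0]
    prev_horizontal = previous_direction[0] != 0
    for dx, dy in candidates:
        # A horizontal last move prefers a vertical step next, and vice versa.
        if (dx != 0) != prev_horizontal:
            return (dx, dy)
    return candidates[0]

# A unit that just moved right (1, 0) will zig upward when it can:
step = pick_next_step([(1, 0), (0, 1)], (1, 0))
```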
How can you tell which functions and areas of code are the bottlenecks? I am new to Python, so I have no idea. Is there a way to see which functions most of the time is spent in?
My algo takes about 200 - 300ms in general (on my computer), so I have a lot of improvement to do, based on the results that others have.
In IPython (or a Jupyter notebook),
%timeit myFunction()
will give you an average run time for the function, and
%prun myFunction()
will give you the time and the number of calls for every function called inside myFunction, so you can identify the bottlenecks.
A few tips: https://wiki.python.org/moin/PythonSpeed/PerformanceTips
The timeit module is really nice for single functions and small bits of code. If you want to track several things or time larger sections, a really simple way is just:
import time

start_time = time.time()
# do things here
elapsed_time = time.time() - start_time
print('elapsed time: {}'.format(elapsed_time))
If you are on Windows you can use time.clock() instead of time.time(). A really nice article that explains the differences between the different timing functions in Python is here:
https://www.pythoncentral.io/measure-time-in-python-time-time-vs-time-clock/
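Worth noting: time.clock() has since been deprecated and was removed in Python 3.8; time.perf_counter() is the portable high-resolution replacement on every platform. A small sketch of wrapping it in a reusable timer (the label and workload are just illustrative):

```python
import time
from contextlib import contextmanager

# time.clock() is gone as of Python 3.8; time.perf_counter() works the
# same everywhere and is meant exactly for measuring elapsed time.
@contextmanager
def timed(label):
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print('{}: {:.3f}s'.format(label, elapsed))

with timed('pathfinding'):
    total = sum(i * i for i in range(100000))  # stand-in for real work
```

The try/finally means the elapsed time is printed even if the timed block raises.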
Regarding speeding up your algo, I talked a little bit about it here, but a good place to start is always wherever you have loops.
There are also profiling modules that tell you where the bottlenecks are automatically. SnakeViz is a nice tool for viewing the results; I think I’ve seen it mentioned on this forum.
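For scripts outside a notebook, the standard library's cProfile plus pstats does the same job as %prun. A minimal sketch, with a made-up hot function standing in for your algo's turn logic:

```python
import cProfile
import io
import pstats

def hot_function():
    # Stand-in for an expensive call such as a range lookup.
    return sum(i * i for i in range(50000))

def algo_turn():
    for _ in range(20):
        hot_function()

# Profile a whole turn, then print the functions sorted by cumulative time.
profiler = cProfile.Profile()
profiler.enable()
algo_turn()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats('cumulative')
stats.print_stats(5)  # show only the top 5 entries
print(stream.getvalue())
```

You can also dump results to a file with `python -m cProfile -o out.prof my_algo.py` and open them with `snakeviz out.prof` for the graphical view.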
I have a tip to make the function itself faster without having to cache many locations.
My program has something close to preloaded units for every unit type, with precalculated lists of locations for their ranges. These locations are relative to the unit, so (-1, 0) refers to the location one to the left of the unit.
My code then only translates these lists to actual map locations to get the locations in range.
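A sketch of that idea, with hypothetical names and an illustrative 3.5 radius (not necessarily the real destructor range):

```python
# Compute a unit type's range as offsets relative to the unit, once.
def offsets_in_range(radius):
    r = int(radius)
    r2 = radius * radius
    return [(dx, dy)
            for dx in range(-r, r + 1)
            for dy in range(-r, r + 1)
            if dx * dx + dy * dy <= r2]

# Precalculated once per unit type at startup:
DESTRUCTOR_OFFSETS = offsets_in_range(3.5)

def locations_in_range(location, offsets):
    x, y = location
    # Translating the precomputed list is just one addition per location.
    return [(x + dx, y + dy) for dx, dy in offsets]

covered = locations_in_range((13, 10), DESTRUCTOR_OFFSETS)
```

The circle shape is computed once per unit type instead of once per call, so each lookup during the game is a plain translation.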
Another tip I can give is to focus on what doesn’t have to be recalculated. For example: you have some destructors and some information units. You don’t really need to recalculate a target for every destructor on ticks where no units move.
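One way to sketch that, with entirely made-up names: cache each structure's target and invalidate the cache only on ticks where something actually moved.

```python
# Illustrative sketch of skipping recomputation: targets are cached and
# only recomputed after a tick in which mobile units changed position.
class TargetCache:
    def __init__(self):
        self._targets = {}
        self._dirty = True

    def mark_moved(self):
        # Call this whenever a mobile unit moves, spawns, or dies.
        self._dirty = True

    def target_for(self, destructor_id, compute):
        if self._dirty:
            self._targets.clear()
            self._dirty = False
        if destructor_id not in self._targets:
            self._targets[destructor_id] = compute()
        return self._targets[destructor_id]

calls = []
cache = TargetCache()
cache.target_for('d1', lambda: calls.append(1) or 'enemy-7')
cache.target_for('d1', lambda: calls.append(1) or 'enemy-7')
```

On a quiet tick the second lookup never runs the expensive targeting function at all.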
Is there a simple way to make a copy of the game_map to play around with? For some reason this is creating problems for me; I’m new to Python, and object references work differently than in C++ or JavaScript.
import copy

oldmap = copy.deepcopy(game_state.game_map)
# add some units and run the scoring simulation
# ...
# put the original map back
game_state.game_map = oldmap
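If you do this a lot, the snapshot-and-restore pattern can be wrapped in a context manager so the original map comes back even if the simulation raises. A sketch using a stand-in state object (the real game_state has more fields):

```python
import copy
from contextlib import contextmanager

@contextmanager
def scratch_map(game_state):
    saved = copy.deepcopy(game_state.game_map)
    try:
        yield game_state.game_map  # mutate freely inside the block
    finally:
        game_state.game_map = saved  # original restored no matter what

class FakeState:  # stand-in for the real game_state
    def __init__(self):
        self.game_map = {'units': ['filter@(3,12)']}

state = FakeState()
with scratch_map(state) as gmap:
    gmap['units'].append('encryptor@(4,12)')  # what-if placement
# state.game_map is back to the pre-simulation snapshot here
```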
I don’t seem to be able to make a copy of game_map.__map either. I’ve resorted to using game_map.add_unit and remove_unit, and trying to keep straight how to reverse the things I’ve added.
By the way, those Aelgo 250 simulations are truly amazing at predicting my moves. In one game it strangely cleared the entire board of defenses, then intercepted my every attack one by one for 5-6 turns before going back on the attack and winning.
Yeah, Python’s a bit annoying about when you get a copy vs when you get a reference. I just always assume it’s a reference and explicitly copy when I need one.
Adding/removing the units you place is a great way to go, but if you start getting into full or partial simulations that deal damage, the units will start to move around and get destroyed, so you’ll need to do that on a copy at some point. This thread goes into that a bit.
You can definitely play around with the game_state/game_map you’re given, and you don’t have to submit the “original” one. The only thing that matters to the game engine is the calls to attempt_spawn() and attempt_remove() you make on the game state that’s submitted. I recommend making copy constructors for both the game state and the game map, although your use of copy.deepcopy() works as well.
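A sketch of what such a copy method could look like, on a deliberately simplified stand-in GameMap (the real starter-kit class has more fields, so this is a pattern, not the actual API):

```python
# Hypothetical, simplified GameMap to illustrate a hand-rolled copy
# method as an alternative to copy.deepcopy.
class GameMap:
    def __init__(self, width=28, height=28):
        self.width = width
        self.height = height
        # One list of units per board location.
        self.grid = [[[] for _ in range(height)] for _ in range(width)]

    def copy(self):
        # Copy only what a simulation mutates: the per-cell unit lists.
        # Cheaper than deepcopy, which walks every field recursively.
        new = GameMap(self.width, self.height)
        new.grid = [[cell[:] for cell in col] for col in self.grid]
        return new

original = GameMap(4, 4)
original.grid[1][2].append('destructor')
clone = original.copy()
clone.grid[1][2].append('filter')  # the original cell is untouched
```

Note the inner unit objects are still shared between the two maps after this shallow per-cell copy; if your simulation mutates unit health, copy those objects too.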