Judging from recent forum threads, there seems to be a lot of interest in action phase simulators. I’m working on one myself, and many other players seem to be as well. I think it might benefit all of us, as well as new players who don’t want to spend too much time on this, if we cooperated more. I know it’s a competition, but all of our simulators will converge to the same thing eventually, and it seems like the game should be more about what you simulate and what you do with that information (your actual strategy) than how much time you spend writing a simulator. Is anyone else here interested in something like this?
It’s not against any rules to share a bunch of code, right? I can post my turn simulator to GitHub. I’m mostly certain that the pathing algorithm is correct and fast; I’m not 100% confident about the correctness of the simulator as a whole yet, but it at least seems pretty close.
After doing some profiling and fixing some huge time sinks (snakeviz is awesome, and copy.deepcopy is ****ing slow), I can simulate one turn in about 100ms on my laptop, probably 50ms on the server.
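For anyone who wants to try this kind of profiling themselves, here is a minimal sketch using the standard cProfile and pstats modules; the dump file can then be opened with snakeviz (`snakeviz profile.out` in a shell). The `simulate_turn` function is just a placeholder for your actual simulator.

```python
import cProfile
import pstats

def simulate_turn(state):
    # Placeholder for a real turn simulation: do some busy work.
    return state + sum(i * i for i in range(10_000))

profiler = cProfile.Profile()
profiler.enable()
for turn in range(100):
    simulate_turn(turn)
profiler.disable()

# Dump stats for snakeviz, and print the top offenders directly.
profiler.dump_stats("profile.out")
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)
```

Sorting by cumulative time is usually the quickest way to spot things like a deepcopy dominating the run.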
I would certainly be interested in some collaboration, since I spent 2 or 3 weekends on this thing alone rather than actually submitting better algorithms.
I’ve seen in those threads that a lot of people try to make ultra-precise turn simulations, with questions like ‘if there are two units at equal range, with the same shields but different decay rates, are those factors used in target prioritization?’. I would advise taking these things into account only if doing so doesn’t slow down your simulation, because most of the time their effect is insignificant.
For now I don’t even do a complete turn simulation, just an ‘attack simulation’ that only considers one stack of moving units, and that takes 9 to 15 ms (including a deepcopy) on average over a whole game on my laptop (about 2.5x faster in a ranked match). I’m still trying to improve that.
@RJTK Are your timings for complex turns (many stacks of units, many firewalls destroyed) or for an average turn over a normal game? If it’s for complex turns, that’s really impressive; if it’s for average turns, I think you could at least double the speed, especially if you don’t try to overdo the precision.
I am also not using a thorough action phase simulator. I am just checking how much damage would be taken if a unit with infinite health spawned from a certain position :"). The problem with exact simulation is that the next turn will almost never be the exact situation you calculated, because the opponent will also deploy stuff. The margins of error I use to account for that are much larger than the errors in my simplified calculations.
Maybe I’m wrong about this approach, though. My algos aren’t actually that intelligent, so maybe someone will develop an algo I can’t win against unless I ramp up the analytical precision of my own algos to make accurate decisions.
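The infinite-health check described above might look something like this as a sketch; all the names, damage numbers, and ranges here are made up for illustration, not anyone’s actual code.

```python
# Toy numbers (assumptions, not the real game values): each destructor
# deals 4 damage per frame and covers tiles within Euclidean distance 2.5.
DESTRUCTOR_DAMAGE = 4
DESTRUCTOR_RANGE = 2.5

def damage_along_path(path, destructors):
    """Total damage an infinite-health unit would absorb walking `path`.

    path: list of (x, y) tiles; destructors: list of (x, y) positions.
    """
    total = 0
    for (x, y) in path:
        for (dx, dy) in destructors:
            if (x - dx) ** 2 + (y - dy) ** 2 <= DESTRUCTOR_RANGE ** 2:
                total += DESTRUCTOR_DAMAGE
    return total

def best_spawn(candidate_paths, destructors):
    """Pick the path of least resistance among the candidates."""
    return min(candidate_paths, key=lambda p: damage_along_path(p, destructors))
```

Even this crude estimate is enough to rank spawn locations, which is the whole point of the simplified approach.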
My simulator doesn’t get all the details right yet, but I think it’s important to simulate how your units interact with probable enemy units, and effects like encryptor buffs that really change the game. My simulator does these, although not perfectly, and it’s somewhat slower than RJTK’s. I suppose there’s merit in less complex simulations if you can do more of them, but it’s always fun to see games where my algo sends some scramblers right into the path of the enemy’s units, right when they spawn them, of its own accord.
The approx 100ms number is for a map containing 50 randomly placed firewalls of random types, plus 10 random placements of new firewalls or information units for each player, so I think the situation is fairly complex. I profiled this benchmark: about 60% of the time is spent calculating the pathing information, and the remaining 40% goes to the frame-by-frame simulation, mostly in the targeting logic. If I kept a fixed map and only spawned information units, then all the path computations could be amortized over however many turns. I’m also treating units as always separate, even if they are stacked, so that is one obvious place performance could be improved. For now, though, I’m focusing on actually using the simulator to produce a better algorithm.
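Amortizing the path computations could be as simple as caching each computed path keyed on the current wall layout, so repeated queries on an unchanged map cost nothing. A sketch, with a plain BFS standing in for whatever pathing rules the game actually uses (all names here are made up):

```python
from collections import deque

def shortest_path(walls, start, goal, width=28, height=28):
    """Plain BFS on a grid; walls is a frozenset of blocked (x, y) tiles."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and nxt not in walls and nxt not in seen):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no route exists

_path_cache = {}

def cached_path(walls, start, goal):
    # frozenset is hashable, so the whole wall layout can be part of the key;
    # as long as no firewall is built or destroyed, lookups are free.
    key = (walls, start, goal)
    if key not in _path_cache:
        _path_cache[key] = shortest_path(walls, start, goal)
    return _path_cache[key]
```

When a firewall is destroyed mid-simulation, the layout (and thus the key) changes, so stale paths are never reused.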
@kkroep It is true that the next turn won’t be exactly what you calculated, but if you can simulate a very large number of turns you can always improve the evaluation of your moves. For instance, given a candidate set of actions, you can simulate each of them against 10 or 100 of your opponent’s possible responses and choose your own move either by minimax or by the highest average score.
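That evaluation loop can be sketched in a few lines; `simulate` and `score` are stand-ins for your own simulator and evaluation function, not any particular implementation.

```python
def choose_move(candidates, opponent_samples, simulate, score, minimax=False):
    """Pick the candidate move that evaluates best against sampled replies.

    simulate(my_move, their_move) -> resulting game state
    score(state) -> number, higher is better for us
    minimax=True  -> assume the opponent picks their best reply (worst for us)
    minimax=False -> average over the sampled replies
    """
    def evaluate(move):
        results = [score(simulate(move, reply)) for reply in opponent_samples]
        return min(results) if minimax else sum(results) / len(results)

    return max(candidates, key=evaluate)
```

With 10 candidates and 100 sampled responses this is 1000 simulations per turn, which is why raw simulator speed matters so much here.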
@RJTK My simulator does use stacks, and I only recalculate paths when firewalls are destroyed. Pathfinding doesn’t take up that much time for me; it’s mostly targeting and scanning for destructors. Overall, though, my simulator takes about 150ms, 50% slower than yours, so maybe those choices make more of an impact than I thought.
To get back to the original topic, your simulator sounds mostly better than mine, so I would be happy to contribute some optimizations or something if you uploaded it somewhere.
@kkroep Even if your prediction of the opponent’s next move isn’t accurate, it allows you to get an idea of where they’re targeting, and how much damage they might do, so I think it’s important to account for everything that could go into that.
In the current way my algos work, it would require a very large amount of work to obtain info that would actually change the decision-making process. More is better, but time is also limited, unfortunately. For example, this weekend I had a lot of difficulty defending against ping rushes. I could detect them no problem, but I couldn’t find an effective counter to deploy.
I am using an EMP-line style of attack, and so far only a limited number of strategies require special care and targeted solutions. Otherwise the EMP line will just scoop up the relevant firewalls.
I guess that for extremely adaptive algos you need something like this, though. I actually don’t know how those algos are programmed.
I am looking for an action phase simulator. I am thinking about making a dynamic algo, but writing the simulator in Python seems to be the most time-consuming part. I would be willing to collaborate and share resources on a dynamic algo. Maybe we can make a probability graph/map for deploying structures and units.
Quick tip about deepcopy: you can write your own copy function tailored to your objects, and it is usually way faster.
For example, let’s say you have a set of size 1000:
s = set([i for i in range(0,1000)])
you can create a simple function like this one:
def copySet(src_set): return set([l for l in src_set])
And here are the execution times (copy.deepcopy(s) first, then copySet(s)):
664 µs ± 16.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
39.8 µs ± 672 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Test it and you will see that it works.
As long as you recreate the object piece by piece, and each of those pieces is not linked to the original when copied, it will work. Here each piece is an int, so no shallow copy is made.
Indeed, in this case a deep (enough) copy is made since ints are immutable; this function would work with sets of any immutable type (tuple, int, bool, etc.).
To make it even faster you could write
def copySet(src_set): return set(src_set)
avoiding the intermediate list comprehension. Similarly, to construct the initial set you can write
s = set(range(1000))
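For anyone who wants to reproduce these timings themselves, the standard timeit module is all you need; the absolute numbers will of course differ per machine, and the important part is that the constructor copy is still fully independent of the original.

```python
import copy
import timeit

s = set(range(1000))

# Time 1000 copies each way; numbers vary by machine, the ratio is the point.
t_deep = timeit.timeit(lambda: copy.deepcopy(s), number=1000)
t_ctor = timeit.timeit(lambda: set(s), number=1000)
print(f"copy.deepcopy: {t_deep:.4f}s   set(s): {t_ctor:.4f}s for 1000 copies")

# The constructor copy is a distinct object: mutating it leaves s alone.
clone = set(s)
clone.add(1000)
```

`s.copy()` would behave identically to `set(s)` here, since the elements are immutable ints.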
@876584635678890 I agree with you on that:
You will be able to see that source_set.copy() and set(source_set) are exactly the same, but to me the list comprehension only adds computing time, which seems counterintuitive.
My example was maybe too simple. I just wanted to say that if you have nested objects, for example a list of lists, a simple .copy() will only make a shallow copy. My point was that you can break your object down into pieces where .copy() works.
Maybe this example is better:
let’s take this object:
nested_list = [[[0] for j in range(28)] for i in range(28)] # our nested list
copy_list = nested_list.copy() # the (shallow) copy
nested_list[0][0][0] = 1 # we modify a value in the original list
print(copy_list[0][0][0]) # prints 1: the copy was modified too
I had in mind cases like this, where a .copy() doesn’t make a deep copy.
So you could use deepcopy from the copy module, but as we said, it takes too much time.
So instead we can “dive into” the object with list comprehensions, down to the point where no shallow copy can be made.
For example we can do this:
def copyNestedList(src_list): return [[[e for e in cell] for cell in row] for row in src_list]
This assures you a deep copy of your object and is faster than deepcopy.
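A quick way to check both the correctness and the speed claim, using a three-level list shaped like the example above (timing numbers will vary per machine):

```python
import copy
import timeit

def copyNestedList(src_list):
    # One comprehension level per nesting level; the innermost elements
    # are ints, so no shared references survive.
    return [[[e for e in cell] for cell in row] for row in src_list]

nested_list = [[[0] for j in range(28)] for i in range(28)]

# Correctness: mutating the original must not touch the copy.
clone = copyNestedList(nested_list)
nested_list[0][0][0] = 1
print(clone[0][0][0])  # still 0

# Speed: compare against copy.deepcopy on the same structure.
t_custom = timeit.timeit(lambda: copyNestedList(nested_list), number=100)
t_deep = timeit.timeit(lambda: copy.deepcopy(nested_list), number=100)
print(f"comprehension: {t_custom:.4f}s   deepcopy: {t_deep:.4f}s")
```

The gap comes from deepcopy’s generality: it keeps a memo of every object it has seen to handle cycles and shared references, which this hand-written copy can skip.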
I still have much to learn about Python, and I only started experimenting with time optimization 3 days ago. So all of my knowledge comes from experimenting with the objects and trying several ways to do the same thing. There are certainly other ways to do this, but this was the first solution I found that was better than a deep copy (one of them would be to use list() for the innermost level, for example).
I am completely open to other approaches! I just read some people complaining about deepcopy, so I shared what I came up with, even though I know there is much to improve in my solutions.
I’ll also add that if you are working with classes, you can override __deepcopy__ to provide your own implementation of a deep copy for that class. Then copy.deepcopy(<instance of your class>) will call your specific copy function rather than the general built-in version, giving much faster copying times.
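A minimal sketch of that override; the class and its fields are invented for illustration. Note that __deepcopy__ receives a memo dict, which the copy module uses to handle objects that appear more than once.

```python
import copy

class GameState:
    """Toy state: a grid of ints and a turn counter (illustrative only)."""
    def __init__(self, grid, turn=0):
        self.grid = grid
        self.turn = turn

    def __deepcopy__(self, memo):
        # We know grid is a list of lists of ints, so we can copy it
        # directly instead of letting deepcopy introspect every element.
        new = GameState([row[:] for row in self.grid], self.turn)
        memo[id(self)] = new  # so shared references to self resolve to new
        return new

state = GameState([[0] * 28 for _ in range(28)])
clone = copy.deepcopy(state)  # dispatches to our __deepcopy__
state.grid[0][0] = 7          # the clone is unaffected
```

The speedup comes from replacing deepcopy’s generic recursion with a copy that exploits what you know about your own data layout.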
With a new method of pathfinding, and with @arnby’s idea of avoiding the native deepcopy (many thanks!), I’m now able to do my ‘attack simulation’ in less than 4ms on average on my laptop (so less than 2ms in a ranked match).
I could do around 3k of those simulations per turn! For now I still use only ~250 of them…
… and I don’t know what I would do with 3k imprecise simulations. Some sort of minimax? Do you guys have any other ideas?
I have the same performance as you for my attack simulator (maybe a bit better) and… I was asking myself the same question.
I guess this is where the fun part begins. There are so many ways to deal with that; this is surely what will differentiate us.
About your 250 simulations: I guess you have all your stacked attacks (3x28), the enemy’s (3x28), and…?
The 250 simulations are:
- Evaluating the attacks my enemy did over the last 20 turns to select up to 3 of the ‘scariest’ ones
- Evaluating a variable number of defenses (about 50 per turn, but it can go from 0 to a huge number) against those ‘scary’ attacks
- Up to 2x3x28 simulations for my own attacks, considering both this turn and the next one if I keep my resources; I skip some of those attacks if the deploy location lies between two others that share the same results
Obviously 250 is just the average.
Edit: I also have a method that tries spawning encryptors when certain special circumstances are met.
Ok so you already have quite a smart way to go about this.
I just finished my turn simulator, so I haven’t had time to implement something like this. Not much hope for the C1 Challenge, then, but I will work on it in the coming days.
Whoa, people are putting some serious effort into this logic O_o. I have none of this stuff. I do 4 simulations per turn, assuming everything has infinite health, to find the path of least resistance for an attack, and that’s all. I’m very interested to see algos in action that truly utilize the info provided by these calculations (or maybe they are already deployed and I just don’t recognise the complexity of the opposing decision-making process).