You can see that source_set.copy() and set(source_set) perform exactly the same, while to me the list comprehension only adds computing time, which seems counterintuitive.
My example was maybe too simple. I just wanted to say that if you have nested objects, for example a list of lists, a simple .copy() will only do a shallow copy. My point was that you can break your object down into pieces where .copy() works.
Maybe this example is better:
let’s take this object:
nested_list = [[[] for j in range(28)] for i in range(28)] #our nested list
copy_list = nested_list.copy() #the (shallow) copy
nested_list[0][0].append((1, 1)) #we modify a value in the original list
print(copy_list[0][0]) #and see that it also modified the copy
Out: [(1, 1)]
I had in mind cases like this, where .copy() doesn’t make a deep copy.
So you could use deepcopy from the copy module, but as we said, it takes too much time.
So instead we can “dive into” the object using list comprehensions until we reach a level where a shallow copy is enough.
For example we can do this:
return [[[[e for e in width] for width in column] for column in row] for row in src_list]
This assures you a deep copy of your object and is faster than copy.deepcopy.
I still have much to learn about Python, and I only started experimenting with time optimization three days ago. So all of my knowledge comes from experimenting with the objects and trying several ways to do the same thing. There are certainly other ways to do this, but this was the first solution I found that was faster than a deep copy (one of them would be to use list() for the last level, for example).
I am completely open to other approaches! I just read some people complaining about deepcopy, so I shared what I came up with, even if I know there is much to improve in my solutions.
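As a quick sanity check of the “dive in with comprehensions” idea, here is a small timeit sketch; the structure and sizes are arbitrary stand-ins, not the game’s actual data:

```python
import copy
import timeit

# Arbitrary 3-level nested structure, similar to the 28x28 example above
src = [[[(i, j)] for j in range(28)] for i in range(28)]

def comprehension_copy(src_list):
    # "Dive in" with comprehensions; at the innermost level the elements
    # are immutable tuples, so list() already gives a full copy
    return [[list(cell) for cell in row] for row in src_list]

t_deep = timeit.timeit(lambda: copy.deepcopy(src), number=100)
t_comp = timeit.timeit(lambda: comprehension_copy(src), number=100)
print(f"deepcopy:      {t_deep:.4f}s")
print(f"comprehension: {t_comp:.4f}s")

# Sanity check: the comprehension copy really is independent
clone = comprehension_copy(src)
clone[0][0].append("x")
print(src[0][0])  # [(0, 0)] — the original is unchanged
```

On typical CPython builds the comprehension version comes out well ahead, because copy.deepcopy pays for its generic memo bookkeeping on every object.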
I’ll also add that if you are working with classes, you can override __deepcopy__ to provide your own implementation of a deep copy on that class. This means that copy.deepcopy(<instance of your class>) will call your specific copy function, rather than the general builtin version, giving much faster copying times.
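A minimal sketch of that override; the Tile class and its attributes are made up for illustration, not part of the game library:

```python
import copy

class Tile:
    """Hypothetical map cell; only the __deepcopy__ hook matters here."""
    def __init__(self, units=None):
        self.units = list(units) if units else []

    def __deepcopy__(self, memo):
        # We know self.units only holds immutable tuples, so a shallow
        # list copy is already "deep enough" — no generic recursion needed.
        new = Tile.__new__(Tile)
        new.units = list(self.units)
        memo[id(self)] = new  # register in memo so shared refs stay shared
        return new

board = [Tile([("PING", 15)]) for _ in range(10)]
clone = copy.deepcopy(board)      # calls Tile.__deepcopy__ for each tile
clone[0].units.append(("EMP", 5))
print(len(board[0].units))        # 1 — the original is untouched
```

copy.deepcopy still walks the outer list, but each element is copied by your fast hand-written routine instead of the general-purpose machinery.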
With a new pathfinding method and with @arnby’s idea to avoid the native deepcopy (many thanks!), I’m now able to do my ‘attack simulation’ in less than 4 ms on average on my laptop (so less than 2 ms in a ranked match).
I could do like 3k of those simulations per turn! Well I still use only ~250 of them…
… and I don’t know what I would do with 3k imprecise simulations? some sort of minimax? Do you guys have any other idea?
I have the same performance as you for my attack simulator (maybe a bit better) and … I was asking myself the same question.
I guess this is where the fun part begins. There are so many ways to deal with that; this is surely what will differentiate us.
About your 250 simulations, I guess you have all your stacked attacks (3x28), the enemy’s (3x28), and ?
Evaluating the attacks my enemy did over the last 20 turns to select up to 3 of the ‘scariest’ ones
Evaluating a variable number of defenses (about 50 per turn, but it can go from 0 to a huge number) against those ‘scary’ attacks
Up to 2x3x28 simulations for my own attacks, considering both this turn and the next one if I keep my resources, but I skip some of those attacks if the deploy location is between two others that share the same results
Obviously 250 is just the average.
Edit: I also have a method that tries spawning encryptors when certain special circumstances are met.
Whoah, people are putting some serious effort into this logic O_o. I have none of this stuff. I do 4 simulations per turn, assuming everything has infinite health, to find the path of least resistance for an attack, and that’s all. I’m very interested to see algos in action that truly utilize the info provided by these calculations (or they are already deployed and I just don’t recognise the complexity of the opposing decision-making process).
I agree, wow. My newest algo runs 35 simulations per turn, and sometimes takes 4500 ms, so I probably need some more optimization. I’m doing more adaptive defense now, for example simulating the next turn depending on which side of the board I spend my cores on, but my algos seem to be getting worse, judging by the leaderboard performance.
Right now my version simulates everything except interactions between enemy and friendly information units and takes about 30ms for a whole turn (note: this is for a completely random starting board, so will likely take slightly longer in an actual game) and there are plenty of optimisations left to make, as I was in a rush to finish with the C1Ryan competition.
I have seen that my algorithm using this can quite often find ways to place units such that they break through thin walls of filters and such, which my old algorithm would never have considered. So I do believe that this kind of logic will be important in the future, especially if it can be optimised enough to be able to simulate different placements of firewalls and how they fare against possible attacks the enemy can make.
For example, when an algo’s base gets completely destroyed, often it will place a small number of firewalls which instantly get destroyed next turn, and so on until it loses the game. But if you could simulate a turn, you might realise that placing those firewalls would be a waste, and that you should instead save up to build a stronger defence, from which you may be able to recover and go on to gain the upper hand.
My simulator that simulates everything pretty accurately now runs 5x faster by memoizing get_locations_in_range. It uses a maximum of 500kb more memory, and is waaaay faster, and just gets faster as the game goes on. This would probably help all of your simulators too if that function is a bottleneck, which it is for me.
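For anyone wanting to try the same trick, functools.lru_cache is the easiest way to memoize. The function below is a hypothetical stand-in for the game’s get_locations_in_range, just to show the pattern; the key points are that the arguments must be hashable (tuples, not lists) and the cached result should be immutable so no caller can corrupt it:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_locations_in_range(location, radius):
    """Stand-in range query: all grid points within `radius` of `location`."""
    x0, y0 = location
    r, r2 = int(radius), radius * radius
    return tuple(
        (x, y)
        for x in range(x0 - r, x0 + r + 1)
        for y in range(y0 - r, y0 + r + 1)
        if (x - x0) ** 2 + (y - y0) ** 2 <= r2
    )

# Repeated calls with the same arguments now cost only a dict lookup
get_locations_in_range((13, 13), 3.5)
get_locations_in_range((13, 13), 3.5)  # cache hit
print(get_locations_in_range.cache_info().hits)  # 1
```

Since units query the same handful of (location, radius) pairs over and over during a simulation, the hit rate climbs quickly, which matches the “gets faster as the game goes on” observation.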
I have a question about the pathfinding when recalculating paths during the action phase.
My question is: when a path is being recalculated during the action phase because some firewalls were destroyed, is the previous location a unit walked over a factor in deciding which path to choose?
I have a tip to make the function itself faster without having to cache many locations.
My program has something close to preloaded units for every unit type. These units have precalculated lists of locations for their ranges. These locations are relative to the unit, so (-1, 0) refers to one location to the left from the unit’s location.
My code then only transforms these lists for unit locations on the map to get locations in range.
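In code, the idea looks something like this; the radii and unit names are made-up placeholders, not the game’s real values:

```python
import math

def precompute_offsets(radius):
    """Offsets relative to (0, 0) within range — computed once per unit type."""
    r = int(math.floor(radius))
    return [
        (dx, dy)
        for dx in range(-r, r + 1)
        for dy in range(-r, r + 1)
        if dx * dx + dy * dy <= radius * radius
    ]

# Hypothetical per-unit-type table, built once at startup
RANGE_OFFSETS = {
    "DESTRUCTOR": precompute_offsets(3.5),
    "PING": precompute_offsets(1.5),
}

def locations_in_range(unit_type, x, y):
    # Per call we only translate the offsets — no distance math at all
    return [(x + dx, y + dy) for dx, dy in RANGE_OFFSETS[unit_type]]

print((12, 13) in locations_in_range("DESTRUCTOR", 13, 13))  # True
```

All the square-root-free distance checks happen once at startup; every lookup afterwards is a simple addition per offset.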
Another tip I can give is to focus on what doesn’t have to be recalculated. For example: You have some destructors and some information units. You don’t really need to calculate a target for every destructor in ticks where no units move.
oldmap = copy.deepcopy(game_state.game_map)
# add some unit and do scoring simulation
# ... put original map back
game_state.game_map = oldmap
I don’t seem to be able to make a copy of game_map.__map either. I’ve resorted back to using game_map.add_unit and remove_unit, and trying to keep track of how to reverse the things I’ve added.
btw those Aelgo 250 simulations are truly amazing at predicting my moves, one game it strangely cleared the entire board of defenses then intercepted my every attack one by one for 5-6 turns before going back on the attack and winning.
Yeah, Python’s a bit annoying about when you get a copy vs when you get a reference. I just always assume it’s a reference and explicitly copy when I need one.
Adding/removing the units you place is a great way to go, but if you start getting into full/partial simulations that deal damage, the units will start to move around/get destroyed, so you’ll need to do that on a copy at some point. This thread goes into that bit.
You can definitely play around with the game_state/game_map you’re given, and you don’t have to submit the “original” one. The only thing that matters to the game engine is the calls to attempt_spawn() and attempt_remove() you make on the game state that’s submitted. I recommend making copy constructors for both the game state and the game map, although your use of copy.deepcopy() works as well.
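A minimal sketch of what such a copy method could look like, assuming the real game map’s internals (here I invent a private __map grid of per-cell unit lists; the actual library’s attributes will differ):

```python
class GameMap:
    """Toy stand-in for the game's map, to illustrate an explicit copy method."""
    def __init__(self, config):
        self.config = config
        self.__map = [[[] for _ in range(28)] for _ in range(28)]

    def add_unit(self, unit, x, y):
        self.__map[x][y].append(unit)

    def get_units(self, x, y):
        return self.__map[x][y]

    def copy(self):
        new = GameMap(self.config)  # config can be shared, assumed read-only
        # One list copy per cell is all the "depth" this structure needs
        new.__map = [[list(cell) for cell in row] for row in self.__map]
        return new

original = GameMap(config={})
original.add_unit("FILTER", 3, 4)
backup = original.copy()
original.get_units(3, 4).clear()  # trash the original in a simulation
print(backup.get_units(3, 4))     # ['FILTER'] — the backup is intact
```

Note that the copy method lives inside the class, so Python’s name mangling of __map is not a problem there, which is likely why copying game_map.__map from the outside was failing earlier in the thread.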