Just how fast is the built-in pathfinder?

I’m currently trying to create some sort of simulator, though I don’t know how fast it’s going to be, because I don’t have an optimized or alternative version of the regular pathfinder that’s faster.
So, how fast is the built-in algorithm?
Is it even necessary for me to spend time coding a new pathfinder?

I think this thread may have some answers. Ideally you’d profile it properly over a large number of trials, but the much easier alternative of calling time.time() around each pathfinding call and printing how long it took should give a good approximation. You should be able to run a fair number of simulations with the existing pathfinder, but compared with a reasonably optimized simulator it’s slow; it’s “low-hanging fruit” in terms of how much you can optimize.

Is it possible to take an existing algorithm (for example, A*) and modify it according to Terminal logic? Would that be a good approach, given that I’m going to be running this a LOT of times? I wasn’t satisfied with my tests, as they almost always went over the time limit when doing simulations.

There is a fair bit of talk about this here. The short answer is yes.

A slightly more detailed answer (using A* as the example) is that you are trying to find the shortest path to a set of points, rather than to just one. You could do this by running A* 24 times, which is basically just a modified version of Dijkstra’s algorithm. This tells you which path is shortest. If two paths are tied for distance, you simply step in the opposite direction you went last time. Essentially, you only need the pathing algorithm to tell you the shortest path; you can then code Terminal logic for things like alternating directions on top of it.
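To sketch the “shortest path to a set of points” idea, here is a generic, hypothetical example (not Terminal starterkit code): a single Dijkstra-style search from the start that stops as soon as it reaches any member of a target set. Since every step costs 1, this is really uniform-cost/breadth-first search; the `passable` callback is a stand-in for whatever blocked-tile check the game provides.

```python
import heapq

def dijkstra_to_targets(start, targets, passable, width=28, height=28):
    """Shortest path from start to the nearest member of `targets`
    on a 4-connected grid where every step costs 1.
    `passable(loc)` says whether a tile can be walked on."""
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, loc = heapq.heappop(heap)
        if loc in targets:
            # reconstruct the path back to the start
            path = [loc]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist[loc]:
            continue  # stale heap entry
        c, r = loc
        for nxt in [(c + 1, r), (c - 1, r), (c, r + 1), (c, r - 1)]:
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and passable(nxt):
                if d + 1 < dist.get(nxt, float("inf")):
                    dist[nxt] = d + 1
                    prev[nxt] = loc
                    heapq.heappush(heap, (d + 1, nxt))
    return None  # no target reachable
```

Because the search expands nodes in cost order, the first target popped is guaranteed to be the nearest one, so one search covers the whole edge at once.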

I’m not going into a ton of detail just because there have been a lot of posts about this already.

Has anyone seen any Python code online for shortest-path algorithms? Everything I found pointed to scipy functions. The recursive sort of thinking means the code gets pretty tangled.

For an easy speedup: I also found that self.get_locations_in_range(location, 5.5) is EXTREMELY slow… running it once and saving the results is a big speedup for those using the built-in functions like I am.
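The “run it once and save the results” trick can be a simple dictionary cache. This is a hypothetical wrapper (the `game_map` parameter and its `get_locations_in_range` method are assumed from the starterkit); since the set of tiles within a radius of a location never changes between calls, each (location, radius) pair only needs computing once:

```python
# Cache keyed by (location, radius); tuples are hashable, lists are not.
_range_cache = {}

def cached_locations_in_range(game_map, location, radius):
    """Memoized stand-in for game_map.get_locations_in_range()."""
    key = (tuple(location), radius)
    if key not in _range_cache:
        _range_cache[key] = game_map.get_locations_in_range(location, radius)
    return _range_cache[key]
```

Repeated calls with the same arguments then return the saved list instead of recomputing it.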

There are tons and tons of resources on shortest-path algorithms in Python (without external modules).

I highly recommend researching the concepts first and then trying to implement them yourself, though most of the time you can find complete code as well. These examples are, of course, general-case, so you’ll have to format the data from the game_map, but this shouldn’t be too difficult. I’ll list some resources below:

geeksforgeeks has some fantastic writeups about algorithms and how to implement them.
Here is one for Dijkstra’s algorithm.

If you’re completely new to graph theory and pathfinding, I came across a very introductory example of setting up an example pathfinding graph which simply finds a path, not the best one, but is an introduction to creating such an algorithm in Python. It is found on the main Python site here.

A* is a very popular pathfinding algorithm, and you can find tutorials all over about it. A couple are:
https://medium.com/@nicholas.w.swift/easy-a-star-pathfinding-7e6689c7f7b2 (this one is in python)
https://www.geeksforgeeks.org/a-search-algorithm/ (this is in C++, but the explanation is excellent)
https://www.youtube.com/watch?v=-L-WgKMFuhE&list=PLFt_AvWsXl0cq5Umv3pMC9SPnKjfp9eGW (this is a gamedev tutorial in C#, but the first video explaining the theory is really good)

There are lots more algorithms out there, but this is a nice starting point. You could also check out the breadth-first search algorithm (I wouldn’t recommend using it for your actual algo, though), just to have a simpler example as well. Hope some of the resources here help!
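For reference, a minimal breadth-first search on a small 4-connected grid might look like the sketch below. It is generic, not Terminal-specific: the `passable` callback is a placeholder for whatever blocked-tile check the game provides, and BFS only finds a shortest path when every step costs the same.

```python
from collections import deque

def bfs_path(start, goal, passable, width=8, height=8):
    """Breadth-first search: explores tiles layer by layer,
    so the first time `goal` is reached, the path is shortest."""
    prev = {start: None}  # also doubles as the visited set
    queue = deque([start])
    while queue:
        loc = queue.popleft()
        if loc == goal:
            # walk the prev chain back to the start
            path = []
            while loc is not None:
                path.append(loc)
                loc = prev[loc]
            return path[::-1]
        c, r = loc
        for nxt in [(c + 1, r), (c - 1, r), (c, r + 1), (c, r - 1)]:
            if (0 <= nxt[0] < width and 0 <= nxt[1] < height
                    and passable(nxt) and nxt not in prev):
                prev[nxt] = loc
                queue.append(nxt)
    return None  # goal unreachable
```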

Wow, thanks!
That was waaaaay more than I asked for. It was really helpful.

Jupyter notebook minimum-requirement pathfinder… my first attempt. It gets a result and a path in 40 ms. If anyone can add the A* logic to this, please share. For Terminal, just include the game_map contains_stationary_unit check and use the entire edge for the end point.

import time

nodes = {}

def hashit(loc):
    return "%i-%i" % (loc[0], loc[1])

locStart = [1, 12]
nodes[hashit(locStart)] = {"loc": locStart, "cost": 0, "prev": []}

locEnd = [13, 20]
bExit = False

ct0 = time.time()
while not bExit:
    # snapshot the keys so new nodes can be added while iterating
    keylist = [key for key in nodes]
    for key in keylist:
        node = nodes[key]
        if key == hashit(locEnd):
            print("end ", key)
            bExit = True
        # add neighbors
        loc = node['loc']
        c = loc[0]
        r = loc[1]
        cost = node['cost']
        for loc2 in [[c + 1, r], [c - 1, r], [c, r - 1], [c, r + 1]]:
            if hashit(loc2) not in nodes:
                nodes[hashit(loc2)] = {"loc": loc2, "cost": cost + 1, "prev": loc}
            else:
                # already seen: keep whichever route is cheaper
                existingNode = nodes[hashit(loc2)]
                if cost + 1 < existingNode['cost']:
                    existingNode['cost'] = cost + 1
                    existingNode['prev'] = loc

ct1 = time.time()
print("time=", (ct1 - ct0))

# walk back from the end node to recover the path
loc = locEnd
path = []
while True:
    node = nodes[hashit(loc)]
    path.append(loc)
    loc = node['prev']
    if len(loc) == 0:  # reached the start node, whose prev is []
        break
path.reverse()
You’ve got the basic idea of adding the neighbors and then checking whether a node equals the exit. The key here is your cost function. The primary difference (at first glance) between what you’re doing and A* is the cost function: it looks like you’ve implemented a (sort-of) breadth-first search, where you add 1 for each connected step. That cost function is just distance from the start. A* takes into account both the distance from the start (called g_cost) and an estimate of the distance to the end goal (called h_cost); the sum of these two is called f_cost, and you make decisions about where to go based on the f_cost.

Another problem is how you pick the next node to look at. Right now you do keylist = [key for key in nodes], which is essentially a list of points you want to check. Since nodes is a dictionary, its order is not meaningful for the search, so you effectively choose the next node arbitrarily. Instead, you always want to look at the best option first. You could do this any number of ways, but a more efficient approach than what you’re doing now would be to define an open set outside your while loop (starting with the start position) as a dictionary, storing each unchecked edge position along with its f_cost. Each loop you remove the position being checked, then pick the next node to look at based on the minimum f_cost. As a short example of what I mean:

# other code you've written
dist = ...  # find the distance between your end goal and start node here (using either Manhattan or Euclidean distance - google it)
open_nodes = {hashit(locStart): dist}  # this is effectively a list of edge nodes that have not been checked yet
closed_nodes = set()
while not bExit:
    # Note that you can get rid of the "for key in keylist:" loop, since you select a specific key each time
    node = nodes[min(open_nodes.items(), key=lambda elem: elem[1])[0]]  # gets the node with the lowest f_cost from open_nodes (unchecked points on the edge)
    closed_nodes.add(hashit(node['loc']))    # this means we have already checked this node
    # differences from before:
    #    - where you compute the cost, you'll need to calculate the f_cost
    #    - in your loop "for loc2 in [[c+1,r],[c-1,r],[c,r-1],[c,r+1]]:", add that location to 'open_nodes'
    #      if it is not in 'closed_nodes', so 'if hashit(loc2) not in nodes:' becomes
    #      'if hashit(loc2) not in closed_nodes:'
    #    - add units to open_nodes in addition to nodes
However, you should note that this is (intentionally) quite a poor solution, since min in Python has to loop through every single value in open_nodes, and it does so once for every node you check. A better solution would be to use a data structure that keeps itself sorted, like a binary heap or a binary search tree. Don’t worry about it for now, but I thought I’d point it out.
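As a sketch of replacing that min() scan with a self-sorting structure, Python’s standard heapq module provides a binary heap. The names here (push, pop, best_f) are illustrative, not from the starterkit; stale heap entries are handled lazily by skipping any popped entry whose cost no longer matches the best known cost for that position:

```python
import heapq

# Heap of (f_cost, position) tuples plus the best known f_cost per position.
open_heap = []
best_f = {}

def push(pos, f_cost):
    """Add a position to the open set, or improve its cost."""
    if f_cost < best_f.get(pos, float("inf")):
        best_f[pos] = f_cost
        heapq.heappush(open_heap, (f_cost, pos))

def pop():
    """Remove and return the open position with the lowest f_cost."""
    while open_heap:
        f_cost, pos = heapq.heappop(open_heap)
        if f_cost == best_f.get(pos):  # skip stale entries
            return pos
    return None  # open set exhausted
```

Each push and pop is O(log n) instead of the O(n) scan that min() performs over the whole dictionary.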

In addition, you’ll notice you’re still using nodes, but if you use open_nodes (unchecked) and closed_nodes (checked), these effectively replace it.

So for calculating the f_cost, where you currently just add one to the cost, you instead set it with:

g_cost = cost + 1  # distance from the start; add 1 per step if you are using Manhattan distance
h_cost = ...  # estimated distance to your goal (use the same distance concept as above)
existingNode['cost'] = g_cost + h_cost
open_nodes[hashit(existingNode['loc'])] = g_cost + h_cost
# again, this is slightly redundant, and you can rework it so you don't need existingNode

Lastly, a small note about the hashit function. You are (I’m assuming) hashing the locations so you can use them as keys in the dictionary. If you store your points as tuples, they are immutable (can’t change) and thus hashable in Python. This means you could remove the hashit function completely and just use the tuples directly as keys. I left it in my examples, but you can see this with:

testDict = {(0,0):0, (0,1):1}

You should note you cannot use tuples as points for the default pathfinding function provided by C1 in the starterkit.

Nice, seems almost there then, if I add a g_cost and improve the sorting when adding the next node; an open/closed list is also simpler than saving everything in one huge dictionary. Will repost my code when I get it working.

As the board is so small, maybe a simple sort-of-sorted hack is faster in CPU time than a min function or the PriorityQueue solution I’ve seen somewhere. Maybe I can also reuse the solved nodes for different paths.

I think there’s no point in figuring out how to run 100k simulations, as then simulation opponents like aelgoo will, in the min-max mirror world, also be able to predict perfectly the best move I’ve found. But I’m currently limited to running only 3 full board map simulations… need to squeeze in a few more to come up with some sneaky moves against those other extremely similar mutexes…

Yup, you’re right that using an open and a closed set instead of a single large data structure simplifies things a lot. And yes, there are many ways of improving speed and efficiency :).

When you say you can run 3 full-board simulations, what do you mean? Running a single set of units, or a full game?

Testing running a set of units from every edge position and seeing how much damage they do against a specific defense setup. The built-in pathfinding and find-unit-in-range seem extremely slow considering how simple the logical calculations are; in C++ one could easily do tens of thousands of calculations like this in a second. Hmm… will keep testing; tuples do seem like they would be faster.

I think the usual convention is 1 simulation = 1 set of specific units in one place, so that would be 28 - blocked_edges simulations. The reason I bring this up is just to give a better sense of scale for what’s possible with a simulator. The fastest simulator I’ve heard of averages ~2 ms per simulation, although I’m guessing some sort of memoization is used to make up the difference (2 ms might be with no cache), and I’ve heard of someone who managed less than 2 ms, but they never shared numbers. Personally, with C++ I average 10-20* ms with no memoization, no hard-coded answers to function calls like getLocationsInRange(), a breadth-first search, and little optimization of the function calls in the actual logic, mostly because I haven’t needed to optimize those things yet. Before switching, I believe I dropped my Python simulator times to 20-100 ms (it was noticeably less consistent, for the reasons below).

*10-20 because I haven’t optimized the function calls in the actual logic. Most boards will average around 10 ms, but say you drop 7 EMPs along a really long path…

I have extremely similar results and haven’t optimized further for similar reasons.

Those are amazing times for simulations!

Moving from breadth-first search to A* and using tuples instead of strings for the hash, my single-path time is down from 5 ms to 1.5 ms. As more than 90% of paths in Terminal wouldn’t backtrack, modifying A* to go on straight runs toward the goal instead of checking the scores of the whole open-node list would make it much faster.

Still working out how to choose when there are 2 equal choices, including the first move. The code for ShortestPathFinder() in navigation.py seems to be doing something very different from A* pathfinding.

That sounds really good. I’m still figuring out how to incorporate Terminal logic into my A* implementation.

Finally worked out pathfinding to the edge, but just realized how hard the self-destruct logic is when a unit can’t reach the edge. I think I have to add a check for when there are no more new nodes to search, then search the whole closed list for the minimum distance.

The edge logic took a while to figure out: the shortest distance to the edge instead of to a point:

            if edge == self.TOP_RIGHT: distEnd = abs(c + r - 41) / 1.4142
            if edge == self.BOTTOM_LEFT: distEnd = abs(c + r - 13) / 1.4142
            if edge == self.TOP_LEFT: distEnd = abs(r - c - 14) / 1.4142
            if edge == self.BOTTOM_RIGHT: distEnd = abs(c - r - 14) / 1.4142
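The lines above can be wrapped as a standalone function. Each Terminal edge lies on a 45-degree line (e.g. every top-right edge tile satisfies x + y = 41), so the perpendicular distance from (c, r) to the edge line is |line equation| / sqrt(2); the 1.4142 in the snippet is that square root. The edge constants here are hypothetical stand-ins for the starterkit’s self.TOP_RIGHT etc.:

```python
import math

# Stand-in edge constants (the starterkit exposes these as map attributes).
TOP_RIGHT, BOTTOM_LEFT, TOP_LEFT, BOTTOM_RIGHT = range(4)

def dist_to_edge(c, r, edge):
    """Perpendicular distance from tile (c, r) to the given diagonal edge."""
    if edge == TOP_RIGHT:
        return abs(c + r - 41) / math.sqrt(2)
    if edge == BOTTOM_LEFT:
        return abs(c + r - 13) / math.sqrt(2)
    if edge == TOP_LEFT:
        return abs(r - c - 14) / math.sqrt(2)
    if edge == BOTTOM_RIGHT:
        return abs(c - r - 14) / math.sqrt(2)
    raise ValueError("unknown edge")
```

Tiles on the edge itself come out to exactly 0, which is what makes this usable as an h_cost when the goal is a whole edge rather than a single point.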

About the edge logic: is there some inaccuracy if you count the distance to the square’s corner instead? For example, for TOP_RIGHT: distance = (27 - x) + (27 - y).

Just double-checked: every x + y on the top-right edge adds up to 41, so that line gives distance = 0.

It also seems one needs to give a tiny priority to moving in the vertical direction in the event of two equal choices.

Is this what you do to create the zig-zag? The documentation says that if there is a tie, it moves in the direction it did not move last.