Computing Time Issues

An algo that has ‘crashed out’ will not play any more matches. The reason we count timeouts as regular crashes is so that we can ‘crash out’ algos stuck in an infinite loop, which are fairly common.

Hey, just to report in on this: the two engineers best suited for this task did spend some time discussing solutions for communicating the time constraints better and ensuring they are accurate, but those solutions look fairly time-consuming and will not be in before the global competition.

It’s not a very satisfying answer, but ultimately you may have to enforce your cutoff a little earlier. We may increase the allowed crash count to avoid losing algos. We acknowledge that this is an issue on our end which disproportionately affects algos that try to get as much use out of their turn time as possible.

If you could do that before I lose my third-ranked algo, I would be glad :sweat_smile:

It is indeed a bit frustrating… If 7 s are counted for 5 s of computation, it means that I must use only about 3.57 s (5 × 5/7, if we assume the shift in time is linear),
but OK, I will look into that

We are going to change it so your crash count won’t increase unless you time out for over 30 health in one turn, as opposed to taking your last point of damage from overtime. This way it will only affect algos that are broken, not algos that go slightly over time.


Lots of simulations are nice, but it depends on what you do with them. Less can be more sometimes.


Thanks a lot @Isaac! Works like a charm!

I don’t know a lot about this, but maybe I was previously monitoring computing time and now I am monitoring real time (a stopwatch vs. reading the clock). I don’t know why the difference matters, but it is now very accurate, even in ranked matches (at least for the past 8 hours)

Do you have any suggestions for timing in Python? I thought “import time” and using time.time() was doing pretty well, but either my recent increase in computation time is showing this to be ineffective, or I’m experiencing the server issue much more frequently. I am starting and stopping the timer at the beginning/end of each simulation; perhaps I should just have one universal start at the beginning of my turn?

Here’s more strange behavior. I wasn’t sure where to put this but I suspect it has to do with time so I’ll put it here.

I have here a few replays that are a little bit odd. I should preface this with the disclaimer that I’m not 100% certain my algo’s behavior would have been identical for the two matches, but as far as I can tell it was running the same sections of code in both of them.

The opponent for both of these matches was the exact same algo, Aelgoo80c. Not just the same name, but the same algo.

In game one, his turns are well under time, and he wins the match.
https://terminal.c1games.com/watch/1764058

In game two, several of his turns are slightly over time, and, critically, he makes different decisions as a result of what I assume is an ‘early abort’ of the simulations due to the timeout. These different decisions cause the games to diverge as early as round 2, and lead to his loss.
https://terminal.c1games.com/watch/1766296

So now we have the exact same opponent, playing two copies of what I believe were identically behaving algos of mine under these circumstances, and for one match the server keeps up and he wins, and for the other match the server has a slow day and he loses? Should server variance be having this much of an effect?

This suggests that you could rematch the same algos over and over, and in some cases the winner would change depending on how fast the server could run? I’m not sure if this is how things should be. On the other hand, I suppose it could be written off as one of the “costs” of an advanced simulating algo that goes right down to the timer: if you count on using all 5000 ms to win, sometimes the server is slow and you can’t make the right predictions?

Food for thought.


@KauffK
This is probably not only due to the server; the main cause, in this case, is my bad coding.

I have some huge problems with pointers in my C++ versions of Aelgoo; I hope I have solved all of them with Aelgoo81.
I think the C++ version of Aelgoo is crashing in both of those games as soon as turn 2, and in both cases I’m switching to the Python version, but since that crash is random, the time left for the Python version to handle turn 2 is also random.
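
Roughly, the handoff looks something like this (a heavily simplified sketch, not my actual code; the binary name, budget, and helpers are made up):

import subprocess

def best_move(serialized_state, time_budget_s):
    # Run the C++ solver as a child process, so a crash (even a
    # segfault) only kills the child and the Python side can recover.
    try:
        result = subprocess.run(
            ["./cpp_solver"],  # made-up binary name
            input=serialized_state,
            capture_output=True,
            text=True,
            timeout=time_budget_s,
        )
        if result.returncode == 0:
            return result.stdout
    except subprocess.TimeoutExpired:
        pass
    # Crash or timeout: fall back to the slower pure-Python solver
    # with whatever turn time is left.
    return python_best_move(serialized_state)

def python_best_move(serialized_state):
    return ""  # stand-in for the pure-Python logic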

By the way, even if there were no server time issues and no random crashes, the same algo can have randomized behavior for many reasons.

Ah, that’s interesting. I had no idea there were such shenanigans going on inside your algos :thinking:

I suppose someone could deliberately have their algo take different actions on repeat matches against an identical opponent, but I couldn’t think of any way in which this was an advantage and figured that whatever the cause was, it wasn’t a feature of your algo.

But wow, I was so confident it was some sort of timeout contingency. Never would have guessed you had nested languages, with the C++ copy aborting back to the Python copy if something went wrong. :hushed:

I would love to see how you did that :smiley:
(Does Python handle errors that occur in a C++ module like actual Python errors?)

I’m getting strange behavior vs Aelgoo80c as well. https://terminal.c1games.com/watch/1769215
14 and 16 second turns. I don’t see how this could happen when I’m keeping track of computing time. I’m sure I’ll be able to maintain a top 10 spot, but I’d be very disappointed if I lose a match in the final competition due to timeouts.

Edit: I may have found a temporary solution for anyone else struggling with timing out. I’m now using game_state.my_time to trigger a boolean and reduce the number of simulations I run to what should normally run in 2-3 seconds (and will hopefully stay below 6). game_state.my_time comes in as milliseconds and should be fed from the same game engine clock that assigns time penalties, so hopefully it’s accurate.
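
Roughly like this, as a sketch (the 6000 ms threshold and the simulation counts here are just my guesses at safe values, not anything official):

slow_server = False  # sticky flag, lives for the whole match

def simulations_this_turn(game_state):
    global slow_server
    # game_state.my_time comes in from the engine in milliseconds;
    # if our last turn ran long, throttle for the rest of the game.
    if game_state.my_time > 6000:
        slow_server = True
    return 300 if slow_server else 1000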

Also,

Ah yes, good ol’ manual-memory-manipulation-related crashes. Notoriously dependent on the host machine’s memory, the current state of the stack/heap, which hemisphere of the Earth your code is running in, the current phase of the moon, and whether you’ve said anything that would hurt the pointers’ feelings. Known solutions include memory debuggers, tearing out your computer’s drives and connecting the pointers by hand, and boiling a goat in its mother’s milk while chanting. Remember kids, never say “segmentation fault” out loud three times in a row, or the segfaults will be able to appear before you and enter your drives.

…Incidentally, I may have recently become salty trying to debug C++ memory problems. I do not like C++ memory problems.

And that is why mine is the only algo in the top ~300 written in Rust - guaranteed memory safety. It’s a great language, and just as fast as C++, and I definitely recommend it.


You’re welcome @arnby! I’m glad it worked so well :).


@Ryan_Draves I’m not sure there is a complete library for time measurement like that, since it varies by OS (for example, you can only rely on .clock() on Windows). One possible solution is to create a decorator in Python to monitor different functions using the time module. There is an article about it here. You could have it return the time taken, or maybe do the check inside the decorator. There are several different solutions here; this is just one. However, I personally believe the same fundamental steps apply, it just varies in how you do them. I’m not sure why you’d get inaccuracies though :thinking:.
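
For instance, a minimal version of that decorator idea (a sketch assuming wall-clock time from time.time() is good enough; run_one_simulation is a made-up name):

import time
import functools

def timed(func):
    # Accumulate the total wall-clock seconds spent in func across a turn.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            wrapper.total += time.time() - start
    wrapper.total = 0.0
    return wrapper

@timed
def run_one_simulation():
    pass  # placeholder for the real simulation

# After some calls, run_one_simulation.total holds the cumulative seconds.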


@arnby, in regards to your question about handling errors in a Python C++ extension, you can read the official documentation here. Essentially though, exceptions are objects, just like anything else in Python (even in C++ Python code, an exception is just a PyObject*). If you want the error to show (which would be good practice), you write a normal C++ check, and if it fails you set the Python error (e.g. with PyErr_SetString) and return NULL, which then gets raised in the pure Python as a normal exception. Here is a small example of a check I do when multiplying two vectors together (for my ML algos, lol rip):

// args is the tuple of positional arguments passed in from Python.
Py_ssize_t TupleSize = PyTuple_Size(args);
if(!TupleSize) {
    // Set a TypeError, unless an earlier API call already set an error.
    if(!PyErr_Occurred())
        PyErr_SetString(PyExc_TypeError, "You must supply two vector inputs.");
    // Returning NULL tells the interpreter to raise the pending exception.
    return NULL;
}
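
On the pure Python side, that then surfaces as an ordinary exception. A minimal sketch, where mymodule and vec_mult are hypothetical names for the compiled extension and the function containing the check above:

import mymodule  # hypothetical compiled C++ extension

try:
    result = mymodule.vec_mult()  # deliberately called with no vectors
except TypeError as e:
    print(e)  # prints: You must supply two vector inputs.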

A little bit about memory in Python :)…

So Python tracks its own memory and has its own heap. Thus, undefined behavior will occur if you mix memory created from C++ (malloc()) with memory created from Python (PyMem_RawMalloc()). If you create an object on the C++ heap and delete it using Python’s memory deallocation, this will definitely cause undefined behavior. This means it is perfectly safe to use new in C++ so long as you don’t manage that memory using Python’s tools (objects). I personally just create a function to convert whatever C++ heap object I am returning into a PyObject* and return that. This may seem inefficient, but it is actually extremely fast, nothing to worry about (unless you are transferring a truly ridiculous amount of information, we are talking like a microsecond or less).


Regarding debugging Python C++ extensions: what I discovered here has personally been invaluable for tracking down memory bugs and seg faults. I strongly, strongly recommend using a Linux build because then you can use the gdb debugger. I’m not aware (haven’t looked) of a simple Windows counterpart for what I do.

Essentially, I just run algo_strategy.py without using the engine and supply the input manually. This has several advantages. Specifically, you can input whatever you want and you can stop the algo for as long as you want. The clear disadvantage is that entering engine strings by hand is an absolute nightmare. However, we don’t have to :). Simply run your algo and output all the strings from the engine (nothing else) to a file. Then you just copy and paste the commands the engine generated. If you have a deterministic algo this will work every time. If not, you usually just have to run it once or twice before the seg fault occurs. Then gdb tells you where that pesky error is.
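
For the saving step, something like this works (a sketch; the filename and handle_command are placeholders, not starter-kit names):

import sys

def handle_command(line):
    pass  # stand-in for your normal per-command processing

# Tee every line the engine sends into a file, so the whole game can
# later be replayed by hand (e.g. inside gdb).
with open("engine_commands.txt", "a") as log:
    for line in sys.stdin:
        log.write(line)
        handle_command(line)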

Step by step:

  1. Run your algo normally, saving the commands from the engine to a .txt file (as in the sketch above).
  2. Run gdb, specifically (assuming you are in your algo’s main folder):

gdb python
>…
>run algo_strategy.py

  3. Copy and paste all the commands (separated by \n) from the generated text file

gdb will then run and tell you where the seg fault was. If you still can’t see it from that, just use the command where (in the same gdb session, while still debugging).

Without this I definitely would have just given up out of frustration when working on my simulator :). Hope it helps some people :).


@Thorn0906 I haven’t had a chance to work with Rust yet but have been wanting to for a while, maybe someday soon…


Thanks @Isaac for these useful tips! :grin:

I can say that you really saved me there: for my current top algo, which still uses the old way, 3 out of the last 5 losses are due to timeouts (22, 22 and 19 damage). So hopefully the new version will do better! :smile:

The algo with the 50k simulations per turn has been keeping its first place on the leaderboard for a while now, and I can’t say I’m surprised.

The fact that the demux series is not on the leaderboard definitely helps a lot

And what @Ryan_Draves said is true: it is useless to have lots of simulations if you don’t know how to use them properly. Many times my algos shoot themselves in the foot because they try to be too smart.
And the fact that the vast majority of algos don’t use that many simulations, and that mine lose to some algos that use almost no computation time, shows that having more simulations doesn’t necessarily make you better


I get that lots of simulations are useless if you don’t know how to use them, but you can say the same thing about every powerful tool, and it doesn’t change the fact that you have a very, very powerful tool. If you are good enough a programmer to run 50k simulations per turn, I’m sure you won’t struggle too much to find a good strategy for using those simulations (well, you are 1st on the leaderboard, and in contrast to other, more static algos, it is much harder to find weaknesses in algos like yours). Look, even if everyone could think of a masterpiece structure like kkroep did (as hard as that is), it would still eventually be countered by another structure or strategy. But with lots and lots of simulations you have a wide range of possibilities for a good strategy, and it doesn’t require you to invest as much thinking in your defence and offence details as I’m sure kkroep has, while still giving you great potential for success.

You did win against copies of demux, for the record (I do know that there are still big differences). It will be interesting to see that matchup.

That is definitely true haha, because I’m lazy I don’t want to do anything like telling my algo how to behave: the little guy has to figure it out on his own

There are big differences indeed; in season 1 my top algos lost to most of the demux series. But the copies are handled pretty well
