Requested Python Libraries

Hey everyone, could you list the Python libraries you would like included in our workers for your Python algos? I can’t guarantee when we will get to adding them, but it would be good to have a list.

Here are some I have already heard requested:

NumPy
Keras
PyTorch
SciPy
TensorFlow
pandas

5 Likes

So we are planning to launch this tonight. Here are the exact versions of the libraries we will provide:
torch==1.0.1.post2
numpy==1.16.2
Keras==2.3.0
scipy==1.3.0
tensorflow==1.14.0
pandas==0.25.1

Let us know if you want any other libraries.

There is a chance the deployment has bugs, in which case we will roll it back, so we cannot guarantee it will be ready for this weekend’s competitions.

4 Likes

This is now live. Let us know if there are issues.

1 Like
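To confirm the deployment matches the pinned list above, a quick sanity check like this can be dropped into an algo (it just logs whatever versions are actually installed):

# Log the versions actually installed on the worker and
# compare them against the pinned list above.
import numpy, scipy, pandas, torch, keras, tensorflow

for mod in (numpy, scipy, pandas, torch, keras, tensorflow):
    print(mod.__name__, mod.__version__)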

Wow, this could be a game-changer. Now I’m even more sad that my senior-year semesters have started; I would love to be able to burn hours on end each day trying to apply trained models to Terminal 😍

CVXPY and CVXOPT could also be useful.

As Keras is now deprecated, would it be possible to update to TensorFlow 2.0 (and use tf.keras)?

1 Like
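For anyone migrating, the switch is mostly a matter of swapping the import; a minimal sketch of a model under TF 2.0’s bundled Keras (the layer sizes here are arbitrary):

# Old standalone Keras:
#   from keras.models import Sequential
#   from keras.layers import Dense
# TF 2.0's bundled Keras:
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(8,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")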

So, change the requirements to:

pip3 install torch==1.0.1.post2
pip3 install numpy==1.16.2
pip3 install Keras==2.3.0
pip3 install scipy==1.3.0
pip3 install tensorflow==2.0.0b1
pip3 install pandas==0.25.1

Any other requests?

2 Likes

If it’s not too much trouble, I would appreciate a version upgrade for torch. I think the current release is 1.3.1.

Okay, the servers should now have the following Python library versions:
pip3 install torch==1.4.0
pip3 install numpy==1.16.2
pip3 install Keras==2.3.0
pip3 install scipy==1.3.0
pip3 install tensorflow==2.0.0b1
pip3 install pandas==0.25.1

1 Like

Just confirming that this is still accurate as of 11/1/2020.

Does this pertain to Terminal 2020 Oct 11?

1 Like

Yup! That’s what Ryan means there!

So in Python we have torch + numpy + …, but in Rust / Java we don’t? (Sadly, you can’t just get torch from a Rust crate.)
Also, is it possible to make a poly-language submission? For example, a Rust solution that calls into Python via https://pyo3.rs/master/python_from_rust.html ; I would guess this would be trivial to support, since it just needs pyo3.
It’s really cool to support torch and friends, given that it’s an “AI” competition, but can we also know what we are running on? Is there a GPU (again, that would be very fitting for an AI competition), or at least a SIMD-optimized CPU build? If we have no guarantees here, not even that matches run on the same setup as the playground, code that always used to run might suddenly stop working on some runs, because the runtime config is opaque and inconsistent. That would be a bit unfair.

When I’ve done stuff with torch, I’ve gone with a Python submission that includes my Rust code compiled as a Python library. It works well enough for my use case since I have no need to call torch directly from Rust; instead, I glue together my Rust code and torch with a little Python glue code.
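For example, the glue can be as small as this sketch (my_rust_algo and its extract_features function are hypothetical names for a compiled extension bundled with the upload):

import torch
import my_rust_algo  # hypothetical pyo3/maturin-built extension shipped with the algo

model = torch.jit.load("model.pt")  # pre-trained model bundled in the upload
model.eval()

def choose_action(game_state):
    # The Rust side does the heavy game-state processing and
    # returns a flat list of floats describing the board.
    features = my_rust_algo.extract_features(game_state)
    with torch.no_grad():
        scores = model(torch.tensor(features).unsqueeze(0))
    return int(scores.argmax())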

There’s no GPU; what you’re getting here is one CPU core. I’m sure PyTorch includes plenty of SIMD optimizations, but you’re still not going to be able to run a huge amount of NN computations.

2 Likes
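If you want local benchmarks to resemble that single-core environment, one rough approximation (my suggestion, not an official recommendation) is to pin torch to one thread:

import torch

# Approximate the single-vCPU match environment when testing locally.
torch.set_num_threads(1)          # intra-op parallelism
torch.set_num_interop_threads(1)  # inter-op parallelism; set before first use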

Hey @0xcc, welcome to Terminal.

Important note for future humans
This information is unlikely to be accurate if you are reading it after Summer 2021; please make a new post requesting more recent info. I will consider building a more permanent way to provide transparency into machine setup if there is demand for it at that point.

Terminal setup
The overall concern being raised is the lack of transparency into our machine setup, so I’ll address this holistically first and then address the specific points you raised.

We are a small team, and we run millions of matches each year between thousands of gigabytes’ worth of algos, all executing arbitrary user code. To facilitate this, we use a fairly complicated cloud compute setup that we adjust a few times a year.

An algo running in the spring competition season can expect to run with 1 GCP vCPU and 1 GB of memory. You will be able to allocate more, but you might start running into issues and performance hits above 1 GB.
“1 vCPU” is basically meaningless to most people. Speed should be consistent across ranked and tournament games, so you can use your ranked matches as a benchmark for your algo’s speed.

Misc details: The OS you will be running on is currently ubuntu:18.04. Max size for an algo is currently 50MB, though most algos are much smaller.
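Given those limits, it can be worth logging per-turn time and peak memory from inside your algo; a minimal sketch using only the standard library (resource is available on the Ubuntu workers):

import resource
import time

def log_turn_budget(turn_fn, *args):
    # Wrap the per-turn logic to watch the 1 vCPU / 1 GB budget.
    start = time.perf_counter()
    result = turn_fn(*args)
    elapsed = time.perf_counter() - start
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss  # kilobytes on Linux
    print(f"turn took {elapsed:.3f}s, peak memory {peak_kb / 1024:.1f} MB")
    return result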

Addressing specific concerns

No and no. We run millions of matches each season, and a change like this would impact very few users while significantly increasing operational costs.

Games played on the playground use a continuous connection and are handled in a completely different way; playground performance is not comparable to online matches.

Random matchmade games run on the exact same setup as tournament matches, so you can expect consistency between these kinds of matches.

This has not been a major problem in the past, though I could imagine it happening to algos that use very high amounts of memory or make system-level calls.

Summary
Hopefully this was helpful. I get why it’s annoying that we are not more transparent about this stuff, but I hope it’s understandable. At the end of the day, the setup we have should “just work” for most people’s strategies.

Working around constraints is an important skill for developers, and we are content with the setup we are currently running. We are always open to more feedback, though; let us know your thoughts.

2 Likes

Just interested: would it be possible to make tflearn available for Python algos?

At a glance, it seems like tflearn is used for training ML algos. Algos can’t be trained on our servers. The lifecycle for an uploaded algo is:

  • An algo is uploaded, compiled, and saved to our storage system
  • When a match is made, the compiled algo is retrieved from storage by the match-running server
  • The match is run with this compiled algo

The same compiled algo will be used for every match and can never be changed. And training during a match is beyond impractical lol.
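In practice that means any learning happens offline: train locally, ship the resulting weights inside the upload, and only load them at match time. A minimal sketch (the weight files and layer sizes are hypothetical):

import os
import numpy as np

# Weights are produced by offline training and shipped inside the upload;
# the match server only ever loads them, it never trains.
HERE = os.path.dirname(__file__)
W1 = np.load(os.path.join(HERE, "w1.npy"))
W2 = np.load(os.path.join(HERE, "w2.npy"))

def policy(features):
    # Tiny hand-rolled forward pass; no training code runs during a match.
    hidden = np.maximum(W1 @ features, 0.0)  # ReLU
    return W2 @ hidden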

I may be off on this; if tflearn is useful for something else, we can consider it.

Can we import ctypes in our Python code?

And ‘import os’.

@0xcc
These modules are part of the Python standard library and should work automatically on Terminal. Did you try them and run into errors when running your code on our servers (i.e., your algo crashing during random matchmade games)?

I can investigate this if you confirm that there seems to be an issue.
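For reference, a trivial check that can be run in a match to confirm both imports (both ship with CPython itself, so no extra install is needed):

import ctypes
import os

print("ctypes OK:", ctypes.__name__)
print("os OK:", os.name)  # expect 'posix' on the Ubuntu workers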