I followed the upgrade instructions; still nothing
Not manually. I assumed that if I deleted the image and then ran docker-compose up, and I can see the pull working, it should pull the correct one
Is there a way to do so without touching the config? directly through the Task object?
How can I change the version of the Cleanup Service?
So if I'm collecting from the middle ones, shouldn't the callback be attached to them?
Hi guys, just updated the issue - seems like the new release did fix the color scale, but I noticed some data points are missing (the plot is missing data!)
see my comment on the issue
https://github.com/allegroai/clearml/issues/373#issuecomment-894756446
I mean the code in whatever form it is - I'm working with git specifically, but if I have diffs I'd like to see the code with the diffs applied
Eventually I think it should display the contents of the executed script in the most straightforward manner, regardless of version control
but I can't seem to run docker-compose down
you can use pgrep -af "trains-agent"
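To make that concrete, here is a small sketch of checking for leftover agent processes (the fallback message is my addition, not part of the original suggestion):

```shell
# List any running trains-agent processes with their full command line
# (-a prints the command line, -f matches the pattern against it);
# print a note if none are found.
pgrep -af "trains-agent" || echo "no trains-agent processes found"
```

If this prints nothing but the fallback message, no agent is running under that name.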
To fix it, I excluded this var entirely from the docker-compose
Now I remind you: with exactly the same credentials, the autoscaler task could launch instances before
Do I need to copy this AWS scaler task into every project I want autoscaling on? What does it mean to enqueue the AWS scaler?
🤦 🤔 💁‍♂️
I mean, I barely have 20 experiments
pgrep -af trains
shows that there is nothing running with that name
That is not very informative
Especially from the standpoint of a team leader or another kind of supervisor (or anyone viewing the experiment who is not the code author): when looking at an experiment, you want to see the actual code
# Python 3.8.10 (default, Jun 2 2021, 10:49:15) [GCC 9.4.0]
clearml == 1.0.5
hyperopt == 0.2.5
matplotlib == 3.4.3
numpy == 1.21.2
pandas == 1.3.2
plotly == 5.3.0
python_dateutil == 2.8.2
scikit_learn == 0.24.2
statsmodels == 0.12.2
tqdm == 4.62.2
Detailed import analysis
**************************
IMPORT PACKAGE clearml
tasks/data_projection.py: 9
tasks/hp_optimization.py: 6
tasks/hpo_n_best_evaluation.py: 6
tasks/pipelines/monthly_predictions.py: 4
IMPORT PACKAGE hypero...
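For reference, a per-file import count like the one above can be produced with the stdlib `ast` module. This is purely a sketch of how such a report could be generated, not the tool that actually produced it:

```python
# Count how many import statements in a source file reference a given
# top-level package, by walking the parsed AST.
import ast

def count_package_imports(source: str, package: str) -> int:
    count = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            # "import clearml" or "import clearml.backend_api as api"
            count += sum(1 for a in node.names if a.name.split(".")[0] == package)
        elif isinstance(node, ast.ImportFrom):
            # "from clearml import Task"
            if node.module and node.module.split(".")[0] == package:
                count += 1
    return count

src = "import clearml\nfrom clearml import Task\nimport numpy\n"
print(count_package_imports(src, "clearml"))  # → 2
```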
I only have like 40 tasks including the example ones
AgitatedDove14 I really don't know how this is possible... I tried upgrading the server, tried whatever I could
As for a small toy example to reproduce: I just don't have time for that, but I'll paste the callback I'm using along with this explanation. This is the overall logic, so you can replicate it and use my callback
From the pipeline task, launch some sub-tasks and set their post_execute_callback to the .collect_description_tables method from my callback class (attached below). Run t...
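The wiring described above can be sketched in plain Python. This is only an illustration of the collector/callback pattern — the class and field names are assumptions for the sketch, not the actual ClearML API or the author's attached code:

```python
# Sketch: a collector object whose collect_description_tables method is
# attached as the post-execute callback of each sub-task. When a
# sub-task finishes, the callback stashes that task's result table.

class DescriptionTableCollector:
    def __init__(self):
        self.tables = {}

    def collect_description_tables(self, pipeline, node):
        # Called after a sub-task finishes; record whatever the task
        # reported under the node's name. "node" here is a stand-in for
        # whatever object the pipeline hands the callback.
        self.tables[node["name"]] = node.get("description_table")

collector = DescriptionTableCollector()

# Simulated sub-tasks launched from the pipeline task; in the real
# setup the pipeline framework would invoke the callback itself.
sub_tasks = [
    {"name": "projection", "description_table": [["col", "mean"], ["x", 0.4]]},
    {"name": "hpo", "description_table": [["col", "mean"], ["y", 1.2]]},
]
for node in sub_tasks:
    collector.collect_description_tables(pipeline=None, node=node)

print(sorted(collector.tables))  # → ['hpo', 'projection']
```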
Actually, I removed the key pair; as you said, it isn't a must in the newer versions