right, callbacks.py is a file inside the repo, but it is not part of the package
we'll create a minimal working example :-)
or I can make comparisons inside some projects but not others
we are developing a model and I've built a webapp with Streamlit that lets you select the task, and you can see the confusion matrices, splits, data, and predictions on the train/val data (all saved in the task)... and also a model predict function for an image you upload
ok, so ClearML doesn't add all the imported packages needed to run the task to the Installed Packages, only the ones in the main script?
Ok, I think I figured it out. We started with a main script that imported sklearn, and then we moved that function outside the main script and imported it instead.
So when we cloned the first time we had sklearn in the Installed Packages, and therefore our agent was able to run. The (now) cached clearml-venv had sklearn installed, so when it ran the second experiment without the sklearn import in the main script, and therefore without it in the Installed Packages, it didn't matter, b...
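To make the detection behaviour concrete, here is a stdlib-only sketch of why analysing only the main script's imports misses packages pulled in by helper modules. The `top_level_imports` helper is a simplified stand-in for what ClearML's analysis does, not its actual implementation; the file contents mirror the snippets above:

```python
import ast

# Contents mirroring the two files discussed above
MAIN = "from callbacks import function_plot_conf_matrix\n"
CALLBACKS = "from sklearn.metrics import confusion_matrix\n"

def top_level_imports(source):
    """Collect the root module names imported in a source string."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module.split(".")[0])
    return names

print(top_level_imports(MAIN))       # {'callbacks'} -- sklearn is invisible here
print(top_level_imports(CALLBACKS))  # {'sklearn'}
```

An analyser that only looks at the main script sees `callbacks` (a local file, not a pip package) and never discovers the sklearn dependency hiding one level down.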
yes, the code is inside a git repository. In the main script:
from callbacks import function_plot_conf_matrix
and inside callbacks.py
of course at the beginning we have from sklearn.metrics import confusion_matrix
or something like that
I see, I can confirm that these packages (except for google_cloud_storage) are imported directly in the main script
so what we should do is turn pip freeze on in the clearml.conf file?
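For reference, a sketch of the clearml.conf switch being discussed, assuming it is the `sdk.development.detect_with_pip_freeze` setting (check your own clearml.conf for the exact section):

```
sdk {
    development {
        # Store the full `pip freeze` of the local environment as the
        # task's Installed Packages, instead of only the packages found
        # by analysing the script's imports.
        detect_with_pip_freeze: true
    }
}
```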
I should remark that it's been working OK nonstop for 5 months already... but yesterday and today I'm experiencing these crashes
Ok, tried the following four things:
(fail = sklearn not listed in Installed Packages)
- no __init__.py file in the module_a folder, not a git repo: fail
- no __init__.py file in the module_a folder, git repo: fail
- with __init__.py file in the module_a folder, not a git repo: fail
- with __init__.py file in the module_a folder, with a git repo: OK!
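The one layout that worked (package marker plus git repo) can be reproduced like this; the file names are illustrative, following the callbacks.py example above:

```shell
# Recreate case 4: module_a as a proper package inside a git repository,
# with the sklearn import living in a helper module.
mkdir -p project/module_a
git init -q project                        # the working case needs a git repo
touch project/module_a/__init__.py         # package marker
printf 'from sklearn.metrics import confusion_matrix\n' \
    > project/module_a/callbacks.py
printf 'from module_a.callbacks import confusion_matrix\n' \
    > project/main.py
```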
Just want to know if it would be possible when you have your ClearML server inside your GCP environment and you want to launch training jobs using Vertex AI. Would the training script be able to register to the server when there is no public IP? I guess it's more related to networking inside GCP, but I just wanted to know if anyone has tried it.
but the reason I said the comparison could be an issue is because I'm not able to do comparisons of experiments
oh, I meant now... so after the reboot everything goes back to "normal", except that I can't make the comparisons
I can't access the WebApp or SSH into the server
Awesome! I'll let you know if it works now
how quick is "very quickly"? we are talking about maybe 30 minutes to reach 100 epochs
it's very odd for me too, I have another project running trainings longer than 100 epochs and I don't have this issue
great! thank you for such a quick response!
I need to wait 100 epochs 😅
the issue is that the confusion matrix showing for epoch 101 is in fact the one for epoch 1.
The images are stored in the default files server
I'm afraid I'm still having the same issue...