
sorry, in my case it's the default mode
I'm afraid I'm still having the same issue...
oh wait, I was using clearml == 0.17.5 and I also had this issue
it's very odd for me too, I have another project running trainings longer than 100 epochs and I don't have this issue
I'm plotting the confusion matrices the regular way: plot, then read the figure from a buffer to create the tensor, and save the tensor
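For reference, a minimal sketch of that pattern (the function name and figure styling here are illustrative, not the actual code), assuming TensorFlow 2.x, matplotlib, and scikit-learn:

```python
import io

import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.metrics import confusion_matrix


def confusion_matrix_tensor(y_true, y_pred, class_names):
    # Plot the confusion matrix "the regular way" with matplotlib.
    cm = confusion_matrix(y_true, y_pred)
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.imshow(cm, cmap="Blues")
    ax.set_xticks(range(len(class_names)))
    ax.set_xticklabels(class_names, rotation=45)
    ax.set_yticks(range(len(class_names)))
    ax.set_yticklabels(class_names)

    # Read the rendered figure back from an in-memory buffer...
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)

    # ...and decode it into a batched image tensor for tf.summary.image.
    image = tf.image.decode_png(buf.getvalue(), channels=4)
    return tf.expand_dims(image, 0)
```

Logging the result with `tf.summary.image("confusion_matrix", image, step=epoch)` inside a summary-writer context is then picked up by ClearML's TensorBoard bindings automatically.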
how quick is "very quickly"? we are talking about maybe 30 minutes to reach 100 epochs
I need to wait 100 epochs 😅
I don't understand though... why doesn't this happen on my other experiments?
oh I meant now... so after the reboot everything goes back to "normal"... except that I can't make the comparisons
Just want to know if it would be possible, when you have your ClearML server inside your GCP environment, to launch training jobs using Vertex AI. Would the training script be able to register with the server when there is no public IP? I guess it's more related to networking inside GCP, but just wanted to know if anyone has tried it.
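In case it helps anyone trying this, a minimal sketch (the internal IP and names are hypothetical): if the Vertex AI job runs inside, or is peered with, the same VPC as the ClearML server, pointing the SDK at the server's internal address should be enough; no public IP is needed as long as the job can route to it:

```python
import os

# Hypothetical internal addresses of the ClearML server; reachable only
# if the Vertex AI job shares (or is peered with) the server's VPC.
os.environ["CLEARML_API_HOST"] = "http://10.128.0.5:8008"
os.environ["CLEARML_WEB_HOST"] = "http://10.128.0.5:8080"
os.environ["CLEARML_FILES_HOST"] = "http://10.128.0.5:8081"
# Credentials can also be passed via CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY.

from clearml import Task

# If Task.init() succeeds, the training script was able to register with the server.
task = Task.init(project_name="vertex-test", task_name="connectivity-check")
```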
Worked perfectly, thanks!
Awesome! I'll let you know if it works now
but the reason I said the comparison could be an issue is that I'm not able to do comparisons of experiments
So what changed?
We changed other bits of code, but not that one..
But maybe we are focusing on the wrong thing; the question now is why ClearML is only detecting these packages (running a different experiment than Diego):
Pillow == 8.0.1
clearml == 0.17.5
google_cloud_storage == 1.40.0
joblib == 0.17.0
numpy == 1.19.5
pandas == 1.3.1
seaborn == 0.11.0
tensorflow_gpu == 2.3.1
tqdm == 4.54.1
yes, the code is inside a git repository. In the main script: from callbacks import function_plot_conf_matrix
and inside callbacks.py
of course at the beginning we have from sklearn.metrics import confusion_matrix
or something like that
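So, roughly this layout (paraphrasing, since only the import lines were shown):

```python
# main.py -- the entry script that ClearML analyzes for imports
from callbacks import function_plot_conf_matrix
# note: sklearn is never imported here directly


# callbacks.py -- a file inside the repo, not an installed package
from sklearn.metrics import confusion_matrix

def function_plot_conf_matrix(y_true, y_pred, class_names):
    # Builds the confusion-matrix figure; body omitted here.
    cm = confusion_matrix(y_true, y_pred)
    ...
```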
I see, I can confirm that these packages (except for google_cloud_storage) are imported directly in the main script
right, callbacks.py is a file inside the repo, but is not part of the package
so what we should do is turn pip freeze on in the clearml.conf file?
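For reference, I believe that's the `detect_with_pip_freeze` switch in the `sdk.development` section of clearml.conf (worth double-checking against your ClearML version):

```
sdk {
    development {
        # When true, freeze the whole environment (pip freeze) instead of
        # analyzing imports to build the Installed Packages list.
        detect_with_pip_freeze: true
    }
}
```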
Ok, I think I figured it out. We started with a main script that imported sklearn, and then we moved that function out of the main script and imported it instead.
So when we cloned the first time we had sklearn in the Installed Packages, and therefore our agent was able to run. The (now) cached clearml-venv had sklearn installed, so when it ran the second experiment, without the sklearn import in the main script and therefore without it in the Installed Packages, it didn't matter, because the cached venv still had sklearn available.
ok, so ClearML doesn't add all the imported packages needed to run the task to the Installed Packages, only the ones in the main script?
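If that's the behavior, one hedged workaround (project/task names are placeholders) is to register the missing package explicitly before Task.init():

```python
from clearml import Task

# Explicitly add the package that import analysis misses; must be called
# before Task.init(). A version can optionally be pinned as a second argument.
Task.add_requirements("scikit-learn")
task = Task.init(project_name="my-project", task_name="training")
```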