I don't think it's related to the region.
I do have the log of the autoscaler.
We also have an autoscaler that was implemented from scratch before ClearML had the autoscaler application.
I wouldn't want to share the autoscaler log with this channel.
My task runs just fine.
But there's no GPU
(when the task requests a GPU, it crashes).
Looking at the VM's properties in the GCP UI, it seems no GPU was defined for the VM.
It's a private image (based off of this image).
Welcome to the Google Deep Learning VM
Based on: Debian GNU/Linux 10 (buster) (GNU/Linux 4.19.0-21-cloud-amd64 x86_64)
I am leaving the docker line empty, so I assume there's no docker spun up for my agent.
The thing is, my optimizer works a bit differently:
my "optimized task" is actually a task that receives a specific set of
hyperparameters and then enqueues more tasks (each one on a different object).
Is there a way to "clear" a queue from Python?
Something like a "purge" method.
I can only watch the current length of the queue; how do I remove all tasks, or specific tasks?
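As far as I know there is no single "purge" call in the ClearML SDK. A sketch of the selection logic below; the queue is modeled as a plain list of task dicts so it's runnable, and the comments note the (assumed, unverified) APIClient calls that would do the real work:

```python
# Sketch of "purging" a queue: decide which queued tasks to remove.
# With the real ClearML backend (an assumption, not verified against the
# SDK docs) you would fetch the queue via the APIClient, e.g.
#   from clearml.backend_api.session.client import APIClient
#   client = APIClient()
#   queue = client.queues.get_all(name="my_queue")[0]
# and then remove each unwanted entry via the queues.remove_task endpoint.

def tasks_to_purge(entries, predicate=None):
    """Return the ids of queued tasks to remove.

    With no predicate, everything is purged (a full "clear");
    with a predicate, only matching tasks are purged.
    """
    return [e["id"] for e in entries if predicate is None or predicate(e)]

# Illustrative queue contents (names and ids are made up):
queue_entries = [
    {"id": "a1", "name": "hpo_trial_1"},
    {"id": "b2", "name": "preprocess"},
    {"id": "c3", "name": "hpo_trial_2"},
]

# Clear the whole queue:
all_ids = tasks_to_purge(queue_entries)

# Remove only specific tasks, e.g. the HPO trials:
hpo_ids = tasks_to_purge(queue_entries, lambda e: e["name"].startswith("hpo_"))
```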
I took it offline with Alon Shomrat from ClearML.
It seems like the problem is solved (at least for now).
It's hard for me to tell why, and for him as well.
TimelyPenguin76 Maybe you were able to find the problem?
I don't remember what the solution was.
It might have just been updating my ClearML version...
I am not a staff member, but it seems like something quite trivial that wouldn't take much effort,
if you can avoid conda and don't need the C++ dependencies that conda takes care of
(and since you can convert to pip format, you probably can).
I should note that it works when I run the container locally (with no external env variables).
I am using the server provided by GitHub Actions.
I messaged with Alon from your team and he will upload an update to the old repository.
Thanks a lot!
GrumpySeaurchin29 Thank you!
I do have the configuration vault feature.
I managed to make it work.
Seems like I had been using it wrong.
In order to make use of the multiple credentials, one must use the ClearML SDK, obviously.
So I just started using StorageManager, and it works.
Managed to get the credentials attached to the configuration when the task is spun up,
but boto3 in the script still uses the "default" access keys instead of the newly added keys.
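For what it's worth, a boto3 client created directly in the script only reads the standard AWS credential chain (env vars, ~/.aws), not clearml.conf or the vault, which would explain the "default" keys. A minimal sketch of the sdk.aws.s3 section that ClearML's StorageManager does read (bucket name and key values are placeholders):

```
sdk {
    aws {
        s3 {
            # default credentials, used when no bucket-specific entry matches
            key: "<default-access-key>"
            secret: "<default-secret-key>"

            credentials: [
                {
                    # per-bucket credentials override the defaults above
                    bucket: "my-other-bucket"
                    key: "<bucket-access-key>"
                    secret: "<bucket-secret-key>"
                }
            ]
        }
    }
}
```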
Using an autoscaler service (on a 24/7 EC2 machine)
that spins up EC2 workers (with an AMI I saved prior to activation).
Hope that helps
I tried to delete ~/clearml.conf (apparently it already existed)
and rerun clearml-init.
Nope, it gives me errors,
just like the person who replied in the thread I linked in my previous reply here.
Still no good; it managed to apply, but only with errors.
OK, seems like the problem is solved.
Those uncommitted changes were already applied to the local branch, but the
git apply error wasn't very informative.
Adding the flags he suggested also didn't help.
It's generated automatically by the HPO script,
so it might be added inside the report completion section.
my_optimizer = an_optimizer.get_optimizer()
plot_optimization_history(my_optimizer._study)
Since my_optimizer._study is an optuna object,
you can pickle the above object (pickle the study).
But you can't actually pickle the optimizer itself, as you said.
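The pickling itself is a plain stdlib round-trip. A sketch with a stand-in object, since with optuna installed the same calls would apply to my_optimizer._study (note _study is an internal attribute, so this is an assumption that may break between versions):

```python
import pickle

# Stand-in for the optuna Study object; in the real HPO script this
# would be my_optimizer._study (internal attribute -- an assumption,
# not a documented API).
study = {"best_value": 0.42, "trials": [{"lr": 1e-3}, {"lr": 3e-4}]}

# Serialize the study so plots can be regenerated later
# (write the bytes to a file to persist across runs).
blob = pickle.dumps(study)

# ...later, load it back and plot again:
restored = pickle.loads(blob)
```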
That helps a lot!
Although I didn't understand why you mentioned
torch in my case.
Since I don't use it directly, I guess somewhere along the way multiprocessing does get activated (in the HPO).
I would guess it relates to parallelization of Tasks execution of the
no, it is AWS EC2
As a matter of fact, all my tasks are in a "running" state, although some of them have failed.