
As a matter of fact, all my tasks are in the "running" state, although some of them have failed.
Using an autoscaler service (on a 24/7 EC2 machine)
that triggers EC2 workers (with an AMI I saved prior to activation).
Hope that helps
The folder is rather small,
3.5 MB.
I am able to see it in the artifacts,
but I can't download it (the address is wrong).
You are right Idan,
I consulted our private ClearML channel.
You cannot insert these environment variables anywhere else,
only in the init script.
Here is the full quote:
Important to note: I am running my instances on GCP, but the container is on ECR (AWS).
CostlyOstrich36
Thank you,
Solved,
I messaged with Alon from your team and he will upload an update to the old repository.
I see,
is there a possibility to "clear" a queue from Python?
A "purge" method for clearml.backend_api.session.client.Queue?
I can only watch the current length of the queue; how do I remove all tasks / specific tasks?
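A minimal sketch of one way this could be done with the APIClient, assuming a queue named "my_queue" (placeholder); there is no dedicated "purge" method as far as I know:

from clearml.backend_api.session.client import APIClient

# Sketch: remove every pending task from the queue, one entry at a time.
client = APIClient()
queue_id = client.queues.get_all(name="my_queue")[0].id   # "my_queue" is a placeholder
for entry in client.queues.get_by_id(queue=queue_id).entries:
    client.queues.remove_task(queue=queue_id, task=entry.task)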
Thanks,
solved.
I deleted ~/clearml.conf (it already existed)
and reran clearml-init.
That helps a lot!
Thanks Martin.
Although I didn't understand why you mentioned torch in my case?
Since I don't use it directly, I guess somewhere along the way multiprocessing does get activated (in HPO).
I would guess it relates to the parallelized execution of Tasks by the HyperParameterOptimizer class?
My task runs just fine.
But no GPU.
(When it demands a GPU it crashes.)
Looking at the VM details in the GCP UI, it seems no GPU was defined for the VM.
I don't think it's related to the region.
I do have the log of the autoscaler.
We also have an autoscaler that was implemented from scratch before ClearML had the autoscaler application.
I wouldn't want to share the autoscaler log in this channel.
The environment setting you added to your vault is only applied inside the instance when the agent starts running there, not as part of the command that starts the instance.
The most common DevOps practice for keeping this kind of variable in the init script without exposing it to the naked eye is to add something like
export MY_ENV_VAR=$(echo '<base64-encoded secret>' | base64 --decode)
to the init script
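For completeness, a quick way to produce the encoded value to paste in place of '<base64-encoded secret>' (the secret shown is just a placeholder):

import base64
# Encode the secret once and copy the printed value into the init script.
print(base64.b64encode(b"my-secret-value").decode())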
Update:
Managed to get the credentials attached to the configuration when the task is spun up,
although boto3 in the script still uses the "default" access keys instead of the newly added keys.
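Presumably because boto3 resolves credentials through its own chain (env vars, ~/.aws/credentials, instance profile) and does not read the ClearML configuration. A minimal sketch of passing the new keys to boto3 explicitly (the key values are placeholders):

import boto3

# Pass the new keys explicitly so boto3 does not fall back to its default chain.
session = boto3.Session(
    aws_access_key_id="<new-access-key-id>",           # placeholder
    aws_secret_access_key="<new-secret-access-key>",   # placeholder
)
s3 = session.client("s3")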
I am not a staff member, but it seems like something quite trivial that wouldn't take much effort,
if you can avoid conda and don't need the C++ dependencies that conda takes care of
(and since you can convert to pip format, you probably can).
Nope. It gives me errors.
Just like the guy who replied in the thread I linked in my previous reply here.
I should note that it works when I run the container locally (with no external env variables).
The thing is this.
My optimizer works a bit differently:
my "optimized task" is actually a task that gets specific hyperparameters
and then enqueues more tasks (each one on a different object).
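Roughly this pattern, as a minimal sketch (project, script, queue, and object names are placeholders):

from clearml import Task

# The "optimized" task receives hyperparameters from the optimizer,
# then spawns and enqueues one child task per object.
parent = Task.init(project_name="hpo-demo", task_name="dispatcher")
params = parent.get_parameters()

for obj_id in ["obj-a", "obj-b"]:           # placeholder list of objects
    child = Task.create(
        project_name="hpo-demo",
        task_name="train-" + obj_id,
        script="train.py",                  # placeholder training script
    )
    child.set_parameters(params)
    Task.enqueue(child, queue_name="workers")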
I took it offline with Alon Shomrat from ClearML.
It seems like the problem is solved (at least for now).
It's hard for me to tell why, and it's hard for him as well.
TimelyPenguin76 Maybe you were able to find the problem?
I don't remember what the solution was.
I might have just updated my ClearML version...
I got the same issue last night.
I do have the configuration vault feature.
I managed to make it work.
Seems like I had been using it wrong.
In order to use multiple credentials one must go through the ClearML SDK, obviously.
So I just started using StorageManager
and it works.
Thanks.
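In case it helps someone later, a minimal sketch of the StorageManager usage (the bucket path is a placeholder; the credentials come from the sdk.aws.s3 section of clearml.conf):

from clearml import StorageManager

# Fetch a local copy of the remote object; credentials are read from clearml.conf,
# not from boto3's default credential chain.
local_path = StorageManager.get_local_copy(remote_url="s3://my-bucket/path/to/file")
print(local_path)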
Still no good, I only managed to apply it with errors.