
@<1537605940121964544:profile|EnthusiasticShrimp49> , now that I have run the task on remote, can I copy the artefacts/files it creates back to my local fs?
Let's say the artefacts are something like artefacts = [checkpoint.pth, dvc.lock, some_other_dynamically_generated_file]
Hmmm, my only issue there is that not all of my "artefacts" are clearml artefacts.
The files I need are models and other locally modified files that get generated by the clearml task on remote
I want the script to be agnostic to whether it is run using clearml or not, with a particular queue or not
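A minimal sketch of one way to do that round trip, assuming the remote script registers the generated files itself and the local side pulls them back by task ID (the file list is the one from the message above; the task ID is a placeholder):

```python
from clearml import Task

# On the remote side: register each generated file as a task artifact.
artefacts = ["checkpoint.pth", "dvc.lock", "some_other_dynamically_generated_file"]

task = Task.current_task()  # None when the script runs without ClearML
if task is not None:
    for path in artefacts:
        # upload the local file so it is stored with the task
        task.upload_artifact(name=path, artifact_object=path)

# On the local side: fetch a registered artifact back to the local fs.
remote_task = Task.get_task(task_id="<remote-task-id>")  # placeholder ID
local_copy = remote_task.artifacts["checkpoint.pth"].get_local_copy()
```

The Task.current_task() guard is what keeps the script agnostic: it is a no-op when the code runs outside ClearML.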
Hey @<1577106212921544704:profile|WickedSquirrel54> , I would definitely be interested in this. A gist would be cool too
is the agent execution dependent on some CMD in my Dockerfile?
I've also overridden CLEARML_FILES_HOST=None, and configured it in the clearml.conf file. Don't know where it's picking up 8081 😕
Sorry false alarm
I tried that earlier - that checks out, it matches the s3 path I provide in the conf
@<1523701070390366208:profile|CostlyOstrich36> , as written above, I've done that. It still tries to send to 8081
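For reference, a minimal sketch of the other common way to redirect uploads away from the default files server (port 8081): passing output_uri to Task.init. Project, task, and bucket names below are placeholders:

```python
from clearml import Task

task = Task.init(
    project_name="my-project",            # placeholder
    task_name="s3-upload-demo",           # placeholder
    output_uri="s3://my-bucket/clearml",  # send models/artifacts here instead of :8081
)
```

The conf-file equivalent is sdk.development.default_output_uri in clearml.conf.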
No, it was fixed back then by restarting clearml and some services. But currently we've given up and use debug=True, so we don't use the services queue
That makes sense, but that would mean that each client/user has to manage the upload themselves, right?
(I'm trying to use clearml to create an abstraction over the compute / cloud)
I'm thinking of using s3fs on the entire /opt/clearml/data folder. What do you think?
Thanks! So it seems like the key is Task.connect
and bubbling the params up to the original task, correct?
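A minimal sketch of the Task.connect flow, assuming a plain dict of hyperparameters (names and values are made up):

```python
from clearml import Task

task = Task.init(project_name="my-project", task_name="connect-demo")  # placeholders

params = {"lr": 1e-3, "batch_size": 32}
# connect() registers the dict on the task; in a remote run, values edited
# in the UI (or set by a controller) are written back into this dict
params = task.connect(params)
print(params["lr"])
```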
I set it up like this: clearml-agent daemon --detached --gpus 0,1,2 --queue single-gpu-24 --docker
but when I create the session: clearml-session --docker xyz --git-credentials
and I run nvidia-smi
I only see one gpu
We have some scenario where a group of clearml experiments might represent a logical experiment. We then want to use all the trained models in a pipeline to generate some output.
With that output, we probably want to send it to some third party like Mechanical Turk, do some custom evaluations - and sometimes more than once. We then want to connect (and present) these evaluations along with the ClearML experiments.
we have various services internally to do this --> however, we have to manually link it up w...
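One possible way to attach the external evaluations to the existing experiments is to fetch each task by ID and report the scores as scalars; a minimal sketch, where the task IDs, metric names, and the reopen-then-complete pattern for finished tasks are all assumptions:

```python
from clearml import Task

# placeholder mapping: ClearML task ID -> external (e.g. Mechanical Turk) score
evaluations = {"<task-id-1>": 0.87, "<task-id-2>": 0.91}

for task_id, score in evaluations.items():
    t = Task.get_task(task_id=task_id)
    t.mark_started(force=True)  # reopen a completed task so it accepts new reports
    t.get_logger().report_scalar(
        title="external_eval", series="mturk", value=score, iteration=0
    )
    t.flush(wait_for_uploads=True)
    t.mark_completed()
```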
Also @<1523701070390366208:profile|CostlyOstrich36> - are these actions available for on-prem OSS clearml-server deployments too?
nice! I was wondering whether we can trigger it from the UI, like "on publishing" an experiment
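A minimal sketch with clearml's TriggerScheduler, which can fire when a task gets published (the trigger name, project, queue, and task ID below are placeholders):

```python
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=1.0)
trigger.add_task_trigger(
    name="on-publish",                        # placeholder trigger name
    trigger_project="my-project",             # placeholder project to watch
    trigger_on_publish=True,                  # fire when an experiment is published
    schedule_task_id="<task-id-to-enqueue>",  # placeholder: task to clone & run
    schedule_queue="default",
)
trigger.start()  # blocking; start_remotely() would run it from a queue instead
```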
I need to mock it - because I'm writing some unit tests
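A minimal mocking sketch with unittest.mock, patching Task.init so the tests never hit a real ClearML server (the module under test is hypothetical):

```python
from unittest import mock

def test_training_entrypoint():
    with mock.patch("clearml.Task.init", return_value=mock.MagicMock()) as fake_init:
        from my_training_script import main  # hypothetical module under test
        main()
        fake_init.assert_called_once()
```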
feels like a typo somewhere
Found out the command swaps singular and plural. It's --gpus 0 and --gpu 0,1,2