BTW, when I started using S3, I thought I needed to specify output_uri for each task. I soon realized you just need the prefix you want to put things under, and ClearML will take care of appending the project etc. to the path. So for most use cases, a single output URI set in the conf should work.
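For example, a rough sketch of what that looks like (the bucket prefix here is just a placeholder):

    from clearml import Task

    # Only the prefix is specified; ClearML appends roughly <project>/<task name>.<task id>
    # under it when uploading models and artifacts.
    # "s3://my-bucket/clearml" is a hypothetical prefix, replace it with your own.
    task = Task.init(
        project_name="playground",
        task_name="tensorboard_toy",
        output_uri="s3://my-bucket/clearml",
    )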
Hi DeliciousBluewhale87
I think you are correct, there is no way to pass it.
As TimelyPenguin76 mentioned, you can either set a default output_uri in the agent's config file, or edit the created Task in the UI.
What is the specific use case? Maybe we should add this ability, wdyt?
You will have to update this in your local clearml.conf, or wherever you are running Task.init from.
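Something along these lines in the clearml.conf of the machine that calls Task.init (the S3 prefix is just an example):

    sdk {
        development {
            # Used whenever Task.init is called without an explicit output_uri
            default_output_uri: "s3://my-bucket/clearml"
        }
    }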
I ran this on my local machine: clearml-task --project playground --name tensorboard_toy --script tensorboard_toy.py --requirements requirements.txt --queue myqueue
Yeah, that worked. I was running the agent on a different machine, as our deployment of ClearML is in k8s.
Yup, I updated this in my local clearml.conf... Or should I be updating this elsewhere as well?
On the agent's machine, you should update the default_output_uri. Make sense?
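That is, the same sdk.development.default_output_uri key shown above, but in the clearml.conf on the machine running clearml-agent (a sketch, adjust the URI to your storage):

    sdk {
        development {
            default_output_uri: "s3://my-bucket/clearml"  # agent-side default
        }
    }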
Hi DeliciousBluewhale87,
You can try to configure the files server in your ~/clearml.conf file. Could this work?
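For example (hypothetical hostname, point it at your own deployment; 8081 is the default fileserver port):

    api {
        # Files server endpoint of the ClearML deployment
        files_server: "http://files.clearml.example.com:8081"
    }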
Hi guys,
I filled in the default_output_uri in the conf file, but it doesn't get reflected in the ClearML UI.
Disclaimer: ClearML is set up as a k8s pod using the Helm charts.

    sdk {
        development {
            # Default Task output_uri. If output_uri is not provided to Task.init, default_output_uri will be used instead.
            default_output_uri: " "
        }
    }