Hi SweetBadger76
Further investigation showed that the worker was created with a dedicated CLEARML_HOST_IP - so running
clearml-agent daemon --stop
didn't kill it (but it did appear in the clearml-agent list). Once we added the CLEARML_HOST_IP
CLEARML_HOST_IP=X.X.X.X clearml-agent daemon --stop
it finally killed it
add the google.storage parameters to the conf settings
`
sdk {
    google.storage {
        credentials = [
            {
                bucket: "clearml-storage"
                project: "dev"
                credentials_json: /path/to/SA/creds/user.json
            },
        ]
    }
}
`
I found the task in the UI -
and in the UNCOMMITTED CHANGES execution section there is
No changes logged
Any other suggestions?
so running the command clearml-agent -d list returns the following: https://clearml.slack.com/archives/CTK20V944/p1657174280006479?thread_ts=1657117193.653579&cid=CTK20V944
Hi AnxiousSeal95 ,
Is there an estimate when the above feature will be available?
Feeling that we are nearly there ....
One more question -
Is there a way to configure ClearML to store all the artifacts, plots, etc. in a bucket instead of manually uploading/downloading the artifacts from within the client's code?
Specifying the output_uri in Task.init saves the checkpoints, but what about the rest of the outputs? (rough sketch of what I mean below)
https://clear.ml/docs/latest/docs/faq#git-and-storage
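For context, this is roughly what I mean by specifying output_uri (a minimal sketch only; the project, task name, and bucket path are placeholders):
`
from clearml import Task

# Sketch: point output_uri at a (placeholder) GCS bucket so checkpoints and
# uploaded artifacts go there instead of being handled manually in client code.
task = Task.init(
    project_name='examples',
    task_name='output-uri test',
    output_uri='gs://clearml-storage/artifacts',  # placeholder bucket path
)

# Artifacts uploaded through the task should land in the same destination.
task.upload_artifact(name='stats', artifact_object={'accuracy': 0.9})
`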
Hi HugeArcticwolf77
I've run the following code - which uploads the files with compression, although compression=None
ds.upload(show_progress=True, verbose=True, output_url='...', compression=None)
ds.finalize(verbose=True, auto_upload=True)
Any idea why?
This also may help with the configuration for GCS
https://clearml.slack.com/archives/CTK20V944/p1635957916292500?thread_ts=1635781244.237800&cid=CTK20V944
not sure I understand
running clearml-agent list I get
`
workers:
- company:
id: d1bd92...1e52b
name: clearml
id: clearml-server-...wdh:0
ip: x.x.x.x
... `
Are there any settings that we need to take into account when working with a session?
the https://clear.ml/docs/latest/docs/apps/clearml_session#accessing-a-git-repository page mentions accessing a Git repository -
Can you run a clearml-session without accessing Git? Assuming we are using SSH - what is the correct configuration?
Well it seems that we have a similar issue: https://github.com/allegroai/clearml-agent/issues/86
currently we are just creating a new worker on a separate queue
we want to use the dataset output_uri as a common ground to create additional dataset formats such as https://webdataset.github.io/webdataset/
This does not work, since all the files are stored as a single ZIP file (which, if unzipped, has all the data), but we would like access to the raw files in their original format.
We have assets in a GCP bucket.
The dataset is created and then the assets are linked to the dataset via the add_external_files method
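Roughly what we do (a minimal sketch only; the project, dataset name, and bucket path are placeholders):
`
from clearml import Dataset

# Sketch: create a dataset and link files that already live in the GCP bucket,
# so only links and metadata are registered rather than re-uploading the content.
ds = Dataset.create(dataset_name='my-assets', dataset_project='datasets')  # placeholders
ds.add_external_files(source_url='gs://clearml-storage/assets/')  # placeholder bucket path

ds.upload()
ds.finalize()
`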
Strange
I ran clearml-agent daemon --stop and after 10 min I ran clearml-agent list and I still see a worker
not sure I understand
we are running the daemon in a detached mode
clearml-agent daemon --queue <execution_queue_to_pull_from> --detached
Hi SweetBadger76 -
Am I misunderstanding how this test worker runs?
SmugDolphin23 Where can I check the latest RC? I was not able to find it in the clearml GitHub repo
Sorry -
After updating the repo I can see that the newest chart is 4.1.1
SweetBadger76 should I update to this version?
Hi SuccessfulKoala55
Thx again for your help
in the case of Google Colab, the values can be provided as environment variables
We still need to run the code in a Colab environment (or a remote client)
do you have any example of setting the environment variables?
For a general environment variable there is an example: export MPLBACKEND=TkAgg. But what would it be for the clearml.conf?
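Something like this is what I have in mind for Colab (a sketch only, assuming the standard CLEARML_* environment variables; all values are placeholders):
`
import os

# Sketch: placeholder values - replace with your own server URLs and credentials.
os.environ['CLEARML_WEB_HOST'] = 'https://app.clear.ml'
os.environ['CLEARML_API_HOST'] = 'https://api.clear.ml'
os.environ['CLEARML_FILES_HOST'] = 'https://files.clear.ml'
os.environ['CLEARML_API_ACCESS_KEY'] = '<access_key>'
os.environ['CLEARML_API_SECRET_KEY'] = '<secret_key>'
`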
for retrieving we can use
config_obj.get('sdk.google')
but how would setting the values work? we did ...
But this is not on the pods, is it? We're talking about the Python code running from Colab or locally...?
correct - but where is the clearml.conf file?
yes - the agent is running with --docker
Great - where do I define the volume mount?
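To make sure we mean the same thing - I was imagining something like this in the agent's clearml.conf (a sketch only, assuming extra_docker_arguments is the right key; the paths are placeholders):
`
agent {
    # Sketch: pass extra arguments to docker run, e.g. a host volume mount.
    extra_docker_arguments: ["-v", "/host/data:/data"]
}
`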
Should I build a base image that runs on the server and then use it as the base image in the container?
Still trying to understand what this default worker is.
I've removed clearml.conf and reinstalled clearml-agent
then running clearml-agent list gets the following error
`
Using built-in ClearML default key/secret
clearml_agent: ERROR: Could not find host server definition (missing ~/clearml.conf or Environment CLEARML_API_HOST)
To get started with ClearML: setup your own clearml-server, or create a free account at and run clearml-agent init
`
Then returning the...
Thx for your reply
agree -
we understand now that the worker is the default worker that is installed after running pip install clearml-agent. Is it possible to remove it? Since all tasks that use the worker don't have the correct credentials.
Possibly - thinking more of https://github.com/pytorch/data/blob/main/examples/vision/caltech256.py - using the ClearML dataset as the root path.
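Roughly what I have in mind (a sketch only; the project and dataset names are placeholders):
`
from clearml import Dataset
from torchvision import datasets

# Sketch: fetch (or reuse a cached) local copy of the ClearML dataset and use
# the returned folder as the root path for a torchvision-style dataset.
local_root = Dataset.get(
    dataset_project='datasets',   # placeholder
    dataset_name='caltech256',    # placeholder
).get_local_copy()

dataset = datasets.ImageFolder(root=local_root)
`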