Sure, all the auto-magic can be configured too - https://clear.ml/docs/latest/docs/faq#experiments , search for "Can I control what ClearML automatically logs?" to view all the options 🙂
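For example, the automatic logging can be controlled directly from Task.init - a minimal sketch (the framework keys and flags below are only illustrative, adjust them to your setup):

from clearml import Task

# Keep most of the auto-logging, but switch selected parts off.
task = Task.init(
    project_name="examples",                 # placeholder project/task names
    task_name="controlled auto-logging",
    auto_connect_frameworks={
        "matplotlib": False,                 # do not capture matplotlib figures
        "tensorboard": True,                 # keep TensorBoard scalars/images
    },
    auto_connect_arg_parser=True,            # still capture argparse arguments
    auto_resource_monitoring=False,          # skip CPU/GPU/memory monitoring
)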
Hi UnevenDolphin73 , so everything works now? With multiple credentials?
PanickyMoth78 are you getting this from the app or one of the tasks?
Hi PanickyMoth78 ,
Can you try with pip install clearml==1.8.1rc0 ? It should include a fix for this issue.
Can you verify the paths you are using in your script?
and again - feature request - add free text there.
LethalCentipede31 can you open a new issue at https://github.com/allegroai/clearml/issues with this request? Just so it won't get lost
Hi @Izik Golan, yes, you can configure the docker image and all the container parameters with set_base_docker; you can read about it here: https://clear.ml/docs/latest/docs/references/sdk/task#set_base_docker
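A minimal sketch of what that looks like (the image and arguments are placeholders, and the keyword names follow recent SDK versions - adjust if yours differs):

from clearml import Task

task = Task.init(project_name="examples", task_name="docker config")

# Tell the agent which container (and extra docker arguments) to use
# when this task is executed remotely.
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",
    docker_arguments="--ipc=host",
)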
Thanks for the information. Do you get any errors? Warnings?
Hi ReassuredTiger98 , can you share your clearml version?
Hi GleamingGrasshopper63 , can you share how you are running the clearml agent (venv or docker? If docker, which image)?
To update the agent configuration after it has started running, you'll need to restart the agent 🙂
When you are not using the StorageManager, you don't get the OSError: [Errno 9] Bad file descriptor errors?
Yes, you could also use the container's SETUP SHELL SCRIPT and run a command to install your Python version (e.g. sudo apt install python3.8).
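Roughly like this, assuming a recent SDK where set_base_docker also accepts a setup bash script (the image name and commands below are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="custom python in container")

# The setup script runs inside the container before the task starts.
task.set_base_docker(
    docker_image="ubuntu:20.04",
    docker_setup_bash_script=[
        "apt-get update",
        "apt-get install -y python3.8 python3.8-distutils",
    ],
)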
Hi MotionlessMonkey27 ,
first, I'm getting a warning:
ClearML Monitor: Could not detect iteration reporting, falling back to iterations as seconds-from-start
This simply indicates that your task has not started reporting metrics to the server yet. Once reporting starts, it will go back to being iteration-based.
Also, ClearML is not detecting the scalars, which are being logged as follows:
tf.summary.image('output', output_image, step=self._optimizer.iterations.numpy())
or
for key, value in...
How do I prevent the content of a URI returned by a task from being saved by ClearML at all?
I think the safest way to do so is to change the ClearML files server configuration in your ~/clearml.conf file. You can set https://github.com/allegroai/clearml/blob/master/docs/clearml.conf#L10 to some local mount path, for example, or to some internal storage service (like MinIO), and the default output, including artifacts, debug images and more, will be saved in this location by default...
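As a rough sketch of the relevant part of ~/clearml.conf (the MinIO host, port and bucket are placeholders):

api {
    # Default destination for artifacts, debug samples, models, etc.
    # This can point at a mounted path or an internal object store such as MinIO:
    files_server: "s3://my-minio-host:9000/clearml-artifacts"
}

If you point it at MinIO / S3-compatible storage, you would also need the matching endpoint credentials under sdk.aws.s3.credentials in the same file.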
Hi PanickyMoth78 , thanks for the logs. I think I know the issue; I'm trying to reproduce it on my side and will keep you updated.
Hi PanickyMoth78 ,
Note that if I change the component to return a regular meaningless string - "mock_path" - the pipeline completes rather quickly and the dataset is not uploaded.
I think it will use the cache from the second run; it should be much, much quicker (nothing to download).
The files server is the default destination for saving all the artifacts. You can change this default with an env var ( CLEARML_DEFAULT_OUTPUT_URI ) or in the config file ( sdk.development...
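Relatedly, if you prefer setting it per task in code, a minimal sketch (the bucket URL is a placeholder):

from clearml import Task

# Send this task's models/artifacts to your own storage
# instead of the default files server.
task = Task.init(
    project_name="examples",
    task_name="custom output destination",
    output_uri="s3://my-bucket/clearml-outputs",
)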
Hi SmarmySeaurchin8 ,
I suspect the same, can you share an example of the path? I want to try and reproduce it on my side
Not sure I'm getting that. If you are loading the last dataset task in your experiment task code, it should take the most updated one.
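For example, a minimal sketch assuming you fetch it through the Dataset interface (project/name are placeholders):

from clearml import Dataset

# Without a specific dataset_id, this resolves to the most recent
# version registered under that project/name.
dataset = Dataset.get(dataset_project="examples", dataset_name="my_dataset")
local_copy = dataset.get_local_copy()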
Hi GleamingGiraffe20 ,
The example in the documentation is missing a filename at the end of the remote URL (an error I got locally when I tried to upload).
In the https://allegro.ai/docs/examples/examples_storagehelper/#uploading-a-file example, the filename is /mnt/data/also_file.ext - did I miss the example you talked about? If so, can you send me a link to it?
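For reference, a minimal sketch of the upload itself (the local path and bucket are placeholders; note that the remote URL includes the destination filename):

from clearml import StorageManager

# Upload a local file; the remote URL ends with the target filename.
uploaded_url = StorageManager.upload_file(
    local_file="/mnt/data/also_file.ext",
    remote_url="s3://my-bucket/data/also_file.ext",
)
print(uploaded_url)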
When using a trains task to track my run AND changing my script's output directory, I get: "TRAINS Monitor: Could not d...
DilapidatedDucks58 also with version 1.0.1?
can you try with the latest? pip install clearml==1.1.4
Hi DeliciousBluewhale87 ,
You can try to configure the files server in your ~/clearml.conf file. Could this work?
Hi ApprehensiveSeahorse83 , from the link, you can limit the protobuf version. Try adding
Task.add_requirements('protobuf', '<=3.20.1')
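A small sketch of how it fits together (project and task names are placeholders; as far as I recall, add_requirements has to be called before Task.init to take effect):

from clearml import Task

# Pin protobuf for the remote run; call this before Task.init.
Task.add_requirements("protobuf", "<=3.20.1")

task = Task.init(project_name="examples", task_name="pinned protobuf")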
Every task with ClearML can run remotely 🙂
Hi GiganticTurtle0 ,
My favorite is ps -ef | grep clearml-agent and then kill -9 <agent pid>
Can you send me the logs with and without? (you can send the logs in DM if you prefer)
Unfortunately, it is not possible to delete an experiment using the UI. You can run the script as a service, as in the example, or execute it with a job scheduler (crontab on Linux, for example).
Can this do the trick?