we're using the latest versions of clearml, clearml-agent and clearml-server, but we've been using trains/clearml for 2.5 years, so there are some old tasks left, I guess 😃
we already have cleanup service set up and running, so we should be good from now on
what if the cleanup service is launched using the ClearML-Agent Services container (part of the ClearML server)? Adding clearml.conf to the home directory doesn't help
The easiest is to use the container args and pass the AWS credentials as env variables: -e AWS_ACCESS_KEY_ID=abcd -e ... Make sense?
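For reference, a minimal sketch of the same idea from inside the script itself, assuming you can edit it: clearml's s3:// storage goes through boto3, which picks these variables up from the process environment, so exporting them before any S3 access should have the same effect as the -e container args (all values below are placeholders):

```python
import os

# Same variables the -e container args would set; boto3 (used by clearml
# for s3:// storage) reads them from the environment. Placeholder values.
os.environ["AWS_ACCESS_KEY_ID"] = "abcd"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<secret>"
os.environ.setdefault("AWS_DEFAULT_REGION", "us-east-1")  # only if your bucket needs it
```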
DilapidatedDucks58
did you check:
https://github.com/allegroai/clearml/blob/master/examples/services/cleanup/cleanup_service.py
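If it helps, here's roughly the shape of what that script does, as a minimal sketch using only the public SDK (the real script is more thorough; the 30-day threshold and the status filter here are just example choices):

```python
from datetime import datetime, timedelta
from clearml import Task

# Sketch of one cleanup pass: delete finished tasks untouched for 30+ days.
threshold = datetime.utcnow() - timedelta(days=30)

for task in Task.get_tasks(task_filter={"status": ["completed", "failed", "stopped"]}):
    last_update = task.data.last_update  # backend timestamp; may be unset on very old tasks
    if last_update and last_update.replace(tzinfo=None) < threshold:
        task.delete(raise_on_error=False)  # don't abort the pass on per-task failures
```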
What is the recommended way of providing S3 credentials to the cleanup task?
clearml.conf or OS environment (AWS_ACCESS_KEY_ID ...)
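And if you go the clearml.conf route, the S3 credentials live under the sdk.aws.s3 section; a minimal sketch with placeholder values (per-bucket entries go in the credentials list):

```
sdk {
    aws {
        s3 {
            # default credentials, used for any bucket without its own entry
            key: "my-access-key"
            secret: "my-secret-key"
            region: ""
            # per-bucket overrides:
            # credentials: [
            #     {bucket: "my-bucket", key: "...", secret: "..."}
            # ]
        }
    }
}
```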
oh wow, I didn't see the delete_artifacts_and_models option
I guess we'll have to manually find old artifacts that are related to already deleted tasks
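For reference, the option the thread is talking about is an argument of Task.delete(); a hedged sketch (signature as in recent clearml SDKs, worth double-checking against your version):

```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")  # placeholder ID
# With the flag set, deleting the task also removes its artifacts and
# models from storage (S3 included) instead of leaving them orphaned.
task.delete(delete_artifacts_and_models=True, raise_on_error=False)
```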
two more questions about cleanup if you don't mind:
what if for some old tasks I get WARNING:root:Could not delete Task ID=a0908784a2a942c3812f947ec1f32c9f, 'Task' object has no attribute 'delete'? What's the best way of cleaning them up? What is the recommended way of providing S3 credentials to the cleanup task?
This seems like an old SDK, no?
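A quick way to confirm is to check which SDK version the services container is actually running; Task.delete() simply doesn't exist on old trains/clearml packages (I don't recall exactly which version introduced it):

```python
import clearml
from clearml import Task

# If this prints an old version, upgrade the package inside the services
# container (e.g. pip install -U clearml) and re-run the cleanup task.
print(clearml.__version__)
print(hasattr(Task, "delete"))  # False on SDKs that predate Task.delete()
```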