Thank you for giving me the advice.
To answer your question, here is my workflow.
First, I create the task by running the command below:
python3 train.py config object_detection.yaml
And in the same docker image, I run the command below to start an agent:
clearml-agent daemon --queue default --foreground
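(Side note: my agent runs inside the docker image directly; if one instead wanted the agent itself to launch each task in its own container, docker mode would look roughly like the line below. This is a sketch, not my actual setup:)

clearml-agent daemon --queue default --docker --foreground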
After that, using the task ID created above, I run the code I shared, clearml_hyper.py.
So I think the argparse arguments are injected into the task itself before the HPO starts.
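For context, clearml_hyper.py follows the usual HyperParameterOptimizer pattern, roughly like the sketch below. The project names, parameter range, and objective metric are placeholders, not my real values:

from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna

# Controller task that drives the optimization
task = Task.init(project_name="HPO", task_name="hpo_controller",
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id="<task id from the train.py run above>",  # placeholder
    hyper_parameters=[
        # "Args/lr" assumes the learning rate was captured from argparse; placeholder range
        UniformParameterRange("Args/lr", min_value=1e-4, max_value=1e-1),
    ],
    objective_metric_title="val",    # placeholder metric title
    objective_metric_series="mAP",   # placeholder metric series
    objective_metric_sign="max",
    optimizer_class=OptimizerOptuna,
    execution_queue="default",       # the queue the agent above listens on
)
optimizer.start()
optimizer.wait()
optimizer.stop()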
Yes, in train.py I put the task.init call.
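Concretely, the top of train.py looks roughly like this (simplified sketch; project/task names are placeholders). Because Task.init runs before parse_args, ClearML's argparse hook records the arguments on the task:

import argparse
from clearml import Task

# Called before parse_args so the argparse hook can capture the arguments
task = Task.init(project_name="object_detection", task_name="nanodet_train")

parser = argparse.ArgumentParser()
parser.add_argument("config", help="path to the training config yaml")
args = parser.parse_args()  # these values appear under the task's "Args" section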
Thank you for giving me the advice @<1523701087100473344:profile|SuccessfulKoala55> @<1523701205467926528:profile|AgitatedDove14> !!
here is the full log of the failed task
And this is the modified Nanodet train code
And this is the task configuration info
And this is the HPO’s configuration info
here is the key/secret
sdk {
    aws {
        s3 {
            region: "ap-northeast-2"
            use_credentials_chain: false
            extra_args: {}
            credentials: [
                {
                    bucket: "
"
                    key: "S3_KEY"
                    secret: "S3_SECRET"
                }
            ]
        }
    }
}
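With this in place, I can sanity-check the credentials from the same environment (a minimal sketch; the bucket name below is a placeholder since mine is redacted above):

from clearml import StorageManager

# Placeholder bucket; this should list the objects if the key/secret and region are correct
print(StorageManager.list("s3://my-bucket/"))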
Thank you for your advice @<1523701087100473344:profile|SuccessfulKoala55>
I really appreciate it.
This is my client side's clearml.conf file.
I think it is almost the same as the agent's clearml.conf file.
@<1523701087100473344:profile|SuccessfulKoala55> I changed the S3 bucket name None , but I still get the same error as above.
After restarting docker-compose, another error appeared:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[6], line 1
----> 1 StorageManager.list("")
File ~/.clearml/venvs-builds.2/3.8/lib/python3.8/site-packages/clearml/storage/manager.py:452, in StorageManager.list(cls, remote_url, return_full_path, with_metadata)
    430 @classmethod
    431 def list(cls...
Thank you for your advice!
I will change to an S3 bucket name that does not contain a dot and try again (as I understand it, a dot in the bucket name breaks the wildcard SSL certificate used for virtual-hosted-style S3 requests).