
Hi,
I am not sure what you are trying to do, but Docker containers do not usually have access to the files on your host system. Have you tried mounting your local file into the container using the -v argument when you start Docker?
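A minimal sketch of that bind mount; the host path, container path, and image name here are all hypothetical placeholders:

```shell
# Mount the host directory /home/me/data into the container at /data (read-only),
# so the script inside the container can read the local files.
docker run --rm -v /home/me/data:/data:ro my_image:latest python train.py --data /data
```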
Thanks! I did not know that; I think I can write some logic with that in mind.
Hi, are you sure that your workers are connected to the right queue? It looks to me like they are connected to the services and/or default queue. If you click on a worker in the web UI, you can see which queue that particular worker is listening to.
I think your idea is related to issue 71 on GitHub. As far as I understand it, this is not very straightforward to do currently.
This is the snippet that works for me. Please be aware that I use a custom Task.init call at the start of my script ( repo/main_scripts/train.py ), so if you don't do that you need to set add_task_init_call to True.
import os

try:
    # `git remote -v` prints one fetch line and one push line per remote
    repo = os.popen('git remote -v').read().strip().split('\n')
    if len(repo) > 2:
        raise RuntimeError('More than one git repository found')
    repo = repo[0].split('\t')[-1].split(' ')[0]
    branch = os.popen('git rev-parse --abbrev-ref HEAD').read().strip()
except RuntimeError:
    # fall back to letting ClearML detect the repository itself
    repo, branch = None, None
Hi,
I think the repo has to be the git location, not your local path, so something like git@gitlab.com:group/repo_name.git or git@github.com:Project-MONAI/VISTA.git
You can run git remote -v in the command line to find what your current repo is.
Ah yeah, I also encountered this; that was actually one of the reasons we did not move over to fully incorporating Docker agents into our workflow. If you find a solution, I would also be very curious. Maybe @<1523701070390366208:profile|CostlyOstrich36> has a solution.
Hi,
I am experiencing the same thing (although I use old-fashioned dicts as my configuration object). The way that I work around this is by downloading the whole configuration using get_configuration_object_as_dict('OmegaConf'), modifying that dict, and then re-uploading it using connect_configuration(new_dict). If there is a better way, I would definitely like to know!
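A sketch of that workaround; the ClearML calls need a live Task, so they are shown as comments, and the configuration section name and values are made-up placeholders:

```python
# config = task.get_configuration_object_as_dict('OmegaConf')  # download (needs a ClearML Task)
config = {"trainer": {"lr": 0.001, "epochs": 10}}  # stand-in for the downloaded dict

# modify the plain dict in place
config["trainer"]["lr"] = 0.01

# task.connect_configuration(config, name='OmegaConf')  # re-upload the edited dict
print(config["trainer"]["lr"])
```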
Hi,
If your config file is YAML, JSON, or something similar, you can just connect it to your task using task.connect_configuration(dict) after loading it in your script. Then your full configuration can be seen in ClearML and retrieved in your script using task.get_configuration_object_as_dict()
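A minimal sketch of that flow with a JSON config; the keys and values are hypothetical, and the ClearML calls are commented out because they need a live Task:

```python
import json

# pretend this string was read from e.g. a config.json file on disk
raw = '{"lr": 0.001, "batch_size": 32}'
config = json.loads(raw)

# task.connect_configuration(config)  # log the dict to ClearML
# later, e.g. on a remote worker:
# config = task.get_configuration_object_as_dict('General')
print(config["batch_size"])
```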
Hi, I am not familiar with GitHub, but the way that I set it up for GitLab is by creating a personal access token and putting it under the agent.git_pass option in the clearml.conf file.
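For reference, a sketch of the relevant clearml.conf section; the username and token value are obviously placeholders:

```
agent {
    git_user: "my_gitlab_username"
    git_pass: "glpat-xxxxxxxxxxxx"  # personal access token, not your password
}
```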