it’s a pretty standard PyTorch train/eval loop, using the PyTorch DataLoader together with MONAI’s Dataset: https://docs.monai.io/en/stable/_modules/monai/data/dataset.html
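roughly something like this, to give an idea (a simplified sketch, not our actual code; the transforms, file list, model and loss below are placeholders):

    import torch
    from torch.utils.data import DataLoader
    from monai.data import Dataset
    from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, ToTensord

    # placeholder transform pipeline and file list
    transforms = Compose([
        LoadImaged(keys=["image", "label"]),
        EnsureChannelFirstd(keys=["image", "label"]),
        ToTensord(keys=["image", "label"]),
    ])
    train_files = [{"image": "img_000.nii.gz", "label": "seg_000.nii.gz"}]  # placeholder paths

    train_ds = Dataset(data=train_files, transform=transforms)
    train_loader = DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=4)

    model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)  # placeholder model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(10):
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            outputs = model(batch["image"])
            loss = loss_fn(outputs, batch["label"][:, 0].long())
            loss.backward()
            optimizer.step()
        # the eval loop is analogous: model.eval() + torch.no_grad() over a val_loader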
I found the place where the hang happens
The repo detection (I assume git?) uses the git command, so .gitignore should be taken into account, I think
I suppose clearml does not take .gitignore into account
https://github.com/allegroai/clearml/blob/a47f127679ebf5912690f7c3e60791a2daa5c984/clearml/backend_interface/task/repo/scriptinfo.py#L47
Oh, I see. It's actually the pigar code embedded in clearml
Well, I'll take a look and get back to you 🙂
or should it be fixed in the pigar repo first?
Some info on the script (pseudo-code?) would be appreciated 🙂
DilapidatedDucks58, we have a hunch about what's wrong (we think we treat loading data the same way as loading a model, and then register each file / files pickle as a model, which takes time). How are you loading the data? Is monai built inside pytorch, or are you downloading it and loading it manually? If you can share the loading code, that might be helpful 🙂
That's fine 🙂 we haven't got to it yet, I'm afraid - I think the best way is to open a GitHub issue...
In any case, there's a 10sec timeout for this process, and you can simply choose not to do the detection
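For reference, if you want to skip the import-analysis step altogether, something along these lines should work (assuming your clearml version exposes Task.force_requirements_env_freeze — worth verifying against the docs; the project/task names are placeholders):

    from clearml import Task

    # Take the package list from `pip freeze` (or an explicit requirements file)
    # instead of letting pigar walk the project and analyse imports.
    # Must be called before Task.init().
    Task.force_requirements_env_freeze(force=True, requirements_file="requirements.txt")

    task = Task.init(project_name="examples", task_name="train")  # placeholder names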
SuccessfulKoala55 sorry for the bump, what's the status of the fix?
we’re using the latest ClearML server and client versions (1.2.0)
Hey, looks like we found something. The parameter that 'controls' the slowdown is detect_repository. We think the hang may be caused by the large number of files in the repo (the data folder). Do you take the .gitignore file into account when detecting the repo?
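A quick way to check whether that's the case (purely illustrative, not part of the training code; the repo path is a placeholder):

    import os
    import subprocess

    repo_root = "."  # placeholder: path to the repository root

    # Everything on disk, including the git-ignored data/ folder
    files_on_disk = sum(len(files) for _, _, files in os.walk(repo_root))

    # Only what git actually tracks (this respects .gitignore)
    tracked = subprocess.run(
        ["git", "ls-files"], cwd=repo_root, capture_output=True, text=True
    ).stdout.splitlines()

    print(f"files on disk: {files_on_disk}, tracked by git: {len(tracked)}")

if the first number is orders of magnitude larger than the second, the scan is most likely spending its time in the ignored data folder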
stack trace:
project_import_modules, reqs.py:46
extract_reqs, __main__.py:67
get_requirements, scriptinfo.py:49
_update_repository, task.py:298
_create_dev_task, task.py:2819
init, task.py:504
train, train_loop.py:41
<module>, train.py:88