
I don't know. It looked like an ordinary weights upload. Here's the screenshot
How can I get them? I think I'm following the example from the documentation, but I can't get it to work.
Is it OK that my template experiment is now in the 'draft' state?
This is a screenshot of the messages from an optimization process
I'm sorry, but I didn't understand you about the original experiment. By 'original' do you mean the experiment I use as a template?
Unfortunately I still can't get it
Adding the 'General' prefix to parameters doesn't work, as the task parameters have no prefixes. There is also no 'General' key in the returned dict (pictures 1 and 2 are screenshots of my base experiment; picture 3 is the keys of the returned task parameters dictionary). The last_metrics attribute is still empty, but my template experiment actually has reported scalars (picture 4), and I am using the right experiment ID (picture 5)
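As a side note on the prefix confusion: a small stdlib-only sketch (not ClearML code; the helper name and the example dicts are made up) of a lookup that tolerates both key shapes a flat parameters dict can take, with and without a 'General/' section prefix:

```python
def lookup_param(params, name, section="General"):
    """Return a parameter value whether or not the flat dict's keys
    carry a '<section>/' prefix. Raises KeyError if absent in both forms."""
    if name in params:
        return params[name]
    prefixed = f"{section}/{name}"
    if prefixed in params:
        return params[prefixed]
    raise KeyError(name)

# Two hypothetical shapes of the same parameters, mimicking the screenshots
unprefixed = {"epochs": "10", "lr": "0.001"}
prefixed = {"General/epochs": "10", "General/lr": "0.001"}

print(lookup_param(unprefixed, "epochs"))  # '10'
print(lookup_param(prefixed, "epochs"))    # '10'
```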
Should I run the template experiment to the end (I mean until it's no longer improving), or can I run it for just a few epochs?
Unfortunately I still have the same issue 😢
Here is the stack trace on manually interrupting (weights were uploaded at 14:16 and I interrupted at 14:30)
Hey, looks like we found something. Actually, the parameter which 'controls' the slowdown is detect_repository. We think it may be caused by the large number of files in the repo (the data folder). Do you use a .gitignore file when detecting the repo?
I suppose clearml does not take .gitignore into account
https://github.com/allegroai/clearml/blob/a47f127679ebf5912690f7c3e60791a2daa5c984/clearml/backend_interface/task/repo/scriptinfo.py#L47
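For illustration only, here is a rough sketch of what honoring .gitignore could look like. This is not ClearML's or pigar's actual code, and it deliberately simplifies real .gitignore semantics (no negation, anchoring, or directory-only rules), using fnmatch-style glob matching:

```python
import fnmatch

def load_ignore_patterns(gitignore_text):
    """Parse .gitignore-style text into glob patterns, skipping blank
    lines and comments. Simplified: trailing '/' is just stripped."""
    patterns = []
    for line in gitignore_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            patterns.append(line.rstrip("/"))
    return patterns

def is_ignored(path, patterns):
    """True if the full path or any path component matches a pattern."""
    parts = path.split("/")
    return any(
        fnmatch.fnmatch(part, pat) or fnmatch.fnmatch(path, pat)
        for pat in patterns
        for part in parts
    )

patterns = load_ignore_patterns("# datasets\ndata/\n*.ckpt\n")
print(is_ignored("data/images/0001.png", patterns))  # True
print(is_ignored("train.py", patterns))              # False
```

A scanner applying a filter like this while walking the repo would skip the data folder entirely instead of visiting every file in it.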
stack trace:
project_import_modules, reqs.py:46
extract_reqs, __main__.py:67
get_requirements, scriptinfo.py:49
_update_repository, task.py:298
_create_dev_task, task.py:2819
init, task.py:504
train, train_loop.py:41
<module>, train.py:88
Or should it be fixed in the pigar repo first?
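The stack trace points into pigar's module scan, which visits the repository tree file by file. A quick, hypothetical way to gauge why a large data folder makes this slow is simply to count what such a walk would visit (stdlib only, not the actual scanner):

```python
import os
import time

def count_repo_files(root):
    """Walk the tree and count files, the way a naive requirements
    scanner would visit them when nothing is filtered out."""
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # A .gitignore-aware scanner could prune here, e.g.:
        # dirnames[:] = [d for d in dirnames if d != "data"]
        total += len(filenames)
    return total

start = time.time()
n = count_repo_files(".")
print(f"{n} files visited in {time.time() - start:.2f}s")
```

Running this inside the repo, with and without the pruning line, shows how much of the walk is spent inside the data folder.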
I found the place where the hang-up happens
Could you please answer the last question? 🙃
Am I right that there is a bug with the first situation I've described (the 'Args' parameter)? Or am I doing something wrong, and it should work with prefixes? Because it does not work if I add a prefix
Hope you are not tired of me. But I am using trains 0.16.1 and adding a prefix does not work. I found the place where a dict with <prefix/key>: value keys is transformed into a nested dict with <prefix>: {<key>: value} (see screenshots). I'm sorry for my annoyance, but I believe there is a misunderstanding between us and you think that prefixes work 🙂
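The transformation described above can be sketched like this (a hypothetical reimplementation for illustration, not trains' actual code; the default 'General' section is an assumption). A lookup that expects the flat '<prefix>/<key>' form will miss once the dict has been nested this way:

```python
def nest_params(flat):
    """Turn {'Args/lr': 0.1} into {'Args': {'lr': 0.1}}.
    Keys without a '/' go under a default 'General' section."""
    nested = {}
    for key, value in flat.items():
        section, _, name = key.partition("/")
        if not name:
            section, name = "General", key
        nested.setdefault(section, {})[name] = value
    return nested

print(nest_params({"Args/lr": 0.1, "batch": 32}))
# {'Args': {'lr': 0.1}, 'General': {'batch': 32}}
```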
Yes 😅 It actually worked. Thank you! Now I get the values from the scalars
I'll try again, but I did it in this way 😢
Maybe I am wrong and did something wrong
Because I only ran it for a few epochs
These are the last_metrics values the task object has
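If it helps to unpack such a structure: last_metrics on the task object appears to be a nested mapping of internal metric and variant ids to summary entries. The field names below ('metric', 'variant', 'value') and the sample ids are assumptions about that payload shape, not a documented contract:

```python
def flatten_last_metrics(last_metrics):
    """Return {(title, series): last_value} from a last_metrics-style
    dict, discarding the opaque metric/variant id keys."""
    out = {}
    for metric in last_metrics.values():
        for entry in metric.values():
            out[(entry["metric"], entry["variant"])] = entry["value"]
    return out

# Hypothetical sample mimicking the nested shape
sample = {
    "a1b2": {"c3d4": {"metric": "loss", "variant": "train", "value": 0.42}},
}
print(flatten_last_metrics(sample))  # {('loss', 'train'): 0.42}
```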