
from functools import reduce  # reduce() lives in functools on Python 3

args_string = ['--{} {}'.format(k, v) for k, v in task.get_parameters_as_dict().get('Args', {}).items()]
args_strings = [a.replace("'", '').replace("[", '').replace("]", '').replace(",", '')
                for a in reduce(lambda l, s: l + s.split(' '), args_string, []) if a]
and then just parser.parse_args(args_strings)
It's not even very clean; I could have replaced the multiple replace calls with a regex, etc. It's just a quick hack to work around it.
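(For context, a self-contained sketch of the whole workaround; the parser and its arguments below are hypothetical stand-ins for whatever argparse parser the script actually defines.)

from argparse import ArgumentParser
from functools import reduce

from clearml import Task

task = Task.init(project_name='proj', task_name='example')

# Hypothetical parser; substitute the one your script really builds.
parser = ArgumentParser()
parser.add_argument('--lr', type=float)
parser.add_argument('--layers', nargs='+', type=int)

# Rebuild an argv-style token list from the task's stored 'Args' parameters,
# stripping the quote/bracket/comma artifacts of list-valued parameters.
args_string = ['--{} {}'.format(k, v)
               for k, v in task.get_parameters_as_dict().get('Args', {}).items()]
args_strings = [a.replace("'", '').replace('[', '').replace(']', '').replace(',', '')
                for a in reduce(lambda l, s: l + s.split(' '), args_string, []) if a]

args = parser.parse_args(args_strings)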
Actually, I found out that if I use exe...
SuccessfulKoala55 I managed to find a workaround by instantiating a new parser and feeding it the string values. If there's a cleaner solution, I'd be happy to hear about it.
This is a great feature for debugging a setup; it kinda feels like a superpower 🙂 I already used it to work around another issue with my docker setup. Now I'll only need to update the Dockerfile after I iron everything out, and I'll already have the startup shell script as documentation for what should be fixed. Awesome.
TimelyPenguin76 ok, I'll try it out, thanks.
TimelyPenguin76 I've been using this for a bit now. I would like to set it from code, just like I set the docker image, for example. Can you point me in the right direction? I couldn't find anything in the docs.
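(For anyone searching later: in newer SDK versions this seems to be doable from code via Task.set_base_docker; the docker_setup_bash_script keyword is my reading of the newer signature, so treat this as a sketch rather than gospel.)

from clearml import Task

task = Task.init(project_name='proj', task_name='example')

# Assumed newer-SDK signature: base image plus a per-task setup shell script
# (the same thing the GUI shows as "SETUP SHELL SCRIPT").
task.set_base_docker(
    'nvidia/cuda:11.0-runtime-ubuntu20.04',
    docker_setup_bash_script=[
        'apt-get update',
        'apt-get install -y git',
    ],
)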
SuccessfulKoala55 on a different question that came up in the context of this use case: I want to use Task.init(continue_last_task=id) in order to add the inference results to the original task that ran the training.
I was able to do this for a single task: I locally fetch the task ids, then for the first task I run Task.init with continue_last_task, and subsequently run task.execute_remotely(); the task then runs remotely with the outputs appended to the original training task.
Whe...
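(A minimal sketch of the single-task flow described above; the project, queue, and task id are placeholders.)

from clearml import Task

# Continue logging into the original training task instead of creating a new one.
original_task_id = '<training-task-id>'  # fetched locally; placeholder here

task = Task.init(
    project_name='proj',
    task_name='inference',
    continue_last_task=original_task_id,
)

# Stop local execution and enqueue the script to run on an agent;
# the remote run's outputs are appended to the original task.
task.execute_remotely(queue_name='default')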
hi TimelyPenguin76 I tried doing this, but it didn't work. When enqueueing the task, the contents of the textbox were emptied and the script was not run. I did make sure that it was saved before clicking on enqueue (by changing to another task and back and making sure the script appeared).
It's working as expected. Thanks!
The only solution I see is to have another script that runs the test task using (for example) Popen in a separate process.
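(As a sketch, something like this; the script name is a placeholder.)

import subprocess
import sys

# Launch the test task's script in a separate process and wait for it to finish.
proc = subprocess.Popen([sys.executable, 'run_test_task.py'])  # hypothetical script
proc.wait()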
TimelyPenguin76 setting what appears in the GUI as "SETUP SHELL SCRIPT"
I checked and it now seems to work. Thanks!
AgitatedDove14 Looks good, thanks! I see that I can also find the plot index based on the 'metric' key, so I can write something that chooses the plot by name rather than by ordinal position.
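(A sketch of that selection; it assumes each entry in the plots list is a dict carrying a 'metric' key, as described above.)

# Choose a plot by its metric name rather than by ordinal position.
def find_plot(plots, metric_name):
    return next((p for p in plots if p.get('metric') == metric_name), None)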
AgitatedDove14 I don't understand how that's related. I am working on localhost; why should I have a problem communicating with the jupyter server? It's local. Also, the kernel has no problem communicating with the allegro server, obviously (since results are logged).
but this gives me an idea: I will try to check whether the notebook is considered trusted; perhaps it isn't, and that causes issues?
I understand. Thing is, I already have a bunch of tasks where I logged the tables and did not upload an artifact. If I could get them using the SDK as something I can extract the values from as JSON (as in the web GUI), that would be great. Currently I'm just manually downloading the JSONs one by one as I need them.
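(One way to pull those via the SDK, as a sketch; the events endpoint is real, but the exact response layout, including the 'plot_str' JSON field, is my assumption.)

import json

from clearml.backend_api.session.client import APIClient

client = APIClient()
res = client.events.get_task_plots(task='<task-id>')  # placeholder task id

# Each plot event is assumed to carry the plotly figure (tables included)
# as a JSON string under 'plot_str'.
for plot in res.plots:
    figure = json.loads(plot['plot_str'])
    print(plot.get('metric'), len(figure.get('data', [])))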
btw, I see the same thing when I start the notebook directly, i.e. "jupyter notebook path/to/notebook.ipynb" and when I start the notebook server "jupyter notebook" and then open the notebook from the jupyter web interface.
AgitatedDove14 is there some way I can update the script file manually and retrigger the git discovery process?
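(If it helps, I think the script section can be edited via Task.set_script; the argument names here are my assumption from the SDK docs, so double-check them.)

from clearml import Task

task = Task.get_task(task_id='<task-id>')  # placeholder id

# Assumed API: repoint the task's script info so the agent clones and
# runs the updated entry point on the next execution.
task.set_script(
    repository='https://github.com/user/repo.git',
    branch='main',
    working_dir='.',
    entry_point='train.py',
)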
AgitatedDove14 same thing happens to me when I run via git bash
ok, that's a difference: I did not start it with python -m, as a module. I'll try that
I'm having some issues with my GitHub access; I guess I'll update later/tomorrow when I solve them
TimelyPenguin76 what version of clearml are you using? My task.set_base_docker only takes a single positional command argument. Am I using an old version?
CLEARML-AGENT version 0.17.2
allegroai 3.3.5
create a notebook, add the following lines to its first cell:

from clearml import Task
task = Task.init(
    project_name='proj',
    task_name='notebook',
    task_type=Task.TaskTypes.custom,
    continue_last_task=True,
)
run the cell
I'll try running using git bash; perhaps it would work better, although I use the same conda env when I run scripts from PyCharm or from the Windows cmd
(and as you said, running as a module didn't change anything)
Yeah, I know how to do it manually from the web GUI using the button; that's just not scalable. What I need is the Python SDK code. It doesn't have to be a one-liner.