TimelyPenguin76 setting what appears in the GUI as "SETUP SHELL SCRIPT"
can you give me a snippet? when I tried it, it didn't work, since execute_remotely() terminates the local task, unless you tell it to clone, which is not what I need here.
SuccessfulKoala55 on a different question that came up in the context of this use case, I want to use Task.init(continue_last_task=id) in order to add the inference results to the original task that ran the inference.
I was able to do this for a single task. I locally fetch the task ids, then for the first task I run Task.init with continue_last_task and subsequently I run task.execute_remotely() and the task is then run remotely with the outputs appended to the original training task.
Whe...
SuccessfulKoala55 I ran a training task. I now wish to run inference on some data. My model's init function expects the parsed args Namespace as an argument. In order to load the weights of the saved model I need to instantiate the class for which I need the args variable.
This is what's working for me:
` for task in tasks: subprocess.run(['python', './test_task.py', '--task_id', task.id]) `
where in test_task.py I have the following:
` from argparse import ArgumentParser
from clearml import Task

parser = ArgumentParser()
if Task.running_locally():
    parser.add_argument('--task_id', type=str)
args = parser.parse_args()
task = Task.init(
    continue_last_task=args.task_id
)
task.execute_remotely(queue_name='default') `
and the rest is the inference code, which is only run on the remote, and include...
SuccessfulKoala55 this seems to work, but I would have preferred a continue_last_task version of enqueue, which would handle things for me, instead of introducing another level of hierarchy
AgitatedDove14 Looks good, thanks! I see that I can also find the plot index based on the 'metric' key, so I can write something that would choose the plot by name rather than ordinal position.
I understand. Thing is I already have a bunch of tasks where I logged the tables and did not upload an artifact. If I can get them using the SDK, as something that I can possibly extract the values from as JSON (as in the web GUI) that would be great. Currently I'm just manually downloading the json one by one as I need them.
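Something along these lines might work for pulling those tables programmatically — a sketch assuming `Task.get_reported_plots()` returns entries whose `plot_str` field holds the same Plotly JSON the web GUI exports (the payload layout may differ by SDK version); the `table_from_plot` helper name is hypothetical:

```python
import json

def table_from_plot(plot_entry):
    """Extract the header row and cell columns from one reported-plot
    entry, assuming 'plot_str' holds Plotly table JSON (a sketch --
    verify against what your ClearML version actually returns)."""
    figure = json.loads(plot_entry['plot_str'])
    table = figure['data'][0]           # first trace of the figure
    header = table['header']['values']  # column names
    cells = table['cells']['values']    # list of columns, one list per column
    return header, cells

# hypothetical usage, assuming the SDK call exists in your version:
# task = Task.get_task(task_id='...')
# for entry in task.get_reported_plots():
#     header, cells = table_from_plot(entry)
```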
Yeah, I know how to do it manually from the web GUI using the button, that's just not scalable. What I need is the Python SDK code. It doesn't have to be a single liner.
CLEARML-AGENT version 0.17.2
allegroai 3.3.5
It's working as expected. Thanks!
TimelyPenguin76 ok, I'll try it out, thanks.
` args_string = ['--{} {}'.format(k, v) for k, v in task.get_parameters_as_dict().get('Args', {}).items()]
args_strings = [a.replace("'", '').replace("[", '').replace("]", '').replace(",", '') for a in reduce(lambda l, s: l + s.split(' '), args_string, []) if a] `
and then just ` parser.parse_args(args_strings) `
It's not even very clean, I could have replaced the multiple replace calls with a regex etc. just a quick hack to work around it.
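The regex version could look like this — a sketch assuming the parameters come back as an `{'arg': 'stringified value'}` dict, the way `get_parameters_as_dict().get('Args', {})` returns them; `params_to_argv` is a hypothetical helper name:

```python
import re

def params_to_argv(params):
    """Flatten an {'arg': 'stringified value'} dict into argv-style
    tokens, using one regex pass instead of chained .replace() calls
    to strip the quotes, brackets, and commas left by str() formatting."""
    argv = []
    for key, value in params.items():
        argv.append('--{}'.format(key))
        cleaned = re.sub(r"['\[\],]", '', str(value))
        argv.extend(tok for tok in cleaned.split(' ') if tok)
    return argv
```

Then `parser.parse_args(params_to_argv(params))` replaces the reduce-and-replace chain.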
Actually, I found out that if I use exe...
TimelyPenguin76 what version of clearml are you using? my task.set_base_docker only has a single positional command. am I using an old version?
SuccessfulKoala55 I managed to find a workaround, by instantiating a new parser and feeding it the string values. if there'd be a cleaner solution I'll be happy to hear about it.
the only solution I see is to have another script that would run the test task using (for example) popen in a separate process.
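A minimal sketch of that launcher idea — `run_task_script` is a hypothetical helper; the point is just that each script runs in a fresh interpreter, so every `Task.init(continue_last_task=...)` call lives in its own process:

```python
import subprocess
import sys

def run_task_script(script_path, task_id):
    """Run one task script in a separate Python process, passing the
    task id on the command line; returns the child's exit code."""
    result = subprocess.run(
        [sys.executable, script_path, '--task_id', str(task_id)],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        universal_newlines=True,
    )
    return result.returncode

# hypothetical usage:
# for task in tasks:
#     run_task_script('./test_task.py', task.id)
```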
hi TimelyPenguin76 I tried doing this, but it didn't work. When enqueueing the task the contents of the textbox were emptied and the script was not run. I did make sure that it was saved before clicking on enqueue (by changing to another task and back and making sure the script appeared).
I'm on windows, this is a python 3.6 conda venv, I think you can see the name of the env in the logs...
(and as you said, running as a module didn't change anything)
AgitatedDove14 - nailed it! it was due to my notebook being password-protected rather than token-protected. I shared that with Moshik. I wonder whether that could be worked around.
AgitatedDove14 same thing happens to me when I run via git bash
AgitatedDove14 I don't understand how that's related. I am working on localhost, why should I have a problem communicating with the jupyter server? It's local. Also, the kernel has no problem communicating with the allegro server, obviously (since results are logged).
but this gives me an idea, I will try to check if the notebook is considered as trusted, perhaps it isn't and that causes issues?
AgitatedDove14 is there some way I can update the script file manually and retrigger the git discovery process?
ok got it. Moshik just sent me a snippet that identified that this is indeed the problem. I will try to see if I can setup an exception. that may prove tricky since that's the IT control
btw, I see the same thing when I start the notebook directly, i.e. "jupyter notebook path/to/notebook.ipynb" and when I start the notebook server "jupyter notebook" and then open the notebook from the jupyter web interface.
I can also send you a link to the task this created on our hosted allegro web server, to look at the logs, if that helps