
SuccessfulKoala55 on a different question that came up in the context of this use case, I want to use Task.init(continue_last_task=id) in order to add the inference results to the original task that ran the inference.
I was able to do this for a single task. I locally fetch the task ids, then for the first task I run Task.init with continue_last_task and subsequently I run task.execute_remotely() and the task is then run remotely with the outputs appended to the original training task.
Whe...
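The pattern described above (re-opening a finished task so new output is appended to it, then handing the run off to an agent) can be sketched roughly like this. The function name, project, and queue are placeholders, and the import is deferred into the function so the sketch reads standalone; a reachable ClearML server is assumed:

```python
def append_inference_to_task(training_task_id, queue_name='default'):
    """Sketch: re-open an existing task so new outputs are appended to it,
    then push execution to a remote agent. training_task_id and queue_name
    are placeholders; assumes a configured ClearML server."""
    from clearml import Task  # deferred import so the sketch reads standalone

    # continue_last_task with a task id reuses that task instead of creating a new one
    task = Task.init(continue_last_task=training_task_id)
    # this terminates the local run and re-executes the script on the agent
    task.execute_remotely(queue_name=queue_name)
    return task
```

Note that execute_remotely() (without clone=True) ends the local process, which is why only the first task in a loop gets dispatched this way.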
CLEARML-AGENT version 0.17.2
allegroai 3.3.5
It's working as expected. Thanks!
TimelyPenguin76 ok, I'll try it out, thanks.
TimelyPenguin76 setting what appears in the GUI as "SETUP SHELL SCRIPT"
SuccessfulKoala55 I managed to find a workaround by instantiating a new parser and feeding it the string values. If there's a cleaner solution I'd be happy to hear about it.
TimelyPenguin76 I've been using this for a bit now, I would like to set it from code, just like I set docker image, for example. Can you point me in the right direction? I couldn't find anything in the docs
TimelyPenguin76 what version of clearml are you using? my task.set_base_docker only has a single positional command. am I using an old version?
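For context on the version question above: newer clearml releases extend set_base_docker with keyword arguments, including one for the setup shell script, while older releases only accepted a single positional docker command. A rough sketch, assuming a recent clearml version (verify the signature against your installed release):

```python
def configure_docker(task, image, setup_lines):
    """Sketch: set the container image and the "SETUP SHELL SCRIPT" from code.
    The keyword arguments below exist only in newer clearml releases, not in
    the old single-positional-argument set_base_docker; check your version."""
    task.set_base_docker(
        docker_image=image,
        docker_setup_bash_script=setup_lines,  # list of shell lines run at container startup
    )
```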
the only solution I see is to have another script that would run the test task using (for example) popen in a separate process.
hi TimelyPenguin76 I tried doing this, but it didn't work. When enqueueing the task the contents of the textbox were emptied and the script was not run. I did make sure that it was saved before clicking on enqueue (by changing to another task and back and making sure the script appeared).
` args_string = ['--{} {}'.format(k, v) for k, v in task.get_parameters_as_dict().get('Args', {}).items()]
args_strings = [a.replace("'", '').replace("[", '').replace("]", '').replace(",", '') for a in reduce(lambda l, s: l + s.split(' '), args_string, []) if a] `
and then just parser.parse_args(args_strings)
It's not even very clean, I could have replaced the multiple replace calls with a regex etc. just a quick hack to work around it.
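As mentioned, the chained replace calls could be collapsed into a single regex pass. A self-contained sketch of that variant (the parser and parameter dict below are made up for illustration):

```python
import re
from argparse import ArgumentParser

def rebuild_args(params):
    """Rebuild an argv-style token list from a {name: value} parameter dict,
    stripping the quote/bracket/comma characters that list-valued parameters
    pick up, in one regex substitution instead of chained replace calls."""
    tokens = []
    for k, v in params.items():
        tokens.append('--{}'.format(k))
        tokens.extend(re.sub(r"['\[\],]", '', str(v)).split())
    return tokens

# illustrative only: a hypothetical parameter dict as fetched from a task
parser = ArgumentParser()
parser.add_argument('--lr', type=float)
parser.add_argument('--layers', type=int, nargs='+')
args = parser.parse_args(rebuild_args({'lr': '0.01', 'layers': "[64, 32]"}))
```

Still a hack, but it keeps the cleanup in one place.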
Actually, I found out that if I use exe...
same same. I ran inside the clearml git repo and got the same warnings
This is a great feature for debugging setup. kinda feels like a superpower 🙂 I already used it to work around another issue with my docker setup. now I'll only need to update the docker file after I iron everything out and I will already have the startup shell script as documentation for what should be fixed. awesome.
I can also send you a link to the task this created on our hosted allegro web server, to look at the logs, if that helps
AgitatedDove14 is there some way I can update the script file manually and retrigger the git discovery process?
create a notebook, add the following lines to its first cell:
` from clearml import Task
task = Task.init(project_name='proj', task_name='notebook', task_type=Task.TaskTypes.custom, continue_last_task=True) `
run the cell
I'm having some issues with my github access, I guess I'll update later/tomorrow when I solve them
can you give me a snippet? when I tried it, it didn't work, since execute_remotely terminates the local task, unless you tell it to clone, which is not what I need here.
SuccessfulKoala55 I ran a training task. I now wish to run inference on some data. My model's init function expects the parsed args Namespace as an argument. In order to load the weights of the saved model I need to instantiate the class for which I need the args variable.
I checked and it now seems to work. Thanks!
Yeah, I know how to do it manually from the web GUI using the button, that's just not scalable. What I need is the Python SDK code. It doesn't have to be a single liner.
This is what's working for me:
` for task in tasks:
    subprocess.run(['python', './test_task.py', '--task_id', task.id]) `
where in test_task.py I have the following:
` parser = ArgumentParser()
if Task.running_locally():
    parser.add_argument('--task_id', type=str)
args = parser.parse_args()
task = Task.init(continue_last_task=args.task_id)
task.execute_remotely(queue_name='default') `
and the rest is the inference code, which is only run on the remote, and include...
btw, I see the same thing when I start the notebook directly, i.e. "jupyter notebook path/to/notebook.ipynb" and when I start the notebook server "jupyter notebook" and then open the notebook from the jupyter web interface.
AgitatedDove14 same thing happens to me when I run via git bash
I'll try running using gitbash, perhaps it would work better, although I use the same conda env when I run scripts from pycharm, or from the windows cmd
I'm on windows, this is a python 3.6 conda venv, I think you can see the name of the env in the logs...