
AgitatedDove14 Looks good, thanks! I see that I can also find the plot index based on the 'metric' key, so I can write something that chooses the plot by name rather than by ordinal position.
I understand. The thing is, I already have a bunch of tasks where I logged the tables and did not upload an artifact. If I can get them using the SDK as something I can extract the values from as JSON (as in the web GUI), that would be great. Currently I'm downloading the JSON files manually, one by one, as I need them.
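For context, this is roughly what I'm hoping to do; a minimal sketch assuming task.get_reported_plots() returns entries with the 'metric' key mentioned above and the plot JSON under something like 'plot_str' (the exact key names, the task id, and the 'results_table' plot name are placeholders, not verified):
```python
import json
from clearml import Task

task = Task.get_task(task_id='<existing_task_id>')  # placeholder id

# assumption: each entry carries the plot name under 'metric' and the same
# plotly JSON the web GUI lets you download under 'plot_str'
plots = task.get_reported_plots()
table_plot = next(p for p in plots if p.get('metric') == 'results_table')  # placeholder name
table = json.loads(table_plot['plot_str'])
print(table['data'][0]['cells']['values'])  # table values, if it is a plotly table
```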
TimelyPenguin76 setting what appears in the GUI as "SETUP SHELL SCRIPT"
TimelyPenguin76 ok, I'll try it out, thanks.
TimelyPenguin76 I've been using this for a bit now, and I would like to set it from code, just like I set the docker image, for example. Can you point me in the right direction? I couldn't find anything in the docs.
hi TimelyPenguin76 I tried doing this, but it didn't work. When enqueueing the task the contents of the textbox were emptied and the script was not run. I did make sure that it was saved before clicking on enqueue (by changing to another task and back and making sure the script appeared).
It's working as expected. Thanks!
TimelyPenguin76 what version of clearml are you using? My task.set_base_docker only accepts a single positional argument (the docker command). Am I using an old version?
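Just to show the kind of call I'm after; a minimal sketch assuming a newer clearml where set_base_docker accepts keyword arguments, including the "SETUP SHELL SCRIPT" contents (the keyword names are my assumption, not verified against the docs):
```python
from clearml import Task

task = Task.init(project_name='proj', task_name='train')

# assumption: newer clearml versions accept the container image, extra docker
# arguments, and the setup shell script lines as keyword arguments here
task.set_base_docker(
    docker_image='nvidia/cuda:11.1-runtime-ubuntu20.04',
    docker_arguments='--ipc=host',
    docker_setup_bash_script=['apt-get update', 'apt-get install -y libgl1'],
)
```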
SuccessfulKoala55 I ran a training task. I now wish to run inference on some data. My model's init function expects the parsed args Namespace as an argument. In order to load the weights of the saved model I need to instantiate the class for which I need the args variable.
SuccessfulKoala55 I managed to find a workaround by instantiating a new parser and feeding it the string values. If there's a cleaner solution I'd be happy to hear about it.
```python
from functools import reduce

# build '--key value' strings from the stored parameters, split them on spaces,
# and strip the quote/bracket/comma characters left over from stringified lists
args_string = ['--{} {}'.format(k, v) for k, v in task.get_parameters_as_dict().get('Args', {}).items()]
args_strings = [a.replace("'", '').replace("[", '').replace("]", '').replace(",", '')
                for a in reduce(lambda l, s: l + s.split(' '), args_string, []) if a]
```
and then just parser.parse_args(args_strings)
It's not even very clean; I could have replaced the multiple replace calls with a regex, etc. It's just a quick hack to work around it.
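For the record, a slightly tidier variant of the same hack (still just my workaround, not an official API); it reuses the task and parser from above and builds the token list directly instead of joining and re-splitting:
```python
# same idea, just skipping the join-then-split round trip; values that were
# stored as stringified lists still need the quote/bracket/comma cleanup
args_strings = []
for k, v in task.get_parameters_as_dict().get('Args', {}).items():
    args_strings.append('--{}'.format(k))
    args_strings.extend(
        tok for tok in str(v).translate(str.maketrans('', '', "'[],")).split(' ') if tok
    )

args = parser.parse_args(args_strings)
```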
Actually, I found out that if I use exe...
the only solution I see is to have another script that would run the test task using (for example) popen in a separate process.
can you give me a snippet? When I tried it, it didn't work, since execute_remotely terminates the local task, unless you tell it to clone, which is not what I need here.
CLEARML-AGENT version 0.17.2
allegroai 3.3.5
SuccessfulKoala55 this seems to work, but I would have preferred a continue_last_task version of enqueue, which would handle things for me, instead of introducing another level of hierarchy
I checked and it now seems to work. Thanks!
Yeah, I know how to do it manually from the web GUI using the button, that's just not scalable. What I need is the Python SDK code. It doesn't have to be a single liner.
This is a great feature for debugging setup. Kinda feels like a superpower 🙂 I already used it to work around another issue with my docker setup. Now I'll only need to update the Dockerfile after I iron everything out, and I'll already have the setup shell script as documentation for what needs to be fixed. Awesome.
This is what's working for me:
```python
import subprocess

for task in tasks:
    subprocess.run(['python', './test_task.py', '--task_id', task.id])
```
where in test_task.py I have the following:
```python
from argparse import ArgumentParser
from clearml import Task

parser = ArgumentParser()
if Task.running_locally():
    # only the local run receives the id of the training task to continue
    parser.add_argument('--task_id', type=str)
args = parser.parse_args()
task = Task.init(
    continue_last_task=getattr(args, 'task_id', False),  # remotely the agent already owns the task
)
task.execute_remotely(queue_name='default')
```
and the rest is the inference code, which is only run on the remote, and include...
SuccessfulKoala55 on a different question that came up in the context of this use case, I want to use Task.init(continue_last_task=id) in order to add the inference results to the original task that ran the inference.
I was able to do this for a single task. I locally fetch the task ids, then for the first task I run Task.init with continue_last_task and subsequently I run task.execute_remotely() and the task is then run remotely with the outputs appended to the original training task.
Whe...
AgitatedDove14 is there some way I can update the script file manually and retrigger the git discovery process?
AgitatedDove14 same thing happens to me when I run via git bash
btw, I see the same thing when I start the notebook directly, i.e. "jupyter notebook path/to/notebook.ipynb" and when I start the notebook server "jupyter notebook" and then open the notebook from the jupyter web interface.
(and as you said, running as a module didn't change anything)
AgitatedDove14 I don't understand how that's related. I am working on localhost, why should I have a problem communicating with the jupyter server? It's local. Also, the kernel has no problem communicating with the allegro server, obviously (since results are logged).
but this gives me an idea, I will try to check if the notebook is considered as trusted, perhaps it isn't and that causes issues?
create a notebook and add the following lines to its first cell:
```python
from clearml import Task

task = Task.init(
    project_name='proj',
    task_name='notebook',
    task_type=Task.TaskTypes.custom,
    continue_last_task=True,
)
```
run the cell