
I'm having some issues with my GitHub access, I guess I'll update later/tomorrow when I solve them
In the meantime, I installed a new clean virtualenv (no conda) and got the exact same behavior. I'll try running as a module and we'll see.
AgitatedDove14 - nailed it! It was due to my notebook being password-protected rather than token-protected. I shared that with Moshik. I wonder whether that could be worked around.
AgitatedDove14 Looks good, thanks! I see that I can also find the plot index based on the 'metric' key, so I can write something that chooses the plot by name rather than by ordinal position.
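Something like this is what I have in mind (just a sketch; `plots` here stands for whatever list of plot dicts the server returns, each carrying the 'metric' key mentioned above):
`def find_plot_by_metric(plots, metric_name):
    # pick a plot by its 'metric' name instead of relying on its ordinal position
    return next((p for p in plots if p.get('metric') == metric_name), None)`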
SuccessfulKoala55 on a different question that came up in the context of this use case: I want to use Task.init(continue_last_task=id) in order to add the inference results to the original training task.
I was able to do this for a single task: I locally fetch the task ids, then for the first task I run Task.init with continue_last_task, and subsequently I run task.execute_remotely(); the task is then run remotely with the outputs appended to the original training task.
Whe...
can you give me a snippet? When I tried, it didn't work, since execute_remotely terminates the local task, unless you tell it to clone, which is not what I need here.
(and as you said, running as a module didn't change anything)
Same here. I ran inside the clearml git repo and got the same warnings.
I'm on Windows, and this is a Python 3.6 conda venv; I think you can see the name of the env in the logs...
AgitatedDove14 is there some way I can update the script file manually and retrigger the git discovery process?
SuccessfulKoala55 I ran a training task, and I now wish to run inference on some data. My model's __init__ function expects the parsed args Namespace as an argument, so in order to load the weights of the saved model I need to instantiate the class, for which I need the args variable.
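To make it concrete, the loading code looks roughly like this (a sketch; MyModel, args and the checkpoint path are placeholders, and I'm assuming PyTorch):
`import torch

# MyModel is a placeholder for my model class; its __init__ takes the parsed
# argparse Namespace, which is why I need the original args at inference time
model = MyModel(args)
model.load_state_dict(torch.load('path/to/checkpoint.pt', map_location='cpu'))
model.eval()`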
AgitatedDove14 I don't understand how that's related. I am working on localhost, so why would I have a problem communicating with the Jupyter server? It's local. Also, the kernel has no problem communicating with the Allegro server, obviously (since results are logged).
But this gives me an idea: I will check whether the notebook is considered trusted; perhaps it isn't, and that causes issues?
This is what's working for me:
`for task in tasks:
    subprocess.run(['python', './test_task.py', '--task_id', task.id])`
where in test_task.py I have the following:
` from argparse import ArgumentParser
from clearml import Task

parser = ArgumentParser()
if Task.running_locally():
    parser.add_argument('--task_id', type=str)
    args = parser.parse_args()
    task = Task.init(
        continue_last_task=args.task_id
    )
    task.execute_remotely(queue_name='default') `and the rest is the inference code, which is only run on the remote, and include...
` from functools import reduce

args_string = ['--{} {}'.format(k, v) for k, v in task.get_parameters_as_dict().get('Args', {}).items()]
args_strings = [a.replace("'", '').replace("[", '').replace("]", '').replace(",", '') for a in reduce(lambda l, s: l + s.split(' '), args_string, []) if a] `
and then just parser.parse_args(args_strings)
It's not even very clean; I could have replaced the multiple replace calls with a regex, etc. It's just a quick hack to work around it.
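For completeness, the regex version I was referring to would look roughly like this (same assumptions as above about the 'Args' section returned by task.get_parameters_as_dict()):
`import re
from functools import reduce

# rebuild a CLI-style token list from the task's 'Args' section
args_string = ['--{} {}'.format(k, v) for k, v in task.get_parameters_as_dict().get('Args', {}).items()]
tokens = reduce(lambda l, s: l + s.split(' '), args_string, [])
# strip quotes, brackets and commas in one pass instead of chained replace() calls
args_strings = [re.sub(r"['\[\],]", '', a) for a in tokens if a]
args = parser.parse_args(args_strings)`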
Actually, I found out that if I use exe...
I can also send you a link to the task this created on our hosted allegro web server, to look at the logs, if that helps
SuccessfulKoala55 this seems to work, but I would have preferred a continue_last_task version of enqueue, which would handle things for me, instead of introducing another level of hierarchy
Yeah, I know how to do it manually from the web GUI using the button, but that's just not scalable. What I need is the Python SDK code. It doesn't have to be a one-liner.
CLEARML-AGENT version 0.17.2
allegroai 3.3.5
the only solution I see is to have another script that would run the test task using (for example) popen in a separate process.
ok got it. Moshik just sent me a snippet that confirmed this is indeed the problem. I will try to see if I can set up an exception; that may prove tricky since it's under IT control.
TimelyPenguin76 setting what appears in the GUI as "SETUP SHELL SCRIPT"
TimelyPenguin76 ok, I'll try it out, thanks.
TimelyPenguin76 I've been using this for a bit now, and I would like to set it from code, just like I set the docker image, for example. Can you point me in the right direction? I couldn't find anything in the docs.
btw, I see the same thing when I start the notebook directly, i.e. `jupyter notebook path/to/notebook.ipynb`, and when I start the notebook server with `jupyter notebook` and then open the notebook from the Jupyter web interface.
I checked and it now seems to work. Thanks!
I understand. The thing is, I already have a bunch of tasks where I logged the tables and did not upload an artifact. If I can get them using the SDK, as something I can extract the values from as JSON (as in the web GUI), that would be great. Currently I'm manually downloading the JSON files one by one as I need them.
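In case it's useful, this is the direction I'm trying for pulling the table JSON programmatically (just a sketch; I'm assuming the events.get_task_plots endpoint is reachable through APIClient and that each plot event carries the Plotly JSON under a 'plot_str' key, so field names may need adjusting):
`import json
from clearml.backend_api.session.client import APIClient

client = APIClient()
# task_id: the id of a task where the tables were logged as plots
res = client.events.get_task_plots(task=task_id)
for event in res.plots:  # response field name assumed
    table_json = json.loads(event.get('plot_str', '{}'))  # 'plot_str' assumed
    # for a table logged via report_table, the cell values should sit under
    # table_json['data'][0]['cells']['values']
    print(event.get('metric'), list(table_json.keys()))`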
ok, that's a difference: I did not start it with python -m, as a module. I'll try that.
TimelyPenguin76 what version of clearml are you using? My task.set_base_docker only accepts a single positional argument (the docker command). Am I using an old version?
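For reference, this is what I'm hoping exists (an assumption on my side; newer clearml versions seem to let set_base_docker take a setup script as well, so the argument name below may differ or not exist in older versions):
`from clearml import Task

task = Task.init(project_name='examples', task_name='setup script test')
# docker_setup_bash_script is my assumption for the newer signature; my
# installed version's set_base_docker only accepts the docker command itself
task.set_base_docker(
    docker_cmd='nvidia/cuda:10.1-runtime-ubuntu18.04',
    docker_setup_bash_script=['apt-get update', 'apt-get install -y git'],
)`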