I think the fire + hydra combination is not an issue anymore. We're going to separate the two out; I tried it last night, and argument modification and passing worked fine with hydra only.
In any case, thanks for your help, Martin!
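For reference, the hydra-only version is shaped roughly like this (a minimal sketch; the config path/name and the model_name field are hypothetical):
```python
# hydra-only entry point; a conf/config.yaml with a model_name field is assumed
import hydra
from omegaconf import DictConfig

@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    # overrides like `python app.py model_name=all-test` are applied
    # to cfg before this function runs
    print(cfg.model_name)

if __name__ == "__main__":
    main()
```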
OS - Ubuntu 20.04
Conda - 4.10.3
The agent is running in a conda env with python==3.9.7
Is this the info you were looking for?
Could it be hydra was installed on your laptop via conda and not pip?
Yes, while we do use a conda env, our packages are installed using pip. That being said, I have hydra-core==1.1.1 in my local dependencies as well.
I'm queuing the task to my laptop by cloning it in the web console. I have my agent set up to use conda as the primary package manager.
(the one created when you executed the code on your laptop)
I haven't executed the task myself at all. I just cloned it from the examples that are available in the SaaS console upon account creation, specifically the hyper-parameters example under the ClearML Examples project.
The Agent pulls the Task, reproduces it, and then executes the extra_docker_shell_script that was put in the configuration file.
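For reference, the hook lives under the agent section of clearml.conf, something like this (a sketch; the bindfs command is the example from the docs):
```
agent {
    # each entry is a shell command executed inside the docker container
    extra_docker_shell_script: ["apt-get install -y bindfs"]
}
```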
Does this imply the former? I.e., the env is fully set up, then the script is run, then the experiment is started by calling the executable?
Yes, it seems like the command line args are recorded now, but the connect call with my parameter dictionary now fails with this exception:
```
Error executing job with overrides: ['model_name=all-test', ...]
Traceback (most recent call last):
  File "/home/binoydalal/miniconda3/envs/DS974/lib/python3.9/site-packages/clearml/binding/hydra_bind.py", line 146, in _patched_task_function
    return task_function(a_config, *a_args, **a_kwargs)
  ....
  File "/home/binoydalal/miniconda3/envs/DS974/li...
```
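For context, the task function is shaped roughly like this (a minimal sketch; the project/task names and the params dict are hypothetical):
```python
import hydra
from omegaconf import DictConfig
from clearml import Task

@hydra.main(config_path="conf", config_name="config")
def main(cfg: DictConfig) -> None:
    task = Task.init(project_name="examples", task_name="hydra-run")
    params = {"model_name": cfg.model_name}
    # this is the connect() call that now raises inside clearml's
    # _patched_task_function wrapper
    task.connect(params)

if __name__ == "__main__":
    main()
```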
Yes, but is it run after the requirements are installed and the code is mounted? The docs say: "If we look at the console output in the web UI, the third entry should start with Executing: ['docker', 'run', '-t', '--gpus...'], and towards the end of the entry, where the downloaded packages are mentioned, we can see the additional shell-script apt-get install -y bindfs."
That suggests it is, but I'm not sure what the 1st or 2nd entries are, so I want to confirm.
Yes, I believe it's hydra too, so learning how CML determines process status would be really helpful
Then we can figure out what can be changed so CML correctly registers process failures with Hydra
Sorry for the delay, CostlyOstrich36. Here are the relevant lines from the console:
```
...
  File "/home/binoyloaner/miniconda3/envs/DS974/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/binoyloaner/miniconda3/envs/DS974/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 103, in forward
    return F.linear(input, self.weight, self.bias)
  File "/home/binoyloaner/miniconda3/envs/DS974/lib/python3....
```
clearml's callback is never called
Yeah, I suspect that's what might be happening, which is why I was inquiring as to how and where exactly in the CML code that happens. Once I know, I can place breakpoints in the critical regions and debug to see what's going on.
AnxiousSeal95 I just checked, and Hydra returns an exit code of 1 to mark the failure, as does another toy program which just throws an exception. So my guess is CML is not using the exit code to determine that the task failed. Are you able to share how CML determines when a task has failed? If you could point me to the relevant code files, I'm happy to dive in and figure it out.
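For reference, the toy check was essentially this (a sketch; it just confirms that an unhandled exception makes the interpreter exit with code 1):
```python
import subprocess
import sys

# a stand-in "task" that just throws; an unhandled exception makes
# python exit with a non-zero return code
proc = subprocess.run([sys.executable, "-c", "raise RuntimeError('boom')"])
print(proc.returncode)  # prints 1
```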
I didn't check with the toy task; I thought the error codes might be an issue here, so I was just looking for the difference. I'll check for that too.
But for my hydra task, it's always marked completed, never failed
No, we currently don't handle it gracefully; it just crashes. But we do use hydra, which sort of arrests that exception first. I'm wondering if it's Hydra causing this issue. I'll look into it later today.
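Rough sketch of what I suspect is happening (this is not hydra's actual code, just the suspected shape of the interaction):
```python
import sys

def hydra_like_wrapper(task_function):
    # hydra wraps the task function; if the body raises, the wrapper
    # reports "Error executing job with overrides: [...]" and exits
    # itself, so the original exception never reaches an
    # excepthook-based monitor like clearml's
    def wrapped(cfg):
        try:
            return task_function(cfg)
        except Exception:
            print(f"Error executing job with overrides: {sys.argv[1:]}")
            sys.exit(1)  # exit code 1, but no uncaught traceback at top level
    return wrapped
```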
Thanks for confirming, AgitatedDove14. Do you have an approximate timeline for when the RC might be out? I'm asking because I'm going to write a workaround for it tomorrow, and I'm wondering if I should just wait for the RC instead.
Could it be the script itself is using vanilla sys.argv and not argparse?
Thanks for bringing this up. Our code uses fire to parse command line args and then hands off to hydra, so yes, it does use sys.argv initially. Is this a possible issue?
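For reference, the entry point is shaped roughly like this (a minimal sketch; the function and flag names are hypothetical):
```python
import sys
import fire

def entry_point(task: str = "train"):
    # fire parses sys.argv directly (no argparse involved), then we
    # hand the remaining config handling off to the hydra app
    print(f"task={task}, raw argv seen by fire: {sys.argv}")

if __name__ == "__main__":
    fire.Fire(entry_point)
```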
Are you running it with an agent (that hydra triggers)?
You mean clearml-agent? Then no, I've been running the process manually up until now.
This is great! Thanks!
If I have access to the logs, python env and git commits, is there an API to log those to the experiments too?
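Something like this is what I have in mind (a sketch; I'm assuming Task.create accepts repo/commit/packages and that upload_artifact can take a log file — the names and paths are hypothetical):
```python
from clearml import Task

# register an experiment with a known git commit and package list
task = Task.create(
    project_name="my-project",
    task_name="offline-run",
    repo="https://github.com/me/my-repo.git",
    commit="abc1234",
    packages=["torch==1.10.0", "hydra-core==1.1.1"],
)

# attach the captured log file as an artifact
task.upload_artifact(name="run_log", artifact_object="logs/run.log")
```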
Sorry if I sounded curt. Didn't mean to. To clarify, I've created my account using Google SSO on http://app.clear.ml, and am currently on the Free tier. I am pushing all my data onto CML's servers. This error happens when I try to query those servers for the metrics and variants for a particular task of mine.
the CML free SaaS offering. It'll probably hit https://app.clear.ml/api if I'm not wrong
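For reference, the query is roughly this (a sketch; the task id is hypothetical, and I believe get_reported_scalars() is the SDK call that hits that endpoint):
```python
from clearml import Task

# fetch the task from the hosted server and pull its scalar metrics/variants
task = Task.get_task(task_id="0123456789abcdef")
scalars = task.get_reported_scalars()
for metric, variants in scalars.items():
    print(metric, "->", list(variants.keys()))
```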
I'm looking at the docs on docker mode and running the script. Is this script run after the venv and code dir are set up, or immediately after the container starts, before the environment for running the experiment is set up?
No problem. Thanks for the information, Erez!
agent default python is set to 3.9.7
Got it. Thanks for clearing that up!
I think there's some confusion here. I'm not running the server. My metrics are getting logged to the CML cloud.