
clearml's callback is never called
yeah I suspect that's what might be happening, which is why I was inquiring as to how and where exactly in the CML code that happens. Once I know, I can place breakpoints in the critical regions and debug to see what's going on.
Are you running it with an agent (that hydra triggers)?
you mean clearml-agent? then no, I've been running the process manually up until now
I'm looking at the docs on docker mode and running the script. Is this script run after the venv and code dir are set up, or immediately after the container starts but before the environment for running the experiment is set up?
Yes, but is it run after the requirements are installed and the code is mounted? The docs say: "If we look at the console output in the web UI, the third entry should start with `Executing: ['docker', 'run', '-t', '--gpus...'`, and towards the end of the entry, where the downloaded packages are mentioned, we can see the additional shell-script `apt-get install -y bindfs`."
which seems like that would be the case, but I'm not sure what the 1st or 2nd entries are, so I want to confirm.
Also tagged you SuccessfulKoala55
Thanks for the quick support!
I think there's some confusion here. I'm not running the server. My metrics are getting logged to the CML cloud.
We have run experiments in the past (before I put ClearML into my code) which logged scalars, plots, etc. to local TensorBoard. Is there any way to import this data to ClearML cloud for tracking, visualization and comparison?
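Something along these lines is roughly what I had in mind — just a sketch, assuming TensorBoard's EventAccumulator and ClearML's report_scalar, with a made-up event-file path:

```python
# Rough sketch: replay scalars from an existing TensorBoard event dir into ClearML.
# Assumes tensorboard and clearml are installed; "old_runs/exp1" is a made-up path.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator
from clearml import Task

task = Task.init(project_name="imported-runs", task_name="exp1-from-tensorboard")
logger = task.get_logger()

ea = EventAccumulator("old_runs/exp1")
ea.Reload()  # load all event files found in the directory

for tag in ea.Tags().get("scalars", []):
    for event in ea.Scalars(tag):
        # report each recorded point under its original tag and step
        logger.report_scalar(title=tag, series=tag, value=event.value, iteration=event.step)

task.close()
```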
This is great! Thanks for the example Martin, much appreciated!
so there's no way to do that when running in pip or conda mode?
Ok. I think I misunderstood what you said. I thought you meant you'd already opened a bug ticket. If that's not the case, do you want me to create one on GitHub?
AnxiousSeal95 I just checked and Hydra returns an exit code of 1 to mark the failure, as does another toy program which just throws an exception. So my guess is CML is not using the exit code as a means to determine when the task failed. Are you able to share how CML determines when a task has failed? If you could point me to the relevant code files, I'm happy to dive in and figure it out.
Yep, I think I see it https://github.com/allegroai/clearml/commit/81de18dbce08229834d9bb0676446a151046e6a7
Yes, I believe it's Hydra too, so just learning how CML determines process status will be really helpful.
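To make sure I'm poking at the right mechanism, here's the kind of pattern I assume is involved — just a toy illustration of hooking uncaught exceptions to mark a task failed, not ClearML's actual code:

```python
# Toy illustration only, NOT ClearML's implementation.
# Idea: install sys.excepthook so an uncaught exception marks the task failed
# before the process exits. If a framework (e.g. Hydra) catches the exception
# itself and exits "cleanly", the hook never fires and the task looks completed.
import atexit
import sys

class FakeTask:
    status = "running"

    def mark_failed(self, reason):
        self.status = "failed"
        print(f"task failed: {reason}")

    def mark_completed(self):
        if self.status == "running":  # only if no failure was recorded
            self.status = "completed"
            print("task completed")

task = FakeTask()
_original_hook = sys.excepthook

def _exception_hook(exc_type, exc_value, exc_traceback):
    task.mark_failed(f"{exc_type.__name__}: {exc_value}")
    _original_hook(exc_type, exc_value, exc_traceback)

sys.excepthook = _exception_hook
atexit.register(task.mark_completed)  # runs at interpreter shutdown either way

raise RuntimeError("boom")  # uncaught -> hook fires -> task marked failed, not completed
```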
The Agent pulls the Task, and then reproduces it, and now it will execute the extra_docker_shell_script that was put in the configuration file.
Does this imply the former? The env is fully set up, then the script is run, then the experiment is started by calling the executable?
the CML free SaaS offering. It'll probably hit https://app.clear.ml/api if I'm not wrong
I'm queuing the task to my laptop by cloning on the web console. I have my agent set up to use conda as the primary package manager.
agent default python is set to 3.9.7
I thought the agent created a new conda env and installed all the packages recorded during the initial task run from scratch (except for caching with venv). Is that not the case?
No, we currently don't handle it gracefully. It just crashes. But we do use Hydra, which sort of arrests that exception first. I'm wondering if it's Hydra causing this issue. I'll look into it later today.
Then we can figure out what can be changed so CML correctly registers process failures with Hydra
the state of the Task changes immediately when it crashes?
I think so. It goes from running to completed immediately on crash
Could it be hydra was installed on your laptop via conda not pip?
Yes, while we do use a conda env, our packages are installed using pip. That being said, I have hydra-core==1.1.1 in my local dependencies as well.
Will try this. Thanks for promptly looking into this. Much appreciated!
Yes, it seems like the command line args are recorded now, but the `connect` call with my parameter dictionary now fails with this exception:
```
Error executing job with overrides: ['model_name=all-test', ...]
Traceback (most recent call last):
  File "/home/binoydalal/miniconda3/envs/DS974/lib/python3.9/site-packages/clearml/binding/hydra_bind.py", line 146, in _patched_task_function
    return task_function(a_config, *a_args, **a_kwargs)
  ....
  File "/home/binoydalal/miniconda3/envs/DS974/li...
```
I think the fire + hydra combination is not an issue anymore. We're going to separate the two out, and when I tried it last night, argument modification and passing worked fine with Hydra only.
In any case, thanks for your help Martin!
Thanks! Do you have a public bug tracker? If yes, are you able to share the issue number so I can follow it?
I need to put it into my code, so I'll be eagerly waiting for the fix