I tried mounting a config file (in the structure of the one on GitHub, but with just the relevant S3 section) into the agent-services container at /root/clearml.conf, and after restarting the container it seems to have had an impact. Thank you!
When I inspect the console of the task I'm trying to run, I see there's a call to cp /tmp/clearml.conf ~/default_clearml.conf in the docker command and that the volume /tmp/clearml.conf is picked up from the host at some custom-named file ...
Oh yes, I see. Yeah, no ML here actually (I'm building the testing infra for endpoints), but it's certainly an issue when there is.
How does clearml-session avoid it? I guess only if autoscaling is used (one worker per machine)?
Dug deeper. If I'm to make a guess: /root/clearml.conf -> used on startup of agent-services as a template of sorts to create .clearml_agent.<id>.cfg on demand -> this task-specific file is then mounted to /tmp/clearml_default.conf in a new container (docker-in-docker, because of the socket mounted into agent-services) -> used to execute the task.
I ran into something similar during deployment. Hopefully this helps with your debugging: if the agent was launched separately from the rest of the stack, it may not have proper docker-DNS resolution to None. (e.g. if it's in the same docker-compose, perhaps you didn't add the backend network field, or if it was launched separately through docker run without an explicit external network defined)
if the agent's on the same machine, try docker network connect to add...
I think he's saying you'd want an intermediary layer that acts like the daemon.
Why not run the daemon directly, I'm not sure, but I suspect it's because it doesn't have an "end time" for execution (it stays up).
Tasks that create pipelines feel like a hack, and I found they don't show up in the UI (you have to use the link in the console).
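For reference, this is roughly what I mean by the non-hacky route (a minimal sketch assuming the standard clearml PipelineController API; the project, task, and parameter names are placeholders):

```python
from clearml import PipelineController

# Controller for a pipeline built from existing template tasks
pipe = PipelineController(
    name="example-pipeline",      # placeholder name
    project="example-project",    # placeholder project
    version="1.0.0",
)

# Each step clones a template task and overrides its parameters
pipe.add_step(
    name="prepare_data",
    base_task_project="example-project",
    base_task_name="prepare data template",
    parameter_override={"Args/dataset_name": "example_dataset"},
)

# Launch the controller on the services queue so it stays up while the steps run
pipe.start(queue="services")
```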
I've found that sometimes I need to right-click "Run" a couple of times before the parameters are filled in properly.
thank you!
I'll add a volume mount to the services-agent container, and from what I understand that will become the template it uses?
is this the structure of the file?
None
or is it the "dot" syntax (like what shows up in the console when the task executes / your snippet)?
I think we may have found the frankenbug?
The dataset-name argument was not being overridden correctly (it was mistyped), so the default value of an empty string in the parent task (instead of a placeholder like "CHANGE_ME") caused the dataset to be created with an empty name, and somehow that hid the whole project, despite hundreds of existing tasks in it.
and no way to un-hide it as far as I can tell?
The dataset, task, and pipeline were under the same project name. I'm seeing what happens if the dataset project name is different ( f"{project_name}_data" ). Which project would get hidden... the dataset's, or the project of the task that kicked it off?
and the answer is...
The task's project is preserved; the dataset's project gets hidden.
so ... empty dataset names due to a small typo in parameter override + the choice for the dataset to have the same project name as the task that created it (...
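In case it's useful, here's a rough sketch of the kind of guard that would have caught this, assuming the dataset is created through the standard clearml Dataset API (all names here are placeholders):

```python
from clearml import Dataset

def create_named_dataset(dataset_name: str, project_name: str) -> Dataset:
    # Fail fast if the override never arrived, instead of silently
    # creating a dataset with an empty name (which is what hid the project).
    if not dataset_name or dataset_name == "CHANGE_ME":
        raise ValueError("dataset_name was not overridden; refusing to create an unnamed dataset")

    # Keep datasets in a separate project so a bad dataset can't hide
    # the project that holds the tasks/pipelines.
    return Dataset.create(
        dataset_name=dataset_name,
        dataset_project=f"{project_name}_data",
    )
```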
Can vouch, this works well. Had my server hard-reboot (maybe because of clearml? maybe because of hardware, maybe both… haven't figured it out), and busy remote workers still managed to update the backend once it came back up.
Re: backups… what would happen if the data were zipped while the server was running but no work was being performed? Still an issue potentially?
and what happens if docker compose down is run while there’s work in the services queue? Will it be restored? What are the implications if a backup is perform...
I'm guessing this is done through code-server?
I'm currently rolling a JupyterHub instance (multi-user, with code-server inside) on the same machine as clearml-server. That's where tasks are executed, etc., so it's an all-browser dev environment.
It sounds like there’s an option to basically bypass this latter step and just use clearml’s credentialing to accomplish much the same thing? Am I understanding clearml-session correctly?
You could also take the route of NOT specifying num_layers, and instead write your own code to create a set of viable layer designs to choose from and pass that as a parameter, so optuna selects from a countable set instead of suggesting integer values.
The downside of this is the lack of gradient information in the optimization process.
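Rough sketch of what I mean, assuming plain optuna (the layer designs, names, and the train_and_evaluate stub are made up for illustration):

```python
import optuna

# Hypothetical, hand-curated set of viable layer designs, keyed by name
LAYER_DESIGNS = {
    "small": [64, 32],
    "medium": [128, 64, 32],
    "large": [256, 128, 64, 32],
}

def train_and_evaluate(hidden_layers: list) -> float:
    # Stand-in for a real training loop; returns a fake validation loss
    return 1.0 / (1 + sum(hidden_layers))

def objective(trial: optuna.Trial) -> float:
    # optuna picks from a countable set instead of suggesting raw integers
    design_key = trial.suggest_categorical("layer_design", list(LAYER_DESIGNS.keys()))
    hidden_layers = LAYER_DESIGNS[design_key]
    return train_and_evaluate(hidden_layers)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
```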
Oh, I see. You're talking about the agent-services, not a separate agent in a container.
yup, I've got the same thing going there.
fwiw...
for me, HOST_IP is 0.0.0.0 and the other "HOSTS" env vars don't contain "http" in them.
and my server is publicly reachable, not sure if that matters either.
waiting now to see if they disappear.
any problems you may have spotted with the versions used?
The project hasn't disappeared just yet, but it's happened twice now.
Oh neat! I want to take a look at this. Only a few more weeks at the client but it’d be nice to reduce the complexity of the software stack if I can before handoff.
Can you please elaborate on the latter point? My JupyterHub is fully containerized and allows users to select their own containers (from a list I built) at launch, and to launch multiple containers at the same time; not sure I follow how toes are stepped on.
maybe an important note: I mounted the same cache directory for the agents.