
I think we may have found the frankenbug?
The argument for the dataset name was not being overridden correctly (it was mistyped), so the default value in the parent task, an empty string instead of a placeholder like "CHANGE_ME", meant the dataset was created with an empty name. Somehow that hid the whole project, despite the hundreds of existing tasks in it.
And no way to un-hide it, as far as I can tell?
Then back to the CLI: I updated the pipeline to point the tasks to the new queue and ran it, and it shows up in the UI (same container as the default worker, just replicated with a new docker-compose and a CMD pointing to the new queue).
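For reference, the replication was basically just a second service in the compose file with a different queue in the command; something like this sketch (the image name, volume path, and queue name are placeholders for our actual setup):

```
# second agent service, copied from the default worker's definition;
# only the queue in the command changes
agent-new-queue:
  image: our-agent-image:latest            # placeholder: same image as the default worker
  restart: unless-stopped
  volumes:
    - /opt/clearml/agent.conf:/root/clearml.conf   # placeholder: wherever the agent config lives
  command: clearml-agent daemon --queue new_queue --docker
```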
@<1541954607595393024:profile|BattyCrocodile47> put together None
waiting now to see if they disappear.
any problems you may have spotted with the versions used?
The project hasn't disappeared just yet, but it's happened twice now.
Credentials for the server to do things with S3 will be in /opt/clearml/apiserver.conf.
Yup, if you scroll through the logs in the console, near the top (after the config dump), you'll see a git clone and a checkout of the specific hash.
PS: You can actually change this parameter in an experiment's configuration if it is in draft mode.
I believe pipe.connect_configuration is what you're looking for?
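Something like this, if I remember the API right; this is the Task-level call, and I believe the pipeline controller exposes the same method (the dict contents here are made up):

```
from clearml import Task

task = Task.init(project_name="examples", task_name="configurable-task")

# registers the dict as an editable configuration object in the UI;
# the returned dict reflects any UI overrides when running remotely
config = {"dataset_name": "CHANGE_ME", "epochs": 10}
config = task.connect_configuration(config, name="general")

print(config["dataset_name"])  # on a remote run, shows the UI-edited value
```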
I opened github.com/allegroai/clearml/pull/1083 as an attempt to help catch this.
I think he's saying you'd want an intermediary layer that acts like the daemon.
Why not run the daemon directly, I'm not sure, but I suspect it's because the daemon doesn't have an "end time" for execution (it stays up).
Tasks that create pipelines feel like a hack, and I found they don't show up in the UI (you have to use the link in the console).
I've found that sometimes I need to right-click "Run" a couple of times before the parameters are filled in properly.
You can put task.execute_remotely() in your script to create the task in draft mode. I've taken to configuring defaults that run things very quickly in case I forget, though (e.g. a placeholder string for the dataset name with an early bail-out if it's unchanged, or just one epoch on a small subset of samples, etc.).
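Roughly this pattern (the parameter names and values are just examples):

```
from clearml import Task

task = Task.init(project_name="examples", task_name="train")

params = {"dataset_name": "CHANGE_ME", "epochs": 1, "subset_size": 100}
task.connect(params)

# with no queue name this stops the local run and leaves the task in draft mode
task.execute_remotely(queue_name=None, exit_process=True)

# defensive default: when the task is eventually run, bail out early
# if the placeholder was never changed
if params["dataset_name"] == "CHANGE_ME":
    raise ValueError("dataset_name was not set; refusing to run with the placeholder")
```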
One note: it happened after I tried deploying a set of workers to a new queue, which she then used to run the tasks in parallel instead of our default queue, which is serviced by only one worker (a container I built).
Oh neat! I want to take a look at this. Only a few more weeks at the client, but it'd be nice to reduce the complexity of the software stack if I can before handoff.
Can you please elaborate on the latter point? My JupyterHub is fully containerized and lets users select their own containers (from a list I built) at launch, and launch multiple containers at the same time; I'm not sure I follow how toes are stepped on.
I'm guessing this is done through code-server?
I'm currently rolling a JupyterHub instance (multi-user, with code-server inside) on the same machine as clearml-server. That's where tasks are executed, etc., so it's an all-browser dev environment.
It sounds like there's an option to basically bypass this latter step and just use ClearML's credentialing to accomplish much the same thing? Am I understanding clearml-session correctly?
thank you!
I'll add a volume mount to the services-agent container, and from what I understand that will become the template it uses?
Is this the structure of the file?
None
Or is it the "dot" syntax (like what shows up in the console when the task executes / your snippet)?
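For concreteness, I mean the difference between nested blocks and flattened keys; if I understand HOCON correctly, both forms are equivalent (agent.default_docker.image is just an example key):

```
# nested structure
agent {
  default_docker {
    image: "nvidia/cuda:11.8.0-base-ubuntu22.04"
  }
}

# equivalent "dot" syntax
agent.default_docker.image: "nvidia/cuda:11.8.0-base-ubuntu22.04"
```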
The project wasn't hidden before. I'm aware of pipeline tasks being hidden; that makes sense for organization. But the project itself, as an entirety, has a ghost icon.
She created a new project and started working in there, and it was visible in the UI... and just now it disappeared again. It's almost as if running the pipeline makes it disappear.
If you commit but do not push, the metadata tells ClearML that it needs to pull a non-existent commit. Any changes you made on top may be saved as a diff, but they'd fail to apply.
For ClearML to work on un-pushed commits, it'd have to wait for a push to register a new diff target, which can become a problem (what if you have multiple remotes? which one would it wait for?). So instead, it assumes it can access the most recent commit from your remote repo, and records this as the "base" upon which your local changes are applied as a diff.
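A quick plain-git sanity check before launching an experiment (nothing ClearML-specific here):

```
# make sure the current commit actually exists on the remote
git push                       # or: git push <remote> <branch>
git branch -r --contains HEAD  # should list at least one remote branch
```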
I ran into something similar during deployment; hopefully this helps with your debugging: if the agent was launched separately from the rest of the stack, it may not have proper Docker DNS resolution to None (e.g. if it's in the same docker-compose, perhaps you didn't add the backend network field, or if it was launched separately through docker run without an explicit external network defined).
If the agent's on the same machine, try docker network connect to add it to the server's network.
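E.g. something like this; the network and container names are placeholders, so check docker network ls for the real ones:

```
# find the compose network the server stack is on
docker network ls | grep clearml

# attach the agent container to it
docker network connect clearml_backend clearml-agent
```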
Ah, thank you for the clarity. A quarterly release schedule makes sense; it's about what I've observed.
Let me know if I can be of any assistance in early testing!
So when the task completed successfully (I changed the queue to default and let it finish instead of aborting), the project disappeared.
I think you'd have to run the cleanup service. That seems to be what controls deletion, based on archived status and some other temporal filters.
For reproducibility, it kind of makes sense though. The existence of the file is contingent on the worker cloning the source code. I'm sure things can be done to maintain state differently but I personally adapted to the git-based workflow for managing files pretty quickly.
Though yes, I will admit I had the same thought at first: why must I run it each time?
Beware: squash merges will ruin the ability to reproduce the experiment at that time since the git commit will be lost (presuming th...
If you can hit the endpoint with curl, you for sure can hook it up to many frontend frameworks.
Personal recs: Gradio or Streamlit.
Abstract the interaction into a function call, and wrap it all in some UI elements using Python, e.g.:
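A minimal Gradio sketch, assuming an endpoint you can already curl (the URL and payload/response fields are placeholders):

```
import requests
import gradio as gr

ENDPOINT = "http://localhost:8080/predict"  # placeholder: whatever you curl today

def predict(text: str) -> str:
    # the same call you'd make with curl, wrapped in a function
    resp = requests.post(ENDPOINT, json={"input": text}, timeout=30)
    resp.raise_for_status()
    return resp.json()["output"]  # placeholder response field

demo = gr.Interface(fn=predict, inputs="text", outputs="text")
demo.launch()
```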
But isn't that just the same as running the agent in daemon mode? That's what I was hoping James could do.
@<1798162804348293120:profile|FlutteringSeahorse49> wants to start HPO though, so the desire is to deploy agents that listen to queues on the Slurm cluster (perhaps with the controller running on his laptop).
Would that still make sense?
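I.e. each Slurm job would just run an agent servicing the HPO queue; something along these lines (queue name and resources are placeholders, and I haven't tested this exact script):

```
#!/bin/bash
#SBATCH --job-name=clearml-agent
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00

# one worker per Slurm job, pulling tasks from the HPO queue
clearml-agent daemon --queue hpo_queue --foreground
```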
You're basically asking to sample from a distribution where not all parameters are mutually independent.
The short answer is no: this is not directly supported. Optuna needs each hyperparameter to be declared independently, so unfortunately it's up to you to handle the dependencies between parameters yourself.
Your solution of defining them independently and then using num_layers to ignore the extra parameters is a valid one, e.g.:
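In pure Optuna terms, something like this (train_and_eval is a placeholder for the actual training routine):

```
import optuna

def objective(trial: optuna.Trial) -> float:
    num_layers = trial.suggest_int("num_layers", 1, 4)

    # declare a width for every possible layer so each parameter stays independent...
    widths = [trial.suggest_int(f"layer_{i}_width", 16, 256) for i in range(4)]

    # ...then simply ignore the ones beyond num_layers when building the model
    active_widths = widths[:num_layers]
    return train_and_eval(active_widths)  # placeholder for your training routine

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
```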
I will attempt to start that now.