
Weird. I recently implemented a function that talked to this exact endpoint and found it had to exclude the version and API paths. Is there some sort of redirect that happens?
I think we may have found the frankenbug?
the argument for the dataset name was not being overridden correctly (it was mistyped), so the default value of an empty string (instead of a placeholder like "CHANGE_ME") in the parent task caused the dataset to be created with an empty name, and somehow that hid the whole project, despite hundreds of existing tasks in it.
and no way to un-hide it as far as I can tell?
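(for anyone hitting the same thing: a minimal guard sketch that would have caught this before an unnamed dataset was created. The create_dataset helper and the "CHANGE_ME" placeholder are just illustrative, not part of the original code)
```python
from clearml import Dataset

def create_dataset(dataset_name: str, project: str) -> Dataset:
    # guard against the empty-string default described above
    if not dataset_name or dataset_name == "CHANGE_ME":
        raise ValueError("dataset_name was not overridden; refusing to create an unnamed dataset")
    return Dataset.create(dataset_name=dataset_name, dataset_project=project)
```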
then back to the CLI: updated the pipeline to point the tasks to the new queue, ran it, and it shows up in the UI (same container as the default worker, just replicated with a new docker-compose and CMD pointing to the new queue).
one note: it happened after I tried deploying a set of workers to a new queue, which she tried to use to run the tasks in parallel instead of our default queue, which is only serviced by one worker (a container I built)
the project wasn't hidden before. I'm aware of the pipeline tasks being hidden, which makes sense for organization. but the actual project itself, as an entirety, has a ghost icon.
she created a new project and started working in there, it was visible in the UI... and just now it disappeared again. it's kind of like running the pipeline makes it disappear.
Can vouch, this works well. Had my server hard reboot (maybe because of clearml? maybe the hardware, maybe both… haven't figured it out), and busy remote workers still managed to update the backend once it came back up.
Re: backups… what would happen if the data were zipped while the server was running but no work was being performed? Still a potential issue?
and what happens if docker compose down is run while there’s work in the services queue? Will it be restored? What are the implications if a backup is perform...
If you can hit the endpoint with curl, you can certainly hook it up to many frontend frameworks.
Personal recs: gradio, streamlit
Abstract the interaction into a function call, and wrap it all in some UI elements using Python, e.g. the sketch below.
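(a minimal sketch with gradio; the endpoint URL and the JSON request/response schema here are placeholder assumptions, not from the original thread)
```python
import requests
import gradio as gr

def predict(text: str) -> str:
    # the same call you'd make with curl, abstracted into a function
    resp = requests.post("http://localhost:8080/predict", json={"input": text})  # hypothetical endpoint
    resp.raise_for_status()
    return resp.json()["output"]  # hypothetical response schema

# wrap the function call in UI elements
gr.Interface(fn=predict, inputs="text", outputs="text").launch()
```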
I believe pipe.connect_configuration is what you're looking for?
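(a minimal sketch of the connect_configuration pattern at the Task level, which I believe the pipeline controller mirrors; the project/task names and config values are just placeholders)
```python
from clearml import Task

task = Task.init(project_name="demo", task_name="config-demo")  # hypothetical names

# connect a dict (or a local file path) as an editable configuration object;
# when running remotely, values overridden in the UI are reflected here
config = task.connect_configuration(
    configuration={"lr": 0.001, "num_layers": 3},  # hypothetical defaults
    name="model_config",
)
print(config["lr"])
```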
@BattyCrocodile47 put together None
credentials the server uses to talk to S3 will be in /opt/clearml/apiserver.conf.
For reproducibility, it kind of makes sense though. The existence of the file is contingent on the worker cloning the source code. I'm sure things can be done to maintain state differently but I personally adapted to the git-based workflow for managing files pretty quickly.
though yes I will admit I had the same thought first: why must I run it each time?
Beware: squash merges will ruin the ability to reproduce the experiment at that time since the git commit will be lost (presuming th...
you can put task.execute_remotely() in your script to create it in draft mode. I've taken to configuring defaults that run things very quickly, just in case I forget (e.g. a placeholder string for the dataset and bailing out early if it's not changed… or just doing one epoch on a small subset of samples, etc).
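(a minimal sketch of that pattern; the project/task names and the "CHANGE_ME" placeholder are just illustrative)
```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="train")  # hypothetical names
params = {"dataset_name": "CHANGE_ME", "max_samples": 100}  # fast, safe defaults
params = task.connect(params)

# stops local execution here and leaves the task in draft mode on the server;
# once an agent is running the task, this call is a no-op
task.execute_remotely()

# everything below only runs when an agent picks the task up
if params["dataset_name"] == "CHANGE_ME":
    raise ValueError("dataset_name was never overridden; bailing out early")
```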
I ran into something similar during deployment. Hopefully this helps with your debugging: if the agent was launched separately from the rest of the stack, it may not have proper docker-DNS resolution to None. (e.g. if it's in the same docker-compose, perhaps you didn't add the backend network field; or if it was launched separately through docker run without an explicit external network defined.)
if the agent's on the same machine, try docker network connect to add...
my approach was to spin up an EC2 and run the deployment there from within the EBS volume mount.
I symlinked /opt/clearml to /mnt/xvda/clearml to minimize docker-compose changes. been working out fine so far.
with aws-cdk, the deployment steps can be automated (format the volume, clone a repo with the config, etc). I can link you to a resource that may help with that if you're interested.
tasks that create pipelines feel like a hack, and I found they don't show up in the UI (you have to use the link in the console).
I've found that sometimes I need to right-click "Run" a couple of times before the parameters are filled in properly.
if you commit but do not push, the metadata tells clearml that it needs to pull a non-existent commit. any changes you made on top may be saved as a diff, but they'd fail to apply.
for clearml to work on un-pushed commits, it'd have to wait for a push to register a new diff target, which can become a problem (what if you have multiple remotes? which one would it wait for?). so instead it assumes it can access the most recent commit from your remote repo, and records this as the "base" upon whi...
in the clearml GitHub repo, search for a file named cleanup_service.py (or something to that effect)
you're basically asking to sample from a distribution where not all parameters are mutually independent.
the short answer is no: this is not directly supported. optuna needs each hyperparameter to be independent, so unfortunately it's up to you to handle the dependencies between parameters yourself.
your solution of defining them independently and then using num_layers to ignore the extra parameters is a valid one, e.g. the sketch below.
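(a minimal sketch of that approach; the parameter names, ranges, and the stand-in objective are just illustrative)
```python
import optuna

def objective(trial: optuna.Trial) -> float:
    # sample the controlling parameter first
    num_layers = trial.suggest_int("num_layers", 1, 4)
    # define every per-layer width independently, as optuna requires...
    widths = [trial.suggest_int(f"units_{i}", 16, 256) for i in range(4)]
    # ...then simply ignore the ones beyond num_layers
    active = widths[:num_layers]
    return float(sum(active))  # stand-in for a real train/eval score

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
```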
waiting now to see if they disappear.
any problems you may have spotted with the versions used?
project hasn't disappeared just yet. but it's happened twice now
dug deeper. if I'm to make a guess… /root/clearml.conf -> used on startup of agent-services as a template of sorts to create .clearml_agent.<id>.cfg on demand -> this task-specific file is mounted as /tmp/clearml_default.conf in a new container (docker-in-docker, because of the socket mounted to agent-services) -> used to execute the task
thank you!
I'll add a volume mount to the services-agent container, and from what I understand that will become the template it uses?
is this the structure of the file?
None
or is it the "dot" syntax (like what shows up in the console when the task executes / your snippet)?
probably, but the syntax would be that of a git diff, so it'd be a touch clunky if you ask me
Are you trying to avoid local development?
yeah, let's step through this. I'm having her execute these steps as we speak.
create a task with the new project name. it's created as a draft. we can see it in the UI under the new project.
the pipeline script is updated with the new project name. execute the script to create the pipeline. we now see it in the UI under the new project name. nothing hidden.
the pipeline is running on the default queue (only serviced by one container with an agent in it, clearml-agent==1.5.2). abort it. everything is still ...
Yup, if you scroll through the logs in the console, near the top (after the config dump), you'll see a git clone and checkout to the specific hash.
PS You can actually change this parameter in an experiment’s configuration if it is in draft mode.
I think you'd have to run the cleanup service. That seems to be what controls deletion, based on archived status and some other temporal filters.
you could also take the route of NOT specifying num_layers, and instead write your own code to create a set of viable layer designs to choose from and pass that as a parameter, so optuna selects from a countable set instead of suggesting integer values (see the sketch below).
the downside of this is the lack of gradient information in the optimization process
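(a minimal sketch of the countable-set approach; the design catalogue and stand-in objective are just illustrative)
```python
import optuna

# hypothetical catalogue of viable layer designs
DESIGNS = {
    "small": (64, 32),
    "medium": (128, 64, 32),
    "large": (256, 128, 64, 32),
}

def objective(trial: optuna.Trial) -> float:
    # optuna now picks from a countable set instead of suggesting raw integers
    key = trial.suggest_categorical("design", list(DESIGNS.keys()))
    layers = DESIGNS[key]
    return float(sum(layers))  # stand-in for a real train/eval score

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=10)
```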
so when the task completed successfully (I changed the queue back to default and let it finish instead of aborting), the project disappeared.