oh i see. you're talking about the agent-services, not a separate agent in a container.
yup, I've got the same thing going there.
fwiw...
for me, HOST_IP is 0.0.0.0 and the other "HOSTS" env vars don't contain "http" in them.
and my server is publicly reachable, not sure if that matters either.
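for reference, roughly what that looks like in the agent-services section of the clearml-server docker-compose (variable names follow the stock compose file; the hosts/ports here are made-up examples, double-check against your version):

```yaml
# agent-services environment, example values only
environment:
  CLEARML_HOST_IP: 0.0.0.0
  CLEARML_WEB_HOST: my-server.example.com:8080    # no "http://" prefix
  CLEARML_API_HOST: my-server.example.com:8008    # no "http://" prefix
  CLEARML_FILES_HOST: my-server.example.com:8081  # no "http://" prefix
```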
yeah, let's step through this; I'm having her execute these steps as we speak.
create a task with the new project name. it's created as a draft. can see it in the UI under the new project.
pipeline script is updated with the new project name. execute the script to create the pipeline (rough sketch after these steps). it now shows in the UI under this new project name. nothing hidden.
the pipeline is running when the queue is default (only serviced by one container with an agent in it, clearml-agent==1.5.2). abort it. everything is still ...
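for anyone following along, the pipeline-creation step above looks roughly like this (project, task, and queue names are placeholders, not our real ones):

```python
# rough shape of the pipeline script from the steps above; project,
# task, and queue names are placeholders, not our real ones.
from clearml import PipelineController

pipe = PipelineController(
    name="my-pipeline",          # placeholder
    project="new-project-name",  # the renamed project
    version="1.0.0",
)
pipe.add_step(
    name="step_one",
    base_task_project="new-project-name",
    base_task_name="my draft task",  # the draft created in the first step
    execution_queue="default",       # the queue serviced by one agent container
)
pipe.start(queue="services")  # the controller itself runs in the services queue
```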
Weird. I recently implemented a function that talked to this exact endpoint and found it had to exclude the version and api paths. Is there some sort of redirect that happens?
one note: it happened after I tried deploying a set of workers to a new queue, which she then used to run the tasks in parallel instead of our default queue, which is serviced by only one worker (a container I built)
If you can hit the endpoint with curl, you for sure can hook it up to many frontend frameworks.
Personal recs: gradio, streamlit
Abstract the interaction into a function call, and wrap it all in some UI elements using python.
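e.g., a minimal gradio sketch (the endpoint URL and payload shape are placeholders for whatever you're curling):

```python
# a minimal sketch: wrap an HTTP endpoint in a tiny gradio UI.
# the URL and payload shape below are placeholders, not a real API.
import requests
import gradio as gr

def query_endpoint(prompt: str) -> str:
    # the same request you'd make with curl, abstracted into a function
    resp = requests.post(
        "http://localhost:8080/predict",  # hypothetical endpoint
        json={"input": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("output", "")

demo = gr.Interface(fn=query_endpoint, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()
```

streamlit works the same way: same function, different UI wrapper.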
👀 following.
I have much the same issue, and it's mission-critical that I resolve it soon.
if you commit but do not push, the metadata tells clearml that it needs to pull a non-existent commit. any changes you made on top may be saved as a diff, but they'd fail to apply.
for clearml to work on un-pushed commits, it'd have to wait for a push to register a new diff target, which can become a problem (what if you have multiple remotes? which one would it wait for?). so instead, it assumes it can access the most recent commit from your remote repo, and records this as the "base" upon whi...
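a quick way to see the failure mode: check whether HEAD exists on any remote branch before enqueueing (a minimal sketch; assumes a git repo with an "origin"-style remote):

```python
# the agent can only fetch commits that exist on the remote, so an
# un-pushed HEAD is a "non-existent commit" from its point of view.
import subprocess

def head_is_pushed() -> bool:
    head = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    # list remote-tracking branches containing HEAD; empty output means
    # the commit was never pushed and a worker's fetch can't retrieve it
    remote_branches = subprocess.check_output(
        ["git", "branch", "-r", "--contains", head], text=True
    ).strip()
    return bool(remote_branches)

if not head_is_pushed():
    print("HEAD isn't on any remote branch; push before enqueueing the task.")
```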
tasks that create pipelines feel like a hack, and i found they don't show up in the UI (you have to use the link in the console).
I've found that sometimes I need to right-click "Run" a couple of times before the parameters are filled in properly.
Can vouch, this works well. Had my server hard reboot (maybe bc of clearml? maybe bc of hardware? maybe both… haven't figured it out), and busy remote workers still managed to update the backend once it came back up.
Re: backups… what would happen if the data was zipped while the server was running but no work was being performed? Still an issue, potentially?
and what happens if docker compose down is run while there's work in the services queue? Will it be restored? What are the implications if a backup is perform...