Alright, so is there no way to kill it using the worker ID or worker name?
Okay, so they ran once I started a clearml-agent listening to that queue.
So I took the dataset trigger from this and added it to my own test code, which needs to run a task every time the trigger is activated.
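For reference, this is roughly what my test code looks like (the task id and project name are placeholders):
```python
from clearml.automation import TriggerScheduler

# poll the server periodically for new dataset events
trigger = TriggerScheduler(pooling_frequency_minutes=1.0)

# when a dataset is published in this project, clone the given task
# and enqueue the clone for execution
trigger.add_dataset_trigger(
    schedule_task_id='<task_id_to_run>',  # placeholder: id of my test task
    schedule_queue='default',
    trigger_project='Cassava',            # placeholder: the dataset project
    trigger_on_publish=True,
)

trigger.start()
```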
This problem occurs when I'm scheduling a task. Copies of the task keep being put on the queue even though the trigger only fired once.
So it won't work without a clearml-agent? Sorry for the barrage of questions; I'm just very confused right now.
I have another problem, however: a dataset trigger with a schedule task attached.
Okay, so when I add trigger_on_tags, the repetition issue is resolved.
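In case it helps anyone hitting the same thing, the only change was adding the tags filter (the tag name is just what I happen to use):
```python
trigger.add_dataset_trigger(
    schedule_task_id='<task_id_to_run>',
    schedule_queue='default',
    trigger_project='Cassava',
    trigger_on_tags=['ready'],  # only fire for datasets carrying this tag
)
```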
Also, the task just prints a small string on the console.
This shows my situation: my code on the left and the tasks called 'Cassava Training' on the right. They keep getting enqueued even though I only fired the trigger once, by which I mean I only published a dataset once.
So in my head, every time I publish a dataset, the trigger should fire and run that task.
Also, could you explain the difference between trigger.start() and trigger.start_remotely()?
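For context, these are the two variants I'm comparing (my understanding of what each one does might be off):
```python
# runs the polling loop inside the current local process (blocks)
trigger.start()

# launches the scheduler itself as a task on a queue,
# so an agent (e.g. one serving the 'services' queue) keeps it running
trigger.start_remotely(queue='services')
```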
Thank you for the help with that.
To be clearer, an example use case for me would be: I'm trying to make a pipeline which, every time a new dataset/batch is published using clearml-data, will:
- Get the data
- Train on it
- Save the model and publish it
I want to start this process with a trigger when a dataset is published to the server. Is there any example I can look at for accomplishing something like this?
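Something like this rough sketch is what I have in mind (the template task id, project name, and parameter name are just placeholders):
```python
from clearml import Dataset, Task
from clearml.automation import TriggerScheduler

def launch_training(dataset_id):
    # clone a pre-made training task and point it at the newly published dataset
    dataset = Dataset.get(dataset_id=dataset_id)
    task = Task.clone(source_task='<template_training_task_id>')  # placeholder id
    task.set_parameter('General/dataset_id', dataset.id)          # placeholder parameter
    Task.enqueue(task, queue_name='default')

trigger = TriggerScheduler()
trigger.add_dataset_trigger(
    schedule_function=launch_training,
    trigger_project='Cassava',  # placeholder dataset project
    trigger_on_publish=True,
)
trigger.start_remotely(queue='services')
```
The cloned task would then fetch the data with Dataset.get(dataset_id).get_local_copy(), train, and save/publish the resulting model.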
I'd like to add an update to this: when I use schedule_function instead of schedule_task_id with the dataset trigger scheduler, it works as intended. It runs the desired function when triggered, then goes back to sleep since no other trigger was fired.
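Concretely, this is the change that made it behave (simplified; the project name is a placeholder):
```python
def print_message(dataset_id):
    # the function that runs when the trigger fires
    print(f'dataset {dataset_id} was published')

trigger.add_dataset_trigger(
    schedule_function=print_message,  # instead of schedule_task_id=...
    trigger_project='Cassava',
    trigger_on_publish=True,
)
```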
But what's happening is that I published a dataset only once, yet every time the scheduler polls, the trigger fires and enqueues a task.
So I published a dataset just once, but it keeps scheduling tasks.
Thank you, I'll take a look
They're also enqueued.
Yeah, I kept seeing the message but I was sure there were files in the location.
I just realized: I hadn't worked with the Datasets API for a while, and I forgot that I'm supposed to call add_files(location) and then upload(), not upload(location). My bad.
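So the working order of calls is (the paths and names here are just mine):
```python
from clearml import Dataset

ds = Dataset.create(dataset_name='cassava', dataset_project='Cassava')  # placeholder names
ds.add_files(path='/data/cassava')  # register the files first...
ds.upload()                         # ...then upload them (no path argument here)
ds.finalize()                       # close the dataset so it can be published
```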
It works; however, it shows the task as enqueued and pending. Note that I'm using .start() and not .start_remotely() for now.
When I try to access the server with the IP I set as CLEARML_HOST_IP, it looks like this. I set that variable to the IP assigned to me by the network.
You can see there's no task bar on the left. Basically, I can't get any credentials for the server, check queues, or anything.
There's a whole task bar on the left in the server UI. I only get this page when I use the IP 0.0.0.0.
I feel like they need to add this to the documentation 😕
Thanks for the help.
Yeah, I think I get what you're saying. I don't know how I would give each server a different cookie name. I can see this problem being worked around by clearing cookies or by manually adding /login to the end of the URL.
I think maybe it does this because of caching or something. Maybe it keeps a record of an older login, and when you restart the server it keeps trying to use the old details.
Shouldn't I get redirected to the login page instead of the dashboard if I'm not logged in? 😞