Basically saving a model on the client machine and publishing it, then trying to download it from the server.
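That round trip can be sketched with clearml's model classes; this is a minimal sketch, assuming placeholder paths/ids and project names that are not from the thread:

```python
from clearml import Task, OutputModel, InputModel

# On the client: register and publish the trained weights
task = Task.init(project_name="examples", task_name="train")  # placeholder names
output_model = OutputModel(task=task)
output_model.update_weights("model.pt")  # placeholder weights file
output_model.publish()

# Elsewhere: pull the published model back down from the server by id
model = InputModel(model_id="<published_model_id>")  # placeholder id
local_weights = model.get_local_copy()
```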
Even though I ended my schedulers and triggers, the anonymous tasks keep increasing.
Shouldn't I get redirected to the login page if I'm not logged in, instead of the dashboard? 😞
And given that, I want to have artifacts = task.get_registered_artifacts()
I checked that the value is being returned, but I'm having issues accessing merged_dataset_id in the preexecute_callback the way you showed me.
To be more clear: an example use case for me would be a pipeline which, every time a new dataset/batch is published using clearml-data, will:
Get the data, train the model, save the model, and publish it.
I want to start this process with a trigger when a dataset is published to the server. Is there any example I can look at for accomplishing something like this?
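A rough sketch of that trigger with clearml.automation.TriggerScheduler, assuming the training task already exists as a template; the task id, project, and queue names below are placeholders, not from the thread:

```python
from clearml.automation import TriggerScheduler

trigger = TriggerScheduler(pooling_frequency_minutes=3)

# Fire whenever a dataset in the given project is published:
# clone the template training task and push it onto a queue.
trigger.add_dataset_trigger(
    schedule_task_id="<template_training_task_id>",  # placeholder
    schedule_queue="default",                        # placeholder queue
    trigger_project="datasets/my_project",           # placeholder project
    trigger_on_publish=True,
    name="retrain-on-new-dataset",
)

# Run the trigger logic itself as a service on an agent
trigger.start_remotely(queue="services")
```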
I'll test it with the updated one.
I want to maybe have a variable in simple-pipeline.py which holds the value returned by split_dataset.
AgitatedDove14 I'm also trying to understand why this is happening: is this normal and how it should be, or am I doing something wrong?
Also, I made another thread regarding ClearML Agent; can you respond to that? I'm going to try to set up a ClearML Server properly on a server machine. I want to test how to train models, enqueue tasks, and automate this whole process, with GPU training included.
Here's the screenshot TimelyPenguin76
Basically if I pass an arg with a default value of False, which is a bool, it'll run fine originally, since it just accepted the default value.
This problem occurs when I'm scheduling a task. Copies of the task keep being put on the queue even though the trigger only fired once.
And multiple agents can listen to the same queue, right?
Wrong image, let me upload the correct one.
I already have the dataset id as a hyperparameter. I get said dataset. I'm only handling one dataset right now but merging multiple ones is a simple task as well.
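For reference, a minimal sketch of that flow with the clearml Dataset API; the ids, dataset name, and project below are placeholders:

```python
from clearml import Dataset

# The dataset id arrives as a hyperparameter on the task (placeholder value here)
dataset_id = "<dataset_id_from_hyperparameters>"

# Read-only local copy of exactly that dataset version
data_path = Dataset.get(dataset_id=dataset_id).get_local_copy()

# Merging several datasets is just creating a new one with multiple parents:
merged = Dataset.create(
    dataset_name="merged",       # placeholder name
    dataset_project="examples",  # placeholder project
    parent_datasets=["<id_a>", "<id_b>"],
)
```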
Also, I'm not very experienced, and I'm unsure what the proposed querying is, and how and whether it works in ClearML here.
Only issue is that even though it's a bool, it's stored as "False", since ClearML stores the args as strings.
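This string-vs-bool pitfall is not ClearML-specific; plain argparse has the same problem, because bool("False") is True for any non-empty string. A common workaround is a small converter that parses the string back into the intended value (a sketch; the str2bool name and accepted spellings are my choice):

```python
import argparse

def str2bool(value):
    """Map common string spellings ("False", "true", "0", ...) back to a real bool."""
    if isinstance(value, bool):
        return value
    if value.lower() in ("true", "1", "yes"):
        return True
    if value.lower() in ("false", "0", "no"):
        return False
    raise argparse.ArgumentTypeError(f"not a boolean: {value!r}")

parser = argparse.ArgumentParser()
# type=bool would turn any non-empty string (including "False") into True;
# str2bool parses the stored string back into the intended value.
parser.add_argument("--debug", type=str2bool, default=False)

args = parser.parse_args(["--debug", "False"])
print(args.debug)  # → False
```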
Shouldn't checkpoints be uploaded immediately? That's the purpose of checkpointing, isn't it?
Alright. Can you guide me on how to edit the task configuration object? Is it done via the UI or programmatically? Is there a config file, and can it work with any config file I create, or is it a specific config file? Sorry for the barrage of questions.
This is the task scheduler btw which will run a function every 6 hours.
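A rough sketch of such a scheduler with clearml.automation.TaskScheduler, assuming the interval-style minute/hour arguments; the function body, task name, and queue are placeholders:

```python
from clearml.automation import TaskScheduler

def check_for_new_data():
    # placeholder for whatever the 6-hourly job actually does
    pass

scheduler = TaskScheduler()
# Schedule the function to run every 6 hours
scheduler.add_task(
    schedule_function=check_for_new_data,
    name="six-hourly-check",  # placeholder name
    hour=6,
)
# Run the scheduler itself as a service on an agent
scheduler.start_remotely(queue="services")
```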
I'll look into it. Thank you everyone.
Let me try to be a bit more clear.
Say I have a training task in which I'm getting multiple ClearML Datasets from multiple ClearML IDs: I get local copies, train the model, save the model, and delete the local copies, all in that script.
Does ClearML keep track of which dataset versions were fetched and used from ClearML Data?
I'm not in the best position to answer these questions right now.