I think I understand now that I first need to have the ClearML server up and running.
I'm on Windows right now, and I work with ClearML on Ubuntu. I think the version is 1.1.5rc4.
For anyone who's struggling with this, here's how I solved it. I hadn't personally worked with gRPC, so I looked at the HTTP docs instead, and those were much simpler to use.
Basically, I'm saving a model on the client machine and publishing it, then trying to download it from the server.
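Roughly, the flow looks like this (project/task names below are placeholders, not my actual code):
```python
from clearml import Task, OutputModel, InputModel

# Client side: attach a model file to a task and publish it
task = Task.init(project_name="examples", task_name="model-upload")  # placeholder names
output_model = OutputModel(task=task, name="my-model")
output_model.update_weights(weights_filename="model.pt")  # uploads the local weights file
output_model.publish()

# Later / elsewhere: pull the published model back down from the server
model = InputModel(model_id=output_model.id)
local_path = model.get_local_copy()  # downloads the weights to a local cache
print(local_path)
```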
Even though I stopped my schedulers and triggers, the number of anonymous tasks keeps increasing.
Shouldn't I get redirected to the login page instead of the dashboard if I'm not logged in? 😞
And given that, I want to have artifacts = task.get_registered_artifacts()
I checked that the value is being returned, but I'm having issues accessing merged_dataset_id in the pre_execute_callback the way you showed me.
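For reference, this is roughly what I'm attempting in the callback (the producing task's name and the parameter key are guesses/placeholders on my part):
```python
from clearml import Task

# Sketch of a pre_execute_callback that pulls merged_dataset_id from the
# artifacts of the task that produced it (project/task names are placeholders)
def pre_execute_callback(pipeline, node, param_override):
    producer = Task.get_task(project_name="my-project", task_name="merge-datasets")
    artifact = producer.artifacts.get("merged_dataset_id")
    if artifact is not None:
        # parameter key naming ("General/...") is my assumption here
        param_override["General/merged_dataset_id"] = artifact.get()
    return True  # returning False would skip this pipeline step
```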
Thank you for the help with that.
Alright, so is there no way to kill it using the worker ID or worker name?
I was getting a different error when I posted this question. Now I'm just getting this connection error.
Previously I wasn't. I would just call model.save, but I was unsure how to make modifications to the output model, which is why I created one explicitly.
I hope my problem statement is clear. I want to solve the issue with or without an output model. Any help would be appreciated.
Basically, at the very least, I'd like to be able to add tags, set the name, and choose to publish the model that I'm saving.
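For example, something along these lines is what I'm after (names and tags are placeholders, and I'm assuming the explicit OutputModel route is the way to do this):
```python
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="train")  # placeholder names
model = OutputModel(
    task=task,
    name="resnet-baseline",            # set the model name myself
    tags=["baseline", "experiment-1"], # attach tags
)
model.update_weights(weights_filename="model.pt")  # register the saved weights
model.publish()                                    # publish only when I choose to
```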
I'll look into how to use the SDK method you just shared.
Wait, is it possible to do what I'm doing with just one big Dataset object or something?
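E.g., I'm wondering whether something like Dataset.squash would let me collapse everything into one dataset (names and IDs below are placeholders):
```python
from clearml import Dataset

# Hypothetical: collapse several dataset versions/batches into a single dataset
merged = Dataset.squash(
    dataset_name="all-batches",                        # placeholder name
    dataset_ids=["<dataset_id_1>", "<dataset_id_2>"],  # the parts to merge
)
print(merged.id)  # one dataset ID to work with downstream
```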
The server is on a different machine. I'm experimenting on the same machine though.
To be clearer, an example use case for me: I'm trying to make a pipeline that, every time a new dataset/batch is published using clearml-data, will:
- Get the data
- Train on it
- Save the model and publish it

I want to start this process with a trigger when a dataset is published to the server, something like the sketch below. Is there any example I can look at for accomplishing something like this?
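For context, what I have in mind is roughly this TriggerScheduler sketch (the task ID, queue, and project are placeholders, and I'd double-check the exact trigger arguments against the docs):
```python
from clearml.automation import TriggerScheduler

# Poll the server every few minutes for new datasets
trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_dataset_trigger(
    name="retrain-on-new-data",
    schedule_task_id="<template_task_id>",  # task to clone + enqueue when triggered
    schedule_queue="default",               # queue the cloned task runs in
    trigger_project="datasets",             # watch datasets in this project
)
trigger.start()  # blocks and keeps polling the server
```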
I'll test it with the updated one.
I'd like to maybe have a variable in simple-pipeline.py that holds the value returned by split_dataset.
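Something like this sketch is what I mean (step and variable names are just illustrative):
```python
from clearml import PipelineController

def split_dataset(dataset_id):
    # ... split logic would go here; return the new dataset's ID
    return "new-dataset-id"

def train_model(dataset_id):
    print("training on", dataset_id)

pipe = PipelineController(name="simple-pipeline", project="examples", version="0.1")
pipe.add_function_step(
    name="split_dataset",
    function=split_dataset,
    function_kwargs={"dataset_id": "<source_dataset_id>"},  # placeholder input
    function_return=["split_dataset_id"],  # name the returned value
)
pipe.add_function_step(
    name="train",
    function=train_model,
    # reference the previous step's named return value
    function_kwargs={"dataset_id": "${split_dataset.split_dataset_id}"},
)
pipe.start_locally(run_pipeline_steps_locally=True)
```
As far as I understand, the ${split_dataset.split_dataset_id} reference should also make the train step depend on split_dataset automatically.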