Apparently it keeps calling this register_dataset.py script
Okay, so they run once I started a ClearML agent listening to that queue.
However, since a new task started in the project, it would again start a new task.
I've basically just added dataset ID and model ID parameters to the args.
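For anyone curious what that looks like, here is a minimal sketch; the parameter names `dataset_id` and `model_id` are just the ones I used, not anything ClearML requires:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Arg parser for the register_dataset.py script (sketch only)."""
    parser = argparse.ArgumentParser(description="register a dataset/model pair")
    # The two parameters I added, so a cloned task can be pointed
    # at a different dataset/model without any code changes:
    parser.add_argument("--dataset_id", default=None, help="ClearML dataset ID")
    parser.add_argument("--model_id", default=None, help="ClearML model ID")
    return parser

# Parse from an explicit list so the example is self-contained:
args = build_parser().parse_args(["--dataset_id", "ds-123", "--model_id", "m-456"])
```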
For anyone reading this: I think I've gotten an understanding. I can add folders to a dataset, so I'll create a single dataset and just keep adding folders to it, then keep records of it in a database.
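The record-keeping half of that plan could look like the sketch below (assumed schema and folder names; the actual folder-adding side would use the ClearML `Dataset` API — `add_files()`, `upload()`, `finalize()` — which is left out here so the example stays self-contained):

```python
import sqlite3

# In-memory DB for the sketch; a real setup would use a file or server DB.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dataset_folders (
        dataset_id TEXT,
        folder     TEXT,
        added_at   TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def record_folder(dataset_id: str, folder: str) -> None:
    # Log that `folder` was added to the single long-lived dataset.
    conn.execute(
        "INSERT INTO dataset_folders (dataset_id, folder) VALUES (?, ?)",
        (dataset_id, folder),
    )
    conn.commit()

# Hypothetical daily folders being appended to one dataset:
record_folder("my-dataset", "2024-01-01/")
record_folder("my-dataset", "2024-01-02/")
rows = conn.execute("SELECT folder FROM dataset_folders").fetchall()
```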
Basically want to be able to serve a model, and also send requests to it for inference.
I feel like they need to add this in the documentation 😕
this is the console output
AgitatedDove14 Once a model is saved and published, it should be downloadable right? Cause I keep trying to upload a model from different projects and different tasks but it keeps overwriting the previous one, and the download option is grayed out in the UI.
I was getting a different error when I posted this question. Now I'm just getting this connection error.
AgitatedDove14 Just wanted to confirm: what kind of file is the string artifact stored in? A txt file or a pkl file?
I was looking to see if I can just get away with using get_local_copy instead of the mutable one, but I guess that is unavoidable.
I'll give that a try, thank you.
Set the host variable to the IP assigned to my laptop by the network.
We want to get a clearer picture here to compare versioning with ClearML Data vs our own custom versioning
Then I can use ClearML-Data with it properly.
Can you give me an example url for the api call to stop_many?
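In case it helps anyone else: the endpoint is `tasks.stop_many` on the API server. A sketch of what the request would look like, built but not sent (host, port, and task IDs here are placeholders, and a real call also needs your ClearML auth credentials):

```python
import json
from urllib.request import Request

# Placeholder host -- the API server, not the web UI (default port 8008).
api_server = "http://localhost:8008"
payload = {"ids": ["task-id-1", "task-id-2"]}  # tasks to stop

# POST <api_server>/tasks.stop_many with a JSON body listing task IDs.
req = Request(
    url=f"{api_server}/tasks.stop_many",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it
# (after adding authentication headers).
```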
Basically, I don't want the storage to be filled up on the ClearML Server machine.
So in my head, every time I publish a dataset, it should get triggered and run that task.
I'd like to add an update to this: when I use a schedule function instead of a schedule task with the dataset trigger scheduler, it works as intended. It runs the desired function when triggered, then goes back to sleep until another trigger fires.
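If I understand the behaviour correctly, the pattern is: one publish event fires each registered trigger exactly once. The toy class below is a pure-Python stand-in (not the real `TriggerScheduler` from `clearml.automation`) just to illustrate that behaviour:

```python
from typing import Callable, List

class ToyTriggerScheduler:
    """Toy stand-in for a dataset trigger scheduler (illustration only)."""

    def __init__(self) -> None:
        self._callbacks: List[Callable[[str], None]] = []

    def add_dataset_trigger(self, schedule_function: Callable[[str], None]) -> None:
        # Register a function to run whenever a dataset is published.
        self._callbacks.append(schedule_function)

    def publish_event(self, dataset_id: str) -> None:
        # One publish -> each registered trigger fires exactly once.
        for fn in self._callbacks:
            fn(dataset_id)

runs: List[str] = []
sched = ToyTriggerScheduler()
sched.add_dataset_trigger(schedule_function=runs.append)
for day in ("ds-day1", "ds-day2", "ds-day3"):  # N publishes
    sched.publish_event(day)
# N publishes -> N runs of the scheduled function
```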
I'll try to see how to use the SDK method you just shared.
There's data when I manually went there. The directory was originally hidden, my bad.
{"meta":{"id":"c3edee177ae348e5a92b65604b1c7f58","trx":"c3edee177ae348e5a92b65604b1c7f58","endpoint":{"name":"","requested_version":1.0,"actual_version":null},"result_code":400,"result_subcode":0,"result_msg":"Invalid request path /","error_stack":null,"error_data":{}},"data":{}}
I have a lot of anonymous tasks running which I would like to close immediately.
Let me tell you what I think is happening and you can correct me where I'm going wrong.
Under certain conditions at certain times, a Dataset is published, which activates a Dataset trigger. So if I publish one dataset every day, I activate a Dataset Trigger that day once it's published.
N publishes = N Triggers = N Anonymous Tasks, right?
Just to be absolutely clear.
Agent Listening on Machine A with GPU listening to Queue X.
Task enqueued onto queue X from Machine B with no GPU.
Task runs on Machine A and experiment gets published to server?
Can you guys let me know what the finalize and publish methods do?