Shouldn't checkpoints be uploaded immediately? That's the purpose of checkpointing, isn't it?
Alright. Can you guide me on how to edit the task configuration object? Is it done via the UI or programmatically? Is there a config file, and can it work with any config file I create, or is it a specific config file? Sorry for the barrage of questions.
This is the task scheduler btw which will run a function every 6 hours.
I'll look into it. Thank you everyone.
Let me try to be a bit more clear.
If I have a training task in which I'm getting multiple ClearML Datasets from multiple ClearML IDs. I get local copies, train the model, save the model, and delete the local copy in that script.
Does ClearML keep track of which data versions were gotten and used from ClearML Data?
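For reference, a minimal sketch of that fetch loop. The dataset getter is injected so the helper runs without a server; with the real SDK you would pass `clearml.Dataset.get`, and you can record the IDs on the task yourself (e.g. via `task.connect`). Project names and IDs below are placeholders:

```python
# Sketch: fetch local copies of several ClearML datasets by ID and keep a
# record of exactly which versions were used. `get_fn` stands in for
# clearml.Dataset.get so the helper is testable without a server.
def fetch_datasets(dataset_ids, get_fn):
    """Return {dataset_id: local_path} for each requested dataset."""
    copies = {}
    for ds_id in dataset_ids:
        ds = get_fn(dataset_id=ds_id)          # e.g. clearml.Dataset.get
        copies[ds_id] = ds.get_local_copy()    # cached, read-only local copy
    return copies

# With the real SDK (placeholder names/IDs):
#   from clearml import Dataset, Task
#   task = Task.init(project_name="training", task_name="train")
#   copies = fetch_datasets(["<id-1>", "<id-2>"], Dataset.get)
#   task.connect({"dataset_ids": list(copies)}, name="datasets")
```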
I'm not in the best position to answer these questions right now.
I'll test it with the updated one.
AgitatedDove14 Sorry for pinging you on this old thread. I had an additional query. If you've worked on a process similar to the one mentioned above, how do you set the learning rate? And which optimizer did you use? Adam? RMSProp?
Creating a new dataset object for each batch allows me to just publish those batches, introducing immutability.
I don't think so. Also I fixed it for now. Let me mention the fix. Gimme a bit
Basically want the model to be uploaded to the server alongside the experiment results.
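One way to get that (a sketch, assuming the default clearml.conf location; the server URL is a placeholder): set a default output URI so model snapshots are uploaded automatically, or equivalently pass `output_uri=True` to `Task.init` in code to upload to the files server.

```
# clearml.conf (HOCON) – server URL below is a placeholder
sdk {
    development {
        # upload model snapshots/artifacts to this storage automatically
        default_output_uri: "https://files.your-clearml-server.example"
    }
}
```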
As of yet, I can only select the ones that are visible; to select more, I'll have to click on "View more", which gets extremely slow.
Also, do I have to manually keep track of dataset versions in a separate database? Or am I provided that as well in ClearML?
I just assumed it should only be triggered by dataset-related things, but after a lot of experimenting I realized it's also triggered by tasks if the only condition passed is dataset_project and no other specific trigger condition (like on publish or on tags) is added.
In the case of an API call, given that I have the ID of the task I want to stop, I would make a POST request to [CLEARML_SERVER_URL]:8080/tasks.stop with the request body set up like the one mentioned in the API reference?
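Something like the sketch below (the `tasks.stop` path matches the ClearML REST reference; note that by default the API server listens on port 8008, while 8080 is the web UI, so it's worth double-checking on your deployment). The helper only builds the request so it's easy to verify; the actual POST and the deployment-specific credentials are left as comments:

```python
# Sketch: building a tasks.stop call against the ClearML API server.
# Server URL and task ID are placeholders; authentication (API access
# key/secret) is deployment-specific and omitted here.
def build_stop_request(server_url, task_id):
    """Return (url, json_body) for a tasks.stop call."""
    url = f"{server_url.rstrip('/')}/tasks.stop"
    body = {"task": task_id}
    return url, body

url, body = build_stop_request("http://clearml.example:8008", "abcd1234")
# import requests
# requests.post(url, json=body, auth=(ACCESS_KEY, SECRET_KEY))
```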
I understand your problem. I think you can normally specify where you want the data to be stored in a conf file somewhere; people here can better guide you. However, in my experience, it kind of uploads the data and stores it in its own format.
I'm using clearml-agent right now. I just upload the task inside a project. I've used argparse as well; however, as of yet, I have not been able to find writable hyperparameters in the UI. Is there any tutorial video you can recommend that deals with this? I was following this one on YouTube: https://www.youtube.com/watch?v=Y5tPfUm9Ghg&t=1100s but I can't seem to recreate his steps as he sifts through his code.
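For what it's worth, a minimal sketch of the argparse side (argument names are illustrative). When ClearML's `Task.init()` is called before `parse_args()`, the SDK hooks argparse, so these arguments surface under the experiment's Hyperparameters tab and the agent can inject values edited there:

```python
# Sketch: hyperparameters defined via argparse. With clearml, calling
# Task.init() before parse_args() lets the agent override these values
# from the UI when the task is cloned and enqueued.
from argparse import ArgumentParser

def build_parser():
    parser = ArgumentParser()
    parser.add_argument("--epochs", type=int, default=10)
    parser.add_argument("--lr", type=float, default=0.001)
    return parser

# from clearml import Task
# task = Task.init(project_name="demo", task_name="train")  # before parsing
args = build_parser().parse_args(["--epochs", "5"])  # real runs use sys.argv
```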
and then also write down my git username and password.
I just followed the instructions here at https://github.com/allegroai/clearml-serving
In the end it says I can curl the endpoint and mentions the serving-engine-ip, but I can't find the IP anywhere.
I did what you said: got the pipeline DAG and then used the step's `executed` field as the ID. Thank you, it worked fine.
CostlyOstrich36
Honestly, anything. I tried looking on YouTube, but there's very little material there, especially anything up to date. It's understandable given that ClearML is still in beta. I can look at courses/docs. I just want to be pointed in the right direction as to what I should look up and study.
Just to be absolutely clear.
Agent Listening on Machine A with GPU listening to Queue X.
Task enqueued onto queue X from Machine B with no GPU.
Task runs on Machine A and experiment gets published to server?
Takes in a name and an artifact object.
Sorry for the late reply. The situation is that when I initially ran the task, it took in a lot of arguments using argparse. Now, my understanding is that add_step() clones that task. I want that to happen, but I would like to be able to modify some of the values of the args, e.g. epochs or some other argument.
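A sketch of how that override could look, assuming the arguments were connected through argparse (which ClearML exposes under the "Args" parameter section). The helper below just builds the dict that would be passed to `add_step`'s `parameter_override`; the pipeline call itself is left as a comment with placeholder names:

```python
# Sketch: overriding cloned-task arguments in a pipeline step. Argparse
# arguments appear as "Args/<name>" parameters on the ClearML task, so
# parameter_override can replace them in the clone.
def argparse_overrides(**kwargs):
    """Build a parameter_override dict for argparse-backed arguments."""
    return {f"Args/{name}": value for name, value in kwargs.items()}

overrides = argparse_overrides(epochs=20, batch_size=64)
# pipe.add_step(name="train", base_task_project="demo",
#               base_task_name="train", parameter_override=overrides)
```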
So I just published a dataset once, but it keeps scheduling tasks.