Alright, but is it saved as a text file or a pickle file?
This is the simplest I could get for the inference request. The model, input, and output names are the ones the server wanted.
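Roughly this, in case it helps (a sketch only; the host, model name, and tensor names/shapes/datatype are placeholders for whatever your server actually reports):

```python
import requests

# all names/shapes here are placeholders -- substitute the values the
# serving endpoint reports for the deployed model
url = "http://SERVING_HOST:8000/v2/models/my_model/infer"
payload = {
    "inputs": [
        {
            "name": "input__0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [[0.1, 0.2, 0.3, 0.4]],
        }
    ],
    "outputs": [{"name": "output__0"}],
}
print(requests.post(url, json=payload).json())
```

(This assumes a Triton-style v2 inference endpoint; adjust the URL path if your serving layer differs.)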
Oh oh oh. Wait a second. I think I get what you're saying. When I originally create the clearml-task, since I'm not passing the argument myself, it just uses the default value False.
Yes, it works. Thanks for all the help.
I have the server running now, and for now it seems I'm able to get the dataset even in the other file. I'll mess around with it now to get the hang of it and see how it actually works.
wrong image. lemme upload the correct one.
I already have the dataset ID as a hyperparameter, and I use it to get the dataset. I'm only handling one dataset right now, but merging multiple ones is a simple task as well.
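Roughly like this (a sketch; IDs and names are placeholders):

```python
from clearml import Dataset, Task

task = Task.init(project_name="training", task_name="train model")

# the dataset id is a hyperparameter, so a cloned/remote run can override it
params = {"dataset_id": "<DATASET_ID>"}
task.connect(params)

# fetch a read-only local copy of exactly that dataset version
local_path = Dataset.get(dataset_id=params["dataset_id"]).get_local_copy()

# merging several datasets would just be a new dataset with multiple parents:
# merged = Dataset.create(dataset_name="merged", dataset_project="data",
#                         parent_datasets=["<ID_1>", "<ID_2>"])
```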
Also, I'm not very experienced, so I'm unsure what the proposed querying is, and how (and whether) it works in ClearML here.
The only issue is that even though it's a bool, it's stored as the string "False", since ClearML stores the args as strings.
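My workaround is to parse it back into a real bool on the argparse side (str2bool is just a helper I wrote):

```python
import argparse

def str2bool(v):
    # the UI hands the value back as the string "False"/"True", and
    # bool("False") would evaluate to True, hence this helper
    return str(v).strip().lower() in ("true", "1", "yes")

parser = argparse.ArgumentParser()
parser.add_argument("--my-flag", type=str2bool, default=False)
```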
Shouldn't checkpoints be uploaded immediately? That's the purpose of checkpointing, isn't it?
Alright. Can you guide me on how to edit the task configuration object? Is it done via the UI or programmatically? Is there a config file, and can it work with any config file I create, or is it a specific config file? Sorry for the barrage of questions.
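From what I could piece together from the docs so far, the programmatic route might be something like this, but correct me if I'm wrong (the file and section names are mine):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="config demo")

# registers the local file as the task's configuration object; when the
# task runs remotely this returns a path to the (possibly UI-edited) copy
config_path = task.connect_configuration("my_config.yaml", name="my_config")
```

And presumably the same object can then also be edited in the UI under the task's configuration section?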
This is the task scheduler, btw, which will run a function every 6 hours.
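Roughly like this (a sketch; I'm assuming the interval form of add_task's timing arguments here):

```python
from clearml.automation import TaskScheduler

def sync_function():
    # the actual work that should run every 6 hours
    print("running scheduled sync")

scheduler = TaskScheduler()
scheduler.add_task(
    schedule_function=sync_function,
    name="six-hour sync",
    hour=6,  # assumption: hour=6 with no day/weekday means every 6 hours
    recurring=True,
)
scheduler.start_remotely(queue="services")
```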
I'll look into it. Thank you everyone.
Let me try to be a bit more clear.
Say I have a training task that's getting multiple ClearML Datasets from multiple dataset IDs. In that script I get local copies, train the model, save the model, and delete the local copies.
Does ClearML keep track of which dataset versions were fetched and used from ClearML Data?
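For context, the script looks roughly like this (IDs are placeholders); I'm connecting the IDs myself, but I don't know whether ClearML records anything beyond that:

```python
from clearml import Dataset, Task

task = Task.init(project_name="training", task_name="multi-dataset run")

# connecting the ids at least makes the exact versions visible on the task
params = {"dataset_ids": "<ID_1>,<ID_2>"}
task.connect(params)

for ds_id in params["dataset_ids"].split(","):
    local_copy = Dataset.get(dataset_id=ds_id).get_local_copy()
    # ... train on local_copy, save the model, then delete the local copy
```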
I'm not in the best position to answer these questions right now.
I'll test it with the updated one.
AgitatedDove14 Sorry for pinging you on this old thread. I had an additional query: if you've worked on a process similar to the one mentioned above, how do you set the learning rate? And which optimizer/learning strategy did you use? Adam? RMSProp?
Creating a new dataset object for each batch allows me to just publish said batches, introducing immutability.
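Per batch that's roughly this (a sketch; names are placeholders, and publish() may need a recent clearml version; otherwise the UI or `clearml-data publish` does the same):

```python
from clearml import Dataset

# one immutable dataset version per incoming batch
batch_ds = Dataset.create(
    dataset_name="batch-0042",
    dataset_project="data",
    parent_datasets=["<PREVIOUS_VERSION_ID>"],  # chain off the last version
)
batch_ds.add_files("incoming_batch/")
batch_ds.upload()
batch_ds.finalize()
batch_ds.publish()  # a published version can no longer be modified
```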
I don't think so. Also I fixed it for now. Let me mention the fix. Gimme a bit
Basically, I want the model to be uploaded to the server alongside the experiment results.
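i.e. essentially this (sketch):

```python
from clearml import Task

# output_uri=True uploads saved weights/checkpoints to the ClearML file
# server, so they sit next to the experiment instead of on local disk only
task = Task.init(
    project_name="examples",
    task_name="train",
    output_uri=True,  # or an "s3://..." / "gs://..." bucket URI
)
```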
As of yet, I can only select the ones that are visible, and to select more I'll have to click on "View more", which gets extremely slow.
Also, do I have to manually keep track of dataset versions in a separate database, or does ClearML provide that as well?
I just assumed it should only be triggered by dataset-related things, but after a lot of experimenting I realized it's also triggered by tasks if the only condition passed is dataset_project and no other specific trigger condition (like on-publish or on-tags) is added.
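For reference, my trigger setup was roughly this (a sketch; the kwarg names are as I understood them from the docs):

```python
from clearml.automation import TriggerScheduler

def on_dataset_event(task_id):
    # called with the id of the dataset (task) that fired the trigger
    print(f"triggered by {task_id}")

trigger = TriggerScheduler()
trigger.add_dataset_trigger(
    schedule_function=on_dataset_event,
    name="dataset watcher",
    trigger_project="my_dataset_project",
    trigger_on_publish=True,  # without a condition like this, any task in the project matched
)
trigger.start_remotely(queue="services")
```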
In the case of an API call, given that I have the ID of the task I want to stop, would I make a POST request to [CLEARML_SERVER_URL]:8080/tasks.stop with the request body set up like the one mentioned in the API docs?
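Concretely, is this roughly right? (a sketch; I noticed the REST examples use port 8008 for the apiserver, with 8080 being the web UI, so I'm assuming 8008 here):

```python
import requests

base = "http://CLEARML_SERVER:8008"  # apiserver; 8080 serves the web UI

# get a session token with the access/secret key pair from clearml.conf
login = requests.get(f"{base}/auth.login", auth=("<ACCESS_KEY>", "<SECRET_KEY>"))
token = login.json()["data"]["token"]

resp = requests.post(
    f"{base}/tasks.stop",
    headers={"Authorization": f"Bearer {token}"},
    json={"task": "<TASK_ID>"},
)
print(resp.json())
```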
I understand your problem. I think you can normally specify where you want the data to be stored in a conf file somewhere; people here can guide you better. However, in my experience it kind of uploads the data and stores it in its own format.
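If I remember right, the knob lives in clearml.conf, something like this (a sketch; double-check the exact key against the docs):

```
# ~/clearml.conf
sdk {
    development {
        # where experiment output (models/artifacts) is uploaded by default
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```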
I'm using clearml-agent right now. I just upload the task inside a project. I've used argparse as well, but as of yet I haven't been able to find editable hyperparameters in the UI. Is there a tutorial video you can recommend that deals with this? I was following this one on YouTube, https://www.youtube.com/watch?v=Y5tPfUm9Ghg&t=1100s , but I can't seem to recreate his steps as he sifts through his code.
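For reference, my setup is basically just argparse plus Task.init(), which as far as I understand should auto-log the args as editable hyperparameters:

```python
import argparse
from clearml import Task

# Task.init() hooks argparse automatically, so the parsed args show up
# under the "Args" section in the UI and can be edited on a cloned task
task = Task.init(project_name="examples", task_name="argparse demo")

parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=10)
parser.add_argument("--lr", type=float, default=1e-3)
args = parser.parse_args()
```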
and then also write down my git username and password.
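(i.e. in clearml.conf on the agent machine, roughly:)

```
agent {
    git_user: "my-git-username"
    git_pass: "my-git-password-or-token"
}
```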