
I had initially just pasted the new credentials in place of the existing ones in my conf file;
Running clearml-init now fails at verifying credentials
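For context, this is roughly the api section of my clearml.conf where I pasted the new credentials (server URLs and keys here are placeholders, not my real ones):
```
api {
    web_server: http://my-clearml-server:8080
    api_server: http://my-clearml-server:8008
    files_server: http://my-clearml-server:8081
    credentials {
        "access_key" = "NEW_ACCESS_KEY"
        "secret_key" = "NEW_SECRET_KEY"
    }
}
```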
We're initialising a task to ensure it appears on the experiments page;
Also, not doing so gave us 'Missing parent pipeline task' errors for a set of experiments we had run earlier
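Roughly what we add at the top of each step (project and task names here are just placeholders):
```python
from clearml import Task

# Explicitly create/attach a task so the step shows up on the experiments page
task = Task.init(project_name="my_project", task_name="my_step")
```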
Also, does clearml by default upload models if we save them using torch.save?
So I did exactly that, and the name and path of the model on the local repo are noted;
However, I want to upload it to the fileserver
But it seems to upload the model on noticing torch.save regardless
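To illustrate what I mean, a minimal sketch of my understanding: the auto-logging hook picks up torch.save, and output_uri / auto_connect_frameworks control the upload side (project, task, and file names are placeholders); please correct me if I've got this wrong:
```python
from clearml import Task
import torch

# With framework auto-logging on (the default), ClearML registers any torch.save()
# call as an output model. Whether the weights file actually gets uploaded depends
# on output_uri: without it, only the local path is recorded.
task = Task.init(
    project_name="my_project",   # placeholder names
    task_name="train",
    output_uri=True,             # True -> upload weights to the default fileserver
)

# To stop ClearML from hooking torch.save() altogether:
# task = Task.init(..., auto_connect_frameworks={"pytorch": False})

model = torch.nn.Linear(4, 2)
torch.save(model.state_dict(), "model.pt")  # picked up and logged as an output model
```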
Configuration completed now; it was a proxy issue on my end
However, running my pipeline from a different machine still gives me a problem
Hey David, I was able to get things uploaded to the fileserver with a change in the conf
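In case it helps anyone else, the change was along these lines in clearml.conf (the URL is just a placeholder for your own fileserver):
```
sdk {
    development {
        # Default destination for model/artifact uploads when output_uri isn't set in code
        default_output_uri: "http://my-clearml-server:8081"
    }
}
```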
Is there a way to store the return values after each pipeline stage in a format other than pickle?
I'm asking because my kwargs shows up as an empty dict when printed
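What I'm trying as a workaround, in case it's relevant (not sure it's the intended way): uploading the stage output explicitly as an artifact and choosing the format via extension_name; the DataFrame and names are just for illustration:
```python
from clearml import Task
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# Upload the stage result explicitly and pick the serialization via extension_name
# (e.g. ".csv" for a DataFrame) instead of relying on the default pickled return value.
Task.current_task().upload_artifact(
    name="stage_output",
    artifact_object=df,
    extension_name=".csv",
)
```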
How do I provide a specific output path to store the model? (Say I want the server to store it in ~/models)
I'm training my model via a remote agent.
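A sketch of what I'm after, assuming output_uri accepts a plain path as the destination (names and paths are placeholders):
```python
from clearml import Task

# Point the task's output destination at a specific folder/URI; the saved weights
# are copied there instead of only living on the training machine. With a remote
# agent, a local path like this is resolved on the agent's machine.
task = Task.init(
    project_name="my_project",
    task_name="train",
    output_uri="/home/me/models",   # could also be s3://..., gs://..., or the fileserver URL
)
```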
Thanks to your suggestion I could log the model as an artefact (using PipelineDecorator.upload_model()), but only the path is reflected; I can't seem to download the model from the server
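What I'm trying now as a workaround, assuming OutputModel.update_weights is the right call for this (model name and fileserver URL are placeholders):
```python
from clearml import Task, OutputModel

task = Task.current_task()

# Register and upload the weights file itself, so the model entry on the server
# points at the fileserver copy rather than a path that only exists locally.
output_model = OutputModel(task=task, name="my_model")
output_model.update_weights(
    weights_filename="model.pt",
    upload_uri="http://my-clearml-server:8081",  # placeholder fileserver URL
)
```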
Also,
How do I just submit a pipeline to the server to be executed by an agent?
Currently I am able to use PipelineDecorator.run_locally() to run it;
However, I just want to push it to a queue and make the agent do its trick; any recommendations?
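In case someone else searches for this later, here's the shape of what I ended up with, as I understand it: drop run_locally(), set the step queue with set_default_execution_queue, and (if I have the parameter right) set the controller's queue via pipeline_execution_queue; all names and queues below are placeholders:
```python
from clearml.automation.controller import PipelineDecorator


@PipelineDecorator.pipeline(
    name="my_pipeline",
    project="my_project",
    version="0.1",
    pipeline_execution_queue="services",  # queue where the controller task itself is enqueued
)
def my_pipeline():
    ...


if __name__ == "__main__":
    # Queue for the individual pipeline steps
    PipelineDecorator.set_default_execution_queue("default")
    # No run_locally() here: calling the pipeline function registers the controller
    # and enqueues it, so an agent listening on the queues can pick everything up.
    my_pipeline()
```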
Yep, the pipeline finishes but the status is still at 'running'. Do we need to close a logger that we use for scalars or anything?