Not on the NAS storage but on my PC where the training is running
Do I have to run a task completely on its own first for it to function properly in a pipeline?
It was getting logged when I ran it as an individual task, but it is not getting logged in the pipeline. I was plotting loss graphs too, but now they're not getting plotted.
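For context, the loss graphs come from plain scalar reporting, roughly like this (a sketch with dummy values):

```python
from clearml import Logger

# current_logger() resolves to the currently running task,
# whether the code runs standalone or as a pipeline step
logger = Logger.current_logger()
for epoch, loss in enumerate([0.9, 0.5, 0.3]):  # dummy values for illustration
    logger.report_scalar(title="loss", series="train", value=loss, iteration=epoch)
```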
I haven't pointed it to the file server because I'm running a locally deployed docker instance of ClearML
This is how I am defining the task in the code:
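A minimal sketch of that definition (the project/task names and the local path are placeholders for my real values):

```python
from clearml import Task

task = Task.init(
    project_name="my_project",    # placeholder
    task_name="training_task",    # placeholder
    # local path on the training PC, not the ClearML fileserver
    output_uri="/mnt/models",
)
```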
My models are not getting saved in the .pipeline folder, nor are they being saved to the output_uri specified in Task.init.
I got this message when the training started, and it is only saving the model locally.
How is the model being saved/logged into ClearML?
I mean code-wise. Also, where is it saved locally?
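For example, with auto-logging enabled (the default), a plain framework save call is what ClearML would pick up; a sketch assuming PyTorch, with placeholder names:

```python
import torch
from clearml import Task

task = Task.init(project_name="my_project", task_name="training_task")  # placeholders

model = torch.nn.Linear(10, 2)
# ClearML's auto-logging intercepts this call, registers the file as an
# output model on the task, and uploads it if an output_uri is set
torch.save(model.state_dict(), "model.pt")
```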
What if you point it to the fileserver? Does it still not upload the model?
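In a default docker-compose deployment the fileserver usually listens on port 8081, so the change would look roughly like this (adjust host/port if your setup differs):

```python
from clearml import Task

task = Task.init(
    project_name="my_project",   # placeholder
    task_name="training_task",   # placeholder
    # default fileserver address of a local docker-compose deployment
    output_uri="http://localhost:8081",
)
```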
Is output_uri defined for both steps? Just making sure.
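i.e., something along these lines in each step's script (a sketch, placeholder names):

```python
# step_one.py
from clearml import Task
task = Task.init(project_name="my_project", task_name="step_one",
                 output_uri="/mnt/models")  # set here...

# step_two.py
from clearml import Task
task = Task.init(project_name="my_project", task_name="step_two",
                 output_uri="/mnt/models")  # ...and here as well
```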
I have used an output_uri argument in my Task initialization for storing my models.
Hi PerfectMole86,
How are you saving your models? Are they being saved under the .pipeline folder as well?