I moved my conf file inside the docker container and edited the paths from localhost to my PC's IP address, and it worked
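For reference, the edit described above would live in the `api` section of `clearml.conf` — a minimal sketch, assuming the default ClearML server ports and a placeholder IP address in place of the real host:

```
api {
    # replace localhost with the host machine's IP so the container can reach the server
    web_server: http://192.168.1.10:8080    # hypothetical host IP
    api_server: http://192.168.1.10:8008
    files_server: http://192.168.1.10:8081
}
```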
Not on the NAS storage but on my PC where the training is running
I am not using any environments. I am running the training in a docker container via an sh file, but clearml is installed locally on my PC, and the docker container does not have access to the clearml server that is deployed locally on my PC.
I haven't pointed it to the file server because I'm running a locally deployed docker instance of ClearML
This is how I am defining the task in the code
I got this message when the training started and it is only saving the model locally
I have used an output_uri argument in my Task initialization for storing my models
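The original code is not shown, so as a hedged sketch: passing `output_uri` to `Task.init` is how ClearML is told to upload checkpoints to the file server rather than leave them on local disk. The project name, task name, and file-server address below are placeholders, not values from the original messages:

```python
# Sketch of a Task.init call with output_uri, assuming a locally deployed
# ClearML server whose fileserver listens on the default port 8081.
FILESERVER_URI = "http://192.168.1.10:8081"  # hypothetical server address


def init_task(project: str, name: str, uri: str = FILESERVER_URI):
    """Create a ClearML task whose model artifacts are uploaded to `uri`."""
    from clearml import Task  # requires `pip install clearml`

    return Task.init(project_name=project, task_name=name, output_uri=uri)
```

If the container cannot reach that address, uploads silently fall back to local paths, which matches the symptom described above.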
and I'm plotting the losses like this
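The plotting code is not shown either; a minimal sketch of per-iteration loss reporting with ClearML's `Logger.report_scalar` (the title/series names are assumptions for illustration):

```python
# Hedged sketch: report a training-loss value against the current iteration
# using an already-initialized ClearML task's logger.
def report_loss(logger, loss: float, iteration: int):
    """Log one loss value; shows up under the task's Scalars tab."""
    logger.report_scalar(title="loss", series="train",
                         value=loss, iteration=iteration)
```

In a real script the logger would come from `task.get_logger()` after `Task.init`.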
Do I have to run a task individually completely for it to function properly in a pipeline?
The model is only getting saved locally
It was getting logged when I was running it as an individual task, but it is not getting logged in the pipeline. I was plotting loss graphs too, but now they're not getting plotted.
Just locally, but it is saved fine
My models are not getting saved in the .pipeline folder. They are not getting saved in the output_uri specified in Task.init either.