I am not using any environments. I am running training in a Docker container via an sh file, but ClearML is installed locally on my PC, and the Docker container does not have access to the ClearML server deployed locally on my PC.
I moved my conf file inside the Docker container and edited the paths from localhost to my PC's IP address, and it worked.
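For reference, the edit described above touches the `api` section of `clearml.conf`; the IP below is a placeholder for the host machine's address, and the ports are the defaults used by the ClearML docker-compose deployment:

```
api {
    # Replace localhost / 127.0.0.1 with the host machine's IP
    # (192.168.1.100 here is a placeholder) so the training
    # container can reach the locally deployed server.
    web_server: http://192.168.1.100:8080
    api_server: http://192.168.1.100:8008
    files_server: http://192.168.1.100:8081
}
```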
It was getting logged when I ran it as an individual task, but it is not getting logged in the pipeline. I was plotting loss graphs too, but now they're not being plotted.
Do I have to run a task individually to completion for it to function properly in a pipeline?
I haven't pointed it to the file server because I'm running a locally deployed Docker instance of ClearML.
I have used the output_uri argument in my Task initialization for storing my models.
Just locally, but it is saved fine.
Not on the NAS storage, but on the PC where the training is running.
The model is only getting saved locally
and I'm plotting the losses like this:
I got this message when training started, and it is only saving the model locally.
My models are not getting saved in the .pipeline folder, and they are not being saved to the output_uri specified in Task.init either.
This is how I am defining the task in the code: