To provide an upload destination for the artifact, you can either:
- add the parameter `output_uri` to `Task.init` ( https://clear.ml/docs/latest/docs/references/sdk/task#taskinit ), or
- set the destination in `clearml.conf` via `sdk.development.default_output_uri` ( https://clear.ml/docs/latest/docs/configs/clearml_conf#sdkdevelopment )
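For the `clearml.conf` route, the entry looks like this (the bucket URI below is just an assumed placeholder; any storage URI your agents can reach works, e.g. `s3://`, `gs://`, `azure://`, or the file server):

```
sdk {
    development {
        # uploads will go here instead of the default files server
        default_output_uri: "s3://my-bucket/models"
    }
}
```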
To enqueue the pipeline, you simply call it, without `run_locally()` or `debug_pipeline()`.
You will have to provide the `execution_queue` parameter to your steps, or `default_queue` to `PipelineDecorator.pipeline`.
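A minimal sketch of that setup (the queue, project, and pipeline names here are assumptions; use whatever queues your agents actually listen on):

```python
from clearml import PipelineDecorator

# hypothetical queue name; steps can each name their own queue
@PipelineDecorator.component(execution_queue="gpu_queue")
def train_step(n_epochs):
    # ... training logic runs remotely on an agent ...
    return n_epochs

@PipelineDecorator.pipeline(
    name="my_pipeline", project="my_project",  # hypothetical names
    default_queue="cpu_queue",  # steps without execution_queue land here
)
def my_pipeline():
    train_step(10)

if __name__ == "__main__":
    # no run_locally() / debug_pipeline() call: calling the pipeline
    # enqueues the controller and the agents pick up the steps
    my_pipeline()
```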
How do I provide a specific output path to store the model? (Say I want the server to store it in ~/models)
I'm training my model via a remote agent.
Thanks to your suggestion I could log the model as an artifact (using `PipelineDecorator.upload_model()`) - but only the path is reflected; I can't seem to download the model from the server.
How do I just submit a pipeline to the server to be executed by an agent?
Currently I am able to use `PipelineDecorator.run_locally()` to run it; however, I just want to push it to a queue and make the agent do its trick. Any recommendations?
yep i am working on it - i have something that i suspect does not work as expected, nothing sure though
for the step that reports the model:
```python
from clearml import Task
import torch
import torch.nn as nn

class nn_model(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 256),
        )

    def forward(self, x):
        return self.encoder(x)

    def save(self, path):
        # save the weights locally; the step then uploads this file
        torch.save(self.state_dict(), path)

mymodel = nn_model()
mymodel.save('./mymodel.pth')
```
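To have the server store a downloadable copy of the weights, not just the local path, one option (a sketch, assuming the step runs as the current ClearML task and an `output_uri` / `default_output_uri` is configured) is to upload the saved file through `OutputModel`:

```python
from clearml import Task, OutputModel

# inside the pipeline step, after mymodel.save('./mymodel.pth'):
task = Task.current_task()
output_model = OutputModel(task=task, name="mymodel")  # "mymodel" is an assumed name
# uploads the file itself to the configured upload destination,
# so it can be downloaded from the server afterwards
output_model.update_weights(weights_filename='./mymodel.pth')
```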
you can log your models as artifacts on the pipeline task, from any pipeline step. Have a look here:
I am trying to find you an example, hold on 🙂