Answered
Hey,
how exactly do we store models generated from a stage in my pipeline? I was looking to store them in .h5/.pth formats in particular.
Is there an example?

  
  
Posted 2 years ago

Answers 8


Hi WickedElephant66,
you can log your models as artifacts on the pipeline task, from any pipeline step. Have a look here:
https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#models-artifacts-and-metrics
I am trying to find you an example, hold on 🙂
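In the meantime, here is a minimal sketch of the .pth save/load round-trip on its own, independent of ClearML (the class name `TinyEncoder` and the file path are made up for illustration). When ClearML framework integration is active in a task, calls to `torch.save` like this one can be picked up automatically:

```python
import torch
import torch.nn as nn

# Hypothetical minimal encoder, just to have some weights to checkpoint.
class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(28 * 28, 256), nn.ReLU(True))

    def forward(self, x):
        return self.encoder(x)

model = TinyEncoder()
# Save only the state_dict, the usual .pth convention.
torch.save(model.state_dict(), './mymodel.pth')

# Reload into a fresh instance to confirm the checkpoint round-trips.
restored = TinyEncoder()
restored.load_state_dict(torch.load('./mymodel.pth'))
```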

  
  
Posted 2 years ago

Yep, I am working on it - I have something that I suspect does not work as expected, nothing sure though.
For the step that reports the model:
```python
@PipelineDecorator.component(return_values=['res'],
                             parents=['step_one'],
                             cache=False,
                             monitor_models=['mymodel'])
def step_two():
    import torch
    import torch.nn as nn

    class nn_model(nn.Module):
        def __init__(self):
            super(nn_model, self).__init__()
            self.encoder = nn.Sequential(
                nn.Linear(28 * 28, 256),
                nn.ReLU(True),
            )

        def forward(self, x):
            return self.encoder(x)

        def save(self, path):
            torch.save(self.state_dict(), path)

    mymodel = nn_model()
    mymodel.save('./mymodel.pth')
```
  
  
Posted 2 years ago

regarding the file extension, it should not be a problem

  
  
Posted 2 years ago

To provide an upload destination for the artifact, you can:
- add the parameter output_uri to Task.init ( https://clear.ml/docs/latest/docs/references/sdk/task#taskinit ), or
- set the destination in clearml.conf: sdk.development.default_output_uri ( https://clear.ml/docs/latest/docs/configs/clearml_conf#sdkdevelopment )

To enqueue the pipeline, you simply call it, without run_locally or debug_pipeline.
You will have to provide the parameter execution_queue to your steps, or default_queue to the PipelineDecorator.pipeline
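As a rough sketch, the clearml.conf entry might look like the fragment below (the S3 bucket URL is a placeholder; any storage URI the agent can reach works):

```
# ~/clearml.conf (assumption: an S3 destination; adjust to your storage)
sdk {
    development {
        default_output_uri: "s3://my-bucket/models"
    }
}
```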

  
  
Posted 2 years ago

How do I provide a specific output path to store the model? (Say I want the server to store it in ~/models.)
I'm training my model via a remote agent.
Thanks to your suggestion I could log the model as an artifact (using PipelineDecorator.upload_model()) - but only the path is reflected; I can't seem to download the model from the server.

  
  
Posted 2 years ago

Also,
how do I just submit a pipeline to the server to be executed by an agent?
Currently I am able to use PipelineDecorator.run_locally() to run it;
however, I just want to push it to a queue and let the agent do its trick. Any recommendations?

  
  
Posted 2 years ago

Thanks for actively replying, David.
Any update on the example for saving a model from within a pipeline (specifically in .pth or .h5 formats)?

  
  
Posted 2 years ago

However, the model is saved in the step task - this is what I am trying to figure out.

  
  
Posted 2 years ago