
I have a situation where I'd like to "promote" the pipeline (and dataset) by creating it in a completely separate instance of ClearML Server which is used for production retraining (vs. the dev ClearML Server that is used for experiments).
a) Is this something that makes sense to do? If not, what's the idiomatic way of "promoting" tasks to be used by prod retraining?
b) Can the ClearML client support connecting to the "dev" ClearML server and the "prod" server and pass objects between them? Or will I need to register the pipeline in prod "from scratch" by registering tasks from Python code?

Posted 2 years ago

Answers 3


RoughTiger69 Hi 🙂

Regarding your questions:
Moving certain tasks/datasets from server to server would require a data merge. This process basically requires merging the databases (mongodb, elasticsearch, files etc.). I think it's something that can be done in the paid version as a service but not in the free. I think if you'd like to 'promote' tasks to production you can either work on a special project for that OR upload the models to S3 and then re-run the experiment and point it to the model Currently there is option to pass tasks between different servers. The agent can connect to any server depending on configuration but not pass data between them. I think you'd have to register the pipeline in prod "from scratch" for now

Posted 2 years ago

Hi RoughTiger69
A. Yes, makes total sense. Basically you can use Task.export_task / Task.import_task to achieve this (notice we assume the dataset artifact links are accessible from both servers; usually this is the case).

B. The easiest way would be to use separate processes: one subprocess exports from dev, with the credentials and configuration passed via OS environment variables, and another subprocess imports it into the prod server (again with the OS environment pointing at the prod server). Makes sense? A sketch of this flow follows below.
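A minimal sketch of that export/import flow, assuming credentials can be supplied via the standard CLEARML_API_HOST / CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY environment variables (depending on your setup you may also need CLEARML_WEB_HOST / CLEARML_FILES_HOST) and the SDK's Task.export_task() / Task.import_task() calls; the hosts, keys, and task ID are placeholders:

```python
# Hypothetical sketch: promote a task from the dev server to the prod server
# using two subprocesses, each pointed at a different ClearML server via
# environment variables. Hosts, keys, and the task ID are placeholders.
import os
import subprocess
import sys

DEV_ENV = {
    "CLEARML_API_HOST": "https://api.dev.example.com",
    "CLEARML_API_ACCESS_KEY": "<dev-access-key>",
    "CLEARML_API_SECRET_KEY": "<dev-secret-key>",
}
PROD_ENV = {
    "CLEARML_API_HOST": "https://api.prod.example.com",
    "CLEARML_API_ACCESS_KEY": "<prod-access-key>",
    "CLEARML_API_SECRET_KEY": "<prod-secret-key>",
}

# Snippet run in the dev-credentialed subprocess: dump the task as JSON.
EXPORT_SNIPPET = """
import json, sys
from clearml import Task
task = Task.get_task(task_id=sys.argv[1])
json.dump(task.export_task(), sys.stdout)
"""

# Snippet run in the prod-credentialed subprocess: recreate the task.
IMPORT_SNIPPET = """
import json, sys
from clearml import Task
new_task = Task.import_task(json.load(sys.stdin))
print(new_task.id)
"""

def promote(task_id: str) -> str:
    # Subprocess 1: export the task definition from the dev server.
    exported = subprocess.run(
        [sys.executable, "-c", EXPORT_SNIPPET, task_id],
        env={**os.environ, **DEV_ENV},
        capture_output=True, check=True, text=True,
    ).stdout
    # Subprocess 2: import the task definition into the prod server.
    imported = subprocess.run(
        [sys.executable, "-c", IMPORT_SNIPPET],
        input=exported,
        env={**os.environ, **PROD_ENV},
        capture_output=True, check=True, text=True,
    ).stdout
    return imported.strip()

if __name__ == "__main__":
    print("new prod task id:", promote("<dev-task-id>"))
```

This assumes the dataset/artifact links inside the exported task resolve from both servers (e.g. they live on S3), as noted in point A.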

Posted 2 years ago

Amazing, thanks so much!

Posted 2 years ago