Hello,
I've been using ClearML for a month now, and I must say it's a really good product!
I'm mostly working with Hugging Face transformers, and I integrated ClearML into my solution:
- task initialization
- task logging (text, scalar and plot)
Now I'm wondering how to properly save the output model. Currently, it stores one binary file automatically because of the underlying call to torch.save. The problem is that transformers produces multiple binary files that should all be stored in order to reuse the model afterwards.
Has anybody found a solution? Does it mean that I should use manual model logging?
Kind regards
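For reference, the manual logging I have in mind would look roughly like this (a sketch only; the model and tokenizer objects and the folder path are placeholders, and I'm assuming OutputModel.update_weights_package can package a whole folder):

from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="transformers manual logging")

# transformers writes the config, tokenizer files and weights into one folder
model.save_pretrained("./my_model")
tokenizer.save_pretrained("./my_model")

# package the whole folder as a single output model on the ClearML server
output_model = OutputModel(task=task, framework="PyTorch")
output_model.update_weights_package(weights_path="./my_model")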

Posted 2 years ago

7 Answers


HungryArcticwolf62 the new clearml-serving is almost out (ETA late next week); you can already start playing with it here:
https://github.com/allegroai/clearml-serving/tree/dev
Example (train + serve):
https://github.com/allegroai/clearml-serving/tree/dev/examples/sklearn
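Once a model is deployed, querying the endpoint is just an HTTP call, something along these lines (the endpoint name is taken from the sklearn example; host/port depend on how you deploy the inference container):

import requests

# endpoint name from the sklearn example; host/port depend on your deployment
response = requests.post(
    "http://127.0.0.1:8080/serve/test_model_sklearn",
    json={"x0": 1, "x1": 2},
)
print(response.json())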

Posted 2 years ago

HungryArcticwolf62, I couldn't find anything relevant 😞
AgitatedDove14, wdyt?

Posted 2 years ago

After you store the model in the ClearML server, accessing it later becomes almost trivial 🙂
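For example, fetching it from another script looks roughly like this (a minimal sketch; the task ID is a placeholder):

from clearml import Task

# placeholder task ID; any task that registered an output model will do
task = Task.get_task(task_id="aabbcc112233")
model = task.models["output"][-1]  # last model the task produced
local_path = model.get_local_copy()  # downloads (and caches) the stored weights
print(local_path)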

Posted 2 years ago

HungryArcticwolf62 a transformer model is, in the end, a PyTorch/TF model with pre/post processing.
The PyTorch/TF model inference is done with Triton (probably the most efficient engine today), while ClearML runs the pre/post processing on a different CPU machine (making sure we fully utilize all the HW). Does that answer the question?
Latest docs here:
https://github.com/allegroai/clearml-serving/tree/dev

Expect a release after the weekend 😉
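To give a feel for the pre/post processing side, the serving examples define a Preprocess class roughly along these lines (a sketch based on the dev-branch examples; exact method signatures may differ between versions):

from typing import Any

# served alongside the model; clearml-serving calls these hooks around Triton inference
class Preprocess(object):
    def preprocess(self, body: dict, collect_custom_statistics_fn=None) -> Any:
        # turn the request JSON into the tensor/array the model expects
        return [[body.get("x0", 0), body.get("x1", 0)]]

    def postprocess(self, data: Any, collect_custom_statistics_fn=None) -> dict:
        # turn the raw model output back into a JSON-serializable response
        return {"y": data.tolist() if hasattr(data, "tolist") else data}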

Posted 2 years ago

Actually, this helps clarify what I'm trying to achieve. I'm trying to find a way to store the model (I'll try using the output_uri argument), and also a way to serve models using clearml-serving. Since I don't know yet how clearml-serving works, I wanted to first archive the correct files.

Posted 2 years ago

Hi AgitatedDove14, CostlyOstrich36,
Thanks for the links. I see that clearml-serving supports a predefined list of engines, transformers not included. Do you have any documentation on how one would implement an engine and integrate it into the on-prem version?

Posted 2 years ago

Hi HungryArcticwolf62,
From what I understand you simply want to access models afterwards - correct me if I'm wrong.
What I think would solve your problem is the following:

task = Task.init(..., output_uri=True)

This should upload the model to the server and thus make it accessible to other entities within the system.
Am I on track?
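Note that besides output_uri=True (which uses the ClearML files server), you can also pass an explicit storage target; a quick illustration (the bucket path is a placeholder):

from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="transformers training",
    # True uses the ClearML files server; a URI stores models in your own bucket
    output_uri="s3://my-bucket/models",  # s3://, gs:// and azure:// targets are supported
)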

Posted 2 years ago