Answered
And One More Question. How Can I Get the Loaded Model in the Preprocess Class in ClearML Serving?


  
  
Posted 2 years ago

Answers 15


Hey, maybe AgitatedDove14 or ExasperatedCrab78 can help

  
  
Posted 2 years ago

How can I get the loaded model in the Preprocess class in ClearML Serving?

ComfortableShark77
Do you mean your preprocess class needs a Python package, or is it your own module?

  
  
Posted 2 years ago

AgitatedDove14 My model has a generate method that I would like to call. How can I get the automatically loaded model from the Preprocess object? Preprocess file:
```python
from typing import Any, Callable, Optional

import numpy as np
from transformers import TrOCRProcessor


# Notice: the Preprocess class must be named "Preprocess"
class Preprocess(object):

    def __init__(self):
        self.processor = TrOCRProcessor.from_pretrained("microsoft/trocr-small-printed")

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        return self.processor.batch_decode(np.array(body.get("image")))

    def process(
            self,
            data: Any,
            state: dict,
            collect_custom_statistics_fn: Optional[Callable[[dict], None]],
    ) -> Any:
        model = ...  # how do I get the automatically loaded model here?
        data = model.generate(data.pixel_values)
        return data

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        return dict(predict=data.tolist())
```

I trained a model and logged it to the ClearML Server. I tried to add it to ClearML Serving, but it calls the `forward` method by default.
  
  
Posted 2 years ago

ComfortableShark77 are you saying you need "transformers" in the serving container?
`CLEARML_EXTRA_PYTHON_PACKAGES: "transformers==x.y"`
https://github.com/allegroai/clearml-serving/blob/6005e238cac6f7fa7406d7276a5662791ccc6c55/docker/docker-compose.yml#L97
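For reference, that environment variable goes on the serving service in the linked docker-compose file. A minimal sketch (the service name is taken from the clearml-serving repo, and the `x.y` version pin stays a placeholder you must fill in yourself):

```yaml
# docker/docker-compose.yml (sketch) - extra pip packages are installed into
# the serving container at startup; replace x.y with a real transformers version
services:
  clearml-serving-inference:
    environment:
      CLEARML_EXTRA_PYTHON_PACKAGES: "transformers==x.y"
```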

  
  
Posted 2 years ago

AgitatedDove14 I need to call the generate method of my model, but by default it calls forward.

  
  
Posted 2 years ago

Yes, I am.

  
  
Posted 2 years ago

I tried to add it to ClearML Serving, but it calls the `forward` method by default

If this is the case, then the statement above is odd to me. If this is a custom engine, who exactly is calling `forward`?
(In your code example you specifically call generate, as you should.)

  
  
Posted 2 years ago

AgitatedDove14 Hi. Vlad and I work together, and I think I can paraphrase his question.
We've got our clearml-serving set up and we trained our model. Then we want to add that model to serving, but we need to write our custom preprocess.py in which we need to call the generate method of our model. We do not exactly understand how we can load/refer to our model.
In the examples for the custom engine we've got this:

```python
class Preprocess(object):
    """
    Notice the execution flows is synchronous as follows:
        1. RestAPI(...) -> body: dict
        2. preprocess(body: dict, ...) -> data: Any
        3. process(data: Any, ...) -> data: Any
        4. postprocess(data: Any, ...) -> result: dict
        5. RestAPI(result: dict) -> returned request
    """

    def __init__(self):
        """
        Set any initial property on the Task (usually model object)
        Notice these properties will be accessed from multiple threads.
        If you need stateful (per request) data, use the `state` dict argument
        passed to the pre/post/process functions
        """
        # set internal state, this will be called only once. (i.e. not per request)
        self._model = None
```

and the question is: what do we need to write into self._model to load our model in serving? Or how can we refer to our model in the Preprocess class?
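One possible answer, sketched: clearml-serving's custom-engine example also allows the Preprocess class to define a load() hook that receives the path of the model file the serving instance has already downloaded from the model registry. The torch.jit.load call and the generate usage below mirror what is described elsewhere in this thread; treat all of it as an illustrative sketch, not verified API:

```python
# Sketch only - assumes clearml-serving's custom-engine load() hook, which is
# called with the local path of the already-downloaded model file.
# torch.jit.load and model.generate are illustrative, not verified here.
from typing import Any, Optional


class Preprocess(object):
    def __init__(self):
        # called once per serving instance, not per request
        self._model = None

    def load(self, local_file_name: str) -> Optional[Any]:
        # torch is imported lazily; it must be listed in the serving
        # container's extra python packages for this to work
        import torch

        self._model = torch.jit.load(local_file_name)
        return self._model

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        # call generate explicitly instead of relying on the default forward
        return self._model.generate(data)
```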

  
  
Posted 2 years ago

AgitatedDove14 KindChimpanzee37 Can you help with this question, please?

  
  
Posted 2 years ago

AbruptHedgehog21 what exactly do you store as a model file? Is it a pickled Python object?

  
  
Posted 2 years ago

AgitatedDove14 we store a .pt model, and for inference we need the model's generate method. If we want to load the model we need torch.jit.load.

  
  
Posted 2 years ago

Ohh AbruptHedgehog21, if this is the case, why don't you store the model with torch.jit.save and use Triton to run it?
See example:
https://github.com/allegroai/clearml-serving/tree/main/examples/pytorch
(BTW: if you want a full custom model serve, in this case you would need to add torch to the list of python packages)
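The suggested export step can be sketched like this; a minimal sketch assuming a torch installation, with the function name and default path being illustrative:

```python
# Sketch: export a TorchScript version of a model so Triton can serve it.
# torch is imported lazily so the function can be defined without it installed;
# the function name and default output path are illustrative.
def export_torchscript(model, example_input, out_path="model.pt"):
    import torch

    # trace the forward pass with a representative input
    # (torch.jit.script(model) is the alternative for data-dependent control flow)
    scripted = torch.jit.trace(model, example_input)
    torch.jit.save(scripted, out_path)
    return out_path
```

Note that tracing captures the forward pass only; a generate-style decoding loop generally needs torch.jit.script or has to live in the pre/post-processing code, which is part of why transformer models are trickier to serve this way.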

  
  
Posted 2 years ago

AgitatedDove14 we will try to use Triton, but it's a bit hard with a transformer model.
We add all the extra packages in serving.
For now we did it like this: in preprocess.py we load our model from an S3 bucket and then use it. But it's maybe not the best solution.

  
  
Posted 2 years ago

we will try to use Triton, but it's a bit hard with a transformer model.

Yes ...

We add all the extra packages in serving.

So it should work. You can also run your preprocess class manually on your own machine (for debugging) if you pass it a local file (basically the model file downloaded from the UI).
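The "run it manually" suggestion can be sketched as follows; the stub model stands in for the real downloaded .pt checkpoint, and the method names follow the Preprocess class shown earlier in the thread (everything here is illustrative):

```python
# Local debugging sketch: drive the Preprocess flow by hand with a stub model
# instead of the real downloaded .pt file.
from typing import Any


class StubModel:
    # stands in for the torch.jit.load()-ed model; generate() just doubles values
    def generate(self, data: Any) -> Any:
        return [x * 2 for x in data]


class Preprocess:
    def __init__(self):
        self._model = None

    def load(self, local_file_name: str):
        # for a real run this would be: self._model = torch.jit.load(local_file_name)
        self._model = StubModel()

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        return body.get("image")

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        return self._model.generate(data)

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        return dict(predict=list(data))


# drive the synchronous flow exactly as the serving instance would
p = Preprocess()
p.load("path/to/downloaded/model.pt")  # placeholder path
state: dict = {}
data = p.preprocess({"image": [1, 2, 3]}, state)
data = p.process(data, state)
result = p.postprocess(data, state)
print(result)  # {'predict': [2, 4, 6]}
```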

But it's maybe not the best solution.

Yes... it is not. Separating the pre/post-processing onto a CPU instance and letting Triton do the GPU serving is a lot more efficient than vanilla PyTorch.

  
  
Posted 2 years ago