Answered
{"Detail":"Error Processing Request: Error: Failed Loading Preprocess Code For 'Py_Code_Best_Model': [Errno 2] No Such File Or Directory: '/Root/.Clearml/Cache/Storage_Manager/Global/Cd46Dd0091D71B5294Dc6870Ac6D17Dc..._Artifacts_Archive_Py_Code_Best_Model

{"detail":"Error processing request: Error: Failed loading preprocess code for 'py_code_best_model': [Errno 2] No such file or directory: '/root/.clearml/cache/storage_manager/global/cd46dd0091d71b5294dc6870ac6d17dc..._artifacts_archive_py_code_best_model/__init__.py'"}

I get this error when trying to call the endpoint. When deploying the model, I specify the preprocessor with: --preprocess ".."
Is this a good idea? I try to send all the code from the repo, or at least the modelling part, to the endpoint, because the preprocessing step is quite involved. But I am not sure how to verify that the right code got to the right place, with the right package versions, etc. What I would like is more logs, more feedback, more error messages at each point. Otherwise this is just throwing darts in the dark.

  
  
Posted 2 years ago

Answers 30


I know there is an aux cfg with key-value pairs, but how can I use it in the Python code?

This is actually for helping to configure Triton services; you cannot (I think) easily access it from the code.

  
  
Posted 2 years ago

now, I need to pass a variable to the Preprocess class

You mean for the constructor?

  
  
Posted 2 years ago

I passed an env variable to the Docker container, so I figured this out.

  
  
Posted 2 years ago

Hmm, as a quick solution you can use the custom example and load everything manually:
https://github.com/allegroai/clearml-serving/blob/219fa308df2b12732d6fe2c73eea31b72171b342/examples/custom/preprocess.py
But you have a very good point: I'm not sure how one could know what the correct XGBoost class is. Do you?

  
  
Posted 2 years ago

while in our own code:
` if model_type == 'XGBClassifier':
    model = XGBClassifier()
    model.load_model(filename) `

  
  
Posted 2 years ago

now on to the next pain point:

  
  
Posted 2 years ago

The DSes would expect the same interface as they used in the code that saved the model (me too TBH)

I'd rather just fail if they try to use a model that is unknown.

  
  
Posted 2 years ago

it worked!!!!

  
  
Posted 2 years ago

And then get_model is what I wrote above: it just uses the ClearML API to pick up the right model from the task_id and model_name. The model config contains the class name, so get_model has an if/else structure in it to create the right class.

  
  
Posted 2 years ago

yeah, so in docker run:
` -e TASK_ID='b5f339077b994a8ab97b8e0b4c5724e1' \
  -e MODEL_NAME='best_model' \ `
and then in Preprocess:
` self.model = get_model(task_id=os.environ['TASK_ID'], model_name=os.environ['MODEL_NAME']) `
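For reference, a minimal sketch of how that looks inside the custom Preprocess class (assuming the get_model helper defined later in this thread is importable):

` import os

class Preprocess(object):
    def __init__(self):
        # TASK_ID / MODEL_NAME are the env variables passed via docker run -e ...
        self.model = get_model(
            task_id=os.environ['TASK_ID'],
            model_name=os.environ['MODEL_NAME'],
        ) `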

  
  
Posted 2 years ago

OK, but I need to think with the head of the DS: this way they only need to remember (and I only need to teach them where to find) one ID.

I expect the task to be the main entry point for all their work, and the above interface is easy to remember, check, etc. It is also the same as getting artifacts, so there is less friction.

` def get_task(task_id):
    return Task.get_task(task_id)

def get_artifact(task_id, artifact_name):
    task = Task.get_task(task_id)
    return task.artifacts[artifact_name].get()

def get_model(task_id, model_name):
    task = Task.get_task(task_id)
    ... `
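A hypothetical notebook usage of these helpers (the artifact name 'training_data' is made up for illustration; the task ID is the one from this thread):

` # everything hangs off the one task ID the DS has to remember
df = get_artifact('b5f339077b994a8ab97b8e0b4c5724e1', 'training_data')
model = get_model('b5f339077b994a8ab97b8e0b4c5724e1', 'best_model') `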

  
  
Posted 2 years ago

Yes, this is exactly how I solved it in the end.

  
  
Posted 2 years ago

Nice!!!

  
  
Posted 2 years ago

Because we already had these get_artifact() / get_model() functions that the DSes use to get the data into notebooks to further analyse their stuff, I might as well just use those with a custom preprocess and call predict myself.

  
  
Posted 2 years ago

I pass the IDs to the Docker container as environment variables, so this does require restarting the container, but I guess we can live with that for now.

  
  
Posted 2 years ago

and then in Preprocess:

` self.model = get_model(task_id=os.environ['TASK_ID'], model_name=os.environ['MODEL_NAME']) `

That's the part I do not get: Models have their own entity (with a UID), in contrast to artifacts, which are only stored on Tasks.
The idea is that when you register a model with clearml-serving, you can specify the model ID. This should replace the need for TASK_ID + model_name in your code, and clearml-serving will basically bring the model to you.
Basically this function gets a path to the locally downloaded Model, so you can load it the way you need:
https://github.com/allegroai/clearml-serving/blob/219fa308df2b12732d6fe2c73eea31b72171b342/examples/custom/preprocess.py#L27
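In other words, roughly this happens under the hood (a sketch; Model is the clearml SDK class, and the placeholder ID is hypothetical):

` from clearml import Model

# clearml-serving resolves the registered model ID and downloads the weights file
local_file_name = Model(model_id='<registered-model-id>').get_local_copy()
# your Preprocess.load(local_file_name) is then called with this path `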

wdyt?

  
  
Posted 2 years ago

this is a bit WIP but we save it with the design of the model:
` parameters = dict(self.parameters, model_type='XGBClassifier')
...

output_model.update_design(config_dict=parameters) `
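For context, a slightly fuller hedged sketch of that saving side using the clearml SDK (the weights filename here is made up):

` from clearml import OutputModel, Task

task = Task.current_task()
output_model = OutputModel(task=task, name='best_model')
# stash the concrete class name in the model design so the loader can dispatch on it
output_model.update_design(config_dict=dict(model_type='XGBClassifier'))
output_model.update_weights(weights_filename='xgb_model.json') `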

  
  
Posted 2 years ago

I know there is an aux cfg with key-value pairs, but how can I use it in the Python code?
` "auxiliary_cfg": { "TASK_ID": "b5f339077b994a8ab97b8e0b4c5724e1", "V": 132 } `

  
  
Posted 2 years ago

I absolutely need to pin the packages (including the main DS packages) I use.

  
  
Posted 2 years ago

and then have a wrapper that gets the model data and selects which way to construct and deserialise the model class:
` def get_model(task_id, model_name):
    task = Task.get_task(task_id)
    try:
        model_data = next(model for model in task.models['output'] if model.name == model_name)
    except StopIteration as ex:
        raise ValueError(f'Model {model_name} not found in: {[model.name for model in task.models["output"]]}')
    filename = model_data.get_local_copy()
    model_type = model_data.config_dict['model_type']
    if model_type == 'XGBClassifier':
        model = XGBClassifier()
        model.load_model(filename)
    elif model_type == 'BaseEstimator':
        model = joblib.load(filename)
    else:
        raise ValueError(f'Unknown model type {model_type}')
    return model `

  
  
Posted 2 years ago

it worked!!!!

YEY!

I pass the IDs to the Docker container as environment variables, so this does require restarting the container, but I guess we can live with that for now.

So this would help you decide which actual Model file to download? (Trying to understand how the argument is being used; should we have it stored somewhere? There is meta-data on the Model itself, so we can use that to store the data.)

  
  
Posted 2 years ago

{"detail":"Error processing request: ('Expecting data to be a DMatrix object, got: ', <class 'pandas.core.frame.DataFrame'>)"}

  
  
Posted 2 years ago

DS, this way they only need to remember (and I only need to teach them where to find) one ID.

Yes, that's the point. This ID is the Model UID (as opposed to the Task ID). The reason I kind of "insist" on it is that the Model ID is built into the system; it is how you register the model, whereas the Task ID somehow needs to be hacked/passed externally.

TBH, the main reason I went with our API is that, because of the custom model loading, we need to use the "custom" framework anyway.

The custom model loading supports it:
https://github.com/allegroai/clearml-serving/blob/e09e6362147da84e042b3c615f167882a58b8ac7/examples/custom/preprocess.py#L37
This function basically gets the output of:
` local_file_name = Model(model_id_here).get_local_copy()
Preprocess.load(local_file_name) `
This means your custom load function can be:
` def load(self, local_file_name: str) -> Optional[Any]:  # noqa
    self._model = XGBClassifier()
    self._model.load_model(local_file_name) `
wdyt?
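Putting the two together, the thread's if/else dispatch can live inside that load hook; a sketch, assuming (hypothetically) the class name arrives via a MODEL_TYPE env variable, since load() only receives the file path:

` import os
import joblib
from typing import Any, Optional
from xgboost import XGBClassifier

class Preprocess(object):
    def load(self, local_file_name: str) -> Optional[Any]:  # noqa
        # MODEL_TYPE is a hypothetical env variable, reusing the docker -e trick from above
        model_type = os.environ.get('MODEL_TYPE', 'XGBClassifier')
        if model_type == 'XGBClassifier':
            self._model = XGBClassifier()
            self._model.load_model(local_file_name)
        else:
            # fall back to joblib for sklearn-style estimators
            self._model = joblib.load(local_file_name)
        return self._model `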

  
  
Posted 2 years ago

` import xgboost  # noqa
self._model = xgboost.Booster()
self._model.load_model(self._get_local_model_file()) `

  
  
Posted 2 years ago

I think this is because of the version of xgboost that serving installs. How can I control these?

That might be it.

I absolutely need to pin the packages (including the main DS packages) I use.

You can basically change CLEARML_EXTRA_PYTHON_PACKAGES:
https://github.com/allegroai/clearml-serving/blob/e09e6362147da84e042b3c615f167882a58b8ac7/docker/docker-compose-triton-gpu.yml#L100
for example:
export CLEARML_EXTRA_PYTHON_PACKAGES="xgboost==1.2.3 numpy==1.2.3"
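One cheap way to verify the pins actually took effect inside the serving container is to print the versions from your preprocess code so they land in the serving logs (a hedged sketch, not a clearml-serving feature):

` import sys
import xgboost
# verify the pinned packages actually took effect inside the serving container
print(f'python={sys.version.split()[0]} xgboost={xgboost.__version__}') `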

  
  
Posted 2 years ago

I wonder if the try/except approach would work for the XGBoost load; could we just try a few classes one after the other?
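A sketch of that idea (the helper names are made up): try each candidate deserializer in order and keep the first one that succeeds:

` import joblib
import xgboost

def _load_xgb(path):
    booster = xgboost.Booster()
    booster.load_model(path)
    return booster

def load_model_any(path):
    # try candidate loaders one after the other; first one that succeeds wins
    for loader in (_load_xgb, joblib.load):
        try:
            return loader(path)
        except Exception:
            continue
    raise ValueError(f'Could not deserialize model file: {path}') `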

  
  
Posted 2 years ago

Hmm, yes, that is a good point. Maybe we should allow specifying a parameter on the model configuration to help with the actual type...

  
  
Posted 2 years ago

I think this is because of the version of xgboost that serving installs. How can I control these?

  
  
Posted 2 years ago

TBH, the main reason I went with our API is that, because of the custom model loading, we need to use the "custom" framework anyway.

  
  
Posted 2 years ago