I know there is an aux cfg with key/value pairs, but how can I use it in the Python code?
This is actually for helping to configure Triton services; you cannot (I think) easily access it from the code
now, I need to pass a variable to the Preprocess class
you mean for the construction?
I passed an env variable to the docker container, so I figured this out
Hmm, as a quick solution you can use the custom example and load everything manually:
https://github.com/allegroai/clearml-serving/blob/219fa308df2b12732d6fe2c73eea31b72171b342/examples/custom/preprocess.py
But you have a very good point, I'm not sure how one could know the correct xgboost class, do you?
while in our own code:
` if model_type == 'XGBClassifier':
    model = XGBClassifier()
    model.load_model(filename) `
The DSes would expect the same interface as they used in the code that saved the model (me too TBH)
I'd rather just fail if they try to use a model that is unknown.
And then get_model is what I wrote above; it just uses the ClearML API to pick up the right model from the task_id and model_name. The model config contains the class name, so get_model has an if/else structure in it to create the right class.
yeah, so in docker run:
` -e TASK_ID='b5f339077b994a8ab97b8e0b4c5724e1' \
  -e MODEL_NAME='best_model' \ `
and then in Preprocess:
` self.model = get_model(task_id=os.environ['TASK_ID'], model_name=os.environ['MODEL_NAME']) `
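(For context, this line sits in the constructor of the custom Preprocess class; a minimal sketch, assuming the get_model wrapper shown further down:)
` import os

class Preprocess(object):
    def __init__(self):
        # the task/model are selected via env vars passed to the docker container
        self.model = get_model(
            task_id=os.environ['TASK_ID'],
            model_name=os.environ['MODEL_NAME'],
        ) `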
Ok, but I need to think with the head of the DS: this way they only need to remember (and I only need to teach them where to find) one ID.
I expect the task to be the main entry point for all their work, and the above interface is easy to remember, check, etc. Also, it is the same as getting artifacts, so there is less friction.
` from clearml import Task

def get_task(task_id):
    return Task.get_task(task_id)

def get_artifact(task_id, artifact_name):
    task = Task.get_task(task_id)
    return task.artifacts[artifact_name].get()

def get_model(task_id, model_name):
    task = Task.get_task(task_id)
    ... `
Yes, this is exactly how I solved it in the end
Because we already had these get_artifact(), get_model() functions that the DSes use to get the data into notebooks to further analyse their stuff, I figured I might as well just use those with a custom preprocess and call the predict myself.
I pass the IDs to the docker container as environment variables, so this does need a restart of the docker container, but I guess we can live with that for now
and then in Preprocess:
` self.model = get_model(task_id=os.environ['TASK_ID'], model_name=os.environ['MODEL_NAME']) `
That's the part I do not get. Models have their own entity (with a UID); this is in contrast to artifacts, which are only stored on Tasks.
The idea is that when you are registering a model with clearml-serving, you can specify the model ID. This should replace the need for the TASK_ID+MODEL_NAME in your code, and clearml-serving will basically bring it to you
Basically this function gets a path to the locally downloaded Model file, so you can load it the way you need:
https://github.com/allegroai/clearml-serving/blob/219fa308df2b12732d6fe2c73eea31b72171b342/examples/custom/preprocess.py#L27
wdyt?
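(For illustration, a minimal sketch of what that boils down to; the model ID string here is a placeholder:)
` from clearml import Model

# clearml downloads (and caches) the registered model's file locally
local_file_name = Model(model_id='<model-id>').get_local_copy() `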
This is a bit WIP, but we save it with the design of the model:
` parameters = dict(self.parameters, model_type='XGBClassifier')
...
output_model.update_design(config_dict=parameters) `
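(For completeness, a minimal sketch of how that save side could look end to end, assuming the standard clearml OutputModel API; the model name and weights filename here are illustrative:)
` from clearml import Task, OutputModel

task = Task.current_task()
output_model = OutputModel(task=task, name='best_model', framework='xgboost')
# store the class name next to the weights so the loader can reconstruct it
output_model.update_design(config_dict={'model_type': 'XGBClassifier'})
output_model.update_weights(weights_filename='model.json') `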
I know there is an aux cfg with key/value pairs, but how can I use it in the Python code?
` "auxiliary_cfg": {
    "TASK_ID": "b5f339077b994a8ab97b8e0b4c5724e1",
    "V": 132
} `
I absolutely need to pin the packages (incl main DS packages) I use.
and then have a wrapper that gets the model data and selects which way to construct and deserialise the model class:
` import joblib
from clearml import Task
from xgboost import XGBClassifier

def get_model(task_id, model_name):
    task = Task.get_task(task_id)
    try:
        model_data = next(model for model in task.models['output'] if model.name == model_name)
    except StopIteration as ex:
        raise ValueError(f'Model {model_name} not found in: {[model.name for model in task.models["output"]]}')
    filename = model_data.get_local_copy()
    # the model config stores the class name saved at training time
    model_type = model_data.config_dict['model_type']
    if model_type == 'XGBClassifier':
        model = XGBClassifier()
        model.load_model(filename)
    elif model_type == 'BaseEstimator':
        model = joblib.load(filename)
    else:
        raise ValueError(f'Unknown model type {model_type}')
    return model `
it worked!!!!
YEY!
I pass the IDs to the docker container as environment variables, so this does need a restart of the docker container, but I guess we can live with that for now
So this would help you decide which actual Model file to download? (Trying to understand how the argument is being used, meaning should we have it stored somewhere? There is meta-data on the Model itself, so we can use that to store the data.)
{"detail":"Error processing request: ('Expecting data to be a DMatrix object, got: ', <class 'pandas.core.frame.DataFrame'>)"}
DS, this way they only need to remember (and I only need to teach them where to find) one ID.
Yes, that's the point: this ID is the Model UID (as opposed to the Task ID). The reason I kind of "insist" on it is that the Model ID is built into the system, meaning this is how you register it, as opposed to the Task ID, which somehow needs to be hacked/passed externally
TBH the main reason I went with our API is that, because of the custom model loading, we need to use the "custom" framework anyway.
The custom model loading supports it:
https://github.com/allegroai/clearml-serving/blob/e09e6362147da84e042b3c615f167882a58b8ac7/examples/custom/preprocess.py#L37
This function basically gets the output of:
` local_file_name = Model(model_id_here).get_local_copy()
Preprocess.load(local_file_name) `
This means your custom load function can be:
` def load(self, local_file_name: str) -> Optional[Any]:  # noqa
    self._model = XGBClassifier()
    self._model.load_model(local_file_name) `
wdyt?
` import xgboost  # noqa
self._model = xgboost.Booster()
self._model.load_model(self._get_local_model_file()) `
I think this is because of the version of xgboost that serving installs. How can I control these?
That might be
I absolutely need to pin the packages (incl main DS packages) I use.
you can basically change CLEARML_EXTRA_PYTHON_PACKAGES
https://github.com/allegroai/clearml-serving/blob/e09e6362147da84e042b3c615f167882a58b8ac7/docker/docker-compose-triton-gpu.yml#L100
for example:
` export CLEARML_EXTRA_PYTHON_PACKAGES="xgboost==1.2.3 numpy==1.2.3" `
I wonder if the try/except approach would work for the XGBoost load; could we just try a few classes one after the other?
Hmm, yes, that is a good point. Maybe we should allow specifying a parameter on the model configuration to help with the actual type...
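(For the record, a quick sketch of that try-a-few-classes idea; the helper name and the candidate list are illustrative:)
` import xgboost
from xgboost import XGBClassifier

def load_xgb_model(filename):
    # hypothetical: try candidate classes until one deserialises cleanly
    for cls in (XGBClassifier, xgboost.Booster):
        model = cls()
        try:
            model.load_model(filename)
            return model
        except xgboost.core.XGBoostError:
            continue
    raise ValueError(f'Could not load model from {filename}') `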