Answered
Hello! It's My Second Time Trying ClearML - Hoping This Time I Will Succeed

Hello!
It's my second time trying ClearML - hoping this time I will succeed 🙌

I've trained a simple random forest model and uploaded it to a clearml-serving endpoint.
When I try to do inference, I get the following:

422 {'detail': "Error [<class 'ValueError'>] processing request: node array from the pickle has an incompatible dtype:\n- expected: [('left_child', '<i8'), ('right_child', '<i8'), ('feature', '<i8'), ('threshold', '<f8'), ('impurity', '<f8'), ('n_node_samples', '<i8'), ('weighted_n_node_samples', '<f8')]\n- got     : {'names': ['left_child', 'right_child', 'feature', 'threshold', 'impurity', 'n_node_samples', 'weighted_n_node_samples', 'missing_go_to_left'], 'formats': ['<i8', '<i8', '<i8', '<f8', '<f8', '<i8', '<f8', 'u1'], 'offsets': [0, 8, 16, 24, 32, 40, 48, 56], 'itemsize': 64}"}

In other words, I have a scikit-learn version mismatch between my dev environment (version 1.7.1) and the serving server (version 1.2.2).

My questions:

  • When running the Task, the Python packages are logged under EXECUTION > PYTHON PACKAGES. Doesn't deploying an endpoint use that list to create the environment? If not, is there a way to use it, or is it only there for logging?
  • To serve multiple models on the same server (each with its own package versions), do I need a separate docker compose for each one? If not, how do I do it?
  • If yes, each docker compose brings up all of the containers again (Grafana, Prometheus, etc.). Do all of these really need to be created for each model?
    Thank you
  
  
Posted 12 days ago

Answers 2


Thank you for your answer, @<1523701205467926528:profile|AgitatedDove14> !
I've managed to put together a docker compose with the needed scikit-learn version.

Should I deploy the entire docker compose file for every model, with its updated requirements?
Or can I deploy everything (Prometheus, Grafana, etc.) once and add a serving docker compose yml for each model on a different port?

Eventually, I want many models (with different package versions) served on a single machine.
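
Concretely, something like this minimal sketch is what I have in mind - one shared stack (Prometheus, Grafana, etc.) deployed once, plus a small compose file per model that only adds another inference container. The file name, service name, host port and the second serving task ID below are all made up for illustration:
'''
# docker-compose.rf-model.yml  (hypothetical per-model compose file)
services:
  clearml-serving-inference-rf:
    image: allegroai/clearml-serving:latest
    environment:
      - CLEARML_API_ACCESS_KEY=${CLEARML_API_ACCESS_KEY}
      - CLEARML_API_SECRET_KEY=${CLEARML_API_SECRET_KEY}
      # a second serving service task, dedicated to this model's endpoints
      - CLEARML_SERVING_TASK_ID=${CLEARML_SERVING_TASK_ID_RF}
      # packages matching the environment the model was trained and pickled in
      - CLEARML_EXTRA_PYTHON_PACKAGES=scikit-learn==1.7.1
      # ...plus the remaining CLEARML_* variables from the original compose file
    ports:
      - "8081:8080"   # different host port, same container port
    networks:
      - clearml-serving-backend

networks:
  clearml-serving-backend:
    external: true    # reuse the network created by the main stack
'''
Would that be the right approach?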

  
  
Posted 9 days ago

Hi @<1838387863251587072:profile|JealousCrocodile85>
I'm assuming this is with clearml-serving. Notice that it cannot install the correct scikit-learn package per endpoint; you have to specify it in the docker compose (or the k8s helm chart). See the example here:
https://github.com/clearml/clearml-serving/blob/5c7077537ad46439f864f24e99e2ea5d4d5b35b3/docker/docker-compose.yml#L103
'''
services:
  clearml-serving-inference:
    image: allegroai/clearml-serving:latest
    environment:
      - CLEARML_API_ACCESS_KEY=${CLEARML_API_ACCESS_KEY}
      - CLEARML_API_SECRET_KEY=${CLEARML_API_SECRET_KEY}
      - CLEARML_SERVING_TASK_ID=${CLEARML_SERVING_TASK_ID}
      # Add your extra packages here (space-separated)
      - CLEARML_EXTRA_PYTHON_PACKAGES=scikit-learn==1.7.1
    ports:
      - "8080:8080"
'''
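To clarify, the extra packages are installed when the inference container starts, so every endpoint served by that container shares the same environment; models that need conflicting versions will need their own inference container.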

  
  
Posted 12 days ago