
Hi guys,
I'm currently working with clearml-serving to deploy my model, but I have a few questions and errors:

  1. In the Preprocess class, I need to get some values from the training process. For example, in my time series anomaly detection I save my training threshold value to the artifacts of the task. How do I read that artifact value in the preprocess function of the Preprocess class?
  2. When I deploy the sklearn model from the example code in the clearml-serving GitHub repo, it runs normally, but when I try to deploy a TensorFlow model I get this error:
    InactiveRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE
    details = "failed to connect to all addresses"
    debug_error_string = "{"created": "@1655106794.179518774", "description": "Failed to pick subchannel", "file": "src/core/ext/filters/client_channel/client_channel.cc", "file_line": 3158, "referenced_errors": [{"created": "@1655106794.179516721", "description": "failed to connect to all addresses", "file": "src/core/lib/transport/error_utils.cc", "file_line": 147, "grpc_status": 14}]}
    The screenshot of the error can be seen in the image attachment below. I have run docker compose with the Triton container in Docker Desktop and still get the error.
  3. When I run the add model command (from the GitHub example) for my model, I get an error saying "could not find the model". So I uploaded my model manually and deployed it. Is that normal, or did I miss something?

Sorry, I am new to Slack and ClearML; I don't know how to check if someone has asked the same questions before. Thank you

  
  
Posted 2 years ago

Answers 5


Okay, thank you Sonckie.
Yup, I tried that method. In my application's use case it needs to connect to multiple model endpoints with the same preprocess method, so do I need to change the task id in the preprocess.py code every time I deploy a model? My docker-compose version is 1.29.2. I am running the Docker Desktop app on Windows 10 with WSL. My laptop uses an AMD APU; I haven't yet looked into configuring Docker to run with a GPU. Thank you for the answer. It seems the sklearn documentation command has the same error.

  
  
Posted 2 years ago

Hi William!

1 So if I understand correctly, you want to get an artifact from another task into your preprocessing.

You can do this using the Task.get_task() call. So imagine your anomaly detection task is called anomaly_detection, produces an artifact called my_anomaly_artifact, and is located in the my_project project; then you can do:

```python
from clearml import Task

anomaly_task = Task.get_task(project_name='my_project', task_name='anomaly_detection')
threshold = anomaly_task.artifacts['my_anomaly_artifact'].get()
```

You can do this anywhere to get details from any task, so also in preprocessing 🙂 In this case I use project name and task name to get the task you need, but you can also use the id: Task.get_task(task_id='...')
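Putting the pieces together, here is a minimal sketch of how that artifact fetch could sit inside a clearml-serving preprocess.py. The task/artifact names are the placeholders from above, and the method signature follows the clearml-serving examples (check the signature against your clearml-serving version); only the threshold-comparison helper is plain Python:

```python
def flag_anomalies(scores, threshold):
    """Mark each score as anomalous if it exceeds the training threshold."""
    return [s > threshold for s in scores]


class Preprocess(object):
    def __init__(self):
        # Imported here so the module still loads without a ClearML server
        # at import time; the artifact is fetched once per serving instance,
        # not once per request.
        from clearml import Task
        anomaly_task = Task.get_task(project_name='my_project',
                                     task_name='anomaly_detection')
        self.threshold = anomaly_task.artifacts['my_anomaly_artifact'].get()

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        # Apply the stored training threshold to the model's output scores
        return {"anomaly": flag_anomalies(data, self.threshold)}
```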

2 I will take a look. Which docker-compose version do you have? Are you running Linux, Windows, or Mac? Which GPU are you running on, and have you configured Docker to allow access to your GPU?

3 Can you give the exact command you ran and also a screenshot of the error? 🙂 You should not have to upload it manually; the add model command should work!

  
  
Posted 2 years ago

Ok, I checked 3: The command
clearml-serving --id <your_id> model add --engine triton --endpoint "test_model_keras" --preprocess "examples/keras/preprocess.py" --name "train keras model" --project "serving examples" --input-size 1 784 --input-name "dense_input" --input-type float32 --output-size -1 10 --output-name "activation_2" --output-type float32
should be
clearml-serving --id <your_id> model add --engine triton --endpoint "test_model_keras" --preprocess "examples/keras/preprocess.py" --name "train keras model - serving_model" --project "serving examples" --input-size 1 784 --input-name "dense_input" --input-type float32 --output-size -1 10 --output-name "activation_2" --output-type float32
It seems to have been a typo in the docs 🙂

  
  
Posted 2 years ago

1 Can you give a little more explanation about your use case? It seems I don't fully understand yet. So you have multiple endpoints, but always the same preprocessing script to go with them? And you need to gather a different threshold for each of the models?

2 Not completely sure of this, but I think an AMD APU simply won't work. ClearML Serving uses Triton as the inference engine for GPU-based models, and that is written by NVIDIA, specifically for NVIDIA hardware. I don't think Triton will run on an AMD APU.

3 Well spotted! Indeed, it seems the sklearn documentation has the same problem. Would you mind opening a PR for it? Then you can be a contributor 😄

  
  
Posted 2 years ago

Yup, so the multiple endpoints only differ in their weight values but share the same preprocessing script, and each preprocessing script uses a different threshold. Okay, I will test it on my friend's PC then; I will give an update as soon as this problem is solved. I'm not familiar with the term 'PR' since I've never joined a community before; is it pull request? If yes, then I will open a PR and post the update in this thread.
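Since the endpoints described above only differ in their thresholds, one way to avoid editing preprocess.py for every deployment is to key the threshold lookup on the endpoint name. This is a hypothetical sketch, not clearml-serving API: the mapping, the endpoint names, and the task names are all made-up placeholders; the resolved task name would then be passed to Task.get_task() as in the earlier answer:

```python
# Hypothetical: map each serving endpoint to the training task that
# produced its threshold, so one preprocess.py can serve all endpoints.
ENDPOINT_TO_TASK = {
    "sensor_a": "anomaly_detection_sensor_a",  # placeholder names
    "sensor_b": "anomaly_detection_sensor_b",
}


def threshold_task_for(endpoint, mapping=None):
    """Resolve which training task holds the threshold for an endpoint."""
    mapping = ENDPOINT_TO_TASK if mapping is None else mapping
    if endpoint not in mapping:
        raise ValueError("No threshold task configured for endpoint %r" % endpoint)
    return mapping[endpoint]
```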

  
  
Posted 2 years ago