Answered

Hello everyone, I'm currently trying clearml-serving to serve a model via an endpoint. I followed the tutorial in the documentation, but when I try a request, I get an error. Here it is: curl -X POST " None " -H "accept: application/json" -H "Content-Type: application/json" -d '{"Pregnancies": 6, "Glucose": 148, "BloodPressure": 72, "SkinThickness": 35, "Insulin": 0, "BMI": 33.6, "DiabetesPedigreeFunction": 0.627,"Age": 50}'
{"detail":"Error processing request: expected str, bytes or os.PathLike object, not NoneType"}. How can I debug this?

  
  
Posted 2 months ago

Answers 24


@<1523701205467926528:profile|AgitatedDove14>, thank you very much; I will follow your recommendation.

  
  
Posted 2 months ago

Is it not possible to serve a model with preprocessing pipeline from scikit-learn using clearml-serving?

Of course it is. Did you first try the example here: None
If you need to run your own LogisticRegression call, you can use this example:
None
Notice this is where the custom endpoint actually calls the prediction: None
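For reference, here is a minimal sketch of what such a custom preprocess.py could look like, assuming the custom-engine pattern from that example where the Preprocess class loads the model itself and calls predict() directly; the joblib model format is an assumption and the comments are illustrative, not the original code:

from typing import Any, Optional

import joblib  # assumed: the scikit-learn pipeline was saved with joblib


# Sketch only: custom engine style, where Preprocess owns the model
class Preprocess(object):
    def __init__(self):
        self._model = None

    def load(self, local_file_name: str) -> Optional[Any]:
        # local_file_name is the model file clearml-serving fetches from the registry
        self._model = joblib.load(local_file_name)
        return self._model

    def process(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> Any:
        # this is the point where the custom endpoint actually calls the prediction
        return self._model.predict(data).tolist()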

  
  
Posted 2 months ago

Do you have any advice for this step (monitoring)? I feel like it's not very well documented.

  
  
Posted 2 months ago

Yes, look for the clearml serving session ID in the web UI (just go to the home screen and put the UID in the search 🙂 )

  
  
Posted 2 months ago

Hi @<1673501397007470592:profile|RelievedDuck3> , can you share the code you used? What's the preprocess code you're using?

  
  
Posted 2 months ago

I tested this: None, and I haven't encountered any errors. I will test the custom example and provide you with feedback. Thank you very much for your response.

  
  
Posted 2 months ago

Interesting question. It should work and looks like an interesting combination; I'm curious what you come up with.
BTW: Grafana itself can already provide a lot of alerts for drift etc.; this is basically their histogram delta feature.
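If you do go the Evidently route, a rough sketch of the glue code could look like the following; it assumes an Evidently release that exposes the Report / DataDriftPreset API, and the two CSV files are placeholders for the training features and the features collected from recent serving requests:

import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Placeholder data sources (hypothetical file names)
reference_df = pd.read_csv("reference_features.csv")  # training-time features
current_df = pd.read_csv("recent_requests.csv")       # features from recent requests

# Build a data-drift report; the numbers in as_dict() could then be pushed
# to whatever store Grafana reads from
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference_df, current_data=current_df)
report.save_html("drift_report.html")
print(report.as_dict())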

  
  
Posted 2 months ago

I have other similar endpoints for testing; that's the reason for the different name. Apart from that, there is no error at that level. Even with the two endpoints, I get the same error. One clarification: I built my ML model with a scikit-learn pipeline and Optuna. If I build another simple model without Optuna and without the scikit-learn preprocessing pipeline, i.e. simply using LogisticRegression().fit(X, y), I don't encounter any error serving it with clearml-serving; the request via its endpoint gives me the prediction. Is it not possible to serve a model with a scikit-learn preprocessing pipeline using clearml-serving?

  
  
Posted 2 months ago

I've gone through the tutorial, and I've more or less understood it. I will run a test to make sure. Thank you very much for sharing. One question for you: do you think it's a good idea to combine this monitoring with Evidently, to calculate new metrics and visualize them in Grafana?

  
  
Posted 2 months ago

This p is not in the original code.

  
  
Posted 2 months ago

Here it is:

  
  
Posted 2 months ago

Also, what's the additional p doing on the last line of the screenshot?

  
  
Posted 2 months ago

And how is the endpoint registered?

  
  
Posted 2 months ago

I will work on it and provide you with feedback. Do you have a list of monitoring metrics provided by clearml-serving?

  
  
Posted 2 months ago

from typing import Any

import numpy as np


# Notice: the Preprocess class must be named "Preprocess"
class Preprocess(object):
    def __init__(self):
        # set internal state, this will be called only once (i.e. not per request)
        pass

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # pull the eight feature values out of the request body dict
        return [[body.get("Pregnancies", None), body.get("Glucose", None), body.get("BloodPressure", None), body.get("SkinThickness", None),
                 body.get("Insulin", None), body.get("BMI", None), body.get("DiabetesPedigreeFunction", None), body.get("Age", None)], ]

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # post-process the data returned from the model inference engine
        # data is the return value from model.predict; wrap it in the response as y
        return dict(y=data.tolist() if isinstance(data, np.ndarray) else data)
  
  
Posted 2 months ago

Out of curiosity, what ended up being the issue?

  
  
Posted 2 months ago

Okay, that makes sense.
best_diabetes_detection is different from the endpoint in your example curl -X POST " None "; notice the best_mage_diabetes_detection?

  
  
Posted 2 months ago

BTW: @<1673501397007470592:profile|RelievedDuck3> we just released 1.3.1 with better debugging; it prints the full exception stack on failure to the ClearML Serving Session Task.
I suggest you pull the latest image, re-run the docker compose, and check what you have on the Serving Session Task in the UI.

  
  
Posted 2 months ago

image

  
  
Posted 2 months ago

Do you have any advice for this step (monitoring)? I feel like it's not very well documented.

Yeah, I think it is complicated.
I would start with the example here: None
Basically what it does is create histograms over time of the values the REST API gets. Then Grafana visualizes those values.
Notice that the request latency / frequency are automatically logged into Grafana for all the endpoints; no need to do anything specific.
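Those value histograms are fed through the collect_custom_statistics_fn hook that preprocess()/postprocess() receive; a hedged sketch of how the diabetes preprocess above could report a couple of incoming features (the metric names and the choice of features are illustrative):

from typing import Any


class Preprocess(object):
    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        features = [body.get("Pregnancies"), body.get("Glucose"), body.get("BloodPressure"),
                    body.get("SkinThickness"), body.get("Insulin"), body.get("BMI"),
                    body.get("DiabetesPedigreeFunction"), body.get("Age")]
        if collect_custom_statistics_fn:
            # report selected request values so the serving statistics (and the
            # Grafana histograms built from them) can track them over time
            collect_custom_statistics_fn({"Glucose": body.get("Glucose"), "BMI": body.get("BMI")})
        return [features]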

  
  
Posted 2 months ago

Is it in the serving instance's Task console that I should check the exception stack?

  
  
Posted 2 months ago

@<1523701205467926528:profile|AgitatedDove14>, thank you very much for your help. I was able to fix most of my bugs thanks to your recommendations.

  
  
Posted 2 months ago

How is the endpoint registered: clearml-serving --id 6c9c2c38e70b41e0a63547e3c16db234 model add --engine sklearn --endpoint "best_diabetes_detection" --preprocess "/home/caleb/diabetes_clearml/preprocess.py" --model-id e7532b8017ad4a0f92d5b537401f0585
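One quick sanity check worth doing here (a sketch, assuming the clearml SDK's InputModel interface) is to confirm that the --model-id actually points to a model with uploaded weights, rather than a registry entry with no storage URL:

from clearml import InputModel

# The id below is the one passed to clearml-serving via --model-id
model = InputModel(model_id="e7532b8017ad4a0f92d5b537401f0585")
print(model.url)               # should be a storage URL, not None/empty
print(model.get_local_copy())  # should download the actual model file locally

If the URL is empty, the serving instance has nothing to download, which would explain a NoneType path error like the one in the original question.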

  
  
Posted 2 months ago

Thanks to the exception stack I examined, I understood that I had a model registry issue. I had used joblib to save the model file on my system, and I believed that the model registration in ClearML storage was automatic. So when I made the API call, the model path returned NoneType. Once I fixed that, I was able to serve my model and make API calls giving prediction results. Also, thanks to your help, I understood that I needed custom serving, and I was able to modify the preprocess.py file to suit my problem. Once again, thank you very much for your help. I was able to test ClearML serving, and I greatly appreciated its simplicity and scalability. I believe it's a tool I'll adopt in my work. The next step for me is to test ClearML's monitoring features.
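For anyone hitting the same NoneType path error, a minimal sketch of explicitly registering a joblib file in the ClearML model registry might look like this, assuming the standard Task / OutputModel API; the project name, task name, and the stand-in training data are illustrative:

import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression
from clearml import Task, OutputModel

# Illustrative training stand-in: replace with the real pipeline / Optuna study
X = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=100)
model = LogisticRegression().fit(X, y)

task = Task.init(project_name="diabetes_clearml", task_name="register model")
joblib.dump(model, "model.pkl")

# Explicit registration: upload the weights so serving gets a real model path
output_model = OutputModel(task=task)
output_model.update_weights(weights_filename="model.pkl")

With the weights uploaded this way, the --model-id used when registering the endpoint resolves to a downloadable file instead of a local-only path.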

  
  
Posted 2 months ago