Answered
Hi All

Hi all 👋! I have an issue with clearml-serving: I'm following the sklearn tutorial from GitHub None . When I run train_model.py on my machine, I get the prediction with the curl command and everything is fine. When I run the script on my agent (which is set up in a Docker container), the pickle path of the model is not the same (locally: ***/serving%20examples/train%20sklearn%20model.a1b2c3d4/models/sklearn-model.pkl; on my agent: file:///root/.clearml/venvs-builds/3.6/task_repository/clearml.git/sklearn-model.pkl). When I try to get a prediction with curl -X POST "***/serve/test_model_sklearn" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}', I get this error: {"detail":"Error processing request: expected str, bytes or os.PathLike object, not NoneType"}. If anyone has an idea why this isn't working, please let me know 👍

  
  
Posted one year ago

Answers 13


Hi @<1546303293918023680:profile|MiniatureRobin9> !

Would you mind sending me a screenshot of the model page (incl the model path) both for the task you trained locally as well as the one you trained on the agent?

  
  
Posted one year ago

Hi @<1523701118159294464:profile|ExasperatedCrab78> !
Here they are (left: locally, right: remotely)
[screenshot: model page, local run]
[screenshot: model page, agent run]

  
  
Posted one year ago

Thanks! I know you posted these locations before in text, I just wanted to make sure they're the ones I was thinking of. It seems like the model isn't properly uploaded to the ClearML server; instead, only the local path to the model file is being saved.

Normally that's what output_uri=True in the Task.init(...) call is for, but it seems there's a bug that's preventing the model from being uploaded.

Would you mind testing out manual model uploading?

So in your case it should be a matter of adding these lines to the end of train_model.py:

from clearml import OutputModel

output_model = OutputModel(task=task, framework="scikitlearn")
output_model.update_weights(weights_filename='sklearn-model.pkl')

These should make sure your model URL is an https link to a file saved on the ClearML server.

  
  
Posted one year ago

No problem, I tried with this code:

from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs
from joblib import dump
from clearml import Task, OutputModel

task = Task.init(project_name="serving examples", task_name="train sklearn model", output_uri=True)

# generate 2d classification dataset
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
# fit final model
model = LogisticRegression()
model.fit(X, y)

#dump(model, filename="sklearn-model.pkl", compress=9)

output_model = OutputModel(task=task, framework="ScikitLearn")
output_model.update_weights(weights_filename='sklearn-model.pkl')

And sadly I got the same error 😞: $ curl -X POST "***/serve/sklearn_slack" -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'
{"detail":"Error processing request: expected str, bytes or os.PathLike object, not NoneType"}
I'm also using http (not https) with my server, could that be the reason it's not working?
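One thing worth noting: in the snippet above, the dump(...) line is commented out, so update_weights() may be pointing at a stale or missing sklearn-model.pkl. A minimal local sanity check of the pickle itself, independent of ClearML (same filename as in the snippet):

```python
# Regenerate and verify the pickle locally, independent of ClearML.
import os

from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_blobs
from joblib import dump, load

# Same toy dataset and model as the training script above
X, y = make_blobs(n_samples=100, centers=2, n_features=2, random_state=1)
model = LogisticRegression().fit(X, y)

dump(model, filename="sklearn-model.pkl", compress=9)
assert os.path.exists("sklearn-model.pkl")  # the file update_weights() needs

# Load it back and predict, mimicking the serving request {"x0": 1, "x1": 2}
restored = load("sklearn-model.pkl")
print(restored.predict([[1, 2]]).shape)  # one prediction for one sample
```

If this passes but serving still fails, the problem is on the upload/serving side, not the model file.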

  
  
Posted one year ago

Oh wait, you have a self-hosted server?

  
  
Posted one year ago

yeah

  
  
Posted one year ago

That might make things harder indeed 🙂 It does explain some things.

  
  
Posted one year ago

With the screenshots above, does the locally run experiment (left) have an http URL in the model URL field? The one you whited out?

  
  
Posted one year ago

Yes, that's right

  
  
Posted one year ago

Ah I see. So then I would guess it is due to the remote machine (the ClearML agent) not being able to properly access your ClearML server.

  
  
Posted one year ago

Check your agent logs (through the ClearML console tab) and see if any error is thrown.

What is probably happening is that your agent tries to upload the model but fails due to some kind of networking/firewall/port issue. For example: make sure your self-hosted server is bound to 0.0.0.0 so it can accept external connections, not just ones from localhost.
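To quickly check from the agent's machine whether the server's ports are reachable at all, a minimal sketch (the hostname is a placeholder; by default a self-hosted ClearML server uses 8080 for the web UI, 8008 for the API server, and 8081 for the fileserver):

```python
# Minimal TCP reachability check, run from the agent machine.
# "your-clearml-server" is a placeholder hostname; adjust to your deployment.
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# ClearML defaults: 8080 web UI, 8008 API server, 8081 fileserver
for port in (8080, 8008, 8081):
    print(port, can_connect("your-clearml-server", port))
```

If 8081 (the fileserver) is unreachable from the agent, model uploads will fail even though the task itself runs fine.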

  
  
Posted one year ago

Oh okay, that could explain a lot. Thank you for your answer 👍 My server isn't on 0.0.0.0, so would I need to set up a new one to solve this problem, or is there an alternative?
I checked the logs as you suggested and didn't find any error of this type (maybe I missed an important parameter). My agent is set up in a Docker container. Here are the logs.

  
  
Posted one year ago

With my team we found a solution: to execute tasks with the agent, we use clearml-task in the CLI and add the argument --output-uri ***:1234, where *** is the link to our self-hosted server. Then pickled models are automatically uploaded to the server instead of only recording a path local to the agent.
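For reference, the invocation sketched below shows roughly what that looks like; the server address and port are placeholders (the thread redacts the real one as ***:1234), and the project/script names are the ones from the tutorial:

```shell
# Sketch of the CLI workaround described above. "http://your-server:1234"
# is a placeholder for the self-hosted server's upload destination.
clearml-task \
  --project "serving examples" \
  --name "train sklearn model" \
  --script train_model.py \
  --queue default \
  --output-uri "http://your-server:1234"
```

Passing --output-uri here has the same effect as output_uri=... in Task.init(): artifacts and models get uploaded to the server rather than left on the agent's filesystem.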

  
  
Posted one year ago