Answered
Is It Possible To Create A Serving Endpoint With Pytorch Jit File In Web Interface Only?

Is it possible to create a serving endpoint with Pytorch JIT file in web interface only?

  
  
Posted 2 years ago

Answers 18


First, let's test whether everything works as expected, since the 405 really feels odd to me here. Can I suggest following one of the examples from start to end to verify the setup before adding your model?
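To sanity-check an endpoint before involving the model at all, a small helper like this (hypothetical, not part of clearml-serving) prints just the HTTP status of a POST:

```shell
# Hypothetical smoke-test helper: POST an empty JSON body and print only the
# HTTP status code, so "unreachable / wrong method / wrong port" can be told
# apart from "the model or preprocess code failed".
http_status() {
  curl -s -o /dev/null -w '%{http_code}' -X POST \
       -H 'Content-Type: application/json' -d '{}' "$1"
}
```

A 405 here points at the web server or routing in front of the endpoint, not at the model itself.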

  
  
Posted 2 years ago

This is really hard to debug.

  
  
Posted 2 years ago

I made it work with a full port reassignment to 9090 in clearml-serving-inference, which still sends me an error that the format of my request is somehow wrong. But then I started from scratch by creating a completely new project and a new endpoint.

  
  
Posted 2 years ago

How can I clean the database, or whatever else, to get back to the beginning?

  
  
Posted 2 years ago

I don't know why it requests localhost.

  
  
Posted 2 years ago

CLEARML_FILES_HOST=" "

  
  
Posted 2 years ago

clearml-serving-inference | 2022-07-03 22:06:26,893 - clearml.storage - ERROR - Could not download , err: HTTPConnectionPool(host='localhost', port=8081): Max retries exceeded with url: /DevOps/serving%20example%2010.0a76d264e30940c2b600375fa839f1a2/artifacts/py_code_test_model_pytorch2/preprocess.py (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc3f41b1790>: Failed to establish a new connection: [Errno 111] Connection refused'))
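The "Connection refused" in that log can be reproduced directly from inside the container with a bash-only probe (no extra tools needed); this helper is an illustration, not part of clearml-serving:

```shell
# Bash-only connectivity probe using the /dev/tcp pseudo-device:
# prints "open" if a TCP connection to host:port succeeds, "closed" otherwise.
# "Connection refused" in the log above means exactly this check failing
# for localhost:8081 inside the clearml-serving-inference container.
can_connect() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo open || echo closed
}
can_connect localhost 8081
```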

  
  
Posted 2 years ago

In my understanding requests still go through clearml-server which configuration I left

DefiantHippopotamus88 actually this is not correct.
clearml-server only acts as a control plane: no actual requests are routed to it. It is used to sync model state, stats, etc., and is not part of the request-processing flow itself.
curl: (56) Recv failure: Connection reset by peer
This actually indicates that port 9090 is not being listened on...
What's the final docker-compose you are using?
And what are you seeing when running netstat -natp | grep LISTEN ?
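As a sketch of that netstat check (assuming Linux with either net-tools or iproute2 installed; the helper name is made up for illustration):

```shell
# Report whether anything is listening on a given TCP port;
# falls back from netstat (net-tools) to ss (iproute2) if netstat is missing.
port_listening() {
  { netstat -natp 2>/dev/null || ss -ltnp 2>/dev/null; } \
    | grep LISTEN | grep -q ":$1 " && echo listening || echo "nothing on $1"
}
port_listening 9090
```

If this prints "nothing on 9090", the inference container is not bound to that port and any curl against it will be reset.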

  
  
Posted 2 years ago

I tried to switch off auto-refresh, but it doesn't help

  
  
Posted 2 years ago

In my understanding requests still go through clearml-server, whose configuration I left intact. Maybe due to the port change in clearml-serving I need to adjust something.

  
  
Posted 2 years ago

curl -X POST " " -H "accept: application/json" -H "Content-Type: application/json" -d '{"url": " "}'
{"detail":"Error processing request: Error: Failed loading preprocess code for 'py_code_test_model_pytorch2': 'NoneType' object has no attribute 'loader'"}

  
  
Posted 2 years ago

curl -X POST " " -H "accept: application/json" -H "Content-Type: application/json" -d '{"url": " "}'
curl: (56) Recv failure: Connection reset by peer

  
  
Posted 2 years ago

Basically, I don't want to train a new model, and I try to create an endpoint following the example, but I finally get:
$ curl -X POST " " -H "accept: application/json" -H "Content-Type: application/json" -d '{"url": " "}'

<html> <head><title>405 Not Allowed</title></head> <body> <center><h1>405 Not Allowed</h1></center> <hr><center>nginx/1.20.1</center> </body> </html>

  
  
Posted 2 years ago

DefiantHippopotamus88
HTTPConnectionPool(host='localhost', port=8081):
This will not work, because inside the container of the second docker-compose "fileserver" is not defined:
CLEARML_FILES_HOST=" "
You have two options:
1. Configure the docker-compose to use the host network on all containers (as opposed to the isolated mode they are running in now).
2. Configure all of the CLEARML_* variables to point to the host IP address (e.g. 192.168.1.55), then rerun the entire thing.
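A sketch of the second option: point every CLEARML_* URL at the host's IP instead of localhost, so containers on isolated networks can still reach the server. 192.168.1.55 is a placeholder, and 8080/8008/8081 are the standard clearml-server web/API/fileserver ports:

```shell
# Assumption: 192.168.1.55 stands in for your actual docker host IP.
HOST_IP=192.168.1.55
export CLEARML_WEB_HOST="http://${HOST_IP}:8080"
export CLEARML_API_HOST="http://${HOST_IP}:8008"
export CLEARML_FILES_HOST="http://${HOST_IP}:8081"
echo "$CLEARML_FILES_HOST"   # → http://192.168.1.55:8081
```

With these set before rerunning the compose stack, the serving containers no longer try to fetch artifacts from localhost:8081 inside their own network namespace.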

  
  
Posted 2 years ago

The URL above is accessible from the container:
$ docker-compose exec clearml-serving-inference bash
root@a041497a554d:~/clearml# curl

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">

<title>405 Method Not Allowed</title> <h1>Method Not Allowed</h1> <p>The method is not allowed for the requested URL.</p>
root@a041497a554d:~/clearml#

  
  
Posted 2 years ago

DefiantHippopotamus88 you can create a custom endpoint and do that, but it will be running in the same instance. Is this what you are after? Notice that Triton actually supports it already; you can check the pytorch example.

  
  
Posted 2 years ago

I tried, step by step, from here:
https://github.com/allegroai/clearml-serving/tree/main/examples/pytorch
and the result is the same.
Then I tried to remove my old serving examples, to start checking from scratch, by immediately restarting after stopping.

  
  
Posted 2 years ago

DefiantHippopotamus88 you are sending the curl to the wrong port; it should be 9090 (based on what I remember from the unified docker-compose) on your setup.
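For reference, the request URL would then look something like the following; the host IP and endpoint name are placeholders for illustration, not values taken from this thread, and the /serve/<name> path follows the clearml-serving examples:

```shell
# Hypothetical example: clearml-serving exposes endpoints under /serve/<name>,
# and in this setup the inference container was remapped to port 9090.
HOST_IP=192.168.1.55            # placeholder: your docker host IP
ENDPOINT=test_model_pytorch     # placeholder: your registered endpoint name
URL="http://${HOST_IP}:9090/serve/${ENDPOINT}"
echo "$URL"   # → http://192.168.1.55:9090/serve/test_model_pytorch
```

Sending the POST to the clearml-server web port instead of this one is what produces the nginx "405 Not Allowed" page seen earlier in the thread.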

  
  
Posted 2 years ago