Answered
Can I change the clearml-serving inference port? 8080 is already used for my self-hosted server. I guess I can just change it in the docker-compose, but I find it a little weird that you are using this port if the self-hosted server web is hosted on it.

Can I change the clearml-serving inference port? 8080 is already used for my self-hosted server.
I guess I can just change it in the docker-compose, but I find it a little weird that you are using this port if the self-hosted server web is hosted on it.
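For the docker-compose route, a minimal sketch of the override (the service name `clearml-serving-inference` and the internal port 8080 are assumptions based on the defaults shown later in this thread) - only the host side of the mapping needs to change:

```yaml
services:
  clearml-serving-inference:
    ports:
      - "9501:8080"   # host port 9501 -> container port 8080 (assumed internal default)
```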

Posted 2 years ago

Answers 5


Yeah, I simply used a different port but I got this output:
(prediction_module) emilio@unicorn:~/clearml-serving$ docker run -v ~/clearml.conf:/root/clearml.conf -p 9501:9501 -e CLEARML_SERVING_TASK_ID=7ce187d2218048e68fc594fa49db0051 -e CLEARML_SERVING_POLL_FREQ=5 clearml-serving-inference:latest
CLEARML_SERVING_TASK_ID=7ce187d2218048e68fc594fa49db0051
CLEARML_SERVING_PORT=
CLEARML_USE_GUNICORN=
EXTRA_PYTHON_PACKAGES=
CLEARML_SERVING_NUM_PROCESS=
CLEARML_SERVING_POLL_FREQ=5
CLEARML_DEFAULT_KAFKA_SERVE_URL=
CLEARML_DEFAULT_KAFKA_SERVE_URL=
WEB_CONCURRENCY=
SERVING_PORT=8080
GUNICORN_NUM_PROCESS=4
GUNICORN_SERVING_TIMEOUT=
GUNICORN_EXTRA_ARGS=
UVICORN_SERVE_LOOP=asyncio
UVICORN_EXTRA_ARGS=
CLEARML_DEFAULT_BASE_SERVE_URL=
CLEARML_DEFAULT_TRITON_GRPC_ADDR=127.0.0.1:8001
Starting Uvicorn server
ClearML Task: created new task id=e24dece9bd2c41a3ba11fd010f654837
ClearML results page:
2022-03-28 14:38:36,581 - clearml.Task - INFO - No repository found, storing script code instead
INFO: Started server process [8]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on (Press CTRL+C to quit)

Posted 2 years ago

And this is what I get with the curl inference example from the README.md:
(prediction_module) emilio@unicorn:~/clearml-serving$ curl -X POST " " -H "accept: application/json" -H "Content-Type: application/json" -d '{"x0": 1, "x1": 2}'

<html>
<head><title>405 Not Allowed</title></head>
<body>
<center><h1>405 Not Allowed</h1></center>
<hr><center>nginx/1.20.1</center>
</body>
</html>

Posted 2 years ago

Hi ElegantCoyote26 ,

It doesn't seem that using port 8080 is mandatory, and you can simply change it when you run ClearML-Serving - i.e. docker run -v ~/clearml.conf:/root/clearml.conf -p 8085:8085

My guess is that the example uses port 8080 because the ClearML backend and the Serving would usually run on different machines.

Posted 2 years ago

ElegantCoyote26 what you are after is:
docker run -v ~/clearml.conf:/root/clearml.conf -p 9501:8080
Notice the port mapping: the internal port (i.e. inside the docker) stays 8080, but the external one is changed to 9501
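Putting that together with the earlier command, a hedged end-to-end sketch (the task ID is copied from the output above; the endpoint path is an assumption, since the actual URL was stripped from the curl example in this thread):

```shell
# Publish host port 9501, keeping the container's default internal port 8080
docker run -v ~/clearml.conf:/root/clearml.conf -p 9501:8080 \
    -e CLEARML_SERVING_TASK_ID=7ce187d2218048e68fc594fa49db0051 \
    -e CLEARML_SERVING_POLL_FREQ=5 \
    clearml-serving-inference:latest

# Then point the README curl example at the remapped port
# (host, and the /serve/<endpoint-name> path, are assumptions)
curl -X POST "http://127.0.0.1:9501/serve/<endpoint-name>" \
    -H "accept: application/json" -H "Content-Type: application/json" \
    -d '{"x0": 1, "x1": 2}'
```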

Posted 2 years ago

So it still looks like it's using port 8080? I'm not really sure.

Posted 2 years ago