
hi WickedElephant66
I have the same issue, but the port is not the only problem
https://clearml.slack.com/archives/CTK20V944/p1656446563854059
basically I don't want to train a new model, so I try to create an endpoint following the example, but I finally get
$ curl -X POST "
" -H "accept: application/json" -H "Content-Type: application/json" -d '{"url": "
"}'
<html> <head><title>405 Not Allowed</title></head> <body> <center><h1>405 Not Allowed</h1></center> <hr><center>nginx/1.20.1</center> </body> </html>
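For reference, a 405 Not Allowed coming back from nginx usually means the request landed on the clearml-server web proxy rather than on the clearml-serving-inference container. A minimal sketch of what the request looks like in the clearml-serving examples (the host, the port mapping and the endpoint name below are assumptions on my side, not values from this thread):

# assuming the inference container publishes port 8080 on the serving host
curl -X POST "http://<serving-host>:8080/serve/<endpoint-name>" \
  -H "accept: application/json" -H "Content-Type: application/json" \
  -d '{"url": "<image-url>"}'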
I have to step away for a couple of hours
please let me know if you find something wrong
curl -X POST "
" -H "accept: application/json" -H "Content-Type: application/json" -d '{"url": "
"}' curl: (56) Recv failure: Connection reset by peer
In my understanding requests still go through clearml-server, whose configuration I left intact. Maybe due to the port change in clearml-serving I need to adjust something.
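One quick way to rule the server itself out is to hit the API server's health-check endpoint directly (the host below is a placeholder; debug.ping is the endpoint clearml-server's own docker-compose healthcheck uses):

# should return a small JSON response if the API server is reachable
curl "http://<clearml-server-host>:8008/debug.ping"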
I don't think WEB_HOST is important, but what about FILES_HOST? Do I need to change it accordingly?
it's supposed to have access_key and secret_key which should correspond to this file
I got only something like this:
clearml-serving-triton | I0701 08:32:58.580705 46 server.cc:250] Waiting for in-flight requests to complete.
clearml-serving-triton | I0701 08:32:58.580710 46 server.cc:266] Timeout 30: Found 0 model versions that have in-flight inferences
clearml-serving-triton | I0701 08:32:58.580713 46 server.cc:281] All models are stopped, unloading models
clearml-serving-triton | I0701 08:32:58.580717 46 server.cc:288] Timeout 30: Found 0 live ...
and my ~/clearml.conf
api {
    web_server:
    api_server:
    files_server:
    # test 3
    credentials {
        "access_key" = "91SFEX4BYUQ9YCZ9V6WP"
        "secret_key" = "4WTXT7tAW3R6tnSi8hzSKNjgkmgUoyv22lYT2FIzIfLoeGERRO"
    }
}
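For comparison, a filled-in api section for a default docker-compose deployment of clearml-server typically looks like the sketch below; the hostname is a placeholder I'm adding, only the port numbers (8080 web, 8008 api, 8081 files) are the stock defaults:

api {
    # <clearml-server-host> must be an address the serving containers can actually reach
    web_server: http://<clearml-server-host>:8080
    api_server: http://<clearml-server-host>:8008
    files_server: http://<clearml-server-host>:8081
    credentials {
        "access_key" = "<access-key>"
        "secret_key" = "<secret-key>"
    }
}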
clearml-serving-inference | 2022-07-03 22:06:26,893 - clearml.storage - ERROR - Could not download
, err: HTTPConnectionPool(host='localhost', port=8081): Max retries exceeded with url: /DevOps/serving%20example%2010.0a76d264e30940c2b600375fa839f1a2/artifacts/py_code_test_model_pytorch2/preprocess.py (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc3f41b1790>: Failed to establish a new connection: [Errno 111] Connection refused'))
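The host='localhost', port=8081 part looks like the real problem: inside the clearml-serving-inference container, localhost is the container itself, so the files_server address has to point at the machine that actually serves port 8081. A hedged sketch of the relevant example.env lines (the IP is a made-up placeholder for the clearml-server host):

# assumption: 192.168.1.50 stands in for the machine running clearml-server
CLEARML_WEB_HOST="http://192.168.1.50:8080"
CLEARML_API_HOST="http://192.168.1.50:8008"
CLEARML_FILES_HOST="http://192.168.1.50:8081"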
curl
{"meta":{"id":"59bbb55b6ddc456092658ae588c9a436","trx":"59bbb55b6ddc456092658ae588c9a436","endpoint":{"name":"auth.login","requested_version":"2.18","actual_version":"1.0"},"result_code":401,"result_subcode":20,"result_msg":"Unauthorized (missing credentials)","error_stack":null,"error_data":{}},"data":{}}
my example.env
CLEARML_WEB_HOST="
"
CLEARML_API_HOST="
"
CLEARML_FILES_HOST="
"
CLEARML_API_ACCESS_KEY="91SFEX4BYUQ9YCZ9V6WP"
CLEARML_API_SECRET_KEY="4WTXT7tAW3R6tnSi8hzSKNjgkmgUoyv22lYT2FIzIfLoeGERRO"
CLEARML_SERVING_TASK_ID="450231049bba42f69c6507cb774f7dc6"
seems like an issue with the 2 compose apps using different networks that are not accessible from each other
I wonder if I just need to merge the 2 docker-compose files to run everything in one session
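If the networks really are the culprit, merging the compose files isn't the only option; the serving containers can also be attached to the clearml-server project's network after the fact. A rough sketch with the docker CLI (the network name below is a guess based on default "<project>_<network>" compose naming, so check docker network ls first):

# list the networks each compose project created
docker network ls

# attach the serving containers to the clearml-server network (network name is an assumption)
docker network connect clearml_backend clearml-serving-inference
docker network connect clearml_backend clearml-serving-triton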