Hi, I Try To Run Locally
Hi, I'm trying to run clearml-server and clearml-serving locally to create an inference endpoint that utilizes a Triton server. So far I had port conflicts, so I changed the clearml-serving-inference outbound port to 9090. But after that I get the following error:
clearml-serving-triton | Retrying (Retry(total=237, connect=237, read=240, redirect=240, status=240)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f02a2602250>: Failed to establish a new connection: [Errno 111] Connection refused')': /auth.login
Are there any best practices for running both services locally? What kind of configuration am I supposed to do? I already tried setting ~/clearml.conf with an access_key and providing it in example.env, but it didn't help. Maybe I'm doing something wrong with the host:port configuration. Thanks!
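For reference, the "Connection refused ... /auth.login" line means the clearml-serving-triton container cannot reach the clearml-server API. A common cause is that the hosts in example.env point at localhost, which from inside a Docker container refers to the container itself, not the machine running clearml-server. Below is a minimal sketch of the relevant example.env entries; the ports (8080/8008/8081) are the clearml-server docker-compose defaults, and the host.docker.internal address and placeholder keys are assumptions for illustration, not values from this post:

```
# example.env for clearml-serving (sketch, assuming default clearml-server ports)
# From inside a container, "localhost" is the container itself,
# so point at the Docker host instead of 127.0.0.1:
CLEARML_WEB_HOST=http://host.docker.internal:8080
CLEARML_API_HOST=http://host.docker.internal:8008
CLEARML_FILES_HOST=http://host.docker.internal:8081
# Credentials generated in the ClearML web UI (placeholders here):
CLEARML_API_ACCESS_KEY=<YOUR_ACCESS_KEY>
CLEARML_API_SECRET_KEY=<YOUR_SECRET_KEY>
```

Note that on Linux, host.docker.internal is not defined by default; you may need to add it via extra_hosts (host-gateway) in the compose file, or use the host machine's LAN IP instead.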