Answered
Hi Everyone, I'm Trying To Set Up Clearml-Serving As Per

Hi everyone, I'm trying to set up clearml-serving as per None instructions, but I'm having trouble with the docker-compose command.
I have the self-hosted version of clearml-server on a Win11 machine. When I execute the "docker-compose --env-file example.env -f docker-compose.yml up" command, two containers, clearml-serving-inference and clearml-serving-statistics, give me the following connection error: Failed to establish a new connection: [Errno 111] Connection refused')': /auth.login

I'm using port 8082 (which is free) for the serving server, as 8080 is already taken by the clearml server. I'm also exposing port 8082 in the Dockerfile. I checked the credentials used (CLEARML_API_ACCESS_KEY and CLEARML_API_SECRET_KEY) and they are correct, as is the CLEARML_SERVING_TASK_ID.
What am I missing? I'm a noob with Docker, so any help would be much appreciated!

  
  
Posted one year ago

Answers 11


Hi @<1578193378640662528:profile|MoodySeaurchin4> , this seems like a reachability issue from the machine where you're running docker-compose to the server machine. You can try curl None :port to see if you can reach the clearml server at all (any error response in JSON format indicates the server is reachable).
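As a concrete sketch of that check (the host and port here are assumptions; a default self-hosted ClearML API server listens on 8008, so adjust to your deployment):

```shell
# Probe the ClearML API server from the machine running docker-compose.
# localhost:8008 is an assumption - substitute your server's address and API port.
curl -i http://localhost:8008/auth.login || echo "server not reachable"
# Any JSON response, even an authentication error, means the server is reachable.
```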

  
  
Posted one year ago

Hello Jack, and thank you for your answer.
The server is reachable; curl gives me HTTP/1.1 200 OK.
I'm running both the clearml server and (attempting to run) the serving server on the same Windows 11 machine. Both of them use docker containers.

  
  
Posted one year ago

It appears that the clearml-serving containers couldn't access my host machine, because localhost referred to the container and not the host. However, changing the server address in the docker-compose.yml from localhost to host.docker.internal gives me another error:
clearml.backend_api.session.session.LoginError: Unrecognized Authentication Error: <class 'requests.exceptions.InvalidSchema'> No connection adapters were found for 'host.docker.internal:8008/auth.login'

Any idea what I am missing here?

  
  
Posted one year ago

Hi @<1578193378640662528:profile|MoodySeaurchin4> , you need to configure those with the correct ports for the server:
None
and here for the Serving:
None

  
  
Posted one year ago

Yes, I realized I didn't put "http://" before the host (host.docker.internal).
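For reference, the relevant example.env entries would then look something like this (a sketch only; the host alias assumes Docker Desktop on Windows, and the ports assume a default self-hosted deployment - note that the scheme must be included):

```
CLEARML_WEB_HOST=http://host.docker.internal:8080
CLEARML_API_HOST=http://host.docker.internal:8008
CLEARML_FILES_HOST=http://host.docker.internal:8081
```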

  
  
Posted one year ago

Unfortunately, the problem now is that, although the serving server can see the clearml server and I can create an endpoint to expose a model, the serving server can't access the needed artifacts (like the preprocessing script and the chosen model), because their URLs start with "localhost", while the serving server needs them to start with "host.docker.internal" to actually access them...

  
  
Posted one year ago

Do you see a specific error you can share?

  
  
Posted one year ago

The error is a NewConnectionError. It is caused by the fact that the serving server has to download the artifacts needed to deploy the model (i.e. the model itself and the preprocessing script), but these have addresses starting with "localhost" (since they live on the clearml server on my host machine), so they are not accessible from the serving containers, which need the URLs to start with "host.docker.internal" instead of "localhost".
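To illustrate with a hypothetical artifact URL (the path and port are made up for the example), this is the address as recorded at registration time versus the address the container actually needs:

```shell
# A files-server URL recorded with the host's "localhost" (hypothetical example):
url="http://localhost:8081/models/model.pkl"
# Inside the container, "localhost" would have to be the Docker Desktop host alias:
echo "$url" | sed 's/localhost/host.docker.internal/'
# → http://host.docker.internal:8081/models/model.pkl
```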

  
  
Posted one year ago

Hi @<1523701087100473344:profile|SuccessfulKoala55> ,
I'm running into almost the same error (see below), but I want to connect to the free clearml server version at None , so I have set up the corresponding env variables in example.env:

CLEARML_WEB_HOST=""
CLEARML_API_HOST=""
CLEARML_FILES_HOST=""
CLEARML_API_ACCESS_KEY="---"
CLEARML_API_SECRET_KEY="---"
CLEARML_SERVING_TASK_ID="---"

I have set the right values for CLEARML_API_ACCESS_KEY and CLEARML_API_SECRET_KEY from the ~/clearml.conf file I use to connect to the clearml server, while CLEARML_SERVING_TASK_ID comes from the clearml-serving service.

Am I missing something obvious here?

Thanks a lot.

clearml-serving-inference     | Traceback (most recent call last):
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_api/session/session.py", line 800, in _do_refresh_token
clearml-serving-inference     |     res = self._send_request(
clearml-serving-inference     |           ^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_api/session/session.py", line 390, in _send_request
clearml-serving-inference     |     res = self.__http_session.request(
clearml-serving-inference     |           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 587, in request
clearml-serving-inference     |     resp = self.send(prep, **send_kwargs)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_api/utils.py", line 85, in send
clearml-serving-inference     |     return super(SessionWithTimeout, self).send(request, **kwargs)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 695, in send
clearml-serving-inference     |     adapter = self.get_adapter(url=request.url)
clearml-serving-inference     |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 792, in get_adapter
clearml-serving-inference     |     raise InvalidSchema(f"No connection adapters were found for {url!r}")
clearml-serving-inference     | requests.exceptions.InvalidSchema: No connection adapters were found for '""/auth.login'
clearml-serving-inference     | 
clearml-serving-inference     | During handling of the above exception, another exception occurred:
clearml-serving-inference     | 
clearml-serving-inference     | Traceback (most recent call last):
clearml-serving-inference     |   File "<frozen runpy>", line 198, in _run_module_as_main
clearml-serving-inference     |   File "<frozen runpy>", line 88, in _run_code
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/uvicorn/__main__.py", line 4, in <module>
clearml-serving-inference     |     uvicorn.main()
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
clearml-serving-inference     |     return self.main(*args, **kwargs)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1055, in main
clearml-serving-inference     |     rv = self.invoke(ctx)
clearml-serving-inference     |          ^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
clearml-serving-inference     |     return ctx.invoke(self.callback, **ctx.params)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/click/core.py", line 760, in invoke
clearml-serving-inference     |     return __callback(*args, **kwargs)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 403, in main
clearml-serving-inference     |     run(
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/uvicorn/main.py", line 568, in run
clearml-serving-inference     |     server.run()
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 59, in run
clearml-serving-inference     |     return asyncio.run(self.serve(sockets=sockets))
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
clearml-serving-inference     |     return runner.run(main)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
clearml-serving-inference     |     return self._loop.run_until_complete(task)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/uvicorn/server.py", line 66, in serve
clearml-serving-inference     |     config.load()
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/uvicorn/config.py", line 471, in load
clearml-serving-inference     |     self.loaded_app = import_from_string(self.app)
clearml-serving-inference     |                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/uvicorn/importer.py", line 21, in import_from_string
clearml-serving-inference     |     module = importlib.import_module(module_str)
clearml-serving-inference     |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
clearml-serving-inference     |     return _bootstrap._gcd_import(name[level:], package, level)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
clearml-serving-inference     |   File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
clearml-serving-inference     |   File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
clearml-serving-inference     |   File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
clearml-serving-inference     |   File "<frozen importlib._bootstrap_external>", line 940, in exec_module
clearml-serving-inference     |   File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
clearml-serving-inference     |   File "/root/clearml/clearml_serving/serving/main.py", line 42, in <module>
clearml-serving-inference     |     serving_service_task_id = setup_task()
clearml-serving-inference     |                               ^^^^^^^^^^^^
clearml-serving-inference     |   File "/root/clearml/clearml_serving/serving/init.py", line 17, in setup_task
clearml-serving-inference     |     serving_task = ModelRequestProcessor._get_control_plane_task(task_id=serving_service_task_id)
clearml-serving-inference     |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/root/clearml/clearml_serving/serving/model_request_processor.py", line 1238, in _get_control_plane_task
clearml-serving-inference     |     task = Task.get_task(task_id=task_id)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/task.py", line 931, in get_task
clearml-serving-inference     |     return cls.__get_task(
clearml-serving-inference     |            ^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/task.py", line 4135, in __get_task
clearml-serving-inference     |     return cls(private=cls.__create_protection, task_id=task_id, log_to_backend=False)
clearml-serving-inference     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/task.py", line 200, in __init__
clearml-serving-inference     |     super(Task, self).__init__(**kwargs)
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_interface/task/task.py", line 159, in __init__
clearml-serving-inference     |     super(Task, self).__init__(id=task_id, session=session, log=log)
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_interface/base.py", line 145, in __init__
clearml-serving-inference     |     super(IdObjectBase, self).__init__(session, log, **kwargs)
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_interface/base.py", line 39, in __init__
clearml-serving-inference     |     self._session = session or self._get_default_session()
clearml-serving-inference     |                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_interface/base.py", line 115, in _get_default_session
clearml-serving-inference     |     InterfaceBase._default_session = Session(
clearml-serving-inference     |                                      ^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_api/session/session.py", line 157, in __init__
clearml-serving-inference     |     self._connect()
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_api/session/session.py", line 211, in _connect
clearml-serving-inference     |     self.refresh_token()
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_api/session/token_manager.py", line 112, in refresh_token
clearml-serving-inference     |     self._set_token(self._do_refresh_token(self.__token, exp=self.req_token_expiration_sec))
clearml-serving-inference     |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
clearml-serving-inference     |   File "/usr/local/lib/python3.11/site-packages/clearml/backend_api/session/session.py", line 839, in _do_refresh_token
clearml-serving-inference     |     raise LoginError('Unrecognized Authentication Error: {} {}'.format(type(ex), ex))
clearml-serving-inference     | clearml.backend_api.session.session.LoginError: Unrecognized Authentication Error: <class 'requests.exceptions.InvalidSchema'> No connection adapters were found for '""/auth.login'
clearml-serving-inference exited with code 1
  
  
Posted one year ago

Hi @<1569858449813016576:profile|JumpyRaven4> , it seems to me you simply have an extra " in your host values; I think you should change them to:

CLEARML_WEB_HOST=
CLEARML_API_HOST=
CLEARML_FILES_HOST=
...
  
  
Posted one year ago

I had this same issue. I had to update clearml.conf to use the public hostnames instead of localhost, then register the model and endpoint again. After that, the serving containers were able to resolve the hostname and load the artifacts.
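As a hedged sketch of what that clearml.conf change might look like (host.docker.internal and the ports are assumptions for a single-machine Docker Desktop setup; substitute your actual public hostname and ports):

```
api {
    web_server: http://host.docker.internal:8080
    api_server: http://host.docker.internal:8008
    files_server: http://host.docker.internal:8081
}
```

With the files server registered under a hostname the containers can resolve, newly registered model and artifact URLs no longer point at "localhost".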

  
  
Posted one year ago