Answered
Hi, I Try To Run Locally

Hi, I'm trying to run clearml-server and clearml-serving locally to create an inference endpoint that utilizes a Triton server. So far I've had port issues, so I changed the clearml-serving-inference outbound port to 9090. But after that I get the following issue:

clearml-serving-triton        | Retrying (Retry(total=237, connect=237, read=240, redirect=240, status=240)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f02a2602250>: Failed to establish a new connection: [Errno 111] Connection refused')': /auth.login

Are there any best practices for running both services locally? What kind of configuration am I supposed to do?
I already tried setting ~/clearml.conf with an access_key and providing it in example.env, but it didn't help. Maybe I'm doing something wrong with the host:port configuration. Thanks!

  
  
Posted 2 years ago

Answers 60


does it work for you?

  
  
Posted 2 years ago

except the access_key and secret_key, of course; those should be yours

  
  
Posted 2 years ago

It throws the same error

  
  
Posted 2 years ago

oh, I see one error, let me check fast

  
  
Posted 2 years ago

I don't think WEB_HOST is important, but what about FILE_HOST?
Do I need to change it accordingly?

  
  
Posted 2 years ago

you are right, for some reason it doesn't resolve inside a container

root@dd0252a8f93e:~/clearml# curl 

curl: (7) Failed to connect to localhost port 8008: Connection refused
root@dd0252a8f93e:~/clearml# curl 

curl: (7) Failed to connect to 127.0.0.1 port 8008: Connection refused
root@dd0252a8f93e:~/clearml# 
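That behavior is expected: localhost inside a container refers to the container itself, not the host machine. A minimal sketch of pointing the serving container at the apiserver by compose service name instead (this assumes both stacks share a Docker network and the server's compose file names the API service `apiserver`; adapt the names to your files):

```
services:
  clearml-serving-inference:
    environment:
      # the service name resolves over the shared Docker network; 127.0.0.1 does not
      CLEARML_API_HOST: http://apiserver:8008
```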
  
  
Posted 2 years ago

I tried that, it didn't work. I was confused by the separate port parameter:

CLEARML_SERVING_PORT: ${CLEARML_SERVING_PORT:-8080}

which is the only port-related setting in docker-compose-triton.yml.
Can I test /auth.login somehow independently, using curl or any other way? Which address is it supposed to have, and which creds should I use?
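You can probe /auth.login without the serving containers at all. A minimal sketch in Python, assuming the apiserver is reachable on localhost:8008 and that the access_key/secret_key from clearml.conf are sent as HTTP basic-auth credentials (the host, port, and keys here are assumptions to adapt):

```python
import base64
import urllib.request
import urllib.error

def try_login(host="http://localhost:8008",
              access_key="YOUR_ACCESS_KEY",
              secret_key="YOUR_SECRET_KEY"):
    """Call /auth.login with basic auth; return (status, body or error text)."""
    req = urllib.request.Request(host + "/auth.login")
    token = base64.b64encode(f"{access_key}:{secret_key}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status, resp.read().decode()  # reachable, credentials accepted
    except urllib.error.HTTPError as exc:
        return exc.code, exc.reason                   # reachable, credentials rejected
    except urllib.error.URLError as exc:
        return None, str(exc)                         # server not reachable at all
```

A `None` status means the same "Connection refused" the containers are hitting, i.e. a networking problem; any HTTP status (even 401) means the apiserver itself is up.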

  
  
Posted 2 years ago

yeah, ok
but it didn't

  
  
Posted 2 years ago

same thing

clearml-serving-inference     | Retrying (Retry(total=236, connect=236, read=240, redirect=240, status=240)) after connection broken by 'NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f899dc4e8b0>: Failed to establish a new connection: [Errno 111] Connection refused')': /auth.login
  
  
Posted 2 years ago

my clearml.conf

api { 
    web_server: 

    api_server: 

    files_server: 

    # test 3
    credentials {
        "access_key" = "91SFEX4BYUQ9YCZ9V6WP"
        "secret_key" = "4WTXT7tAW3R6tnSi8hzSKNjgkmgUoyv22lYT2FIzIfLoeGERRO"
    }
}
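For comparison, a filled-in sketch of the same file for an all-local setup, assuming the default clearml-server ports (8080 web, 8008 api, 8081 files); substitute your own keys and hosts:

```
api {
    web_server: http://localhost:8080
    api_server: http://localhost:8008
    files_server: http://localhost:8081

    credentials {
        "access_key" = "YOUR_ACCESS_KEY"
        "secret_key" = "YOUR_SECRET_KEY"
    }
}
```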
  
  
Posted 2 years ago

When I run this it says it can't run multiple containers

  
  
Posted 2 years ago

but it actually looks ok

  
  
Posted 2 years ago

seems like an issue with the two compose apps using different networks that aren't accessible from each other
I wonder if I just need to join the two docker-compose files to run everything in one session
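Instead of merging the files, one sketch is to attach the serving services to the server stack's network as an external one (this assumes that network is named clearml_backend; check `docker network ls` for the real name):

```
# docker-compose-triton.yml (hypothetical excerpt)
networks:
  clearml_backend:
    external: true

services:
  clearml-serving-inference:
    networks:
      - clearml_backend
```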

  
  
Posted 2 years ago

can you share your log items?

  
  
Posted 2 years ago

I can make a PR if it works

  
  
Posted 2 years ago

No, I use
docker compose instead of docker-compose

  
  
Posted 2 years ago

you should also use my example.env

  
  
Posted 2 years ago

image

  
  
Posted 2 years ago

Did you get the same as well?

  
  
Posted 2 years ago

But I'm getting a timeout issue when I docker-compose up 😢

  
  
Posted 2 years ago

Are you using native Linux? Or WSL?

  
  
Posted 2 years ago

Hey, I tried your docker-compose.
After all the initial setup, clearml-serving-triton, clearml-serving-statistics, and clearml-serving-inference throw a read timeout error.

  
  
Posted 2 years ago

doesn't work anyway

  
  
Posted 2 years ago

yeah, I tried the following:
None
but haven't managed to make it work yet

  
  
Posted 2 years ago

do I need to change anything else?

  
  
Posted 2 years ago

serving

  
  
Posted 2 years ago

I changed the port here:

clearml-serving-inference:
    image: allegroai/clearml-serving-inference:latest
    container_name: clearml-serving-inference
    restart: unless-stopped
    ports:
      - "9090:8080"
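Worth noting: in "9090:8080" only the host side changed; inside the container the process still listens on 8080, which is what CLEARML_SERVING_PORT defaults to. A sketch of keeping the two in sync if you also override the variable (the variable name comes from the compose file; the wiring shown is an assumption):

```
clearml-serving-inference:
    ports:
      - "9090:${CLEARML_SERVING_PORT:-8080}"   # host 9090 -> container port
    environment:
      CLEARML_SERVING_PORT: ${CLEARML_SERVING_PORT:-8080}
```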
  
  
Posted 2 years ago

Hi @<1523706266315132928:profile|DefiantHippopotamus88>
The idea is that clearml-server acts as a control plane and can sit on a different machine; obviously you can run both on the same machine for testing. Specifically, it looks like clearml-serving is not configured correctly, as the error points to an issue with the initial handshake/login between the Triton containers and the clearml-server. How did you configure the clearml-serving docker compose?

  
  
Posted 2 years ago

it's supposed to have an access_key and secret_key that correspond to this file

  
  
Posted 2 years ago

server

  
  
Posted 2 years ago