
Hi crew! A bit stuck on something basic again: I’m running a ClearML server on AWS EC2 using the latest Community AMI (ami-01edf47969e2515dd - allegroai-clearml-server-1.0.2-108-21).

The only thing I’ve done is copy https://github.com/allegroai/clearml-server/blob/master/apiserver/config/default/apiserver.conf into /opt/clearml/config/ (because that directory was empty), so I could set up web login authentication for myself as a user.

I then did:

sudo su ec2-user
cd  # on the AMI, the docker-compose file is at /home/ec2-user/docker-compose.yml
sudo docker-compose down
sudo docker-compose up
Everything seems to stop/start up ok, except for the clearml-agent container that’s meant to monitor the “services” queue. It keeps printing out every ~15-30 seconds:

clearml-agent-services | http://{my instance's IPv4 address}:8081 http://{my instance's IPv4 address}:8080
clearml-agent-services | WARNING: You are using pip version 20.3.3; however, version 21.1.3 is available.
clearml-agent-services | You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
clearml-apiserver | [2021-07-22 09:39:47,703] [9] [WARNING] [clearml.service_repo] Returned 401 for auth.login in 2ms, msg=Unauthorized (invalid credentials) (failed to locate provided credentials)
clearml-agent-services | clearml_agent: ERROR: Failed getting token (error 401 from ` ): Unauthorized (invalid credentials) (failed to locate provided credentials)

clearml-agent-services exited with code 1

I’m not sure what’s up. Do I need to get a clearml.conf file into the agent container, or maybe do something with secure.conf (which I haven’t copied into /opt/clearml/config)? Or is it because the agent is for some reason trying to access http://apiserver:8008 instead of http://{my instance’s IPv4 address}:8008?
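(For context, one common way to hand credentials to the services agent — a sketch, not something confirmed by this thread; the key values are placeholders you would generate in the ClearML web UI — is via environment variables on the agent-services service in docker-compose.yml:

```yaml
# Hypothetical fragment of /home/ec2-user/docker-compose.yml.
# The access/secret key values are placeholders, not real credentials.
agent-services:
  environment:
    CLEARML_API_HOST: http://apiserver:8008
    CLEARML_API_ACCESS_KEY: <access key generated in the web UI>
    CLEARML_API_SECRET_KEY: <secret key generated in the web UI>
```

With these set, the agent shouldn’t need a clearml.conf file inside the container.)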

Huge thanks in advance! ⭐

Posted 2 years ago

Answers 4

Oh wow thanks SuccessfulKoala55 , so sorry I didn’t think to check the agent docs! 😅

Posted 2 years ago

Hi QuaintPelican38 , http://apiserver:8008 is the correct setting, as this is the address of the apiserver docker-compose service on the internal docker network.
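(A quick way to sanity-check that internal resolution — a sketch assuming the container name from the compose file in this thread, and using the apiserver’s debug.ping health endpoint:

```shell
# Ping the apiserver over the internal docker network
# from inside the services-agent container.
sudo docker exec clearml-agent-services curl -s http://apiserver:8008/debug.ping
```

If this returns a JSON response, the agent container can reach the apiserver and the problem lies elsewhere, e.g. credentials.)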

Posted 2 years ago

EDIT: Turns out in that AMI, the docker-compose file has:

agent-services:
  networks:
    - backend
  container_name: clearml-agent-services
  image: allegroai/clearml-agent-services:latest
  restart: unless-stopped
  privileged: true
  environment:
    CLEARML_HOST_IP: ${CLEARML_HOST_IP}
    CLEARML_WEB_HOST: ${CLEARML_WEB_HOST:-}
    CLEARML_API_HOST:
    CLEARML_FILES_HOST: ${CLEARML_FILES_HOST:-}
So I changed that, and now the error line has changed to:
clearml-agent-services | clearml_agent: ERROR: Connection Error: it seems *api_server* is misconfigured. Is this the ClearML API server http://{my instance's IPv4 address}:8008 ?
I.e. it’s printing out the right API host address (the same one configured in my local machine’s clearml.conf file), but it apparently can’t access it.
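(One way to narrow that down — a sketch assuming the default ClearML server ports — is to check whether port 8008 is reachable from outside the instance at all, e.g. from your local machine; if this times out while the web UI on 8080 loads fine, the EC2 security group rules are a likely culprit:

```shell
# Run from a machine outside the EC2 instance; the IP is a placeholder.
# A timeout here, while :8080 works, points at the security group.
curl -sv http://{my instance's IPv4 address}:8008/debug.ping
```

That said, as noted above, inside the compose network the agent should be talking to http://apiserver:8008 rather than the external address.)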

Posted 2 years ago

As for your original error, see https://clear.ml/docs/latest/docs/clearml_agent#setting-server-credentials .
I'll make sure we add a reference to it in the AWS setup guide.

Posted 2 years ago