Well, it seems that everything is constantly restarting... 🙂
sudo docker logs clearml-mongo
and sudo docker logs clearml-elastic
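If it helps, a quick way to list only the containers stuck in a restart loop, and to pull the restart count and last exit code for one of them (a generic Docker sketch; clearml-apiserver is just an example name):

sudo docker ps --filter "status=restarting" --format "table {{.Names}}\t{{.Status}}"
# Restart count and last exit code of a single container:
sudo docker inspect --format '{{.RestartCount}} {{.State.ExitCode}}' clearml-apiserver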
Hi ManiacalPuppy53 glad to see that you got over the mongo problem by advancing to PI4 🙂
Hi ManiacalPuppy53 ,
Raspberry Pi is new 😁
I suggest the first thing you do is ssh to the server itself and try to curl http://localhost:8008
SuccessfulKoala55 Just the regular "destination invalid" in the browser.
What relevant logs are there for this situation?
And when you try curl http://localhost:8008 from the server's command line?
Hi ManiacalPuppy53 , sorry for the delay.
It seems the reason for this issue is that the ClearML docker image is not built for the ARM architecture. To resolve that, you'll need to rebuild this image appropriately. We plan to release a public Dockerfile for this image soon, so when we do you'll be able to try editing it and rebuilding the image for your needs.
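Until that Dockerfile is out, a rough sketch of what such a rebuild could look like with docker buildx (the ./clearml-server build context and the clearml:arm64 tag are hypothetical placeholders, not the official build):

# One-time setup: register QEMU emulators if building on an x86 host
sudo docker run --privileged --rm tonistiigi/binfmt --install arm64
# Build and load an arm64 image from a local Dockerfile (context path and tag are placeholders)
sudo docker buildx build --platform linux/arm64 --load -t clearml:arm64 ./clearml-server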
SuccessfulKoala55 It seems that the server is constantly restarting.
GrumpyPenguin23 And it was so easy. I'm thinking of giving up and just using a PC instead of the Pi.
OK, I got further into the RPI4 issue.
The steps I took:
The steps I took:
1. Fresh installation of Ubuntu 64-bit on the RPi 4B.
2. Installed Docker.
3. Changed the ClearML yml file to the new Mongo + Elastic versions, to support arm64.
4. Ran docker container list:

ubuntu@clearmlserver:~$ docker container list
CONTAINER ID   IMAGE                                                        COMMAND                  CREATED       STATUS                              PORTS                NAMES
995fc4aa9dcb   allegroai/clearml-agent-services:latest                      "/usr/agent/entrypoi…"   2 hours ago   Restarting (1) 16 seconds ago                            clearml-agent-services
587bfabfff9d   allegroai/clearml:latest                                     "/opt/trains/wrapper…"   2 hours ago   Restarting (1) 52 seconds ago                            clearml-webserver
8c891b6a92a9   allegroai/clearml:latest                                     "/opt/trains/wrapper…"   2 hours ago   Restarting (1) 56 seconds ago                            clearml-apiserver
097af9489ba5   redis:5.0                                                    "docker-entrypoint.s…"   2 hours ago   Up 2 hours                          6379/tcp             clearml-redis
26c1667b6055   allegroai/clearml:latest                                     "/opt/trains/wrapper…"   2 hours ago   Restarting (1) About a minute ago                        clearml-fileserver
74966692fad7   docker.elastic.co/elasticsearch/elasticsearch:7.8.0-arm64    "/tini -- /usr/local…"   2 hours ago   Up 2 hours                          9200/tcp, 9300/tcp   clearml-elastic
674ddfb82e6f   mongo:latest                                                 "docker-entrypoint.s…"   2 hours ago   Up 2 hours                          27017/tcp            clearml-mongo
I don't know how to make it readable in Slack... sorry.
5. I got the logs for the ClearML containers. For all of them:
standard_init_linux.go:219: exec user process caused: exec format error
6. I looked on Stack Overflow - it seems like an encoding issue on your part. I just don't really understand where these files are stored and created.
For all the clearml containers:
{"log":"standard_init_linux.go:219: exec user process caused: exec format error\n","stream":"stderr","time":"2021-01-18T15:02:37.789800817Z"}
{"log":"standard_init_linux.go:219: exec user process caused: exec format error\n","stream":"stderr","time":"2021-01-18T15:02:41.536122964Z"}
{"log":"standard_init_linux.go:219: exec user process caused: exec format error\n","stream":"stderr","time":"2021-01-18T15:02:47.973756035Z"}
{"log":"standard_init_linux.go:219: exec user process caused: exec format error\n","stream":"stderr","time":"2021-01-18T15:02:56.191745979Z"}
and so forth.
This is the only message for all the containers.
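That error usually means the executable inside the image was built for a different CPU architecture than the host. A quick way to check the mismatch (a sketch using the image names from the listing above; standard docker commands):

uname -m   # expected: aarch64 on 64-bit Ubuntu for the RPi 4
for img in allegroai/clearml:latest mongo:latest docker.elastic.co/elasticsearch/elasticsearch:7.8.0-arm64; do
  printf '%s -> ' "$img"
  sudo docker image inspect --format '{{.Os}}/{{.Architecture}}' "$img"
done

If the allegroai/clearml image reports linux/amd64 while the host is aarch64, that mismatch is exactly what keeps those containers restarting.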
Can you attach a complete log for one of the ClearML containers (for example, the clearml-fileserver container)?
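For reference, one simple way to capture a complete container log to a file for attaching (standard docker logs; the output filename is arbitrary):

sudo docker logs clearml-fileserver > clearml-fileserver.log 2>&1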
Start by looking into the logs for MongoDB and ElasticSearch - both are independent
SuccessfulKoala55 Thanks for the reply.
I tried that, but it doesn't work.