SuccessfulKoala55 I tried commenting out the fileserver. The ClearML dockers started, but the server doesn't seem to come up properly: when I access ClearML via web browser, the site cannot be reached.
Just to confirm, I commented out these in docker-compose.yaml:
```
apiserver:
  command:
  - apiserver
  container_name: clearml-apiserver
  image: allegroai/clearml:latest
  restart: unless-stopped
  volumes:
  - /opt/clearml/logs:/var/log/clearml
  - /opt/clearml/config:/opt/clearml/config
  - /opt/clearml/data/fileserver:/mnt/fileserver
  depends_on:
  - redis
  - mongo
  - elasticsearch
  # - fileserver
  environment:
    CLEARML_ELASTIC_SERVICE_HOST: elasticsearch
    CLEARML_ELASTIC_SERVICE_PORT: 9200
    CLEARML_MONGODB_SERVICE_HOST: mongo
    CLEARML_MONGODB_SERVICE_PORT: 27017
    CLEARML_REDIS_SERVICE_HOST: redis
    CLEARML_REDIS_SERVICE_PORT: 6379
    CLEARML_SERVER_DEPLOYMENT_TYPE: ${CLEARML_SERVER_DEPLOYMENT_TYPE:-linux}
    # CLEARML__apiserver__pre_populate__enabled: "true"
    # CLEARML__apiserver__pre_populate__zip_files: "/opt/clearml/db-pre-populate"
    # CLEARML__apiserver__pre_populate__artifacts_path: "/mnt/fileserver"
  ports:
  - "8008:8008"
  networks:
  - backend
  - frontend
elasticsearch:
  networks:
  - backend
  container_name: clearml-elastic
  environment:
    ES_JAVA_OPTS: -Xms2g -Xmx2g
    bootstrap.memory_lock: "true"
    cluster.name: clearml
    cluster.routing.allocation.node_initial_primaries_recoveries: "500"
    cluster.routing.allocation.disk.watermark.low: 500mb
    cluster.routing.allocation.disk.watermark.high: 500mb
    cluster.routing.allocation.disk.watermark.flood_stage: 500mb
    discovery.zen.minimum_master_nodes: "1"
    discovery.type: "single-node"
    http.compression_level: "7"
    node.ingest: "true"
    node.name: clearml
    reindex.remote.whitelist: '*.*'
    xpack.monitoring.enabled: "false"
    xpack.security.enabled: "false"
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536
  image: docker.elastic.co/elasticsearch/elasticsearch:7.6.2
  restart: unless-stopped
  volumes:
  - /opt/clearml/data/elastic_7:/usr/share/elasticsearch/data
  - /usr/share/elasticsearch/logs
# fileserver:
#   networks:
#     - backend
#     - frontend
#   command:
#   - fileserver
#   container_name: clearml-fileserver
#   image: allegroai/clearml:latest
#   restart: unless-stopped
#   volumes:
#   - /opt/clearml/logs:/var/log/clearml
#   - /opt/clearml/data/fileserver:/mnt/fileserver
#   - /opt/clearml/config:/opt/clearml/config
#   ports:
#   - "8081:8081"
```
Hi. If we disable the API service, how will it affect the system? And how do we disable it?
You can simply comment out the `fileserver` service in the docker-compose file
Hi SubstantialElk6 - this is a client definition, in the `api.files_server` configuration field of `clearml.conf` (or by using an environment variable). If you want, you can simply disable this service in the server 🙂
Hi SubstantialElk6,
I would also remove:
```
CLEARML__apiserver__pre_populate__enabled: "true"
CLEARML__apiserver__pre_populate__zip_files: "/opt/clearml/db-pre-populate"
```
as these pre-populated examples depend on files stored in the fileserver (if you want these examples with missing files, you can keep them).
Would they need the fileserver to route to MinIO then?
Correct :)
But if a user forgets to do the above, their files will be saved on the ClearML server. If I switch off the file_server, the configuration above will break, right?
Yeah, there's no way around that - if the SDK tries to access a non-existing storage service, you'll get an error.
However, if you can control each user's environment and make sure to set the `CLEARML_FILES_HOST=s3://ecs.ai:80/clearml-data/default` env var, then in that case the users will only have to configure the credentials (and if they forget, they will get an error saying they don't have permissions to this bucket)
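For instance, a minimal sketch of such a user environment — the bucket and endpoint are the ones from this thread, and the credential values are placeholders for whatever IAM keys your SAs hand out (the SDK also reads the standard AWS variables):

```shell
# Point every SDK client at the MinIO/S3 bucket instead of the fileserver
export CLEARML_FILES_HOST="s3://ecs.ai:80/clearml-data/default"

# Each user still supplies their own credentials (placeholders below)
export AWS_ACCESS_KEY_ID="<iam-access-key>"
export AWS_SECRET_ACCESS_KEY="<iam-secret-key>"
```

With this in place, a user who forgets the credential lines gets a permissions error on upload rather than silently falling back to the fileserver.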
If you disable it, it will only mean that any client trying to access the fileserver will receive an error (service not found, I assume).
Hi SuccessfulKoala55, would they need the fileserver to route to MinIO then? E.g.
```
api {
    files_server: s3://ecs.ai:80/clearml-data/default
}
aws {
    s3 {
        credentials {
            host: http://ecs.ai:80
            ## Insert the IAM credentials provided by your SAs here.
        }
    }
}
```
This will ensure that any actions by clearml-data and models are saved into the S3 object store.
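A related option, if you also want model checkpoints and task artifacts to default to the same bucket, is the `sdk.development.default_output_uri` field in `clearml.conf`. A sketch, assuming the same bucket as above (verify the exact field path against your clearml.conf reference):

```
sdk {
    development {
        # Assumed: send task artifacts/models to the same MinIO bucket by default
        default_output_uri: "s3://ecs.ai:80/clearml-data/default"
    }
}
```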
Hi SuccessfulKoala55, can I confirm the following commented-out lines in the docker-compose.yml?
And after that, can I run the docker-compose commands without loss of data?
```
docker-compose down
docker-compose up
```
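For what it's worth, everything stateful in this compose file (mongo, elastic, redis, logs, config) is bind-mounted under `/opt/clearml` on the host, so a down/up cycle should keep the data. A sketch — the compose file path is an assumption, and the guard just makes the snippet safe to paste on a machine where the stack isn't deployed:

```shell
# Data lives in host bind mounts under /opt/clearml, so down/up is safe.
# Avoid `docker-compose down -v`, which removes volumes.
COMPOSE_FILE=/opt/clearml/docker-compose.yml   # assumed location
if command -v docker-compose >/dev/null 2>&1 && [ -f "$COMPOSE_FILE" ]; then
    docker-compose -f "$COMPOSE_FILE" down
    docker-compose -f "$COMPOSE_FILE" up -d
fi
```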
docker-compose.yml
```
version: "3.6"
services:
  apiserver:
    command:
    - apiserver
    container_name: clearml-apiserver
    image: allegroai/clearml:latest
    restart: unless-stopped
    volumes:
    - /opt/clearml/logs:/var/log/clearml
    - /opt/clearml/config:/opt/clearml/config
    #COMMENTED - /opt/clearml/data/fileserver:/mnt/fileserver
    depends_on:
    - redis
    - mongo
    - elasticsearch
    #COMMENTED - fileserver
    environment:
      CLEARML_ELASTIC_SERVICE_HOST: elasticsearch
      CLEARML_ELASTIC_SERVICE_PORT: 9200
      CLEARML_ELASTIC_SERVICE_PASSWORD: ${ELASTIC_PASSWORD}
      CLEARML_MONGODB_SERVICE_HOST: mongo
      CLEARML_MONGODB_SERVICE_PORT: 27017
      CLEARML_REDIS_SERVICE_HOST: redis
      CLEARML_REDIS_SERVICE_PORT: 6379
      CLEARML_SERVER_DEPLOYMENT_TYPE: ${CLEARML_SERVER_DEPLOYMENT_TYPE:-linux}
      CLEARML__apiserver__pre_populate__enabled: "true"
      CLEARML__apiserver__pre_populate__zip_files: "/opt/clearml/db-pre-populate"
      #COMMENTED CLEARML__apiserver__pre_populate__artifacts_path: "/mnt/fileserver"
    ports:
    - "8008:8008"
    networks:
    - backend
    - frontend
  elasticsearch:
    networks:
    - backend
    container_name: clearml-elastic
    environment:
      ES_JAVA_OPTS: -Xms2g -Xmx2g -Dlog4j2.formatMsgNoLookups=true
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      bootstrap.memory_lock: "true"
      cluster.name: clearml
      cluster.routing.allocation.node_initial_primaries_recoveries: "500"
      cluster.routing.allocation.disk.watermark.low: 500mb
      cluster.routing.allocation.disk.watermark.high: 500mb
      cluster.routing.allocation.disk.watermark.flood_stage: 500mb
      discovery.zen.minimum_master_nodes: "1"
      discovery.type: "single-node"
      http.compression_level: "7"
      node.ingest: "true"
      node.name: clearml
      reindex.remote.whitelist: '*.*'
      xpack.monitoring.enabled: "false"
      xpack.security.enabled: "false"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
    restart: unless-stopped
    volumes:
    - /opt/clearml/data/elastic_7:/usr/share/elasticsearch/data
    - /usr/share/elasticsearch/logs
  #COMMENTED fileserver:
  #COMMENTED   networks:
  #COMMENTED     - backend
  #COMMENTED     - frontend
  #COMMENTED   command:
  #COMMENTED   - fileserver
  #COMMENTED   container_name: clearml-fileserver
  #COMMENTED   image: allegroai/clearml:latest
  #COMMENTED   restart: unless-stopped
  #COMMENTED   volumes:
  #COMMENTED   - /opt/clearml/logs:/var/log/clearml
  #COMMENTED   - /opt/clearml/data/fileserver:/mnt/fileserver
  #COMMENTED   - /opt/clearml/config:/opt/clearml/config
  #COMMENTED   ports:
  #COMMENTED   - "8081:8081"
  mongo:
    networks:
    - backend
    container_name: clearml-mongo
    image: mongo:3.6.23
    restart: unless-stopped
    command: --setParameter internalQueryExecMaxBlockingSortBytes=196100200
    volumes:
    - /opt/clearml/data/mongo/db:/data/db
    - /opt/clearml/data/mongo/configdb:/data/configdb
  redis:
    networks:
    - backend
    container_name: clearml-redis
    image: redis:5.0
    restart: unless-stopped
    volumes:
    - /opt/clearml/data/redis:/data
  webserver:
    command:
    - webserver
    container_name: clearml-webserver
    image: allegroai/clearml:latest
    restart: unless-stopped
    depends_on:
    - apiserver
    ports:
    - "8080:80"
    networks:
    - backend
    - frontend
  agent-services:
    networks:
    - backend
    container_name: clearml-agent-services
    image: allegroai/clearml-agent-services:latest
    deploy:
      restart_policy:
        condition: on-failure
    privileged: true
    environment:
      CLEARML_HOST_IP: ${CLEARML_HOST_IP}
      CLEARML_WEB_HOST: ${CLEARML_WEB_HOST:-}
      CLEARML_API_HOST:
      #COMMENTED CLEARML_FILES_HOST: ${CLEARML_FILES_HOST:-}
      CLEARML_API_ACCESS_KEY: ${CLEARML_API_ACCESS_KEY:-}
      CLEARML_API_SECRET_KEY: ${CLEARML_API_SECRET_KEY:-}
      CLEARML_AGENT_GIT_USER: ${CLEARML_AGENT_GIT_USER}
      CLEARML_AGENT_GIT_PASS: ${CLEARML_AGENT_GIT_PASS}
      CLEARML_AGENT_UPDATE_VERSION: ${CLEARML_AGENT_UPDATE_VERSION:-">=0.17.0"}
      CLEARML_AGENT_DEFAULT_BASE_DOCKER: "ubuntu:18.04"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID:-}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY:-}
      AWS_DEFAULT_REGION: ${AWS_DEFAULT_REGION:-}
      AZURE_STORAGE_ACCOUNT: ${AZURE_STORAGE_ACCOUNT:-}
      AZURE_STORAGE_KEY: ${AZURE_STORAGE_KEY:-}
      GOOGLE_APPLICATION_CREDENTIALS: ${GOOGLE_APPLICATION_CREDENTIALS:-}
      CLEARML_WORKER_ID: "clearml-services"
      CLEARML_AGENT_DOCKER_HOST_MOUNT: "/opt/clearml/agent:/root/.clearml"
      SHUTDOWN_IF_NO_ACCESS_KEY: 1
    volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /opt/clearml/agent:/root/.clearml
    depends_on:
    - apiserver
networks:
  backend:
    driver: bridge
  frontend:
    driver: bridge
```