Answered

Hey, so we noticed the /var/lib/docker/overlay2 directory on the clearml-server is growing a lot in size. We've added more disk space, but we want to put something in place to stop it from growing too much.
These are the options I’ve looked into:

  1. docker system prune - removes all stopped containers, all networks not used by at least one container, all dangling images, and all dangling build cache. Problem: we don't really know what this would actually be pruning
  2. docker image prune --all - removes all images without at least one container associated with them
  3. Set the max-size in docker-compose.yaml for logging
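One way to stay on the safe side with the first two options is to inspect first and keep the destructive step narrow. A minimal sketch (the script name and the 168h retention window are my own assumptions, and the script is only written out here, not executed):

```shell
# Sketch of a conservative cleanup script -- review before running on the server.
# The 168h (7 day) retention window and the filename are assumptions.
cat > docker-cleanup.sh <<'EOF'
#!/bin/sh
# Show what is using space (images, containers, volumes, build cache)
# before removing anything.
docker system df -v
# Remove only dangling images older than 7 days; running containers,
# tagged images and volumes are untouched.
docker image prune --filter "until=168h" -f
EOF
chmod +x docker-cleanup.sh
```

`docker system df -v` is read-only, so it answers the "we don't know what this is pruning" concern before anything is deleted.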

Are the first 2 options safe to run without killing the server? I'm not happy about removing files without knowing what they are.
Are there any plans to automate this in the future?

  
  
Posted 4 years ago

Answers 43


Think I found the issue: a typo in apiserver.conf

  
  
Posted 4 years ago

it looks like clearml-apiserver and clearml-fileserver are continually restarting

  
  
Posted 4 years ago

Hey there *waves*

Not sure about plans to automate this in the future, as this is more about how docker behaves than clearml itself, especially with the overlay2 filesystem. The biggest offender is usually your json logfiles: have a look in /var/lib/docker/containers/ for *.log
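A quick way to size those up, as a sketch (LOG_ROOT is parameterized here so it runs anywhere; on the actual host it would be /var/lib/docker/containers):

```shell
# Sketch: list the ten largest container json logs.
# LOG_ROOT is an assumption -- on a real Docker host it is /var/lib/docker/containers.
LOG_ROOT="${LOG_ROOT:-/var/lib/docker/containers}"
find "$LOG_ROOT" -name '*-json.log' -exec du -h {} + 2>/dev/null | sort -rh | head -n 10
```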

Assuming this IS the case, you can tell docker to only log up to a max-size... I have mine set to 100m or some such
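For reference, the daemon-wide version of that setting might look like this (the 100m/3-file limits are assumptions, and the file is written locally here purely for illustration; on a real host it lives at /etc/docker/daemon.json and docker needs a restart to pick it up):

```shell
# Sketch: cap json-file logs for all containers via the Docker daemon config.
# Sizes are assumptions; written to ./daemon.json for illustration only --
# the real path is /etc/docker/daemon.json.
cat > daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
EOF
```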

  
  
Posted 4 years ago

Try looking at their logs
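Something like the following sketch could tail both at once (the container names are the defaults from the ClearML docker-compose -- verify yours with `docker ps -a`; the fallback echo just keeps the loop harmless where docker isn't installed):

```shell
# Sketch: tail logs of the restarting containers. Names are assumed to be
# the ClearML docker-compose defaults -- check `docker ps -a` for yours.
for c in clearml-apiserver clearml-fileserver; do
  echo "=== $c ==="
  command -v docker >/dev/null && docker logs --tail 50 "$c" 2>&1 | tail -n 50 \
    || echo "(docker not available here)"
done
```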

  
  
Posted 4 years ago

yes those have all been created

  
  
Posted 4 years ago

btw - if you remove the docker-compose changes, do the containers start normally?

  
  
Posted 4 years ago

I believe you can set it on a 'per container' way as well.
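In compose terms the per-service version might look like this (the service name and limits are assumptions; the fragment is written to a local file here just to show the shape):

```shell
# Sketch: per-service log limits in docker-compose. Service name and sizes
# are assumptions; written to a local fragment for illustration.
cat > logging-fragment.yml <<'EOF'
services:
  apiserver:
    logging:
      driver: json-file
      options:
        max-size: "100m"
        max-file: "3"
EOF
```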

  
  
Posted 4 years ago

In the publicly available AMI these are created. However, if you used a previously released Trains AMI and upgraded to ClearML, part of the upgrade process was to create those directories (required by the new docker-compose.yml ), as explained here: None

  
  
Posted 4 years ago

back up and running again, thanks for your help

  
  
Posted 4 years ago

... from the AMI creation script:

# prepare directories to store data
sudo mkdir -p /opt/clearml/data/elastic_7
sudo mkdir -p /opt/clearml/data/redis
sudo mkdir -p /opt/clearml/data/mongo/db
sudo mkdir -p /opt/clearml/data/mongo/configdb
sudo mkdir -p /opt/clearml/logs
sudo mkdir -p /opt/clearml/config
sudo mkdir -p /opt/clearml/data/fileserver
sudo chown -R 1000:1000 /opt/clearml/data/elastic_7

So it seems the AMI is using the correct directories... Do you have these?
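The check against the AMI script's paths can be sketched like this (BASE is parameterized so it runs anywhere; on the server it would be /opt/clearml):

```shell
# Sketch: verify the directories from the AMI creation script exist.
# BASE is an assumption -- on the actual server it is /opt/clearml.
BASE="${BASE:-/opt/clearml}"
for d in data/elastic_7 data/redis data/mongo/db data/mongo/configdb \
         logs config data/fileserver; do
  if [ -d "$BASE/$d" ]; then echo "ok: $BASE/$d"; else echo "missing: $BASE/$d"; fi
done
```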

  
  
Posted 4 years ago

not entirely sure on this as we used the custom AMI solution, is there any documentation on it?

  
  
Posted 4 years ago

I think that if these directories are not mounted, you should first of all take care not to shut down the server. You'll probably want to exec /bin/bash into the mongo and elastic containers, and copy their data outside to the host storage
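As a sketch, `docker cp` can pull the data out without exec-ing in at all (container names and the elastic data path are assumptions -- check `docker ps` and your compose file; the commands are only written to a script here for review, not run):

```shell
# Sketch: copy data out of the running containers to host storage.
# Container names (clearml-mongo, clearml-elastic) and the elasticsearch
# data path are assumptions -- verify against `docker ps` before running.
cat > backup-data.sh <<'EOF'
#!/bin/sh
docker cp clearml-mongo:/data/db ./mongo-backup
docker cp clearml-elastic:/usr/share/elasticsearch/data ./elastic-backup
EOF
chmod +x backup-data.sh
```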

  
  
Posted 4 years ago

Can you perhaps attach your docker-compose.yml file's contents?

  
  
Posted 4 years ago
68K Views