Hi @<1747790893814910976:profile|MotionlessSeaturtle56> ,
If you're setting up the ClearML open source server on a single machine (I assume using the docker-compose deployment option), then all database storage (where all metadata and administrative data is stored), as well as the fileserver storage (where artifacts and models are usually stored), will be mounted to local storage (i.e. on the host machine).
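To make the "local storage" part concrete, here's an illustrative excerpt of the kind of volume mounts the ClearML docker-compose deployment uses (exact paths and service names may differ between server versions, so treat this as a sketch):

```yaml
# Illustrative excerpt -- check your actual docker-compose.yml for exact paths
services:
  fileserver:
    volumes:
      - /opt/clearml/data/fileserver:/mnt/fileserver   # uploaded artifacts/models
  mongo:
    volumes:
      - /opt/clearml/data/mongo_4/db:/data/db          # experiment metadata
  elasticsearch:
    volumes:
      - /opt/clearml/data/elastic_7:/usr/share/elasticsearch/data  # logs/scalars
```

Everything under `/opt/clearml/data` lives on the host machine's disk, so that's where you'd plan capacity/backups.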
When using ClearML (specifically, when running experiments using the ClearML SDK), your code may upload artifacts and models either to the fileserver (which, as explained, stores all data locally on the server's machine in this case) or to another object-storage solution (such as S3, Azure Storage, Google Cloud Storage etc.). This is a client-side setting (i.e. you can set the upload target in the clearml.conf file on your workstation, where you run the Python training code that uses the ClearML SDK), so assuming you'll be using the fileserver, all data will indeed be stored on the server machine.
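For reference, the upload target is controlled by the `sdk.development.default_output_uri` setting in clearml.conf (the server address below is a placeholder, replace it with your own):

```
# clearml.conf on the workstation -- where artifacts/models get uploaded
sdk {
    development {
        # the server's fileserver (port 8081 is the docker-compose default)
        default_output_uri: "http://your-clearml-server:8081"

        # or an object store instead, e.g.:
        # default_output_uri: "s3://my-bucket/clearml"
    }
}
```

You can also override this per-task in code with `Task.init(..., output_uri=...)`.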
Please note that when running experiments (locally on your workstation, or in case you deploy a ClearML Agent on a remote machine to run experiments there), the SDK running in either location will cache downloaded models, artifacts and debug images in the local cache folder (defined in the clearml.conf file).
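If you want to control where that cache lives (for example, to put it on a larger disk), the relevant clearml.conf section looks roughly like this (the path shown is the usual default, adjust as needed):

```
# clearml.conf -- local cache for downloaded artifacts/models/debug images
sdk {
    storage {
        cache {
            # default location; point this at a bigger disk if needed
            default_base_dir: "~/.clearml/cache"
        }
    }
}
```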
If in your case the server machine and your workstation are the same machine, then of course the data will only live there 🙂