It seems like https://github.com/allegroai/clearml-helm-charts/blob/main/charts/clearml-agent/values.yaml#L72-L80 doesn't actually do anything, as the values set there aren't applied in the agent template
Yes, as an example: my task starts up and checks the mounted EFS volume for x data; if x data does not exist there, it pulls x data from S3.
Hi BoredHedgehog47
You mean like EFS for caching?
Okay, makes sense. So there is no copying of the data to the pod, it is simply referenced via the EFS. Curious what advantage there would be to using the StorageManager
Basically if you set the clearml cache folder to the EFS, users can always do:
from clearml import StorageManager
local_file = StorageManager.get_local_copy("")
where local_file is stored on the persistent cache (EFS), and the cache is automatically cleaned based on the last-accessed file
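For instance, a minimal sketch of that flow; the /mnt/efs mount path and the S3 URL are placeholders:
```python
import os

# Assumption: the pod mounts EFS at /mnt/efs (placeholder path).
# Point the ClearML cache at the mount before clearml is imported;
# in the helm setup below this is done via the CLEARML_CACHE_DIR env var.
os.environ["CLEARML_CACHE_DIR"] = "/mnt/efs/clearml-cache"

from clearml import StorageManager

# The first call downloads the object into the EFS-backed cache; later calls
# (from any pod sharing the mount) return the cached copy instead of
# re-pulling from S3 -- i.e. the "check EFS first, else pull from S3" flow above.
local_file = StorageManager.get_local_copy("s3://my-bucket/data/x.csv")
print(local_file)  # resolves to a path under /mnt/efs/clearml-cache
```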
Does the file on the EFS get downloaded to the k8s pod's local volume?
So there is no copying of the data to the pod, it is simply referenced via the EFS
Correct
I got the EFS volume mounted. Curious what advantage there would be to using the StorageManager
Does the file on the EFS get downloaded to the k8s pod's local volume?
EFS is an Amazon service that mounts a persistent FS into EC2 instances. I believe they have support for k8s as a service as well, which would make it kind of like a PV, only as a service.
Does that make sense?
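If it helps, here is a hypothetical static-provisioning manifest exposing an existing EFS filesystem to pods through the EFS CSI driver (modeled on the example linked below); the filesystem ID and names are placeholders:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi               # required field; EFS itself is elastic
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany            # EFS supports concurrent mounts across pods
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678  # placeholder EFS filesystem ID
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim              # pods mount the EFS through this claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
```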
My task starts up and checks the mounted EFS volume for x data; if x data does not exist there, it pulls x data from S3.
BoredHedgehog47 you can just use the StorageManager and configure the clearml cache for the EFS; it will essentially do the same 🙂
Regarding the helm chart with EFS:
you need to configure the clearml-glue pod template with the EFS mount.
Example:
https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/e7f647f4e6fc76f983d61522e635353005f1472f/examples/kubernetes/volume_path/specs/example.yaml#L18
Then you need to point the clearml cache to the mount point by setting the CLEARML_CACHE_DIR env var; see the values sketch below.
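A sketch of what that could look like in the clearml-agent chart values; the key names follow recent chart versions (agentk8sglue.basePodTemplate), and the mount path and claim name are placeholders, so check against your chart's values.yaml:
```yaml
agentk8sglue:
  basePodTemplate:
    volumes:
      - name: efs-cache
        persistentVolumeClaim:
          claimName: efs-claim      # PVC bound to the EFS-backed PV above
    volumeMounts:
      - name: efs-cache
        mountPath: /mnt/efs
    env:
      - name: CLEARML_CACHE_DIR     # point the ClearML cache at the EFS mount
        value: /mnt/efs/clearml-cache
```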
wdyt?