
Thank you! That makes sense, and adding default_output_uri solved the issue.
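For anyone finding this later, this is roughly what I added to my clearml.conf (the endpoint and bucket are placeholders for my MinIO setup):
```
sdk {
    development {
        # Default upload destination for task artifacts and models.
        # Endpoint/bucket below are placeholders for my own MinIO instance.
        default_output_uri: "s3://my-minio-host:9000/clearml-bucket"
    }
}
```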
Unfortunately, I now have the next issue: when I delete a Dataset I created for test purposes in the UI, its data is not automatically deleted by ClearML in MinIO. Do you know if I have misconfigured ClearML in some way?
Do you know what is going on? 🙂 Thank you!
Thanks, John.
The project is “legacy_data” and “5images” is the dataset name. How would I fetch a project and delete it (+ all its contents)?
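For context, this is a sketch of what I'm attempting (I'm assuming force=True on projects.delete is enough to remove a project together with its remaining contents; please correct me if not):
```python
from clearml import Dataset
from clearml.backend_api.session.client import APIClient

# Delete the single test dataset by project + name.
ds = Dataset.get(dataset_project="legacy_data", dataset_name="5images")
Dataset.delete(dataset_id=ds.id)

# Then try to remove the project itself via the API client.
# NOTE: "name" acts as a search pattern, and I'm assuming force=True
# also takes care of anything still contained in the project.
client = APIClient()
project = client.projects.get_all(name="legacy_data")[0]
client.projects.delete(project=project.id, force=True)
```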
So the solution would be to develop a custom backup routine that fetches the data increments directly from Elasticsearch, MongoDB & Redis?
Thank you for getting back!
I have reduced it to a max of 2GB for the container and 1GB for the Java heap inside the container. Up to now I haven’t experienced any issues 👍
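For reference, this is roughly the override I used in the docker-compose file (service name taken from the default ClearML compose file; exact values are my own choice):
```yaml
services:
  elasticsearch:
    environment:
      # Pin the JVM heap to 1GB instead of letting it size itself.
      ES_JAVA_OPTS: "-Xms1g -Xmx1g"
    # Hard memory cap for the whole container (Compose spec).
    mem_limit: 2g
```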
Thank you so much for your reply! Regarding merging - we have decided to overwrite one server with the other, so this issue is solved.
What databases is ClearML using? How do I access them?
@ManiacalLizard2 Do you have an observation/experience as to what happens when ES hits the limit?
Can you explain why a user needs to configure the MinIO connection locally? My understanding was that the data is uploaded to/downloaded from the clearml-server, and where the clearml-server stores the data is configured on the server itself.
Yes, that solved the problem. Thank you!
Thank you for replying 🙂
I need to configure this in every clearml.conf for each data scientist. Correct?
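Concretely, I mean this block in each user's clearml.conf (host, key and secret are placeholders):
```
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # MinIO endpoint as reachable from the user's machine
                    host: "my-minio-host:9000"
                    key: "<access-key>"
                    secret: "<secret-key>"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}
```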
That sounds interesting. Will this also work in an on-prem hosting environment?
It seems Elasticsearch has allocated a heap of 32GiB but uses only 4GiB. Where/why were the 32GiB allocated?
And how much memory does ElasticSearch realistically need?
Question is resolved. I found a full clearml.conf with agent configuration here: None
Hi John,
This happens locally.