
But it seems like the data is gone, not sure how to get it back
I hope that it's all the experiments
I also don't have a side panel for some reason
I purged all Docker images and it still doesn't seem right
I see no side panel and it doesn't ask for a login name
I also see that Elasticsearch and Mongo have some data
@<1523701601770934272:profile|GiganticMole91> That's rookie numbers. We are at 228 GB for Elastic now
What you want is a service script that cleans up archived tasks; here is what we used: None
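The linked script isn't preserved above, but as a rough sketch, assuming the ClearML Python SDK, a cleanup job that deletes archived tasks older than 30 days could look something like this (the 30-day threshold is a placeholder, and you would run it periodically, e.g. from a cron job or a services queue):

```python
from datetime import datetime, timedelta
from clearml import Task

# Delete archived tasks whose status last changed more than 30 days ago.
threshold = datetime.utcnow() - timedelta(days=30)
archived_tasks = Task.get_tasks(
    task_filter={
        "system_tags": ["archived"],
        "status_changed": ["<{}".format(threshold.strftime("%Y-%m-%d %H:%M:%S"))],
    }
)
for task in archived_tasks:
    # Also remove stored artifacts/models, which is what actually frees S3/Elastic space.
    task.delete(delete_artifacts_and_models=True, raise_on_error=False)
```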
We don't need a port
"s3" is part of the URL that is configured on our routers; without it we cannot connect
I solved the problem.
I had to add a TensorBoard logger and pass it to the pytorch_lightning trainer as logger=logger (sketch below)
Is that normal?
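For reference, the workaround above is just wiring a TensorBoardLogger into the Trainer; a minimal sketch (the save_dir and experiment name are illustrative):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger

# ClearML auto-captures TensorBoard scalars, so routing Lightning's logging
# through a TensorBoardLogger makes the metrics show up in the ClearML UI.
logger = TensorBoardLogger(save_dir="lightning_logs", name="my_experiment")
trainer = Trainer(logger=logger, max_epochs=10)
trainer.fit(model)  # `model` is your LightningModule
```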
Hi, ok I'm really close now to a working system
Debug images are uploading to S3, I'm seeing the files, all OK there
The problem now is viewing these images in the web UI
Going to the Debug Samples panel in a Task gives me a popup to fill in S3 credentials
I can't figure out what the right setup is for the creds to work
This is what I have now (note that we don't have a region)
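In case it helps, a non-AWS endpoint without a region is usually described in the sdk.aws.s3 section of clearml.conf; a sketch with placeholder host and keys (not the real values). As far as I understand, the web UI popup separately needs the same key/secret entered in the browser so it can fetch the images directly from the bucket:

```
sdk {
    aws {
        s3 {
            # no region for a custom endpoint
            credentials: [
                {
                    host: "s3.example.com"   # placeholder endpoint
                    key: "ACCESS_KEY"        # placeholder
                    secret: "SECRET_KEY"     # placeholder
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}
```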
Our datasets are more than 1 TB in size and will grow (probably to 4 TB and up), which means we would also need 4 TB of local storage just to upload the dataset back in zipped format. This is not a good solution.
What we could do, I guess, is do the downloading locally in chunks of files?
Download 100 files locally, add them to the ClearML dataset, repeat (sketch below)
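Roughly, such a chunked flow with the ClearML Dataset API could look like the sketch below; the dataset name/project are placeholders, and the `fetch` callable stands in for whatever S3/HTTP client already downloads a single file:

```python
import os
from clearml import Dataset

def upload_in_chunks(remote_paths, fetch, chunk_size=100):
    """Build a ClearML dataset without holding everything on local disk.

    remote_paths: iterable of remote file identifiers.
    fetch: callable taking a remote identifier and returning a local file path.
    """
    dataset = Dataset.create(dataset_name="big-dataset", dataset_project="datasets")
    chunk = []
    for remote_path in remote_paths:
        chunk.append(fetch(remote_path))  # download one file locally
        if len(chunk) == chunk_size:
            _flush(dataset, chunk)
            chunk = []
    if chunk:
        _flush(dataset, chunk)
    dataset.finalize()                    # close the dataset version

def _flush(dataset, local_paths):
    for p in local_paths:
        dataset.add_files(p)
    dataset.upload()                      # push this chunk to dataset storage
    for p in local_paths:
        os.remove(p)                      # free local disk for the next chunk
```

If the files already live on S3, Dataset.add_external_files can register them by link without re-uploading at all, which might sidestep the local-disk issue entirely.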