Is there any way to see if I even have the data in MongoDB?
I hope that it's all the experiments.
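(A minimal sketch of how one could peek at the server's MongoDB directly, assuming a default clearml-server deployment; the port, the "backend" database name, and the "task" collection are assumptions to adjust for your setup.)

```python
from pymongo import MongoClient

# assumptions: default clearml-server Mongo port, "backend" DB, "task" collection
client = MongoClient("mongodb://localhost:27017")
db = client["backend"]

print("tasks stored:", db["task"].count_documents({}))
print("sample task:", db["task"].find_one({}, {"name": 1, "status": 1}))
```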
@<1523701070390366208:profile|CostlyOstrich36> Is it still needed, since Eugene thinks there is a bug?
I can add "source /workspace/.venv/bin/activate" to docker_init_bash_script in clearml.conf.
However, it then tries to access pip. How do I disable that? I don't need pip; my packages are already installed, and uv doesn't even require pip.
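(A sketch of what that agent section in clearml.conf could look like, under these assumptions: docker_init_bash_script activates the prebuilt venv, and CLEARML_AGENT_SKIP_PIP_VENV_INSTALL is my guess at how to point the agent at the existing interpreter so it skips pip entirely; verify against the clearml-agent docs for your version.)

```
agent {
    # run inside the container before the task starts
    docker_init_bash_script: [
        "source /workspace/.venv/bin/activate",
    ]

    # assumption: pointing the agent at the existing interpreter so it does not
    # create a venv or call pip; could also be set directly as a container env var
    extra_docker_arguments: [
        "-e", "CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/workspace/.venv/bin/python",
    ]
}
```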
I solved the problem.
I had to add a TensorBoard logger and pass it to the pytorch_lightning trainer as logger=logger.
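(For reference, a minimal sketch of that fix; save_dir and max_epochs are placeholder values.)

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

# write TensorBoard event files; ClearML then picks up the TensorBoard reports
logger = TensorBoardLogger(save_dir="lightning_logs")

trainer = pl.Trainer(max_epochs=10, logger=logger)
# trainer.fit(model, datamodule=datamodule)
```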
Is that normal?
I'm also batch uploading; maybe that's the problem?
- The dataset is about 1 TB and contains about 1 million files
- I don't have enough SSD space locally to do the upload in one go
- So I download a part of the dataset, call add_files() on it, and then upload() that batch (see the sketch below)
- Upload the dataset
I noticed that each batch gets slower and slower.
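(A minimal sketch of the batched flow described above; the dataset name, project, and shard paths are placeholders, and the download step is left out.)

```python
import shutil
from clearml import Dataset

batch_dirs = ["/data/shard_000", "/data/shard_001"]  # placeholder shard paths

ds = Dataset.create(dataset_name="my-1tb-dataset", dataset_project="datasets")

for batch_dir in batch_dirs:
    ds.add_files(path=batch_dir)   # register this shard's files
    ds.upload()                    # push this shard to storage
    shutil.rmtree(batch_dir)       # free local SSD space before the next shard

ds.finalize()                      # close the dataset version once everything is uploaded
```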
Here are my ClearML versions, and Elasticsearch is taking up 50 GB.
Also, when uploading artifacts I can see where they are stored in the S3 bucket, but I can't find where the debug images are stored.
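(If it helps, a sketch of pinning the debug-sample destination to a known location; the bucket and prefix are placeholders. As far as I know, reported debug images go to the files server by default rather than to the artifact output URI, which may be why they are not next to the artifacts.)

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="debug-image-destination")

# send reported debug samples (images) to an explicit S3 prefix instead of the files server
task.get_logger().set_default_upload_destination("s3://my-bucket/debug-samples")
```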
Here is another bit of strange behavior.
The incident happened last Friday (5 January).
I'm giving you logs from around that time.
Is it even known whether the bug is fixed in that version?

