Thank you for answering. So your suggestion would be similar to VexedCat68's first idea, right?
If I understood correctly: if I print(os.environ["MUJOCO_GL"]) after the clearml Task is created, it should already be set?
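Just to be explicit about what I would check (a minimal sketch; the project/task names are placeholders):
```python
import os
from clearml import Task

task = Task.init(project_name="my-project", task_name="env-check")  # placeholder names

# If clearml (or the agent) injects MUJOCO_GL, it should show up here
print(os.environ.get("MUJOCO_GL", "<not set>"))
```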
Good to know the --debug
flag exists in master! 😄
Yea, I know, I reported this 🙂.
I'll add creating an issue to my todo list.
Maybe a related question: has anyone ever worked with datasets larger than the clearml-agent cache? A colleague of mine has a dataset of ~1 terabyte...
Give me 5 min and I'll send the full log.
So the environment variables are not set by the clearml-agent, but by clearml itself
And clearml-agent should pull these datasets from network storage...
Unfortunately, not. Quick question: is there caching happening somewhere besides .clearml? Does the boto3 driver create a cache?
Maybe deletion happens "async" and is not reflected in all parts of clearml? It seems that if I try to delete often enough, at some point it is successful.
How can I see that?
I have venv_update.enabled: true
and detect_with_conda_freeze: true
I got some warnings about broken packages. I cleaned the conda cache with conda clean -a and now it installed fine!
AgitatedDove14 SuccessfulKoala55 Could you briefly explain whether clearml supports no-copy add for datasets?
Yea, the real problem is that I have very large datasets in network storage. I am looking for a way to register the datasets that live on the network storage as a clearml dataset, without copying them.
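Something along these lines is what I am after (a rough sketch using Dataset.add_external_files, assuming the files stay on the network/S3 storage and only links are registered; all names and URLs are placeholders):
```python
from clearml import Dataset

# Create a dataset and register the files by reference only (the data itself is not copied)
ds = Dataset.create(dataset_name="large-dataset", dataset_project="my-project")  # placeholders
ds.add_external_files(source_url="s3://my-bucket/raw-data/")  # link-only add, no upload of the files
ds.upload()    # no local files were added, so nothing heavy gets copied here
ds.finalize()
```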
But it is not possible to aggregate scalars, right? Like taking the mean, median or max of the scalars of multiple experiments.
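To illustrate what I mean by aggregation (a rough sketch of doing it manually through the SDK instead of the UI; the task IDs and the "loss"/"val" scalar names are placeholders):
```python
import numpy as np
from clearml import Task

task_ids = ["<task-id-1>", "<task-id-2>", "<task-id-3>"]  # placeholder experiment IDs

values = []
for tid in task_ids:
    scalars = Task.get_task(task_id=tid).get_reported_scalars()
    # get_reported_scalars() returns {title: {series: {"x": [...], "y": [...]}}}
    values.append(scalars["loss"]["val"]["y"][-1])  # last reported value; names are placeholders

print("mean:", np.mean(values), "median:", np.median(values), "max:", np.max(values))
```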
Yea, tensorboardX is using moviepy.
torch.utils.tensorboard is the same as tensorboardX: https://github.com/pytorch/pytorch/blob/6d45d7a6c331ddb856ac34a76bcd3613aa05185b/torch/utils/tensorboard/summary.py#L461
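For context, this is the kind of call that pulls in moviepy under the hood (a minimal sketch with random data; the log dir and tag are placeholders):
```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/video-demo")  # placeholder log dir
# (N, T, C, H, W) uint8 video tensor; add_video relies on moviepy to encode it
frames = torch.randint(0, 255, (1, 16, 3, 64, 64), dtype=torch.uint8)
writer.add_video("random_video", frames, global_step=0, fps=4)
writer.close()
```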
Sure, no problem!
I installed it as instructed on pytorch.org: pip3 install --pre torch torchvision torchaudio --index-url None
And how do I specify this fileserver as output_uri?
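I.e., something like this (a minimal sketch; the fileserver address and names are placeholders):
```python
from clearml import Task

task = Task.init(
    project_name="my-project",               # placeholder
    task_name="output-uri-example",          # placeholder
    output_uri="http://my-fileserver:8081",  # placeholder fileserver address
)
```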