AgitatedDove14 Hi, thanks for the update. I switched to the new version because I am using PyCharm 2022, but it still isn't working. In the UI I found that the plugin is using the keys, but the repository is not found. The repo is in GitLab. The base docker image is nvcr.io/nvidia/pytorch:22.09-py3. What could be wrong?
Hi. Did you find a solution?
It happened again. get_local_copy() worked as expected, but then when I tried get_mutable_local_copy(local_data_path, overwrite=True, raise_on_error=False), the contents of every 'data' folder on the share were deleted and the same error was displayed.
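For context, this is roughly the flow I'm running; a minimal sketch, with the project/dataset names and the target path as placeholders:

```python
from clearml import Dataset

# Fetch the finalized dataset version (project/name are placeholders)
ds = Dataset.get(dataset_project="my-project", dataset_name="cool-dataset")

# Read-only copy, downloaded into the local ClearML cache
cache_path = ds.get_local_copy()

# Writable copy into a target folder of my choosing; this is the call
# that coincided with the 'data' folders on the share being emptied
local_data_path = "/tmp/cool-dataset-mutable"  # placeholder target
mutable_path = ds.get_mutable_local_copy(
    local_data_path, overwrite=True, raise_on_error=False
)
```

My understanding is that get_mutable_local_copy() is only supposed to copy the dataset contents into the local target folder, so I wouldn't expect it to touch anything on the share.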
TimelyPenguin76 There is data in the folder, and the dataset task is not deleted.
Unfortunately, the same thing happens.
Sorry, I should have specified better. The full path: /mnt/machine_learning/datasets/my-project/.datasets/cool-dataset/cool-dataset.7873a3b0764145f086a2af139c5f1fc9/artifacts
Yes, this is with the latest plugin, and I can confirm that I have not checked that.
AgitatedDove14 Hi, sorry for the late reply, this is the output:
SmugDolphin23 Yes, it works now! Thank you both very much for the fast and great support!
The dataset is downloaded to my local ClearML cache. But specifically, all of the contents of the 'data' folders under the 'artifacts' folder are removed. The 'state' folder is not affected.
AgitatedDove14, I am using Ubuntu 20.04. git is recognized in the shell, and I also sometimes use the PyCharm git UI, which works there.
Hi. Sorry for the very late reply. I just tried and yes it works. Thanks for the help!
SmugDolphin23 Hello, sorry to bother, but are there any updates on this issue?
No, the small test dataset is only 32MB. I created the dataset by using Dataset.create(...), dataset.add_files(...), and then dataset.finalize(). I unfortunately don't have S3. I poked around in the saved data on the share and it seems that for some reason folders 'data' to 'data_11' have their contents deleted. What's even weirder is that they were deleted right at the time when I first tried to get a mutable copy today; the other folders are untouched since Monday when I cr...
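In code form, the creation flow looked roughly like this; a minimal sketch where the names and paths are placeholders, and I've included the upload() step for completeness:

```python
from clearml import Dataset

# Create a new dataset version (project/name are placeholders)
dataset = Dataset.create(
    dataset_project="my-project",
    dataset_name="cool-dataset",
)
dataset.add_files(path="local_data/")  # register local files to version
dataset.upload()    # push the compressed chunks (the data_* folders)
dataset.finalize()  # seal the version so it becomes immutable
```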
Just had the same issue. Your reply helped me fix it, thanks!
Hi, I changed it to 1.13.0, but it still threw the same error. In the end I just changed to a bullseye container instead (since the nvidia container is not a must-have), and it works now, but for some reason it doesn't auto-detect all of my packages, so I had to explicitly add them. But yeah, thanks for the help, I should have dug a bit deeper on my issue.
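For anyone hitting the same thing, this is roughly how I added the missing packages explicitly; a minimal sketch, and the package names/versions are just examples:

```python
from clearml import Task

# Explicitly register packages the auto-detection missed; these calls
# must happen before Task.init so they end up in the task requirements
Task.add_requirements("torch")
Task.add_requirements("pandas", "2.0.3")

task = Task.init(project_name="my-project", task_name="training-run")
```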
[('CLEARML_API_SECRET_KEY', 'my_secret_key'), ('CLEARML_FILES_HOST', '…'), ('CLEARML_WEB_HOST', '…'), ('CLEARML_API_ACCESS_KEY', 'VOQ1JYD64MYMH1O76A0L'), ('CLEARML_API_HOST', '…')]
And in that folder there are 5 'data' folders, I assume one for every compressed chunk, and a 'state' folder.
AgitatedDove14 Hi, yeah for some reason it isn't working. I should have specified that I am running a remote interpreter in Docker on my machine. For now I fixed it by manually adding the code repository as an env variable in the Dockerfile.
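Roughly what I did, translated into Python for illustration; a minimal sketch assuming the CLEARML_VCS_* variables override the repository auto-detection (the repo URL and branch are placeholders, so double-check the variable names against the ClearML docs):

```python
import os
from clearml import Task

# Set before Task.init so ClearML uses these instead of probing git,
# which isn't visible inside the remote-interpreter container.
# The URL and branch below are placeholders.
os.environ["CLEARML_VCS_REPO_URL"] = "https://gitlab.com/my-group/my-project.git"
os.environ["CLEARML_VCS_BRANCH"] = "main"

task = Task.init(project_name="my-project", task_name="docker-run")
```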