When the package installation is done in the task, I am not able to see cu117 there.
yes same env for all the components
how's that?
So I am running a pipeline on a GCP VM. My VM has 1 NVIDIA GPU, and my requirements.txt has:
torch==1.13.1+cu117
torchvision==0.14.1+cu117
When I am running the YOLO training step I am getting the above error.
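For context, the +cu117 wheels are not on PyPI as far as I know, so I'm assuming the agent also needs the PyTorch wheel index in requirements.txt, something like:

--extra-index-url https://download.pytorch.org/whl/cu117
torch==1.13.1+cu117
torchvision==0.14.1+cu117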
inside the containers that are spinning up on the host machine
thanks for the help though!!
I don't think it has issues with this
Let me know if this is enough information or not
File "/opt/conda/envs/bumlo/lib/python3.10/site-packages/clearml/binding/artifacts.py", line 745, in upload_artifact
pickle.dump(artifact_object, f)
_pickle.PicklingError: Can't pickle <class 'mongoengine.base.metaclasses.samples.6627e5ecc60879fe5e49cee6'>: attribute lookup samples.6627e5ecc60879fe5e49cee6 on mongoengine.base.metaclasses failed
One more thing: in my git repo there is a dataset folder that contains hash-ids, and these hash-ids are used to download the dataset. When I am running the pipeline remotely, the files/images are downloaded into the cloned git repo inside .clearml/venvs, but when I check inside that venvs folder there are no images present.
Can you explain how running two agents would help me run the whole pipeline remotely? Sorry if it's a very basic question
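My current understanding, in case someone can confirm: one agent serves the queue the pipeline controller is enqueued to, and a second agent serves the queue the individual steps go to, so the controller and the steps can run at the same time. Something like this, where the queue names are just examples:

clearml-agent daemon --queue services --docker --detached
clearml-agent daemon --queue default --docker --gpus 0 --detached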
Is there a way to clone the whole pipeline, just like we clone tasks?
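To sketch what I mean: since the pipeline controller is itself a task, I'm assuming something like this would work (the task ID and queue name are placeholders):

from clearml import Task

# the pipeline controller is just a Task, so clone it like any other task
pipeline_task = Task.get_task(task_id="<pipeline_controller_task_id>")  # placeholder ID
cloned = Task.clone(source_task=pipeline_task, name="cloned pipeline")
Task.enqueue(cloned, queue_name="services")  # queue name is an assumption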
If you can let me know @<1576381444509405184:profile|ManiacalLizard2> @<1523701087100473344:profile|SuccessfulKoala55> how to resolve this, that would be very helpful
Is there a way to change the paths inside the .txt file to the clearml cache, because my images are stored in the clearml cache only?
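What I had in mind is roughly the following; the dataset name/project and the .txt layout are assumptions on my side:

from pathlib import Path
from clearml import Dataset

# get (or reuse) the locally cached copy of the dataset
local_root = Path(Dataset.get(dataset_name="my_dataset", dataset_project="my_project").get_local_copy())  # placeholder names

# rewrite every line of the YOLO image-list file so it points into the cache folder
txt_file = Path("dataset/train.txt")  # placeholder path
new_lines = [str(local_root / Path(line.strip()).name) for line in txt_file.read_text().splitlines() if line.strip()]
txt_file.write_text("\n".join(new_lines) + "\n")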
when I am running the pipeline remotely, I am getting the following error message
There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Is there a way to work around this?
This didn't work. Is there a way that I can set this environment variable in my docker container?
Because I think I need to have the following two lines in the .bashrc, plus GOOGLE_APPLICATION_CREDENTIALS:
git config --global user.email 'email'
git config --global user.name "user_name"
because when I was running both agents on my local machine everything was working perfectly fine
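To make it concrete, this is roughly what I was trying; the docker image, key path and project/task names are placeholders, and I'm assuming set_base_docker accepts these arguments in my clearml version:

from clearml import Task

task = Task.init(project_name="my_project", task_name="my_task")  # placeholder names
# pass the credentials env var into the container and run the git config lines when it starts
task.set_base_docker(
    docker_image="nvidia/cuda:11.7.1-runtime-ubuntu20.04",  # placeholder image
    docker_arguments="-e GOOGLE_APPLICATION_CREDENTIALS=/root/gcp_key.json",  # placeholder path
    docker_setup_bash_script=[
        "git config --global user.email 'email'",
        "git config --global user.name 'user_name'",
    ],
)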
while we spin up the autoscaler instance
@<1523701070390366208:profile|CostlyOstrich36>
Heyy guys, I was able to run the pipeline using autoscaler, thanks to @<1523701070390366208:profile|CostlyOstrich36> @<1523701087100473344:profile|SuccessfulKoala55> for all your help and suggestions!!
And one more thing: is there a way to make changes to the .bashrc which is present inside the docker container?
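From what I can tell, an agent-wide alternative is the extra_docker_shell_script setting in the agent's clearml.conf, which (as I understand it) runs extra shell lines inside every container the agent spins up, e.g. (paths are placeholders):

agent {
    extra_docker_shell_script: [
        "echo 'export GOOGLE_APPLICATION_CREDENTIALS=/root/gcp_key.json' >> ~/.bashrc",
        "git config --global user.email 'email'",
        "git config --global user.name 'user_name'",
    ]
}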
dataset = fo.Dataset.from_dir(
    labels_path=labels_path,
    dataset_type=fo.types.COCODetectionDataset,
    label_field="ground_truth",
    use_polylines=True,
)
task.upload_artifact(
    name="Dataset",
    artifact_object=dataset,
)
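In case anyone hits the same PicklingError: my understanding is that the fiftyone Dataset is backed by MongoDB documents, so it can't be pickled directly. A workaround sketch (the export directory is a placeholder) is to export it to a plain folder and upload that instead of the live Dataset object:

# continuing from the snippet above
export_dir = "/tmp/coco_export"  # placeholder path
dataset.export(export_dir=export_dir, dataset_type=fo.types.COCODetectionDataset)
task.upload_artifact(name="Dataset", artifact_object=export_dir)  # clearml zips and uploads folders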