
Ok, I think I was able to resolve that issue, but now when it's installing the packages I am getting a "Double requirement given" error for Pillow
I am providing pillow>=8.3.1 in my req.txt and I think clearml has Pillow==10.0.0 already
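My understanding (an assumption on my part, not verified) is that pip raises "Double requirement given" when it sees two different specifiers for the same package at install time, so the sketch of the fix would be to leave only a single Pillow line in the requirements:

# requirements.txt sketch: keep a single Pillow specifier so it does not
# clash with the Pillow==10.0.0 that clearml apparently brings in
Pillow>=8.3.1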
@<1523701205467926528:profile|AgitatedDove14> I was able to resolve that, but now I am having issues with fiftyone, it's showing me the following error
import fiftyone as fo
File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/fiftyone/init.py", line 25, in <module>
from fiftyone.public import *
File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/fiftyone/public.py", line 15, in <module>
_foo.establish_db_conn(config)
File "/root/.clearml...
while we spin up the autoscaler instance
Still giving me the same error
Ok, I'll try that out. enable_git_ask_pass: true is not working inside the containers that are spun up on the host machine
Let me know if this is enough information or not
Just a follow up on this issue, @<1523701087100473344:profile|SuccessfulKoala55> @<1523701205467926528:profile|AgitatedDove14> I would very much appreciate it if you could help me with this.
If you can let me know @<1576381444509405184:profile|ManiacalLizard2> @<1523701087100473344:profile|SuccessfulKoala55> how to resolve this, that would be very helpful
The issue I am facing is that when I do get_local_copy(), the dataset (used for training YOLOv8) is downloaded into the ClearML cache (my image dataset contains images, labels, .txt files that hold the paths to the images, and a .yaml file). The downloaded .txt files show the image files as living in the git repo inside the ClearML venvs, but that path doesn't actually exist, so it gives me an error
Is there a way to clone the whole pipeline, just like we clone tasks?
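In case it clarifies what I mean, a rough sketch of what I'm imagining (assuming the pipeline controller is itself a Task, so Task.clone works on it; the ID and queue name below are placeholders):

from clearml import Task

# clone the pipeline controller task (placeholder ID) and enqueue the copy
cloned = Task.clone(source_task="<pipeline_controller_task_id>")
Task.enqueue(cloned, queue_name="services")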
I think I got it resolved
So for my project I have a dataset on my local system; when I run the pipeline remotely, is there a way for the remote machine to access it?
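To make the question concrete, this is roughly what I have in mind (a sketch using clearml Datasets; the dataset, project, and path names are placeholders):

from clearml import Dataset

# on my local machine: register the local folder as a ClearML dataset
ds = Dataset.create(dataset_name="yolo-data", dataset_project="my-project")
ds.add_files(path="/path/to/local/dataset")
ds.upload()
ds.finalize()

# on the remote machine / inside the task: fetch a cached local copy
local_path = Dataset.get(dataset_name="yolo-data", dataset_project="my-project").get_local_copy()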
Is there a way to work around this?
Is there a way to change the paths inside the .txt files to the ClearML cache, since my images are stored only in the ClearML cache?
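In case it helps, a rough sketch of the kind of rewrite I mean (assuming each line of the .txt file is one image path, the images sit under an images/ folder inside the cached dataset, and cache_root is whatever get_local_copy() returned; the file name train.txt is hypothetical):

import os

cache_root = "/root/.clearml/cache/..."  # placeholder: the path returned by get_local_copy()

# read the YOLO image list and rewrite every path to point at the cached copy
with open("train.txt") as f:
    lines = [line.strip() for line in f if line.strip()]

with open("train.txt", "w") as f:
    for line in lines:
        f.write(os.path.join(cache_root, "images", os.path.basename(line)) + "\n")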
Do I need to make changes to clearml.conf so that it doesn't ask for my credentials, or is there another way around it?
Ok, it's cloning, but it's asking for my GitHub credentials
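What I was thinking of changing, roughly (a sketch of the agent section in clearml.conf, assuming user/token auth over HTTPS; the values are placeholders):

agent {
    # credentials the agent uses when cloning private repos (placeholders)
    git_user: "my-github-username"
    git_pass: "my-personal-access-token"
}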
Can you explain how running two agents would help me run the whole pipeline remotely? Sorry if it's a very basic question
I want to understand what's happening at the backend. I want to know how running the pipeline logic and the tasks on separate agents is going to sync everything up
I have a pipeline that I am able to run locally; it has a pipeline controller along with 4 tasks: download data, training, testing, and predict. How do I execute this whole pipeline remotely so that each task runs sequentially?
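To make the setup concrete, this is roughly how I am wiring it up (a sketch only; project, task, and queue names are placeholders, and only two of the four steps are shown):

from clearml.automation import PipelineController

pipe = PipelineController(name="my-pipeline", project="my-project", version="1.0")
pipe.set_default_execution_queue("default")  # steps run on an agent listening to this queue
pipe.add_step(name="download_data", base_task_project="my-project", base_task_name="download data")
pipe.add_step(name="training", parents=["download_data"], base_task_project="my-project", base_task_name="training")
# ... testing and predict steps added the same way

# pipe.start_locally() runs the controller on this machine;
# pipe.start(queue="services") enqueues the controller itself so a second agent can run it remotely
pipe.start(queue="services")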
My git repo only contains the hash IDs that are used to download the dataset to my local machine
I am uploading the dataset (for YOLOv8 training) as an artifact. When I download the artifact (.zip file) from the UI, the path to the images is something like /Users/adityachaudhry/.clearml/cache/......, but when I do .get_local_copy() I get the local folder structure where my images live on my system as the path. For running the pipeline remotely I want the path to be like /Users/adityachaudhry/.clearml/cache/......
when I am running the pipeline remotely, I am getting the following error message
There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
So I am running a pipeline on a GCP VM, my VM has 1 NVIDIA GPU, and my requirements.txt has torch==1.13.1+cu117
torchvision==0.14.1+cu117
When I am running the Yolo training step I am getting the above error.
I am not able to see cu117 there when the package installation is done in the task
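In case it's relevant, this is roughly what I expect the requirements to need (assuming the +cu117 builds have to come from the PyTorch extra index rather than PyPI):

# requirements.txt sketch: the +cu117 wheels are only published on the PyTorch index
--extra-index-url https://download.pytorch.org/whl/cu117
torch==1.13.1+cu117
torchvision==0.14.1+cu117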