Individual steps are failing
I want to know how to execute pip install . so that all my custom packages get installed
So what I want to do is make the custom packages available in my remote execution
All I need to do is
pip install -r requirements.txt
pip install .
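From what I understand, something like this before Task.init might get the agent to install from the requirements file when it runs remotely (just a sketch; the project and task names are placeholders, and I'm not sure it covers pip install . of the repo itself):
from clearml import Task

# point the agent at the requirements file instead of the auto-detected imports
# (has to be called before Task.init)
Task.add_requirements("requirements.txt")

task = Task.init(project_name="my_project", task_name="remote_run")  # placeholder names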
If you could let me know @<1576381444509405184:profile|ManiacalLizard2> @<1523701087100473344:profile|SuccessfulKoala55> how to resolve this, that would be very helpful
Just a follow-up on this issue, @<1523701087100473344:profile|SuccessfulKoala55> @<1523701205467926528:profile|AgitatedDove14>, I would very much appreciate it if you could help me with this.
The issue I am facing is that when I do get_local_copy() the dataset (used for training YOLOv8) is downloaded into the ClearML cache (my image dataset contains images, labels, .txt files holding the paths to the images, and a .yaml file). The downloaded .txt files say the images live inside the git repo under the ClearML venvs, but that path doesn't actually exist, so I am getting an error
I also have a requirements file that I want installed when I run the pipeline remotely
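For the pipeline components themselves, I believe the packages argument of the component decorator can point at the requirements file (rough sketch, the component name is a placeholder):
from clearml import PipelineDecorator

# each decorated step can take `packages`: either a list of pip specs
# or a path to a requirements.txt file
@PipelineDecorator.component(packages="./requirements.txt")
def preprocess(dataset_id: str):
    ...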
Ok, so here's what I want to do: I want to export Google application credentials to my docker container. Here's what I have tried so far:
agent.extra_docker_shell_script: [
    "echo '{\"type\": \"xxx\", \"project_id\": \"xxx\", \"private_key_id\": \"xxx\", .... }' > google-api-key.json",
    "export GOOGLE_APPLICATION_CREDENTIALS=google-api-key.json"
]
Hey, @<1523701070390366208:profile|CostlyOstrich36> @<1523701087100473344:profile|SuccessfulKoala55> I would very much appreciate it if you could help me with this
inside the containers that are spun up on the host machine
While creating GCP credentials using None
What values should I insert in the following step so that the autoscaler has access? As of now I have left this field blank
Also @<1523701087100473344:profile|SuccessfulKoala55>, when the autoscaler spins up my GCP instance and I look inside it, I am not able to find the clearml.conf file. Does it not install clearml automatically when it spins up the VM?
Also, I was facing another issue: the task is not able to clone the GitHub repo. It's showing an authentication error even though I have passed my git credentials
So, funny thing: I was making a typo while writing the GPU type. I was writing NVIDIA T4 instead of nvidia-tesla-t4
@<1523701205467926528:profile|AgitatedDove14> I was able to resolve that, but now I am having issues with fiftyone; it's showing me the following error:
import fiftyone as fo
  File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/fiftyone/__init__.py", line 25, in <module>
    from fiftyone.public import *
  File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/fiftyone/public.py", line 15, in <module>
    _foo.establish_db_conn(config)
  File "/root/.clearml...
I am able to run the pipeline locally though
Is there a way to change the paths inside the .txt files to point at the ClearML cache, because my images are only stored in the ClearML cache?
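Roughly what I have in mind is rewriting the broken prefix in those .txt files so it points at the local dataset copy, something like this (the dataset id, list file names and the old prefix are placeholders for my setup):
from pathlib import Path
from clearml import Dataset

# local copy of the dataset, downloaded into the ClearML cache
local_root = Path(Dataset.get(dataset_id="<my-dataset-id>").get_local_copy())

# the wrong prefix currently baked into the YOLO path lists (placeholder)
old_root = "/root/.clearml/venvs-builds/3.8/task_repository/my_repo"

for list_file in ["train.txt", "val.txt"]:  # placeholder list file names
    path = local_root / list_file
    path.write_text(path.read_text().replace(old_root, str(local_root)))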
For my project I have a dataset on my local system. When I am running the pipeline remotely, is there a way for the remote machine to access it?
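What I was thinking of trying is uploading it as a ClearML dataset from my machine once, and then pulling it from inside the remotely executed component (sketch, names and paths are placeholders):
from clearml import Dataset

# one-time upload from my local machine
ds = Dataset.create(dataset_name="yolo_images", dataset_project="my_project")
ds.add_files(path="/path/to/local/dataset")
ds.upload()
ds.finalize()

# inside the remotely executed component: fetch a cached local copy
local_copy = Dataset.get(dataset_name="yolo_images", dataset_project="my_project").get_local_copy()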
Ok, I think I was able to resolve that issue, but now when it's installing the packages I am getting a Double requirement given error for pillow
I am providing pillow>=8.3.1 in my req.txt and I think clearml has Pillow==10.0.0 already
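Would something like this help, dropping the auto-detected Pillow entry so only the line from my req.txt is used? (I'm only guessing that Task.ignore_requirements works this way.)
from clearml import Task

# guess: ignore the auto-detected Pillow so only pillow>=8.3.1 from
# requirements.txt ends up in the install list (call before Task.init)
Task.ignore_requirements("Pillow")
task = Task.init(project_name="my_project", task_name="remote_run")  # placeholder names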
Is there a way to have only one environment for the whole pipeline?
how's that?
So inside /Users/adityachaudhry/.clearml/venvs-builds.1/3.10/task_repository/ I have my git repo. I have one component that makes a dataset directory inside this git repo, but when the other component starts executing, this dataset directory is not there
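What I think I need to do instead is hand the data between components explicitly rather than relying on the working directory, e.g. register the directory as a ClearML dataset in the first component and fetch it in the second (rough sketch, names are placeholders):
from clearml import Dataset, PipelineDecorator

@PipelineDecorator.component(return_values=["dataset_id"])
def make_dataset():
    # build the dataset directory, then register it so other components can reach it
    ds = Dataset.create(dataset_name="pipeline_dataset", dataset_project="my_project")
    ds.add_files(path="dataset")  # the directory this component creates
    ds.upload()
    ds.finalize()
    return ds.id

@PipelineDecorator.component()
def train(dataset_id):
    # runs in its own environment, so pull the data by id instead of a shared path
    local_dir = Dataset.get(dataset_id=dataset_id).get_local_copy()
    ...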
This didn't work. Is there a way I can set this environment variable in my docker container?
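Alternatively, could I pass the variable through the container arguments on the task itself? Something like this, assuming set_base_docker accepts docker_arguments this way and the key file already exists at that path inside the container:
from clearml import Task

task = Task.init(project_name="my_project", task_name="remote_run")  # placeholder names
# assumption: inject the env var via the docker run arguments for this task
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",  # placeholder image
    docker_arguments="-e GOOGLE_APPLICATION_CREDENTIALS=/root/google-api-key.json",
)
task.execute_remotely(queue_name="default")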
yes same env for all the components
My git repo only contains the hash IDs that are used to download the dataset onto my local machine