how's that?
I am not able to see cu117 there
So for my project I have a dataset on my local system. When I run the pipeline remotely, is there a way for the remote machine to access it?
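For reference, this is the rough approach I have in mind with clearml's Dataset API (project/dataset names and paths are placeholders): upload once from my machine, then fetch a cached copy wherever the task runs.

from clearml import Dataset

# One-time upload from the local machine (names/paths are placeholders)
ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")
ds.add_files(path="/path/to/local/dataset")
ds.upload()      # push the files to the configured storage
ds.finalize()    # lock this dataset version

# Inside the remotely executed task: fetch a local cached copy
dataset_dir = Dataset.get(
    dataset_name="my_dataset", dataset_project="my_project"
).get_local_copy()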
Do I need to make changes to clearml.conf so that it doesn't ask for my credentials, or is there another way around this?
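For now I'm considering setting the credentials in code instead of clearml.conf; a sketch, assuming Task.set_credentials is called before Task.init (host/key/secret values are placeholders). The CLEARML_API_ACCESS_KEY / CLEARML_API_SECRET_KEY environment variables should work similarly.

from clearml import Task

# Sketch: inject credentials programmatically so clearml.conf isn't needed
# (all values below are placeholders)
Task.set_credentials(
    api_host="https://api.clear.ml",
    key="MY_ACCESS_KEY",
    secret="MY_SECRET_KEY",
)
task = Task.init(project_name="my_project", task_name="my_task")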
Hey guys, I was able to run the pipeline using the autoscaler, thanks to @<1523701070390366208:profile|CostlyOstrich36> @<1523701087100473344:profile|SuccessfulKoala55> for all your help and suggestions!!
Ok, I was able to resolve the above issue, but now I am getting the following error while executing a task:
import cv2
  File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/cv2/__init__.py", line 181, in <module>
    bootstrap()
  File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/cv2/__init__.py", line 153, in bootstrap
    native_module = importlib.import_module("cv2")
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _boots...
All I need to do is:
pip install -r requirements.txt
pip install .
So, one of my tasks requires a GCP credentials JSON file; is there a way I can pass in the JSON file and set the environment variable for it?
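A sketch of one way this might work, using task.connect_configuration to ship the file along with the task (the file path and config name are placeholders):

import os
from clearml import Task

task = Task.init(project_name="my_project", task_name="gcp_step")  # placeholder names

# connect_configuration stores the file with the task when run locally and
# returns a local cached copy of it when the task executes remotely
creds_path = task.connect_configuration(
    configuration="/path/to/gcp_credentials.json",  # placeholder path
    name="gcp_credentials",
)
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = str(creds_path)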
I want to know how to execute pip install . so that all the custom packages get installed.
I also have a requirements file which I want installed when I run the pipeline remotely (rough sketch of what I'm trying below).
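Something like this, assuming Task.add_requirements accepts a path to a requirements file and is called before Task.init (the path is a placeholder); for pip install . itself, I'm guessing it belongs in the agent's startup/init script rather than in the task code:

from clearml import Task

# Ship the requirements file with the task so the remote agent installs it
# (must run before Task.init; path is a placeholder)
Task.add_requirements("/path/to/requirements.txt")
task = Task.init(project_name="my_project", task_name="my_task")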
import fiftyone as fo

# labels_path points at the COCO annotations; task is the current clearml.Task
dataset = fo.Dataset.from_dir(
    labels_path=labels_path,
    dataset_type=fo.types.COCODetectionDataset,
    label_field="ground_truth",
    use_polylines=True,
)
task.upload_artifact(
    name="Dataset",
    artifact_object=dataset,
)
Thanks, I got that issue resolved
Is there a way to have only one environment for the whole pipeline?
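The closest thing I've found so far (an assumption on my part, not confirmed as the recommended route) is enabling the agent's venv cache in clearml.conf, so steps whose requirements match reuse one cached environment instead of rebuilding it per step:

# clearml.conf on the agent machine (sketch)
agent {
    venvs_cache {
        max_entries: 10
        path: ~/.clearml/venvs-cache
    }
}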
I was able to set up a GCP VM manually earlier, like without the autoscaler
I am able to run the pipeline locally though
So I am running a pipeline (using tasks) remotely, and one of my tasks imports from one of my local repositories, but it gives me an error when I run the pipeline remotely.
individual steps are failing
So the issue I am facing is: I am running the pipeline controller task on my local system's agent and the steps of the pipeline on an agent running on a GCP VM. The first step of the pipeline is failing with clearml_agent: ERROR: Failed cloning repository.
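The fix I'm going to try is giving the GCP agent git credentials in its clearml.conf (a sketch; the username and token are placeholders):

agent {
    # credentials the agent uses when cloning private repositories
    git_user: "my-git-username"
    git_pass: "my-personal-access-token"
}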
Ok, I'll try that out; enable_git_ask_pass: true is not working.
While creating GCP credentials using None
What values should I insert in the following step so that the autoscaler has access? As of now I left this field blank.
because when I was running both agents on my local machine everything was working perfectly fine
Note: switching to 'commit_id'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting ...
So, I was able to resolve the above issues.
Can you tell me how ClearML gets access to my repo even though I didn't pass any information about it?
File "/opt/conda/envs/bumlo/lib/python3.10/site-packages/clearml/binding/artifacts.py", line 745, in upload_artifact
pickle.dump(artifact_object, f)
_pickle.PicklingError: Can't pickle <class 'mongoengine.base.metaclasses.samples.6627e5ecc60879fe5e49cee6'>: attribute lookup samples.6627e5ecc60879fe5e49cee6 on mongoengine.base.metaclasses failed
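If it's useful, here's the workaround I'd try first (a sketch; it assumes the artifact only needs the serialized dataset, not the live mongoengine-backed object): export the FiftyOne dataset to disk and upload the folder, since upload_artifact cannot pickle the dataset object itself.

import fiftyone as fo

export_dir = "/tmp/dataset_export"  # placeholder path

# Serialize the dataset to disk in COCO format instead of pickling the object
dataset.export(
    export_dir=export_dir,
    dataset_type=fo.types.COCODetectionDataset,
    label_field="ground_truth",
)

# upload_artifact packages a local folder into a zip automatically
task.upload_artifact(name="Dataset", artifact_object=export_dir)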
@<1523701070390366208:profile|CostlyOstrich36>
I provided the credentials while setting up the autoscaler instance. Where can I look for clearml.conf? When I SSH into the instance spun up by the autoscaler, I am not able to see clearml.conf.