2023-10-03 20:46:07,100 - clearml.Auto-Scaler - INFO - Spinning new instance resource='clearml-autoscaler-vm', prefix='dynamic_gcp', queue='default'
2023-10-03 20:46:07,107 - googleapiclient.discovery_cache - INFO - file_cache is only supported with oauth2client<4.0.0
2023-10-03 20:46:07,122 - clearml.Auto-Scaler - INFO - Creating regular instance for resource clearml-autoscaler-vm
2023-10-03 20:46:07,264 - clearml.Auto-Scaler - INFO - --- Cloud instances (0):
2023-10-03 20:46:07,482 - clearm...
I think I got it resolved
Because I think I need to have the following two lines in the .bashrc, along with the GOOGLE_APPLICATION_CREDENTIALS variable
git config --global user.email "email"
git config --global user.name "user_name"
while we spin up the autoscaler instance
I want to understand what's happening on the backend. I want to know how running the pipeline logic and the tasks on separate agents is going to keep everything in sync
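For reference, here is a minimal sketch of how that split works, assuming a function-based pipeline; the project, queue names, and step bodies are placeholders. Each step is enqueued as its own Task, and the controller (itself just a Task) polls the server and passes step outputs around as artifacts, which is what keeps separate agents in sync:

from clearml import PipelineController

# Each step runs as its own Task on whichever agent serves its queue.
def make_dataset(source_url: str):
    # ...produce some data; the return value is stored as an artifact
    return {"rows": 100}

def train(dataset):
    # 'dataset' is fetched back from the server, so this step can run
    # on a completely different machine than make_dataset
    print("training on", dataset)

pipe = PipelineController(name="demo-pipeline", project="demo", version="1.0")
pipe.add_function_step(
    name="make_dataset",
    function=make_dataset,
    function_kwargs={"source_url": "s3://example/data"},  # placeholder
    function_return=["dataset"],
    execution_queue="default",  # e.g. served by the GCP autoscaler agent
)
pipe.add_function_step(
    name="train",
    function=train,
    function_kwargs={"dataset": "${make_dataset.dataset}"},
    execution_queue="default",
)
# The controller is enqueued like any other Task (typically on a
# 'services' queue) and launches each step when its inputs are ready.
pipe.start(queue="services")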
thanks for the help though!!
@<1523701205467926528:profile|AgitatedDove14> I was able to resolve that, but now I am having issues with fiftyone; it's showing me the following error
import fiftyone as fo
File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/fiftyone/init.py", line 25, in <module>
from fiftyone.public import *
File "/root/.clearml/venvs-builds/3.8/lib/python3.8/site-packages/fiftyone/public.py", line 15, in <module>
_foo.establish_db_conn(config)
File "/root/.clearml...
because when I was running both agents on my local machine everything was working perfectly fine
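Since the failure is inside fiftyone's establish_db_conn, one possible workaround (an assumption on my side, since the rest of the traceback is cut off) is to point FiftyOne at an external MongoDB via its FIFTYONE_DATABASE_URI setting instead of the bundled database it tries to start inside the container; the URI below is a placeholder:

import os

# Must be set before fiftyone is imported; placeholder URI
os.environ["FIFTYONE_DATABASE_URI"] = "mongodb://localhost:27017"

import fiftyone as fo  # establish_db_conn now targets the external DB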
Note: switching to 'commit_id'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting ...
So, I was able to resolve the above issues
So I am running a pipeline (using tasks) remotely, and one of my tasks imports stuff from one of my local repositories, but it's giving me an error when I run the pipeline remotely
Heyy guys, I was able to run the pipeline using autoscaler, thanks to @<1523701070390366208:profile|CostlyOstrich36> @<1523701087100473344:profile|SuccessfulKoala55> for all your help and suggestions!!
This didn't work; is there a way I can set this environment variable in my Docker container?
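One way to do that (a sketch, not necessarily the only mechanism; the image is a placeholder, and the variable name is taken from earlier in the thread) is to pass a -e docker argument through Task.set_base_docker, so the agent starts the step's container with the variable already set:

from clearml import Task

task = Task.init(project_name="demo", task_name="pipeline-step")  # placeholders

# The agent will launch the container with this variable set; the path
# must exist inside the image (mounted or baked in)
task.set_base_docker(
    docker_image="python:3.8",
    docker_arguments="-e GOOGLE_APPLICATION_CREDENTIALS=/root/gcp_credentials.json",
)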
I am able to run the pipeline locally though
I want to know how to execute pip install . so that all the custom packages get installed
And I also have a requirements file that I want installed when I run the pipeline remotely
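A sketch of one way to cover both, assuming the requirements file sits at the repo root: Task.add_requirements can force the agent to use the file instead of the auto-detected imports, and I believe adding a "." (or "-e .") line to that file makes the agent pip-install the cloned repo itself, provided the repo has a setup.py or pyproject.toml:

from clearml import Task

# Must be called before Task.init(), otherwise the agent falls back to
# the automatically detected imports
Task.add_requirements("requirements.txt")  # path relative to the repo root

task = Task.init(project_name="demo", task_name="pipeline-step")  # placeholders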
Can you tell me how ClearML gets access to my repo even if I didn't pass any information about it?
File "/opt/conda/envs/bumlo/lib/python3.10/site-packages/clearml/binding/artifacts.py", line 745, in upload_artifact
pickle.dump(artifact_object, f)
_pickle.PicklingError: Can't pickle <class 'mongoengine.base.metaclasses.samples.6627e5ecc60879fe5e49cee6'>: attribute lookup samples.6627e5ecc60879fe5e49cee6 on mongoengine.base.metaclasses failed
@<1523701070390366208:profile|CostlyOstrich36>
So the issue I am facing is: I am running the pipeline controller task on my local system's agent, and the steps of the pipeline on an agent running on a GCP VM. The first step of the pipeline is failing with clearml_agent: ERROR: Failed cloning repository.
import fiftyone as fo
from clearml import Task

task = Task.init(project_name="demo", task_name="dataset-upload")  # placeholders
labels_path = "/path/to/coco_labels.json"  # placeholder

# Build a FiftyOne dataset from COCO-format detection labels
dataset = fo.Dataset.from_dir(
    labels_path=labels_path,
    dataset_type=fo.types.COCODetectionDataset,
    label_field="ground_truth",
    use_polylines=True,
)

# This is the call that raises the PicklingError above
task.upload_artifact(
    name="Dataset",
    artifact_object=dataset,
)
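The PicklingError happens because the FiftyOne Dataset is backed by a dynamically generated mongoengine class, which pickle can't serialize. A workaround sketch (reusing dataset and task from the snippet above; the export directory is a placeholder) is to export the dataset to plain files and upload the folder instead:

export_dir = "/tmp/coco_export"  # placeholder

# Serialize to plain files instead of pickling the live mongoengine object
dataset.export(
    export_dir=export_dir,
    dataset_type=fo.types.COCODetectionDataset,
    label_field="ground_truth",
)

# upload_artifact accepts a folder path; ClearML zips and uploads it
task.upload_artifact(name="Dataset", artifact_object=export_dir)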