Do I need to make changes to clearml.conf so that it doesn't ask for my credentials, or is there another way around this?
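(For context, a minimal sketch of setting credentials in code instead of editing clearml.conf; Task.set_credentials is part of the clearml SDK, but the host URLs, keys, and project/task names below are placeholders:)

    from clearml import Task

    # set credentials programmatically before Task.init(), so clearml.conf
    # does not need to hold them; all values here are placeholders
    Task.set_credentials(
        api_host="https://api.clear.ml",
        web_host="https://app.clear.ml",
        files_host="https://files.clear.ml",
        key="YOUR_ACCESS_KEY",
        secret="YOUR_SECRET_KEY",
    )
    task = Task.init(project_name="my_project", task_name="my_task")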
So inside /Users/adityachaudhry/.clearml/venvs-builds.1/3.10/task_repository/ I have my git repo. I have one component that makes a dataset directory inside this git repo, but when the other component starts executing, this dataset directory is not there.
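(A possible workaround sketch, assuming each component runs as its own task with its own copy of the repo, so a directory created by one component isn't visible to the next: pass the directory through a ClearML Dataset instead of the repo checkout. The dataset/project names and folder path are placeholders.)

    from clearml import Dataset

    # component 1: register the directory it produced
    ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")
    ds.add_files("dataset/")   # the directory created inside the repo
    ds.upload()
    ds.finalize()

    # component 2: fetch a local copy instead of relying on the repo checkout
    local_path = Dataset.get(
        dataset_name="my_dataset", dataset_project="my_project"
    ).get_local_copy()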
While creating GCP credentials using None: what values should I insert in the following step so that the autoscaler has access? As of now I have left this field blank.
Ok, so here's what I want to do: I want to export Google application credentials to my Docker container. Here's what I have tried so far (the original had single quotes inside the echo'd JSON, which both breaks the shell string and produces invalid JSON, so they are escaped double quotes here):

agent.extra_docker_shell_script: [
    "echo '{\"type\": \"xxx\", \"project_id\": \"xxx\", \"private_key_id\": \"xxx\", ....}' > google-api-key.json",
    "export GOOGLE_APPLICATION_CREDENTIALS=google-api-key.json"
]
inside the containers that are spun up on the host machine
I think I got it resolved
So I should clone the pipeline, run the agent and then enqueue the cloned pipeline?
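(For what it's worth, a minimal sketch of the clone-and-enqueue part in Python; the task id and queue name are placeholders:)

    from clearml import Task

    cloned = Task.clone(source_task="original_task_id")  # placeholder id
    Task.enqueue(cloned, queue_name="default")  # picked up by an agent listening on "default"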
So for my project I have a dataset present on my local system. When I am running the pipeline remotely, is there a way the remote machine can access it?
Is there a way to work around this?
So you mean when I SSH into my VM I need to do a git clone and then spin up the agent, right?
Just a follow-up on this issue, @<1523701087100473344:profile|SuccessfulKoala55> @<1523701205467926528:profile|AgitatedDove14>: I would very much appreciate it if you could help me with this.
And one more thing: is there a way to make changes to the .bashrc that is present inside the Docker container?
So I am running a pipeline (using tasks) remotely, and one of my tasks is importing stuff from one of my local repositories, but it's giving me an error when I run the pipeline remotely.
If you can let me know, @<1576381444509405184:profile|ManiacalLizard2> @<1523701087100473344:profile|SuccessfulKoala55>, how to resolve this, that would be very helpful.
Heyy guys, I was able to run the pipeline using autoscaler, thanks to @<1523701070390366208:profile|CostlyOstrich36> @<1523701087100473344:profile|SuccessfulKoala55> for all your help and suggestions!!
I am uploading the dataset (for YOLOv8 training) as an artifact. When I download the artifact (.zip file) from the UI, the path to the images is something like /Users/adityachaudhry/.clearml/cache/......, but when I do .get_local_copy() I get the local folder structure where I have my images on my system as the path. For running the pipeline remotely, I want the path to be like /Users/adityachaudhry/.clearml/cache/......
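(For reference, a sketch of how I'd expect the cache path to appear when the artifact is fetched through the SDK on another machine; the task id is a placeholder:)

    from clearml import Task

    task = Task.get_task(task_id="TASK_ID")  # placeholder id
    dataset_path = task.artifacts["Dataset"].get_local_copy()
    print(dataset_path)  # extracted under the local clearml cache directory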
I want to understand what's happening at the backend. I want to know how running the pipeline logic and the tasks on separate agents is going to sync everything up.
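(As I understand it, a sketch with placeholder names: the pipeline controller is itself a task that waits for each step's task to finish, so the controller can sit on one queue while the steps are routed to another, and separate agents serve each queue:)

    from clearml.automation import PipelineController

    pipe = PipelineController(name="my_pipeline", project="my_project", version="1.0")
    pipe.add_step(
        name="make_dataset",
        base_task_project="my_project",
        base_task_name="make_dataset_task",
        execution_queue="default",  # steps run on agents listening here
    )
    pipe.start(queue="services")    # the controller logic runs on a separate queue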
2023-10-03 20:46:07,100 - clearml.Auto-Scaler - INFO - Spinning new instance resource='clearml-autoscaler-vm', prefix='dynamic_gcp', queue='default'
2023-10-03 20:46:07,107 - googleapiclient.discovery_cache - INFO - file_cache is only supported with oauth2client<4.0.0
2023-10-03 20:46:07,122 - clearml.Auto-Scaler - INFO - Creating regular instance for resource clearml-autoscaler-vm
2023-10-03 20:46:07,264 - clearml.Auto-Scaler - INFO - --- Cloud instances (0):
2023-10-03 20:46:07,482 - clearm...
Is there a way to change the paths inside the .txt file to the clearml cache, because my images are stored in the clearml cache only?
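(One possible sketch, assuming the .txt file lists one image path per line and the images live under the dataset's cache copy; the dataset name and "images" subfolder are assumptions:)

    import os
    from clearml import Dataset

    cache_root = Dataset.get(dataset_name="my_dataset").get_local_copy()  # clearml cache path

    with open("train.txt") as f:
        names = [os.path.basename(line.strip()) for line in f if line.strip()]
    with open("train.txt", "w") as f:
        f.writelines(os.path.join(cache_root, "images", name) + "\n" for name in names)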
while we spin up the autoscaler instance
Can you explain how running two agents would help me run the whole pipeline remotely? Sorry if it's a very basic question.
dataset = fo.Dataset.from_dir(
    labels_path=labels_path,
    dataset_type=fo.types.COCODetectionDataset,
    label_field="ground_truth",
    use_polylines=True,
)
task.upload_artifact(name="Dataset", artifact_object=dataset)
@<1523701070390366208:profile|CostlyOstrich36>
I am able to get the requirements installed for each task
File "/opt/conda/envs/bumlo/lib/python3.10/site-packages/clearml/binding/artifacts.py", line 745, in upload_artifact
pickle.dump(artifact_object, f)
_pickle.PicklingError: Can't pickle <class 'mongoengine.base.metaclasses.samples.6627e5ecc60879fe5e49cee6'>: attribute lookup samples.6627e5ecc60879fe5e49cee6 on mongoengine.base.metaclasses failed
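(A possible workaround sketch, assuming the failure is because fo.Dataset is backed by MongoDB documents and can't be pickled: export the dataset to a directory first and upload the directory instead. This reuses the dataset object from the snippet above; the export path is a placeholder.)

    import fiftyone as fo
    from clearml import Task

    task = Task.current_task()

    export_dir = "/tmp/coco_export"  # placeholder path
    dataset.export(
        export_dir=export_dir,
        dataset_type=fo.types.COCODetectionDataset,
        label_field="ground_truth",
    )
    # a directory artifact gets zipped and uploaded by clearml
    task.upload_artifact(name="Dataset", artifact_object=export_dir)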
I am providing pillow>=8.3.1 in my req.txt and I think clearml has Pillow==10.0.0 already
And also, I have a requirements file that I want installed when I run the pipeline remotely.
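(A sketch of one way this is commonly done with the clearml SDK; the file path and project/task names are placeholders. As I understand it, Task.add_requirements can point at a requirements file as long as it's called before Task.init, and it can also pin an individual package such as pillow:)

    from clearml import Task

    # must be called before Task.init() so the agent installs from the file
    Task.add_requirements("requirements.txt")  # path to the requirements file
    # or pin a single package instead:
    # Task.add_requirements("pillow", ">=8.3.1")

    task = Task.init(project_name="my_project", task_name="my_task")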