
For now we used a fixed number of CPU agents, but it would be better if it were dynamic with the glue agent.
I will check it and update you
Since the GPU is expensive we want the glue to manage the pods.
Not urgent, now that we are using the workaround.
For now we used a workaround: we forked the helm charts repo and changed the agents' deployment.yaml so that, instead of taking the key and secret from the clearml-conf secret, we take them from another secret we created. That way the server does not “know” about the new key and secret and does not reset them.
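To illustrate the change (the env var names are the standard clearml-agent ones; our secret's name and keys here are made-up placeholders, not the chart's actual code):
```yaml
# agents deployment.yaml - sketch of the change, not the exact chart code
env:
  - name: CLEARML_API_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: my-agent-credentials   # our own secret, not managed by the chart
        key: access_key
  - name: CLEARML_API_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: my-agent-credentials
        key: secret_key
```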
But the system account key and secret can't be the same for every installation, right? I need to generate a specific one for my installation, no?
No, it works if the tag already exists and I make some other change to the task. But if the task does not have the tag and I just add the tag, the event is not triggered.
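For context, the trigger is set up roughly like this (a paraphrase with placeholder names, assuming the TriggerScheduler API; not the exact code):
```python
from clearml.automation import TriggerScheduler

def on_tag_added(task_id):
    # placeholder callback - in our case it kicks off further processing
    print('triggered for task', task_id)

scheduler = TriggerScheduler(pooling_frequency_minutes=1)
scheduler.add_task_trigger(
    name='tag-trigger',              # placeholder name
    trigger_on_tags=['my-tag'],      # expected to fire when the tag is added
    schedule_function=on_tag_added,
)
scheduler.start()
```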
Where should I put this line? Inside the pipeline function?
Works locally but not when running inside the agent.
I found that adding:
Task.add_requirements('protobuf', '<=3.20.1')
before the pipeline decorator works.
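For reference, roughly how I placed it (pipeline and step names are placeholders):
```python
from clearml import Task, PipelineDecorator

# Called at module import time, before the decorators are evaluated,
# so the pinned requirement is captured for the agent run as well.
Task.add_requirements('protobuf', '<=3.20.1')

@PipelineDecorator.component(return_values=['result'])
def step():
    # placeholder step that needs the pinned protobuf
    return 1

@PipelineDecorator.pipeline(name='my-pipeline', project='my-project', version='0.1')
def pipeline():
    return step()

if __name__ == '__main__':
    pipeline()
```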
Thanks David,
Yes, I have the issue in pipelines and I use the decorators method for the pipelines.
The pipeline includes scripts from several files.
If I put the import in the pipeline main file it works, but if the import is in another file (which is imported by the pipeline main file) it does not work.
The import is needed in one of the tasks (which is also in a different file). I tried putting the import at the top of the task file and also inside the task method, but it didn't work.
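To make the layout concrete (file names are made up):
```python
# pipeline_main.py - holds the @PipelineDecorator.pipeline function
#   Task.add_requirements('protobuf', '<=3.20.1') here: works
#
# steps.py - imported by pipeline_main.py, holds the task/component
#   the same call at the top of this file: does not work
#   the same call inside the task method itself: does not work
```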
Also...
No problem, you and your team are very responsive and help us a lot.
Can we use multiple k8s-glue instances, one for CPU and one for GPU pods?
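What I have in mind is roughly this (queue names and template files are placeholders; the flags follow the k8s_glue_example.py script from the clearml-agent repo, so worth double-checking against your version):
```
# two glue instances, each serving its own queue with its own pod template
python k8s_glue_example.py --queue cpu_queue --template-yaml cpu-pod-template.yaml
python k8s_glue_example.py --queue gpu_queue --template-yaml gpu-pod-template.yaml
```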
Not sure I understand: you are saying I should not create user credentials and add them in values.yaml at secret.credentials.apiserver and secret.credentials.tests?
Thanks, it works when using SDK 1.4.1.
Hi SuccessfulKoala55
Yes, it seems version v1.6.2rc0 solved the problem.
My configuration is to use GCP storage with gs://…
We are already using the glue to manage our GPU pods. The agents we use for the pipelines are simple CPU agents.
Hi,
res is None, but the trigger works when making other changes, so I guess it was added.
For now we store only one file in a dataset.
I just use:
dataset = Dataset.create(
    dataset_name=dataset_name,
    dataset_project=dataset_project,
    # parent_datasets=[d1],
)
Thanks CostlyOstrich36 OutrageousSheep60, using output_uri="<GS_BUCKET>" solved it.
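For anyone hitting the same thing, roughly what I ended up with (bucket, names, and paths are placeholders, and this assumes an SDK version where Dataset.create accepts output_uri):
```python
from clearml import Dataset

dataset = Dataset.create(
    dataset_name='my-dataset',        # placeholder
    dataset_project='my-project',     # placeholder
    output_uri='gs://my-bucket',      # placeholder GS bucket
)
# add_external_files registers the link only; the file stays in the bucket
dataset.add_external_files(source_url='gs://my-bucket/path/to/file')
dataset.upload()
dataset.finalize()
```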
Immediately after I use add_external_files I see it changed the original file link and removed the “gs://bucket_name” part from it.
I just downgraded to 1.7.2, will wait for the fix.
Thanks for the workaround.
I am trying to install a package from our GitHub.
Not getting any error when uploading.
I use add_external_files, so it is not really uploading the file, just the dataset info.