Works locally but not when running inside the agent.
I found that adding Task.add_requirements('protobuf', '<=3.20.1') before the pipeline decorator makes it work.
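For anyone hitting the same thing, here is a minimal sketch of where the call goes, assuming a decorator-based pipeline; the pipeline name, project, and step body are placeholders, not from this thread:

from clearml import Task
from clearml.automation.controller import PipelineDecorator

# Pin protobuf before any task is created, i.e. before the decorators run
Task.add_requirements('protobuf', '<=3.20.1')

@PipelineDecorator.component(return_values=['result'])
def step_one():
    result = 42  # placeholder step logic
    return result

@PipelineDecorator.pipeline(name='example-pipeline', project='examples', version='1.0')
def run_pipeline():
    print(step_one())

if __name__ == '__main__':
    PipelineDecorator.run_locally()  # optional: run steps locally for debugging
    run_pipeline()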
I am getting this error: clearml.task - WARNING - Requirement ignored, Task.add_requirements() must be called before Task.init()
I just downgraded to 1.7.2, will wait for the fix.
Thanks for the workaround.
not getting any error when uploading.
I use “add_external_files”, so it is not really uploading the file, just the dataset info.
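For reference, a minimal sketch of how the file gets registered, assuming a GCS link; the bucket, dataset, and project names are placeholders:

from clearml import Dataset

dataset = Dataset.create(
    dataset_name='my-dataset',      # placeholder
    dataset_project='my-project',   # placeholder
)
# Registers the link only; the file itself stays in the bucket
dataset.add_external_files(source_url='gs://my-bucket/data/file.csv')
dataset.upload()    # uploads only the dataset metadata/state
dataset.finalize()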
I am trying to install a package from our GitHub.
I use the decorators method and there is no “packages” param on the pipeline decorator; I added it to the task decorator’s “packages” param, but it didn’t help.
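For context, this is the shape of what I tried on the step, assuming the component-level “packages” argument; the repository URL and module name are placeholders, not our actual repo:

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    return_values=['result'],
    packages=['git+https://github.com/example-org/example-package.git'],
)
def step_with_private_package():
    import example_package  # hypothetical module provided by the package
    result = example_package.__name__
    return result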
For now we store only one file in a dataset.
I just use:
dataset = Dataset.create(
    dataset_name=dataset_name,
    dataset_project=dataset_project,
    # parent_datasets=[d1],
)
Hi SuccessfulKoala55
Yes, it seems version v1.6.2rc0 solved the problem.
I do not remember exactly all the steps we did, but with the help of the ClearML team we found a solution by adding Task.add_requirements(…) outside the pipeline code.
No problem, you and your team are very responsive and help us a lot.
Thanks CostlyOstrich36 OutrageousSheep60, using output_uri = “<GS_BUCKET>” solved it.
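In case it helps someone else, a minimal sketch of what that looks like, assuming the output_uri argument of Dataset.create (SDK 1.6+); the bucket path and names are placeholders:

from clearml import Dataset

dataset = Dataset.create(
    dataset_name='my-dataset',      # placeholder
    dataset_project='my-project',   # placeholder
    output_uri='gs://my-bucket',    # destination bucket for the dataset
)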
Thanks (works when using SDK 1.4.1).
Where should I put this line? Inside the pipeline function?
Thanks David,
Yes, I have the issue in pipelines and I use the decorators method for the pipelines.
The pipeline includes scripts from several files.
If I put the import in the pipeline main file it works, but if the import is in another file (which is imported by the pipeline main file) it does not work.
The import is needed in one of the tasks (which is also in a different file) - I tried putting the import at the top of the task file and also inside the task method, but it didn’t work.
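For context, this is roughly what I tried when putting the import inside the task method; the module is a placeholder, and this is a sketch, not our actual step:

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=['result'])
def my_step():
    # Imports inside the component body travel with the step,
    # unlike imports at the top of a separate helper module
    import numpy as np  # placeholder dependency
    result = float(np.zeros(1)[0])
    return result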
Also...
But the system account key and secret can’t be the same for every installation, no? I need to generate a specific one for my installation, no?
For now we used a workaround: we forked the Helm charts repo and changed the agents’ deployment.yaml so that, instead of taking the key and secret from the clearml-conf secret, we take them from another secret we created. That way the server does not “know” about this new key and secret and does not reset them.
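Roughly, the change looks like this, assuming the standard secretKeyRef pattern; the secret name and keys are placeholders, not the actual chart values:

env:
  - name: CLEARML_API_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: clearml-agent-custom-credentials   # our own secret, not clearml-conf
        key: access_key
  - name: CLEARML_API_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: clearml-agent-custom-credentials
        key: secret_key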
For now we used a fixed number of CPU agents, but it would be better if it were dynamic with the glue agent.
We are already using the glue to manage our GPU pods. The agents we use for the pipelines are simple CPU agents.
Can we use multiple k8s-glue instances - one for CPU and one for GPU pods?
Not urgent after we used the workaround.
Immediately after I use “add_external_files”, I see it changed the original file link and removed the “gs://bucket_name” prefix from it.
I will check it and update you