
ok, just a small question: does the “defaultContainerImage” parameter tell the glue agent which image to use for the spawned agent?
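For context, this is roughly where that parameter sits in the agent chart's values.yaml — a sketch only; the exact key path and default may differ between chart versions:

```yaml
agentk8sglue:
  # Image the glue agent uses for the pods it spawns when the
  # enqueued task does not specify its own container image.
  defaultContainerImage: ubuntu:18.04
```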
no problem, you and your team are very responsive and help us a lot.
Hi SuccessfulKoala55
Yes, it seems version v1.6.2rc0 solved the problem.
I am getting this error: clearml.task - WARNING - Requirement ignored, Task.add_requirements() must be called before Task.init()
I use the decorators method and there is no “packages” param on the pipeline decorator; I added it to the task decorator’s “packages” param but it didn’t help.
I am trying to install a package from our GitHub.
I just use:
dataset = Dataset.create(
    dataset_name=dataset_name,
    dataset_project=dataset_project,
    # parent_datasets=[d1],
)
Immediately after I use “add_external_files” I see it changed the original file link and stripped the “gs://bucket_name” prefix from it.
not getting any error when uploading.
I use “add_external_files”, so it is not really uploading the file, just the dataset info.
I just downgraded to 1.7.2, will wait for the fix.
Thanks for the workaround.
For now we store only one file in a dataset.
Thanks David,
Yes, I have the issue in pipelines and I use the decorators method for the pipelines.
The pipeline includes scripts from several files.
If I put the import in the pipeline main file it works, but if the import is in another file (which is imported by the pipeline main file) it does not work.
The import is needed in one of the tasks (which is also in a different file) - I tried putting the import at the top of the task file and also inside the task method, but it didn’t work.
Also...
Where should i put this line? inside the pipeline function?
Thanks CostlyOstrich36 OutrageousSheep60 , using output_uri = “<GS_BUCKET>” solved it.
we are already using glue to manage our GPU pods. The agents we use for the pipelines are simple CPU agents.
can we use multiple k8s-glue instances - one for CPU and one for GPU pods?
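A sketch of the idea, assuming the agent chart is installed twice as separate Helm releases, each watching its own queue (queue names and file names are placeholders, and the chart's key paths may differ between versions):

```yaml
# values-cpu.yaml (for a release such as "clearml-agent-cpu")
agentk8sglue:
  queue: cpu_queue   # placeholder queue name
---
# values-gpu.yaml (for a second release such as "clearml-agent-gpu",
# with its pod template additionally requesting GPU resources)
agentk8sglue:
  queue: gpu_queue   # placeholder queue name
```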
I will check it and update you
not urgent now that we have the workaround
Not sure I understand - you are saying I should not create user credentials and add them in values.yaml at secret.credentials.apiserver and secret.credentials.tests?
For now we used a workaround: we forked the helm charts repo and changed the agents’ deployment.yaml so that, instead of taking the key and secret from the clearml-conf secret, it takes them from another secret we created. That way the server does not “know” about this new key and secret and does not reset them.
since the GPU is expensive we want the glue to manage the pods
for now we use a fixed number of CPU agents, but it would be better if it were dynamic with a glue agent
but the system account key and secret can’t be the same for every installation, no? I need to generate a specific one for my installation, no?
I do not remember exactly all the steps we did, but with the help of the clearml team we found a solution by adding: Task.add_requirements(…)
outside the pipeline code