I am providing a helper to run a task in queue after running it locally in the notebook
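A minimal sketch of what such a helper could look like, assuming the standard clearml SDK (project/queue names are hypothetical); `Task.execute_remotely()` is the SDK call for stopping a local run and resubmitting it to an agent queue:

```python
from clearml import Task

def run_local_then_enqueue(queue_name="default"):
    """Run the experiment locally in the notebook, then enqueue a clone."""
    task = Task.init(project_name="demo", task_name="notebook-task")
    # ... local experiment code runs here, logged to the ClearML server ...
    # clone=True enqueues a copy so the local run's results stay intact;
    # exit_process=False returns control to the notebook afterwards.
    task.execute_remotely(queue_name=queue_name, clone=True, exit_process=False)
```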
Hey SuccessfulKoala55 - this was related to the previous message. Had it clarified with AgitatedDove14
Yeah got it, thanks!
Only one param, just playing around
tasks.add_or_update_artifacts/v2.10 (Invalid task status: expected=created, status=completed)
AgitatedDove14 - these instructions are out of date? https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_kubernetes_helm.html
Can you let me know if i can override the docker image using template.yaml?
I deploy to kubernetes and have an ingress and ALB on top that terminates ssl
AgitatedDove14 is it possible to get the pipeline task running a step in a step? Is task.parent something that could help?
As the verify param was deprecated and now removed
What happens if I do blah/dataset_url ?
Yes using clearml-data.
Can I pass an S3 path to ds.add_files(), essentially so that I can directly store a dataset without having to pull the files down locally and then upload them again. Makes sense?
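For reference, a hedged sketch of how this can work in recent clearml versions: `Dataset.add_external_files()` registers links to files that already live in S3, so nothing is downloaded and re-uploaded (bucket and paths below are hypothetical, and a reachable ClearML server is assumed):

```python
from clearml import Dataset

# Create a dataset and register files that already sit in S3,
# without pulling them to the local machine first.
ds = Dataset.create(dataset_name="my-dataset", dataset_project="demo")
ds.add_external_files(source_url="s3://my-bucket/training-data/")
ds.upload()    # for external files this uploads only the dataset state/metadata
ds.finalize()
```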
AgitatedDove14 - apologies for the late reply. To give context, this is in a SageMaker notebook which has conda envs.
I use a lifecycle like this to pip install a package (a .tar.gz downloaded from s3) in a conda env- https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/blob/master/scripts/install-pip-package-single-environment/on-start.sh
In the notebook I can do things like create experiments and so on. Now the problem is in running the cloned experimen...
pipeline code itself is pretty standard
Will try it out. A weird one this.
If i were to push the private package to, say, Artifactory, is it possible to use that to do the install?
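A sketch of what that would look like with pip's standard extra-index mechanism (the Artifactory URL and package name below are hypothetical):

```shell
# Resolve a private package from an Artifactory PyPI repo
# alongside the public index:
pip install my-private-package \
    --extra-index-url https://user:token@artifactory.example.com/artifactory/api/pypi/pypi-local/simple
```

If I recall correctly, on the agent side the same index can be configured via `agent.package_manager.extra_index_url` in clearml.conf, so cloned experiments resolve the package too.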
Can I switch off git diff (change detection?)
AgitatedDove14 - i had not used the autoscaler since it asks for an access key. Mainly looking at GPU use cases - with SageMaker one can choose any instance they want; the autoscaler would need a fixed set of instance types configured, right? Need to revisit. Also I want to use the k8s glue, if not for this. Suggestions?
For different workloads, I need to have different cluster scaler rules and account for different GPU needs
Would like to get to the Maturity Level 2 here
Running multiple k8s_daemon instances, right? k8s_daemon("1xGPU") and k8s_daemon('cpu')?
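As a sketch, one glue daemon per queue/resource profile, assuming the `k8s_glue_example.py` entry point shipped in the clearml-agent repo (flags may vary by version):

```shell
# One k8s glue instance per queue, each mapping to a different
# pod template / resource profile (queue names from the message above):
python k8s_glue_example.py --queue 1xGPU &
python k8s_glue_example.py --queue cpu &
```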
forking and using the latest code fixes the boto issue at least
If i publish a keras_mnist model and experiment on it, each one gets pushed as a separate Model entity, right? But there’s really only one unique model with multiple different versions of it