AgitatedDove14 - are there cases when it tries to skip steps?
In this case I have data and then set of pickles created from the data
Something like:
with Task() as t: #train
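A hedged sketch of that data-then-pickles pattern, assuming the standard ClearML `Task` API (`Task.init`, `upload_artifact`); the project/task names and chunking logic are illustrative, not from this thread:

```python
# Sketch: one task prepares the data and uploads each pickle as an artifact.
# Requires a configured ClearML server to actually run prepare_data_task().

def make_pickle_chunks(data, chunk_size=2):
    """Split the data into chunks, one per pickle artifact."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def prepare_data_task(data):
    """Create the task and upload each chunk (not run here; needs a server)."""
    from clearml import Task  # lazy import; requires `pip install clearml`
    task = Task.init(project_name="demo", task_name="prepare-data")
    for idx, chunk in enumerate(make_pickle_chunks(data)):
        # ClearML pickles non-file objects automatically on upload.
        task.upload_artifact(name=f"chunk_{idx}", artifact_object=chunk)
    task.close()
```

A downstream train task could then fetch the chunks via `Task.get_task(...).artifacts`.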
AgitatedDove14 - it does have boto, but the clearml-serving installation and code refer to an older commit hash, and hence the task was not using them - https://github.com/allegroai/clearml-serving/blob/main/clearml_serving/serving_service.py#L217
Thank you! Does this go as a root logging {} element in the main conf? Outside the sdk section, right?
Would this be a good use case to have?
# Python 3.6.13 | packaged by conda-forge | (default, Feb 19 2021, 05:36:01) [GCC 9.3.0]
argparse == 1.4.0
boto3 == 1.17.70
minerva == 0.1.0
torch == 1.7.1
torchvision == 0.8.2
Ok, just my ignorance then? 🙂
AgitatedDove14 - just saw about start_remotely - https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#start_remotely
This means the services agent will take care of taking it to completion right?
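A sketch under that assumption: the controller task is enqueued (here on "services") rather than run locally, and the agent serving that queue takes it to completion. This follows the documented `PipelineController` API; project, queue, and step names are placeholders.

```python
# Sketch: build a pipeline controller and enqueue it with start_remotely().
# Requires a ClearML server plus an agent on the "services" queue to run.

STEPS = [
    {"name": "prepare", "base_task_name": "prepare-data"},
    {"name": "train", "base_task_name": "train-model"},
]

def launch_pipeline():
    """Create the controller and hand it to the services agent (not run here)."""
    from clearml.automation import PipelineController
    pipe = PipelineController(name="demo-pipeline", project="demo", version="0.1")
    for step in STEPS:
        pipe.add_step(name=step["name"],
                      base_task_project="demo",
                      base_task_name=step["base_task_name"])
    # Enqueue the controller itself; it does not keep running locally.
    pipe.start_remotely(queue="services")
```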
I generally like the Kedro project and pipeline setup I have seen so far, but haven't started using it in anger yet. I've been looking at ClearML as well, so wanted to check how well these two work together
But you have to do config.pbtxt stuff right?
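For context, a minimal Triton `config.pbtxt` looks roughly like this; the model name, platform, and tensor shapes below are illustrative, not taken from this thread:

```protobuf
name: "my_model"                 # illustrative model name
platform: "pytorch_libtorch"     # e.g. a TorchScript model
max_batch_size: 8
input [
  { name: "INPUT__0", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "OUTPUT__0", data_type: TYPE_FP32, dims: [ 1000 ] }
]
```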
AgitatedDove14 - I hadn't used the autoscaler since it asks for an access key. I'm mainly looking at GPU use cases - with SageMaker one can choose any instance they want and use it, whereas the autoscaler would need the instance types configured in advance, right? Need to revisit. Also, I want to use the k8s glue if not for this. Suggestions?
Having a pipeline controller and running it actually seems to work, as long as I have them as separate notebooks
If you don’t mind, can you point me at the code where this happens?
I am providing a helper to run a task in queue after running it locally in the notebook
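A hedged sketch of such a helper, built on `Task.execute_remotely` (a real ClearML API): the task starts locally in the notebook, and once the helper is called, execution is cloned and enqueued for an agent. The queue names and project names here are assumptions.

```python
# Sketch: run locally first, then hand the task off to a queue.
# Requires a configured ClearML server; run_in_queue() is not called here.

def pick_queue(use_gpu: bool) -> str:
    """Illustrative queue-selection logic; the queue names are made up."""
    return "gpu-queue" if use_gpu else "cpu-queue"

def run_in_queue(use_gpu: bool = True):
    """Enqueue the current task for an agent after local debugging."""
    from clearml import Task
    task = Task.init(project_name="demo", task_name="notebook-train")
    # exit_process=True stops the local run once the task is enqueued.
    task.execute_remotely(queue_name=pick_queue(use_gpu), exit_process=True)
```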
Hey SuccessfulKoala55 - this was related to the previous message. I clarified it with AgitatedDove14
Yeah got it, thanks!
Only one param, just playing around
tasks.add_or_update_artifacts/v2.10 (Invalid task status: expected=created, status=completed)
AgitatedDove14 - these instructions are out of date? https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_kubernetes_helm.html
Can you let me know if I can override the docker image using template.yaml?
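If the template follows the standard Kubernetes pod spec, the override would presumably sit under the container entry, roughly as below; whether the k8s glue actually honors an image set here is exactly the open question, and the names are illustrative:

```yaml
# Hypothetical template.yaml fragment (standard pod-spec field names)
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: clearml-agent
      image: my-registry/my-image:latest   # illustrative image override
```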
I deploy to kubernetes and have an ingress and ALB on top that terminates ssl
AgitatedDove14 - is it possible, from within a step, to get the pipeline task that is running that step? Is task.parent something that could help?