But that itself is running in a task, right?
I don’t want to, though. I'll run it as part of a pipeline
Which kind of access specifically? I handle permissions with IAM roles
With the human activity being a step where some manual validation, annotation, or feedback might be required
Documentation is not very clear
It would be good if a new agent package were published
I was thinking such limitations would exist only for published models
Essentially. It's not about removing all WARNINGS, just this one, since it actually works and the WARNING is wrong.
This worked well:
if project_name is None and Task.current_task() is not None:
    project_name = Task.current_task().get_project_name()
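For context, here is a minimal self-contained sketch of that fallback pattern (the helper name and the injected `current_task` parameter are hypothetical; in a real ClearML script you would pass `Task.current_task()` from the `clearml` package):

```python
def resolve_project_name(project_name=None, current_task=None):
    """Hypothetical helper illustrating the fallback above.

    If no explicit project_name is given, fall back to the project
    name of the currently running task (if any).
    """
    if project_name is None and current_task is not None:
        project_name = current_task.get_project_name()
    return project_name
```

An explicit `project_name` always wins; the current task's project is only used as a default when nothing was passed.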
AgitatedDove14 - any pointers on how to run GPU tasks with the k8s glue? How do I control the queue and differentiate tasks that need CPU vs GPU in this context?
You will have to update this in your local clearml.conf, or wherever you are running Task.init from.
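As an illustration only (the exact key depends on which setting is being changed; `default_output_uri` and the bucket value below are made-up placeholders, not the setting discussed here), a clearml.conf override typically lives under the `sdk` section:

```
# ~/clearml.conf -- illustrative fragment only
sdk {
    development {
        # hypothetical example key/value
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```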
And anyway, once a model is published you can’t update it, right? Which means there will be at least multiple published model entries for the same model over time?
Beyond this, I have the UI running and have to start playing with it. Any suggestions for agents with k8s?
Thanks! Is there GPU support? It's not clear from the Readme AgitatedDove14
Only one param, just playing around
Do let me know when task scheduler docs are up 🙂
tasks.add_or_update_artifacts/v2.10 (Invalid task status: expected=created, status=completed)
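That error suggests the backend only accepts artifact updates while the task is still in an editable state; a tiny sketch of the implied check (the status names mirror the error message, the function and the exact status set are assumptions, not an official list):

```python
# Statuses in which the server appears willing to accept artifact
# updates (inferred from the error message above, not documented).
ARTIFACT_EDITABLE_STATUSES = {"created", "in_progress"}

def can_add_or_update_artifacts(task_status):
    """Return True if artifacts may still be modified for this status."""
    return task_status in ARTIFACT_EDITABLE_STATUSES
```

In practice this means artifact uploads have to happen before the task finishes; once the task status is `completed`, the call is rejected.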
More like testing, especially before a pipeline
Yeah, I was trying it locally and it worked as expected. But locally I was creating a Task first and then checking whether it could get the project name from it
AgitatedDove14 the AWS autoscaler is not k8s-native, right? That's roughly the point I'm getting at.
Essentially, the image to run is empty
Currently we train from Sagemaker notebooks, push models to S3 and create containers for model serving
This is my code, but it’s pretty standard