
I don’t want to though. Will run it as part of a pipeline
Would this be a good use case to have?
BTW AgitatedDove14 - that 1.0.0 is the helm chart version, not necessarily the version of the app the chart deploys
AgitatedDove14 - any pointers on how to run GPU tasks with the k8s glue? How do I control the queue and differentiate tasks that need CPU vs. GPU in this context?
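One pattern I've seen suggested (an assumption on my part, not an official recipe) is to run one k8s glue instance per queue - e.g. a `cpu_queue` served with a plain pod template and a `gpu_queue` served with a pod template that requests `nvidia.com/gpu` - and then route each task to the right queue when enqueuing. A minimal routing sketch, with hypothetical queue names:

```python
def pick_queue(needs_gpu: bool,
               gpu_queue: str = "gpu_queue",
               cpu_queue: str = "cpu_queue") -> str:
    """Choose the execution queue for a task based on its resource needs.

    The queue names are hypothetical - they just have to match the queues
    your glue instances are serving.
    """
    return gpu_queue if needs_gpu else cpu_queue
```

With clearml, `Task.enqueue(task, queue_name=pick_queue(needs_gpu=True))` should then send the task to the GPU-serving glue.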
AgitatedDove14 is it possible to get the pipeline task that is running a step, from within the step? Is task.parent something that could help?
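If `task.parent` does hold the parent task's ID (which is my assumption here), resolving it back to the pipeline task might look like the sketch below. The resolver is injected so the logic is testable without a server; with clearml you would pass something like `lambda tid: Task.get_task(task_id=tid)`:

```python
def get_parent_task(task, resolve):
    """Return the parent (e.g. pipeline controller) task of `task`, or None.

    `task` is assumed to expose a `.parent` attribute holding the parent
    task's ID, and `resolve` maps an ID to a task object.
    """
    parent_id = getattr(task, "parent", None)
    if not parent_id:
        return None
    return resolve(parent_id)
```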
Also going off this 🙂
The GCP image and Helm chart for ClearML Server may be slightly delayed, for purely man-power reasons.
I can contribute as well as needed
I would like to create a notebook instance and start using it without having to do anything on a dev box
This worked well:
if project_name is None and Task.current_task() is not None:
    project_name = Task.current_task().get_project_name()
(I need this because I refer to datasets in the same project but without specifying the project name)
I only see published getting preference, not a way to filter only to published
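If server-side filtering is supported in your version, something like `Task.get_tasks(project_name=..., task_filter={'status': ['published']})` may do it (hedged - check your clearml version); failing that, a client-side filter works:

```python
def only_published(tasks):
    """Keep only published items.

    Assumes each object exposes a `.status` attribute the way clearml
    task objects do; anything without one is dropped.
    """
    return [t for t in tasks if getattr(t, "status", None) == "published"]
```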
Is there a good reference to get started with k8s glue?
Thanks, that works. Had to use Task.completed() for my version
How is clearml-session intended to be used?
AgitatedDove14 - I had not used the autoscaler since it asks for an access key. Mainly looking for GPU use cases - with SageMaker one can choose any instance they want and use it; the autoscaler would need a set instance configured, right? Need to revisit. Also, I want to use the k8s glue if not for this. Suggestions?
I am essentially creating an EphemeralDataset abstraction and creating a controlled lifecycle for it, such that the data is removed after a day in experiments. Additionally and optionally, data created during a step in a pipeline can be cleared once the pipeline completes
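A minimal sketch of what that lifecycle wrapper could look like - this class and its API are hypothetical, not part of clearml. The actual deletion is a caller-supplied callable, which with clearml could delegate to the Dataset deletion API (assumption); the clock is injectable so the TTL logic is testable without a server:

```python
import time


class EphemeralDataset:
    """Dataset wrapper with a controlled lifecycle: the data is considered
    expired after `ttl_seconds` (a day by default), at which point a
    caller-supplied `delete` callable does the actual cleanup."""

    def __init__(self, dataset_id, delete, ttl_seconds=24 * 60 * 60, now=time.time):
        self.dataset_id = dataset_id
        self._delete = delete          # e.g. something that calls clearml's dataset deletion
        self._now = now
        self.created_at = now()
        self.ttl_seconds = ttl_seconds
        self.deleted = False

    def is_expired(self):
        return self._now() - self.created_at >= self.ttl_seconds

    def cleanup_if_expired(self):
        """Delete the underlying data once the TTL has passed.

        Returns True if the data has been deleted (now or earlier)."""
        if not self.deleted and self.is_expired():
            self._delete(self.dataset_id)
            self.deleted = True
        return self.deleted
```

For the pipeline case, a post-completion hook could call `cleanup_if_expired()` on a zero-TTL instance to clear step data as soon as the pipeline finishes.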
As the verify param was deprecated and is now removed
So packages have to be installed and not just be mentioned in requirements / imported?
But ok the summary is I guess it doesn’t work in a k8s env
If I publish a keras_mnist model and experiment on it, each gets pushed as a separate Model entity, right? But there’s only one unique model, with multiple different versions of it
Isn't clearml local storage as well if needed?
Ah ok. Kind of getting it, will have to try the glue mode
Any specific use case for the required “draft” mode?
Nothing, except that Draft makes sense - it feels like the task is being prepped - while Aborted feels like something went wrong
Without some sort of automation on top, it feels a bit fragile