Yeah please, if you can share some general active ones to discuss both the algo and the engineering side
Essentially - 1. run a task normally, 2. clone it, 3. edit the clone to have only those two lines.
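A minimal sketch of that flow with the ClearML SDK (the task ID and queue name are placeholders, and the edit itself would be done in the UI or via the task's setters before enqueueing):

from clearml import Task

# 1. a task that already ran normally, fetched by its ID (placeholder)
source_task = Task.get_task(task_id="<source-task-id>")

# 2. clone it
cloned = Task.clone(source_task=source_task, name="trimmed copy")

# 3. after editing the clone to keep only those two lines, enqueue it
Task.enqueue(cloned, queue_name="default")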
Question - since this is a task, why is Task.current_task() None?
The Optimizer task is taking a lot of time to complete. Is it doing something here:
You mean the job with the exact same arguments?
Yes
BTW, when I started using S3, I was thinking I needed to specify output_uri for each task. Soon realized that you just need the prefix where you want to put it, and ClearML will take care of appending the project etc. to the path. So for most use cases, a single output URI set in the conf should work.
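For example (a sketch; project, task and bucket names are placeholders), a single prefix is enough and ClearML appends the project/task path under it:

from clearml import Task

task = Task.init(
    project_name="my_project",
    task_name="my_experiment",
    output_uri="s3://my-bucket/clearml",  # just the prefix; project/task are appended
)

# equivalently, set sdk.development.default_output_uri once in clearml.conf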
A lot of us are averse to using the git repo directly
This worked well:
if project_name is None and Task.current_task() is not None:
    project_name = Task.current_task().get_project_name()
AgitatedDove14 - I had not used the autoscaler since it asks for an access key. Mainly looking at GPU use cases - with SageMaker one can choose any instance they want and use it, whereas the autoscaler would need the instance types configured up front, right? Need to revisit. Also, I want to use the k8s glue if not for this. Suggestions?
Is there a good way to get the project of a task?
How can a task running like this know its own project name?
This is for building my model package for inference
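For reference, a short sketch of both cases (the task ID is a placeholder): the running task can ask for its own project, and any other task can be fetched by ID first.

from clearml import Task

# from inside the running task
project = Task.current_task().get_project_name()

# for an arbitrary task, fetched by its ID
project = Task.get_task(task_id="<task-id>").get_project_name()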
I generally like the Kedro project and pipeline setup that I have seen so far, but haven't started using it in anger yet. I've been looking at ClearML as well, so I wanted to check how well these two work together
AgitatedDove14 - on a similar note, using this, is it possible to add to a task's requirements with task_overrides?
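Something along these lines might work (a sketch, not verified; it assumes the requirements can be reached through the same dotted-path syntax that task_overrides uses for other script fields):

from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="my_project", version="1.0.0")

pipe.add_step(
    name="train",
    base_task_project="my_project",       # placeholder
    base_task_name="base training task",  # placeholder
    # dotted-path overrides applied to the cloned task; the requirements key
    # below is an assumption, not a documented guarantee
    task_overrides={"script.requirements.pip": "pandas==1.5.3\nscikit-learn"},
)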
So the default would have created a General section instead of Args?
AgitatedDove14 - the AWS autoscaler is not k8s native, right? That's loosely the point I am getting at.
I deploy to Kubernetes and have an ingress and an ALB on top that terminates SSL
As in, if there are jobs, the first level is new pods and the second level is new nodes in the cluster.
If there's a post-task script, I can add a way to zip and upload the pip cache etc. to S3 - as in, do any caching that I want without having first-class support in ClearML
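As a rough sketch of what such a post-task script could look like (a hypothetical hook; bucket, key and cache path are placeholders, and boto3 is assumed to be available on the agent):

import os
import shutil

import boto3

# zip the local pip cache (the path may differ per OS / agent image)
archive = shutil.make_archive("/tmp/pip-cache", "zip", os.path.expanduser("~/.cache/pip"))

# upload it to S3 so a later run can restore it
boto3.client("s3").upload_file(archive, "my-cache-bucket", "agent-caches/pip-cache.zip")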
AgitatedDove14 - I mean this - it says name=None but the text says the default is General.
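A short sketch of the behaviour as I read it: with the default name=None the values land under a "General" section, and an "Args" section only appears if you name it that (or it comes from the argparse auto-connection).

from clearml import Task

task = Task.init(project_name="my_project", task_name="connect-demo")  # placeholders

params = {"lr": 0.001, "batch_size": 32}

task.connect(params)                   # default name=None -> "General" section
# task.connect(params, name="Args")    # explicit name -> would go under "Args" instead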
Sounds great!
Are nested projects in the release? I see them in the community server, but there's no mention in the blog or the release notes.
AgitatedDove14 - the actual replication failed. When we run a task by cloning and enqueueing, there is a current task even if I have yet to call Task.init, right?
Which would also mean that the system knows which datasets are used in which pipelines, etc.
Yeah, I was trying it locally and it worked as expected. But locally I was creating a Task first and then seeing if it's able to get the project name from it
Interesting. How do you do PVC? By using the YAML template option?
I am going to be experimenting a bit as well, will get back on this topic in a couple of weeks 🙂