
I didn't get the point, if you need to change it, you can just override the image name and tag via Helm
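something like this, roughly (release name, chart ref and value keys here are placeholders, check the chart's values.yaml for the real ones):

helm upgrade clearml clearml/clearml \
  --set someComponent.image.repository=myrepo/myimage \
  --set someComponent.image.tag=1.2.3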
basically a new helm chart 🙂
@HomelyRabbit25 I see your comment on that PR; if there's no further feedback, I will try to eventually improve that PR myself when possible.
kubectl get svc -n clearml
?
sorry, but did you deploy it in the clearml namespace?
thanks for letting us know, I took a note to run more tests on liveness, ty again!
(probably it's not possible)
would an implementation of this kind be interesting for you, or do you suggest forking? I mean, I don't want to impact your review time
It's in values.yaml but yes, I need to improve this part, I agree
In my case I have a similar need; I wrote a never-ending Task similar to this one used for cleanup: https://github.com/allegroai/clearml/blob/master/examples/services/cleanup/cleanup_service.py
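for reference, a minimal sketch of such a never-ending service Task (my own, not the linked example; project/task names are placeholders):

import time
from clearml import Task

# register the service as a ClearML Task so it shows up in the UI
task = Task.init(project_name="DevOps", task_name="recurring-service")

while True:
    # do the periodic work here (cleanup, sync, etc.)
    print("service tick")
    time.sleep(60 * 60)  # wake up once an hour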
at that point we define a queue and the agents will take care of training 🙂
uh, using clearml-task
params 🙂
how do you point tasks to git repo?
I think yes, at least this is what I saw in the docs
clearml --help says: "--version  Display the clearml-task utility version"
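to point a task at a git repo and pass params, it's something like this (repo URL, script and args are placeholders; check clearml-task --help for the full flag list):

clearml-task --project my-project --name my-task \
  --repo https://github.com/me/my-repo.git --branch main \
  --script train.py \
  --queue default \
  --args batch_size=64 lr=0.001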
ty AgitatedDove14, your fixes work like a charm. As a reward I opened another one https://github.com/allegroai/clearml/issues/423 sorry for that 🙂
Sure, OddShrimp85, unless you need to specifically bind a pod to a node, nodeSelector is not needed. In fact, the new chart leaves it to k8s to share the load across the worker nodes. About the pvc, you simply need to declare the StorageClass at the k8s level so it can take care of creating the PVC too. How many workers do you have in your setup?
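e.g. a sketch (name, class and size are placeholders): once a StorageClass exists in the cluster, a claim like this gets its volume provisioned automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-data          # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard  # your cluster's StorageClass
  resources:
    requests:
      storage: 50Gi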
this is the PR: https://github.com/allegroai/clearml-helm-charts/pull/80 , I will merge it soon so agent chart 1.0.1 will be released
I think we can find a solution pretty quickly after some checks. Can you pls open an issue on the new helm chart repo so I can take care of it in the coming days?
if you already have data over there you may import it
if the mounts are already there on every node, you can also mount them directly on the nodes under a specific folder and then use the Rancher local-path provisioner
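if I remember correctly, installing it is a one-liner (check the repo's README for the current manifest path):

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml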
btw in k8s we abandoned the usage of services since it's not needed anymore. you can put an agent consuming a queue and enqueue tasks to it
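e.g., roughly (the queue name here is just an example):

clearml-agent daemon --queue default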
later in the day I will also push a new clearml chart that will no longer contain the k8s glue since it's now in the clearml-agent chart; this is why I was suggesting to use that chart :)
the goal is to get the healthchecks green so the ALB should be able to work
In fact it's the same approach we are applying to the helm charts for k8s