No. If you need the cloud-ready install (which you do), follow the instructions in the repo README (not the easy single-node setup in the docs, which we will be updating soon):
https://github.com/allegroai/clearml-server-helm-cloud-ready
Hi TrickySheep9
You should probably check the new https://github.com/allegroai/clearml-server-helm-cloud-ready helm chart 😉
one last tiny thing TrickySheep9 .. please do let us know how you get on, good or bad.. and if you bump into anything unexpected then please do scream and let us know 🙂
The up-to-date instructions are in the repo README, whereas the docs are at https://allegroai.github.io/clearml-server-helm/
Beyond this, I have the UI running and have to start playing with it. Any suggestions for agents with k8s?
agentservice...
Not related; the agent-services job is to run control jobs, such as pipelines and HPO control processes.
Thanks! Is there GPU support? It's not clear from the README, AgitatedDove14
All right, got it, will try it out. Thanks for the quick response.
Yes, TrickySheep9, use the k8s glue from here:
https://github.com/allegroai/clearml-agent/blob/master/examples/k8s_glue_example.py
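For context, the glue creates a pod per enqueued Task from a base template you supply. A minimal sketch of what such a template can look like (the namespace and resource values here are illustrative assumptions; check the example script's arguments for how the template is actually passed in):
```yaml
# sketch: base pod template consumed by the k8s glue (values are assumptions)
apiVersion: v1
metadata:
  namespace: clearml        # hypothetical namespace for the Task pods
spec:
  containers:
    - resources:
        limits:
          cpu: "2"          # per-Task CPU cap, illustrative
          memory: 4Gi       # per-Task memory cap, illustrative
```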
AgitatedDove14 - are these instructions out of date? https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_kubernetes_helm.html
The helm chart installs an agentservice; how is that related, if at all?
AlertBlackbird30 - got it running. A few comments:
1. NodePort is set by default despite being a parameter in values.yml (see the override sketch after this list). For example:
```yaml
webserver:
  extraEnvs: []
  service:
    type: NodePort
    port: 80
```
2. Ingress was using 8080 for the webserver, but the service was on 80
3. Had to change the path in the ingress to "/*" instead of "/" to get it working for me
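For anyone else hitting points 1 and 2, here is a minimal sketch of the kind of values override that would address them; the webserver keys are taken from the snippet above, while switching to ClusterIP (so the ingress fronts the service) is an assumption about the intended setup:
```yaml
# sketch: values override for the NodePort / port-mismatch issues above
webserver:
  extraEnvs: []
  service:
    type: ClusterIP   # override the NodePort default when using an ingress
    port: 80          # keep this aligned with the port the ingress targets
```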
> is there GPU support
That basically depends on your template YAML resources. You can have multiple of those, each one "connected" to a different glue pulling from a different queue. This way the user can enqueue a Task in a specific queue, say single_gpu; the glue listens on that queue and, for each ClearML Task, creates a k8s job with the single GPU as specified in the pod template YAML.
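So for the single_gpu example, the pod template used by that glue instance would simply request one GPU, along these lines (a sketch; the nvidia.com/gpu resource assumes the NVIDIA device plugin is deployed on the cluster):
```yaml
# sketch: pod template for the glue instance serving the single_gpu queue
apiVersion: v1
metadata:
  namespace: clearml          # assumption - wherever your Task pods run
spec:
  containers:
    - resources:
        limits:
          nvidia.com/gpu: 1   # one GPU per Task pod
```
Each Task pulled from single_gpu then gets scheduled with one GPU, while other queues can map to glue instances with different templates and resources.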