Answered
ClearML Server deployment uses node storage. If more than one node is labeled as app=clearml, and you redeploy or update later, then ClearML Server may not locate all your data.

ClearML Server deployment uses node storage. If more than one node is labeled as app=clearml, and you redeploy or update later, then ClearML Server may not locate all your data.
https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_kubernetes.html

Does this mean this is not really production-ready? What happens if the node dies and comes back?
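
(For reference: a minimal sketch of what "node storage" pinned to an app=clearml node can look like in practice. Everything below is illustrative; the names, image, and host path are assumptions, not taken from the ClearML chart.)

```yaml
# Illustrative only -- not the actual chart templates.
apiVersion: v1
kind: Pod
metadata:
  name: clearml-mongo-example        # hypothetical name
spec:
  nodeSelector:
    app: clearml                     # the label the docs warn about
  containers:
    - name: mongo
      image: mongo:4.4               # assumed image
      volumeMounts:
        - name: data
          mountPath: /data/db
  volumes:
    - name: data
      hostPath:
        path: /opt/clearml/data/mongo   # assumed host path; data stays on this node's disk
```

With hostPath storage the data only exists on the node the pod last ran on, so if more than one node carries the app=clearml label, a redeploy can land the pod on a node whose disk is empty.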

  
  
Posted 2 years ago

21 Answers


in the repo whereas the docs are https://allegroai.github.io/clearml-server-helm/

  
  
Posted 2 years ago

basically PVC for all the DBs 🙂
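
(To make "PVC for all the DBs" concrete, here is a hedged sketch of a PersistentVolumeClaim for one of the databases; the claim name, storage class, and size are assumptions, not values from the cloud-ready chart.)

```yaml
# Illustrative only -- not copied from clearml-server-helm-cloud-ready.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clearml-elasticsearch-data   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2              # assumed cloud storage class
  resources:
    requests:
      storage: 50Gi                  # assumed size
```

Because the data lives on cloud volumes rather than on a labeled node's disk, this is what addresses the node-storage concern from the question.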

  
  
Posted 2 years ago

The repo seems to be

  
  
Posted 2 years ago

Beyond this, I have the UI running and have to start playing with it. Any suggestions for agents with k8s?

  
  
Posted 2 years ago

agentservice...

Not related; the agent-services job is to run control jobs, such as pipelines and HPO control processes.

  
  
Posted 2 years ago

The helm chart installs an agentservice; how is that related, if at all?

  
  
Posted 2 years ago

Sure, will do AlertBlackbird30

  
  
Posted 2 years ago

Sure thing

  
  
Posted 2 years ago

AlertBlackbird30 - got it running. A few comments:

1. NodePort is set by default despite being a parameter in values.yml (see the override sketch after this list). For example:
```
webserver:
  extraEnvs: []

  service:
    type: NodePort
    port: 80
```
2. The Ingress was using 8080 for the webserver but the service was on 80
3. Had to change the path in the Ingress to "/*" instead of "/" to get it working for me
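
(A hedged sketch of the kind of values override item 1 points at; the key layout is assumed to match the snippet above rather than verified against the chart.)

```yaml
# my-values.yaml -- illustrative override, assuming the chart reads
# webserver.service.type / webserver.service.port as shown above.
webserver:
  service:
    type: ClusterIP   # e.g. switch off the NodePort default when fronted by an Ingress
    port: 80
```

Passing this with -f my-values.yaml on helm install/upgrade keeps the change out of the chart's own values.yml.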

  
  
Posted 2 years ago

No, if you need the cloud-ready install (which you do), follow the instructions in the repo readme (not the easy single-node setup in the docs, which we will be updating soon)
https://github.com/allegroai/clearml-server-helm-cloud-ready

  
  
Posted 2 years ago

One last tiny thing, TrickySheep9... please do let us know how you get on, good or bad, and if you bump into anything unexpected then please do scream and let us know 🙂

  
  
Posted 2 years ago

Thanks!

  
  
Posted 2 years ago

Thanks! Is there GPU support? It's not clear from the readme, AgitatedDove14

  
  
Posted 2 years ago

Wait, let me double check

  
  
Posted 2 years ago

TrickySheep9 make sense?

  
  
Posted 2 years ago

No 😞

  
  
Posted 2 years ago

All right, got it, will try it out. Thanks for the quick response.

  
  
Posted 2 years ago

AgitatedDove14 - these instructions are out of date? https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_kubernetes_helm.html

  
  
Posted 2 years ago

is there GPU support

That basically depends on your template YAML resources; you can have multiple of those, each one "connected" to a different glue pulling from a different queue. This way the user can enqueue a Task in a specific queue, say single_gpu, then the glue listens on that queue and, for each ClearML Task, creates a k8s job with the single GPU as specified in the pod template YAML.
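
(For illustration, a minimal pod template of the sort described above, requesting a single GPU for Tasks pulled from a single_gpu queue. The file name and layout are assumptions; the agent/glue fills in the image and command, and the cluster needs the NVIDIA device plugin for nvidia.com/gpu to be schedulable.)

```yaml
# single-gpu-template.yaml -- hypothetical template fragment paired with the
# single_gpu queue; not a standalone manifest, the glue supplies the rest.
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: clearml-task           # placeholder; image and command come from the Task
      resources:
        limits:
          nvidia.com/gpu: 1        # one GPU per Task from this queue
```

A second template without the GPU limit could then back a CPU-only queue, matching the one-queue-per-template pattern described above.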

  
  
Posted 2 years ago