Answered
Is there a way to deploy it without using Kind? It seems that Kind takes over the control-plane configuration in a way that I can't see the other pods or nodes that keep running.


  
  
Posted 2 years ago

Answers 36


(image attachment)

  
  
Posted 2 years ago

(image attachment)

  
  
Posted 2 years ago

elastic is not being scheduled

  
  
Posted 2 years ago

you need to investigate why it’s still in Pending state

  
  
Posted 2 years ago
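As a sketch of that investigation (the pod name is a placeholder; adjust the namespace to wherever the chart was installed):

```shell
# Describe the Pending pod; the Events section at the bottom
# usually names the blocker
kubectl describe pod <elastic-pod-name> -n clearml

# Or look at recent events for the whole namespace
kubectl get events -n clearml --sort-by=.metadata.creationTimestamp
```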

Seems like I didn't define a persistent volume

  
  
Posted 2 years ago

Client Version: v1.25.3
Kustomize Version: v4.5.7
Server Version: v1.25.3

  
  
Posted 2 years ago

Some suggestions:
- start working just with ClearML (no agent or serving; these will go in after ClearML is working)
- try a first deploy without any override
- if it works, start adding values to the override file (don't report everything or it will be very difficult to debug; the override file should not contain anything that is not actually overridden)
- do helm upgrade
- check problems one by one

  
  
Posted 2 years ago
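A minimal sketch of that workflow, assuming the allegroai-clearml Helm repo is already added and `override.yaml` is your values file (both names are taken from this thread, not verified):

```shell
# First deploy with chart defaults only
helm install clearml allegroai-clearml/clearml -n clearml --create-namespace

# Later, re-apply with only the values you actually override
helm upgrade clearml allegroai-clearml/clearml -n clearml -f override.yaml
```

Keeping the override file minimal means `helm upgrade` diffs stay readable when something breaks.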

NAME              NAMESPACE  REVISION  UPDATED                                  STATUS    CHART          APP VERSION
chart-1669648334  clearml    1         2022-11-28 17:12:14.781754603 +0200 IST  deployed  clearml-4.3.0  1.7.0

  
  
Posted 2 years ago

(image attachment)

  
  
Posted 2 years ago

can you pls put the entire helm list -A output command?

  
  
Posted 2 years ago

(image attachment)

  
  
Posted 2 years ago

you need a dynamic capable provisioner

  
  
Posted 2 years ago
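On bare metal, one commonly used option is Rancher's local-path-provisioner, which satisfies PVCs from a host directory (manifest URL as published by the project; verify it before use):

```shell
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

# A "local-path" StorageClass should now be listed
kubectl get storageclass
```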

this is the problem the elastic pod shows:
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  65s (x2 over 67s)  default-scheduler  0/6 nodes are available: 6 pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.

  
  
Posted 2 years ago
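"Unbound immediate PersistentVolumeClaims" means the claims have no volume to bind to, so the scheduler refuses the pod. A quick way to confirm (namespace assumed to be `clearml`):

```shell
# Claims stuck in Pending have no matching PV and no working provisioner
kubectl get pvc -n clearml

# Check which StorageClasses exist and whether one is marked (default)
kubectl get storageclass

# The claim's own events say why binding failed
kubectl describe pvc <claim-name> -n clearml
```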

Same, when I'm installing directly: sudo helm install clearmlai-1 allegroai-clearml/clearml -n clearml

  
  
Posted 2 years ago

clearml-4.3.0

  
  
Posted 2 years ago

Hi, I tried a new installation using the downloaded files, just performing: helm install clearml . -n clearml
without changing anything.
The problem persists.

  
  
Posted 2 years ago

I know what a StorageClass is, but I don't think that this is the problem. I do have one (standard); it seems that the PV claim is not picking it up.

  
  
Posted 2 years ago

(image attachment)

  
  
Posted 2 years ago

So it's the missing volume. Do you have a recommendation for how I solve this?

  
  
Posted 2 years ago

Also, the k8s distribution and version you are using would be useful.

  
  
Posted 2 years ago

(image attachment)

  
  
Posted 2 years ago

Can you also show the output of kubectl get po for the namespace where you installed clearml?

  
  
Posted 2 years ago

oh now I understand

  
  
Posted 2 years ago

I want the storage to be on NFS eventually; the NFS is mounted to a local path on all the nodes (/data/nfs-ssd)

  
  
Posted 2 years ago
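Since the NFS export is already mounted everywhere, a common route is a dynamic NFS provisioner such as kubernetes-sigs' nfs-subdir-external-provisioner. A hedged sketch (the server address is a placeholder; the export path is taken from this thread):

```shell
helm repo add nfs-subdir-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

helm install nfs-provisioner \
  nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --set nfs.server=<nfs-server-ip> \
  --set nfs.path=/data/nfs-ssd
```

It creates a StorageClass that carves a subdirectory out of the export for each PVC, so the chart's claims bind without any hand-made PVs.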

I set it as default; the results are the same.

  
  
Posted 2 years ago
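For reference, marking a StorageClass as default is just an annotation (the class name `standard` is taken from this thread). Note this only helps if the class can actually provision volumes dynamically, which would explain why the results were the same:

```shell
kubectl patch storageclass standard -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```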

With that in place, k8s should be able to provision PVCs.

  
  
Posted 2 years ago

Or maybe should I create my own PV?

  
  
Posted 2 years ago
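Creating PVs by hand does work as a stopgap. A hedged sketch of a static NFS-backed PV (all names, sizes, and the server address are illustrative; the export path comes from this thread):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-pv            # illustrative name
spec:
  capacity:
    storage: 50Gi             # must cover the chart's PVC request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard  # must match the PVC's storageClassName
  nfs:
    server: <nfs-server-ip>   # placeholder
    path: /data/nfs-ssd/elastic
```

The downside is that you need one PV per claim the chart creates, which is why a dynamic provisioner is usually the less painful option.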

I'm running on a bare-metal cluster, if that matters.

  
  
Posted 2 years ago

is there a mount i need to add?

  
  
Posted 2 years ago
74K Views