ConvolutedRaven86
Moderator
1 Question, 3 Answers
  Active since 19 July 2025
  Last activity one month ago

Reputation: 0
Badges: 1
3 × Eureka!
0 Votes
4 Answers
265 Views
šŸ‘‹ Hi everyone! We're facing an issue where ClearML workloads run successfully on our Kubernetes cluster (community edition), but never utilize the GPU — despite being scheduled on...
one month ago
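For context, the kind of check usually used to confirm whether such a workload actually sees the GPU looks like the sketch below; these are generic commands for illustration (assuming a PyTorch-based script and the NVIDIA device plugin), not commands taken from the thread:

# Inside the running pod / on the worker node: does the driver see a GPU?
nvidia-smi

# Does the framework see it? (assumes the workload uses PyTorch)
python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"

# On the cluster: is the GPU advertised as a schedulable resource on the node?
kubectl describe node <gpu-node> | grep -i "nvidia.com/gpu"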

Hey @<1523701070390366208:profile|CostlyOstrich36>, thanks for the suggestion!
Yes, I did manually run the same code on the worker node (e.g., using python3 llm_deployment.py), and it successfully utilized the GPU as expected.
What I’m observing is that when I deploy the workload directly on the worker node like that, everything works fine — the task picks up the GPU, logs stream back properly, and execution behaves normally.
However, when I submit the same code using clearml-task f...
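A submission along these lines is presumably what "using clearml-task" refers to; the project and queue names below are placeholders rather than the poster's actual setup, and the GPU note reflects standard Kubernetes behavior, not anything specific to this cluster:

# Sketch only: enqueue the same script through the clearml-task CLI onto a GPU queue
# (placeholder project/queue names)
clearml-task --project llm-demo --name llm-deployment \
    --script llm_deployment.py --queue gpu-queue

# Note: pods spawned for that queue typically only get GPU access if the pod
# template used by the agent requests one via the standard Kubernetes field, e.g.
#   resources:
#     limits:
#       nvidia.com/gpu: 1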

one month ago