IdealCamel90 (Moderator)
1 Question, 6 Answers
Active since 15 July 2025 · Last activity 29 days ago
Reputation: 0
Badges: 1 (3 × Eureka!)
0 Votes · 7 Answers · 388 Views
Hi, after today's update clear.ml asks to restart the autoscaler and it's still pending. What's wrong? Does somebody have the same issue?
one month ago
0 · Hello everyone, we're encountering a persistent issue with our autoscaler setup and could really use some help. Despite having the autoscaler running and the queue (default_cpu) properly populated (87 jobs pending), the tasks are never picked up and exe…

It looks like the task.execute_remotely() method is somehow broken. Previously, when I used it, the task would run in the queue with the same parameters I set locally. But now the parameters are not being passed correctly, and I end up with two tasks: the one I launched locally (which ends up running remotely), and a second one without any parameters at all.
It's strange; how could the agent update have affected this?
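For reference, this is the pattern I'm talking about (a minimal sketch; the project name, queue name, and parameter values below are placeholders, not my actual setup):

from clearml import Task

# Minimal sketch of the usual execute_remotely() flow; names and values
# here are placeholders.
task = Task.init(project_name="examples", task_name="remote run")

# Parameters connected locally; these used to show up unchanged on the
# remotely executed task.
params = task.connect({"batch_size": 32, "learning_rate": 0.001})

# Stop the local run and enqueue this task for an agent to pick up and
# re-execute with the connected parameters.
task.execute_remotely(queue_name="default_cpu", exit_process=True)

Before the update this produced a single task in the queue carrying the connected parameters; now I get the duplicated, parameter-less task described above.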

one month ago
0 · Hello everyone, we're encountering a persistent issue with our autoscaler setup and could really use some help. Despite having the autoscaler running and the queue (default_cpu) properly populated (87 jobs pending), the tasks are never picked up and exe…

Hi,
It looks like the same issue is happening. It seems to be caused by the recent update of the clearml-agent package to version 2.0.0.
When I start the queue locally, the agent appears in the list but doesn't pick up any tasks. On the agent side, I get the following error:

FATAL ERROR:
Traceback (most recent call last):
  File "***.venv/lib/python3.12/site-packages/clearml_agent/commands/worker.py", line 2128, in daemon
    self.run_tasks_loop(
  File "***.venv/lib/python3.12/site-pa...

one month ago
0 · Hi, after today's update clear.ml asks to restart the autoscaler and it's still pending. What's wrong? Does somebody have the same issue?

No, I have only one autoscaler and a few queues. Maybe it was a queue on the clear.ml side to restart autoscalers?

29 days ago