Answered
Does ClearML somehow remove any loggers from the logging module?

Does ClearML somehow remove any loggers from the logging module? We suddenly noticed that some handlers are missing when running in ClearML.

  
  
Posted one year ago

Answers 18


Internally yes, but in Task.init the default argument is a boolean, not an int.
We don't want to close the task; rather, we have a remote task that spawns more tasks. With this change, subsequent calls to Task.init fail because execution enters the deferred-init clause and fails on validate_defaults.

  
  
Posted one year ago

UnevenDolphin73 looking at the code again, I think it is actually correct. It's a bit hackish, but we do use deferred_init as an int internally. Why do you need to close the task exactly? Do you have a script that would highlight the behaviour change between <1.8.1 and >=1.8.1?

  
  
Posted one year ago

So now we need to pass Task.init(deferred_init=0), because the default Task.init(deferred_init=False) is wrong.

  
  
Posted one year ago

True, and we plan to migrate to pipelines once we have some time for it :) But anyway, that condition is flawed, I believe.

  
  
Posted one year ago

Hi UnevenDolphin73

"Does ClearML somehow remove any loggers from the logging module? We suddenly noticed that we have some handlers missing when running in ClearML"

I believe it adds a logger; it should not remove any loggers.
What's the clearml version you are using?
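To check what is actually attached, a quick stdlib-only diagnostic can list the handlers on any logger (this is generic Python logging, nothing ClearML-specific; the helper name is just for illustration):

```python
import logging

def describe_handlers(logger=None):
    # List the class names of the handlers attached to a logger
    # (root logger by default).
    logger = logger or logging.getLogger()
    return [type(h).__name__ for h in logger.handlers]

# basicConfig attaches a StreamHandler to the root logger if it has none yet
logging.basicConfig()
print(describe_handlers())
```

Running this before and after the suspect code makes it easy to see which handlers disappear.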

  
  
Posted one year ago

"We suddenly have a need to set up our logging after every task.close()"

Hmm, that gives me a handle on things. Any chance it is easily reproducible?

  
  
Posted one year ago

1.8.3; what about when calling task.close()? We suddenly have a need to set up our logging after every task.close() call.

  
  
Posted one year ago

Hi UnevenDolphin73 ! We were able to reproduce the issue. We'll ping you once we have a fix as well 👍

  
  
Posted one year ago

I narrowed the bug down to the "fix" in 1.8.1; see my other post.

  
  
Posted one year ago

UnevenDolphin73 did that fix the logging for you? It doesn't seem to work on my machine. This is what I'm running:

```python
from clearml import Task
import logging


def setup_logging():
    level = logging.DEBUG
    logging_format = "[%(levelname)s] %(asctime)s - %(message)s"
    logging.basicConfig(level=level, format=logging_format)


t = Task.init()
setup_logging()
logging.info("HELLO!")
t.close()
logging.info("HELLO2!")
```

  
  
Posted one year ago

UnevenDolphin73 it looks like we clear all loggers when a task is closed, not just the ClearML ones. This is the problem.
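Until a fix lands, one stdlib-only workaround is to snapshot a logger's handlers before closing the task and re-attach them afterwards. This is a sketch using plain logging (the helper names are hypothetical, and the clearing step below only simulates what task.close() reportedly does):

```python
import logging

def snapshot_handlers(logger):
    # Copy the current handler list so it can be restored later.
    return list(logger.handlers)

def restore_handlers(logger, saved):
    # Re-attach any handlers that were removed (e.g. by task.close()).
    for handler in saved:
        if handler not in logger.handlers:
            logger.addHandler(handler)

# Simulate the reported behaviour on a demo logger: handlers get cleared.
demo = logging.getLogger("demo")
demo.addHandler(logging.StreamHandler())
saved = snapshot_handlers(demo)
demo.handlers.clear()           # stand-in for whatever task.close() removes
restore_handlers(demo, saved)
print([type(h).__name__ for h in demo.handlers])  # the handler is back
```

In real code you would snapshot right before t.close() and restore right after, instead of rebuilding the logging config from scratch each time.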

  
  
Posted one year ago

SmugDolphin23 I think you can simply change `not (type(deferred_init) == int and deferred_init == 0)` to `deferred_init is True`?
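For reference, the problem with that condition can be shown with a standalone sketch (a hypothetical copy of the check, not the actual clearml source). In Python, bool is a subclass of int, but type(False) == int is still False, so the boolean default never matches the "skip deferred" branch:

```python
def enters_deferred_path(deferred_init):
    # Hypothetical standalone copy of the flawed check discussed above:
    # only a literal int 0 opts out of the deferred-init path.
    return not (type(deferred_init) == int and deferred_init == 0)

# type(False) == int is False (False is a bool, a subclass of int),
# so the boolean default wrongly enters the deferred path:
print(enters_deferred_path(False))  # True  -- the bug
print(enters_deferred_path(0))      # False -- the deferred_init=0 workaround
print(enters_deferred_path(True))   # True  -- intended deferred init
```

The suggested `deferred_init is True` form sidesteps the type comparison entirely, so both False and 0 would skip the deferred path.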

  
  
Posted one year ago

Oh, well, no, but for us that would be a one-way solution (we didn't need to close the task before that update).

  
  
Posted one year ago

I see. We need to fix both anyway, so we will just do that

  
  
Posted one year ago

ClearML does not officially support a remotely executed task spawning more tasks; we do that through pipelines, if that helps you somehow. Note that doing things the way you do them right now might break some other functionality.
Anyway, I will talk with the team and maybe change this behaviour, because it should be easy 👍

  
  
Posted one year ago

Kinda, yes, and this has changed with 1.8.1.
The thing is that, AFAIK, ClearML does not currently officially support a remotely executed task spawning more tasks, so we also have a small hack that marks the remote "master process" as a local task prior to anything else.

  
  
Posted one year ago

So the flow is like:
MASTER PROCESS -> (optional) calls Task.init -> spawns some children
CHILD PROCESS -> calls Task.init, and the init is deferred even though it should not be?
If so, we need to fix this for sure.

  
  
Posted one year ago

But it is strictly that if condition in Task.init; see the issue I opened about it.

  
  
Posted one year ago