Answered

Hi!

I have a question about the integration of ClearML with YOLOv8 (also known as Ultralytics).

I have written a generic task to run the Ultralytics tuner function.

However, I think that there isn't a good integration for that specific task between ClearML and Ultralytics.

My expectation was that if I create a task to track the tune process, it would track that process and create new tasks for the individual training iterations it performs.
That is not the case: the YOLOv8 integration actually overrides the currently running task with whatever training iteration is currently executing.

If I want to track individual tune experiments, my only option at the moment is to not create a ClearML task myself and to let the Ultralytics integration create tasks automatically.
But then the results of the actual tune process aren't published in an experiment.

I am not experienced enough yet to understand what kind of hooks or callbacks I could modify to work around this problem.

Does anyone have experience with using the Ultralytics tuner together with ClearML WITHOUT losing experiment information between iterations?

I understand that ClearML provides its own HPO, and I have another script for that as well, which works fine.
However, I'd like to use the functionality provided by Ultralytics, since they already offer very sophisticated optimization techniques.
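
For context, by ClearML HPO I mean something along these lines (a minimal sketch; the base task ID, parameter names, and metric names are placeholders):

```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange

# Controller task that owns the optimization (names are placeholders).
task = Task.init(
    project_name="tuning",
    task_name="clearml-hpo-controller",
    task_type=Task.TaskTypes.optimizer,
)

optimizer = HyperParameterOptimizer(
    # A completed training task to clone for each trial.
    base_task_id="<base training task id>",
    hyper_parameters=[
        UniformParameterRange("General/lr0", min_value=1e-5, max_value=1e-1),
    ],
    # Scalar to optimize; title/series must match what the training task reports.
    objective_metric_title="metrics",
    objective_metric_series="mAP50-95(B)",
    objective_metric_sign="max",
    total_max_jobs=20,
)

optimizer.start_locally()  # or start() to dispatch through a queue
optimizer.wait()
optimizer.stop()
```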

Any help is greatly appreciated!

  
  
Posted 2 months ago

Answers


A minimal illustration of the problem:

If I run model.tune(...) from Ultralytics, it will automatically track each iteration in ClearML, and each iteration will be its own task (as it should be, given that the parameters change).

But the actual tune result will not be stored in a ClearML task, since I believe there is no integration on the Ultralytics side to do so.
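
For reference, this first case looks roughly like the following (model and dataset are placeholders, and I'm assuming the clearml package is installed so the Ultralytics callback is active):

```python
from ultralytics import YOLO

# No ClearML task is created here; the built-in Ultralytics ClearML
# callback starts a fresh task for every tuning iteration on its own.
model = YOLO("yolov8n.pt")

# Each of the 10 iterations shows up as its own ClearML experiment,
# but the aggregated tune outputs (best hyperparameters, fitness results)
# only land in the local runs/ directory, not in any task.
model.tune(data="coco8.yaml", epochs=5, iterations=10, plots=False, save=False, val=False)
```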

If I create a task myself which then performs model.tune(...), it gets immediately overridden by the parameters from the individual training iterations of the tuning process, which means that the artifacts and parameters of previous iterations are also overwritten and lost.

They still exist locally, but this behavior is rather annoying.
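
The second case, where my own task gets overwritten, is roughly:

```python
from clearml import Task
from ultralytics import YOLO

# My own parent task, meant to hold the overall tune results
# (project and task names are placeholders).
task = Task.init(project_name="tuning", task_name="yolov8-tune")

model = YOLO("yolov8n.pt")

# Instead of spawning child tasks, the Ultralytics ClearML callback
# reuses the current task: every iteration overwrites the parameters
# and artifacts of "yolov8-tune" with its own.
model.tune(data="coco8.yaml", epochs=5, iterations=10, plots=False, save=False, val=False)
```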

I'd much rather tell my task to fork itself whenever it encounters further automatic tracking.
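
The closest workaround I can think of is to disable the built-in ClearML callback entirely and upload the tune outputs to my own task by hand. This is an untested sketch: the output directory and file names depend on the Ultralytics version and run settings.

```python
from pathlib import Path

from clearml import Task
from ultralytics import YOLO, settings

# Turn off the Ultralytics ClearML integration so the tuning iterations
# no longer grab (and overwrite) the current task.
settings.update({"clearml": False})

task = Task.init(project_name="tuning", task_name="yolov8-tune")

model = YOLO("yolov8n.pt")
model.tune(data="coco8.yaml", epochs=5, iterations=10, plots=False, save=False, val=False)

# Manually attach the aggregated tune outputs to the parent task.
# Adjust the path to wherever your Ultralytics version writes them.
tune_dir = Path("runs/detect/tune")
for name in ("best_hyperparameters.yaml", "tune_results.csv"):
    file = tune_dir / name
    if file.exists():
        task.upload_artifact(name=name, artifact_object=file)

task.close()
```

The obvious downside is that the per-iteration training runs are then not tracked in ClearML at all, so this only trades one kind of information loss for another.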

  
  
Posted 2 months ago