Answered

Hi everyone, I was running an optimization task with OptunaOptimizer, but I had the following error:
Could not find requested metric ('evaluate', 'accuracy') report on base task.
I should also say that I have reported this metric to the logger by inserting the following line in the script of the base task:
Logger.current_logger().report_scalar(title='evaluate', series='accuracy', value=score[1], iteration=parameters['epochs'])
How should I have reported this metric differently on my base task?
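
For reference, a minimal sketch of how the objective is typically declared on the optimizer side, assuming ClearML's HyperParameterOptimizer with OptunaOptimizer (the base task ID, queue name, and hyperparameter range below are hypothetical placeholders). The key point is that objective_metric_title and objective_metric_series must match the title and series passed to report_scalar() on the base task:

from clearml import Task
from clearml.automation import DiscreteParameterRange, HyperParameterOptimizer
from clearml.automation.optuna import OptunaOptimizer

# Controller task that drives the optimization.
task = Task.init(project_name='examples', task_name='HPO controller',
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id='<base-task-id>',  # hypothetical placeholder
    hyper_parameters=[
        DiscreteParameterRange('General/epochs', values=[5, 10, 20]),
    ],
    # Must match report_scalar(title='evaluate', series='accuracy') exactly.
    objective_metric_title='evaluate',
    objective_metric_series='accuracy',
    objective_metric_sign='max',
    optimizer_class=OptunaOptimizer,
    execution_queue='default',  # hypothetical queue name
)
optimizer.start()
optimizer.wait()
optimizer.stop()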

  
  
Posted 2 years ago

17 Answers


Please do. You can download the entire log from the UI 🙂

  
  
Posted 2 years ago

This is the correct file.

  
  
Posted 2 years ago

Where is the error?

  
  
Posted 2 years ago

Because of a server error I can't download the log, so I attached a screenshot instead. In the log I see only the following reports (without a summary table/plot).

  
  
Posted 2 years ago

Unfortunately, I am not running on a community server

  
  
Posted 2 years ago

Any chance you could provide a share-able link if you're running on the community server?

  
  
Posted 2 years ago

But I can add screenshots of the log file if necessary

  
  
Posted 2 years ago

Yes, you can message me directly 🙂

  
  
Posted 2 years ago

In another task I tried to evaluate this metric but received a similar error:
clearml.automation.optimization - WARNING - Could not find requested metric ('evaluate', 'val_loss') report on base task
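
One way to see exactly which (title, series) pairs the base task actually reported is to query it directly; a small sketch, assuming the standard clearml Task API (the task ID is a hypothetical placeholder):

from clearml import Task

# Fetch the base task and list every scalar it reported; the optimizer's
# objective_metric_title/series must match one of these pairs exactly.
base_task = Task.get_task(task_id='<base-task-id>')  # hypothetical placeholder
for title, series_dict in base_task.get_last_scalar_metrics().items():
    for series, values in series_dict.items():
        print(f"title={title!r}, series={series!r}, last={values.get('last')}")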

  
  
Posted 2 years ago

What is the base task you are using? It looks like you're using one of the examples 🙂

  
  
Posted 2 years ago

Results -> Scalars 🙂

  
  
Posted 2 years ago

And afterwards, I get the following output, which continues for 300 iterations without any further metric reports.

  
  
Posted 2 years ago

It looks like you are running on the community server. Can you right-click the experiment in the experiments table, click 'Share' on all the relevant experiments, and send the links here?

  
  
Posted 2 years ago

I indeed have a different scalar there: val_loss. But I reported this metric in the checkpoint, not in the logger.
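
If the metric only surfaces through the checkpoint callback, one option is to report it to the logger explicitly, once per epoch, inside the base task's script (after Task.init has been called). A sketch under that assumption; the val_losses values are hypothetical stand-ins for what a Keras history object would hold:

from clearml import Logger

# Stand-in for per-epoch validation losses, e.g. history.history['val_loss']
# from a Keras model.fit() call; the values here are hypothetical.
val_losses = [0.92, 0.71, 0.63]

logger = Logger.current_logger()
for epoch, val_loss in enumerate(val_losses):
    # Title/series must match the optimizer's objective, e.g.
    # objective_metric_title='evaluate', objective_metric_series='val_loss'.
    logger.report_scalar(title='evaluate', series='val_loss',
                         value=val_loss, iteration=epoch)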

  
  
Posted 2 years ago

I have some info that I wouldn't like to post here (for security reasons). Is there a way to share the link only with your user? 🙂

  
  
Posted 2 years ago

When looking at the base task, do you have that metric there?

  
  
Posted 2 years ago

Where should I look to see this metric? In the Scalars tab?

  
  
Posted 2 years ago