Answered
I have a question regarding the deletion of archived experiments. Some of them can't be deleted and the error message is

I have a question regarding the deletion of archived experiments. Some of them can't be deleted and the error message is
General data error (TransportError(503, 'search_phase_execution_exception', 'Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.'))

As I understand it, this is an error on the Elasticsearch side. Do you have any clue how to remove the experiment? If possible, without changing the search.max_buckets parameter on the Elastic container.
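(For reference, the cluster-level change the error message describes, which I'd like to avoid, would look roughly like the sketch below, assuming the server's Elasticsearch is reachable on localhost:9200; adjust host, port, and the new limit to your deployment.)

```python
import requests

# Raise the dynamic search.max_buckets cluster setting named in the error.
# Host/port and the 20000 value are assumptions for illustration only.
resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"search.max_buckets": 20000}},
)
print(resp.json())
```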

  
  
Posted 2 years ago

Answers 30


I have made some changes in the code:

```python
logger.clearml_logger.report_image(
    self.tag, f"{self.tag}_{epoch:0{pad}d}", iteration=iteration, image=image
)
```

The epoch range is 0-150 and the iteration range is 0-100, and the error is still there:

General data error (TransportError(503, 'search_phase_execution_exception', 'Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.'))

Could it be because of the combination of the scalar graphs + debug samples? I have 8 scalar graphs:
- 2 :monitor:{gpu|machine} with 15k iterations
- 2 training_{metrics|loss} with 15k iterations
- the others with between 40 and 150 iterations each

SuccessfulKoala55, do you have any other suggestions? Did I do something wrong with my changes?

  
  
Posted 2 years ago

It's a matter of scale for the query that retrieves the data, not of the amount of data itself

  
  
Posted 2 years ago

Sure, let me know if I can help 🙂

  
  
Posted 2 years ago

150 x 100 = 15,000, which is still larger than 10,000

  
  
Posted 2 years ago

iteration has nothing to do with it

  
  
Posted 2 years ago

Oh, sorry

  
  
Posted 2 years ago

Can I still ask you to open a GitHub issue? Stuff tends to get lost here, and I can't get to it today 😞

  
  
Posted 2 years ago

Issue opened on the clearml-server GitHub: https://github.com/allegroai/clearml-server/issues/89 . Thanks for your help.

  
  
Posted 2 years ago

Is it better on clearml or clearml-server?

  
  
Posted 2 years ago

Ok fine.

  
  
Posted 2 years ago

I call it like that:

```python
logger.clearml_logger.report_image(
    self.tag, f"{self.tag}_{iteration:0{pad}d}", epoch, image=image
)
```

self.tag is train or valid, and iteration is an int for the minibatch index within the epoch.

  
  
Posted 2 years ago

Hi SteadyFox10, how many unique metrics and variants do you have in this task? We may be hitting some limit here

  
  
Posted 2 years ago

Something like 100 epochs, with at least 100 images reported per epoch.

  
  
Posted 2 years ago

I have 6 plots with one or two metrics each, but I have a lot of debug samples.

  
  
Posted 2 years ago

What do you use as the title and series for each image?

  
  
Posted 2 years ago

"Reducing the number of images reported (already in our plan)"

You don't actually need to reduce the number of images, just make sure the series parameter is consistent. Basically, you want to make sure that in every report (i.e. every iteration in which you're reporting), you use a fixed set of title/series values.
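A minimal sketch of what this could look like (the Task.init names and the generate_batch_previews() helper are hypothetical, and numbering series by batch slot is just one way to keep the set fixed):

```python
from clearml import Task

# Hypothetical project/task names for illustration
task = Task.init(project_name="demo", task_name="fixed-series-report")
logger = task.get_logger()

for epoch in range(150):
    for i, image in enumerate(generate_batch_previews()):  # hypothetical helper
        # Instead of a unique series per step, e.g.
        #   series=f"train_{epoch:03d}_{i:03d}"  # new series every report -> bucket explosion
        # keep the title/series set fixed and put the step in `iteration`:
        logger.report_image(
            title="train",
            series=f"sample_{i:02d}",  # the same ~100 series names every epoch
            iteration=epoch,
            image=image,
        )
```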

  
  
Posted 2 years ago

That's really hard to support using ES (Elasticsearch), as it inflates the number of buckets in the aggregation used when trying to locate unique debug images
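For intuition only (this is an illustrative sketch, not the actual clearml-server query, and the field names are assumptions): a nested terms aggregation over metric and variant creates one bucket per unique title/series pair, so per-iteration series names quickly exceed the 10,000-bucket default.

```python
# Illustrative only: not the actual clearml-server query body.
# One bucket is created per unique metric (title) x variant (series)
# combination, which is what blows past search.max_buckets.
query_body = {
    "size": 0,
    "aggs": {
        "metrics": {
            "terms": {"field": "metric", "size": 10000},
            "aggs": {
                "variants": {"terms": {"field": "variant", "size": 10000}},
            },
        },
    },
}
```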

  
  
Posted 2 years ago

We're planning to optimize the server code for these cases, but I would suggest using a more fixed set of title/series values for your debug images

  
  
Posted 2 years ago

Thanks a lot, I'll check how to do this correctly

  
  
Posted 2 years ago

That's the issue...

  
  
Posted 2 years ago

I'll try to make some code that reproduces this behavior and post it on GitHub, is that fine? That way you could check if I'm the problem (which is really likely) 😛

  
  
Posted 2 years ago

This is a run I made with the changes. As you can see, the iterations now go from 0-111, and in each of them I have images with the names train_{001|150}

  
  
Posted 2 years ago

Even simpler than a GitHub issue, this code reproduces the issue I have.

  
  
Posted 2 years ago

SuccessfulKoala55 feel free to roast my errors.

  
  
Posted 2 years ago

Are you using a fixed self.tag?

  
  
Posted 2 years ago

I'd appreciate that 🙂

  
  
Posted 2 years ago

Yes, the tag is fixed

  
  
Posted 2 years ago

So I see two options:
1. Reducing the number of images reported (already in our plan)
2. Making one big image per epoch (see the sketch below)
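For the second option, a rough sketch of tiling an epoch's images into a single grid before reporting (assuming equally sized numpy arrays, and a logger and epoch_images already in scope; the make_grid helper is made up for illustration):

```python
import numpy as np

def make_grid(images, cols=10):
    """Tile equally sized HxWxC images into one big row-major grid."""
    rows = -(-len(images) // cols)  # ceiling division
    h, w, c = images[0].shape
    grid = np.zeros((rows * h, cols * w, c), dtype=images[0].dtype)
    for idx, img in enumerate(images):
        r, col = divmod(idx, cols)
        grid[r * h:(r + 1) * h, col * w:(col + 1) * w] = img
    return grid

# One report per epoch, with a single fixed title/series pair
logger.report_image("train", "epoch_grid", iteration=epoch, image=make_grid(epoch_images))
```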

  
  
Posted 2 years ago

You're generating a huge number of variants (series) using the iteration number

  
  
Posted 2 years ago

That's strange... 😕

  
  
Posted 2 years ago