I have a question regarding the deletion of archived experiments. Some of them can't be deleted and the error message is

I have a question regarding the deletion of archived experiments. Some of them can't be deleted and the error message is
General data error (TransportError(503, 'search_phase_execution_exception', 'Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.'))
As I understand it, this is an error on the Elasticsearch side. Do you have any clue how to remove the experiment, without changing the search.max_buckets parameter on the Elastic container if possible?

  
  
Posted one year ago

Answers 30


I'll try to make some code that reproduces this behavior and post it on GitHub, is that fine? That way you could check if I'm the problem (which is really likely) 😛

  
  
Posted one year ago

This is a run I made with the changes. As you can see, the iterations now go from 0-111, and in each of them I have an image with the name train_{001|150}

  
  
Posted one year ago

Can I still ask you to open a GitHub issue? Stuff tends to get lost here, and I can't get to it today 😞

  
  
Posted one year ago

Even simpler than a GitHub issue, this code reproduces the issues I have.

  
  
Posted one year ago

SuccessfulKoala55 feel free to roast my errors.

  
  
Posted one year ago

That's strange... 😕

  
  
Posted one year ago

yes tag is fixed

  
  
Posted one year ago

I call it like that:
```python
logger.clearml_logger.report_image(
    self.tag, f"{self.tag}_{iteration:0{pad}d}", epoch, image=image
)
```
`self.tag` is `train` or `valid`. `iteration` is an int for the minibatch in the epoch.

  
  
Posted one year ago

Hi SteadyFox10 , how many unique metrics and variants do you have in this task? We may be hitting some limit here

  
  
Posted one year ago

Something like 100 epochs with at least 100 images per epoch reported.

  
  
Posted one year ago

I have 6 plots with one or two metrics each. But I have a lot of debug samples.

  
  
Posted one year ago

Ok fine.

  
  
Posted one year ago

I have made some changes in the code
```python
logger.clearml_logger.report_image(
    self.tag, f"{self.tag}_{epoch:0{pad}d}", iteration=iteration, image=image
)
```
epoch range is 0-150, iteration range is 0-100. And the error is still there:
General data error (TransportError(503, 'search_phase_execution_exception', 'Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.'))
Could it be caused by the combination of the scalar graphs + debug samples?
I have 8 scalar graphs:
2 :monitor:{gpu|machine} with 15k iterations, 2 training_{metrics|loss} with 15k iterations, and the others with between 40 and 150 iterations each.
SuccessfulKoala55 do you have any other suggestions? Did I do something wrong with my changes?

  
  
Posted one year ago

Sure, let me know if I can help 🙂

  
  
Posted one year ago

150 x 100 is still larger than 10,000
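To make the arithmetic explicit (the epoch and image counts come from the run described above; 10,000 is the Elasticsearch default for `search.max_buckets`):

```python
# Each reported image whose series name embeds a running number becomes
# its own unique (title, series) pair, i.e. its own aggregation bucket.
epochs = 150           # from the run described above
images_per_epoch = 100
max_buckets = 10_000   # Elasticsearch default for search.max_buckets

unique_series = epochs * images_per_epoch
print(unique_series)                # 15000
print(unique_series > max_buckets)  # True -> triggers the 503 error
```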

  
  
Posted one year ago

iteration has nothing to do with it

  
  
Posted one year ago

Are you using a fixed `self.tag`?

  
  
Posted one year ago

Oh, sorry

  
  
Posted one year ago

Reducing the number of images reported (already in our plan)

You don't actually need to reduce the number of images, just make sure the series parameter is consistent. Basically, in every report (i.e. every iteration in which you're reporting), you want a fixed set of title/series values.
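A minimal sketch of that advice (the `train` title and the padded slot index mirror the calls quoted earlier; the stub below only records what ClearML's `report_image` would receive, it is not the real logger):

```python
# Sketch: keep the set of (title, series) values fixed across iterations,
# and let the iteration argument carry the time axis instead of the name.
reported = set()

def report_image(title, series, iteration):
    # Stand-in for logger.report_image(title, series, iteration, image=...)
    reported.add((title, series))

# One series per image slot, reused every iteration: the number of unique
# (title, series) pairs stays constant no matter how long the run is.
for iteration in range(150):
    for slot in range(100):
        report_image("train", f"train_{slot:03d}", iteration)

print(len(reported))  # 100
```

Had the iteration number been baked into the series name instead, the set would have grown to 150 × 100 = 15,000 entries.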

  
  
Posted one year ago

We're planning to optimize the server code for these cases, but I would suggest using a more fixed set of title/series for your debug images

  
  
Posted one year ago

So I see two options:
1. Reducing the number of images reported (already in our plan)
2. Making one big image per epoch

  
  
Posted one year ago

You're generating a huge number of variants (series) by using the iteration number.

  
  
Posted one year ago

Thanks a lot, I'll check how to do this correctly.

  
  
Posted one year ago

That's the issue...

  
  
Posted one year ago

Is it better to open it on clearml or clearml-server?

  
  
Posted one year ago

It's a matter of scale for the query that retrieves the data, not of the amount of data.

  
  
Posted one year ago

I'd appreciate that 🙂

  
  
Posted one year ago

Issue opened on the clearml-server GitHub: https://github.com/allegroai/clearml-server/issues/89 . Thanks for your help.

  
  
Posted one year ago

What do you use as title and for the series for each image?

  
  
Posted one year ago

That's really hard to support using ES, as it inflates the number of buckets in the aggregation used when trying to locate unique debug images.
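Roughly what the bucket-limit check amounts to, as a sketch (the limit and the wording mirror the TransportError quoted at the top of the thread; this is not the actual Elasticsearch code):

```python
# Sketch of the bucket cap: ES counts one bucket per unique metric/variant
# combination in the aggregation and fails once the count exceeds the limit.
MAX_BUCKETS = 10_000

def count_buckets(pairs):
    buckets = set(pairs)
    if len(buckets) > MAX_BUCKETS:
        raise RuntimeError(
            f"Trying to create too many buckets. Must be less than or equal "
            f"to: [{MAX_BUCKETS}] but was [{len(buckets)}]."
        )
    return len(buckets)

# A fixed series scheme stays well under the cap:
ok = count_buckets(("train", f"train_{i:03d}") for i in range(100))
print(ok)  # 100
```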

  
  
Posted one year ago