Hey There Have The Following Issue After Upgrading Server And Trains To 0.16:

hey there

Have the following issue after upgrading server and trains to 0.16:
Error 100 : General data error (TransportError(503, 'search_phase_execution_exception', 'Trying to create too many buckets. Must be less than or equal to: [10000] but was [11633]. This limit can be set by changing the [search.max_buckets] cluster level setting.'))

The error appears when checking scalar plots. It started showing up randomly after training for a while (it was fine for e.g. the first epoch).

This seems to be coming from ES: https://discuss.elastic.co/t/search-max-buckets-limit-error-on-7-0-1/179989
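For reference, the limit named in the error is a dynamic Elasticsearch cluster setting and can be raised through the cluster settings API. A minimal sketch, assuming the Elasticsearch instance backing the server is reachable at localhost:9200 and using 50000 purely as an illustrative value:

curl -X PUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"persistent": {"search.max_buckets": 50000}}'

Note that this only raises the cap; the aggregation that produces that many buckets is unchanged.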

Posted 5 years ago

Answers 34


SubstantialBaldeagle49 This is fine. When you start docker-compose, the services take different amounts of time to start. The apiserver waits for Elasticsearch to start and proceeds once it is ready. Can you reproduce the buckets issue and share the apiserver log that contains it?

Posted 5 years ago

SubstantialBaldeagle49 This should collect the logs: 'sudo docker logs trains-apiserver >& apiserver.logs'

Posted 5 years ago

Ok, I will start a new experiment to see if the error is still there. Sorry, I don't really get how to show the trains-apiserver log.

Posted 5 years ago