I have made some changes in the code:

```python
logger.clearml_logger.report_image(
    self.tag, f"{self.tag}_{epoch:0{pad}d}", iteration=iteration, image=image
)
```

The `epoch` range is 0-150 and the `iteration` range is 0-100. And the error is still there:

```
General data error (TransportError(503, 'search_phase_execution_exception', 'Trying to create too many buckets. Must be less than or equal to: [10000] but was [10001]. This limit can be set by changing the [search.max_buckets] cluster level setting.'))
```
Could it be caused by the combination of the scalar graphs and the debug samples?
I have 8 scalar graphs:
- 2 `:monitor:{gpu|machine}` graphs with 15k iterations each
- 2 `training_{metrics|loss}` graphs with 15k iterations each
- the others with between 40 and 150 iterations each
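As a rough sanity check on that hypothesis, here is a back-of-the-envelope count. This is purely a sketch that assumes each unique (series, iteration) pair ends up as one Elasticsearch bucket, which may not match ClearML's actual aggregation scheme:

```python
# Hypothetical bucket estimate, assuming one bucket per (series, iteration) pair.
debug_series = 151           # one debug-image series per epoch, epochs 0-150
iters_per_series = 101       # iterations 0-100
scalar_buckets = 4 * 15_000  # 4 scalar graphs with 15k iterations each

total = debug_series * iters_per_series + scalar_buckets
print(total)  # 75251, far above the 10000 search.max_buckets limit
```

If this assumption holds even approximately, using a per-epoch series name multiplies the number of series and could easily push a query past the limit.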
SuccessfulKoala55, do you have any other suggestions? Did I do something wrong with my changes?