Answered
Hello, I Have An Issue With My Self-Hosted ClearML Server, Everything Was Running Fine But Since Today I Get The Following Error:

Hello,

I have an issue with my self-hosted ClearML server. Everything was running fine, but since today I get the following error:

 clearml.Metrics - ERROR - Action failed <500/100: events.add_batch/v1.0 (General data error: err=('98 document(s) failed to index.', [{'index': {'_index': 'events-training_stats_scalar-d1bd92a3b039400cbafc60a7a5b1e52b', '_type': '_doc', '_id': '5cb30e1952fe4e43a872020c58999d86', 'status': 503,..., extra_info=[events-training_stats_scalar-d1bd92a3b039400cbafc60a7a5b1e52b][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[events-training_stats_scalar-d1bd92a3b039400cbafc60a7a5b1e52b][0]] containing [98] requests and a refresh])>

Any clue what to do there?
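For anyone hitting the same "primary shard is not active" error, a first step is usually to check the Elasticsearch container and the cluster health. A hedged sketch, assuming the default ClearML docker-compose setup (container named `clearml-elastic`, Elasticsearch reachable on `localhost:9200` once the port is exposed; adjust if yours differ):

```shell
# Assumptions: default ClearML docker-compose deployment, container named
# "clearml-elastic", Elasticsearch reachable on localhost:9200.

# Is the container up, and has it been restarting?
docker ps --filter name=clearml-elastic

# Overall cluster health: "red" means at least one primary shard is unassigned
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Per-shard view: anything not in STARTED state is a problem
curl -s 'http://localhost:9200/_cat/shards?v' | grep -v STARTED
```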

  
  
Posted one year ago

Answers 8


Additional context:
When I click on scalar charts I get the following message:
ERROR
Failed to get Scalar Charts

and in the docker logs there is an error:

...
clearml-elastic         | "at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.66.Final.jar:4.1.66.Final]",
clearml-elastic         | "at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.66.Final.jar:4.1.66.Final]",
clearml-elastic         | "at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.66.Final.jar:4.1.66.Final]",
clearml-elastic         | "at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.66.Final.jar:4.1.66.Final]",
clearml-elastic         | "at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.66.Final.jar:4.1.66.Final]",
clearml-elastic         | "at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.66.Final.jar:4.1.66.Final]",
clearml-elastic         | "at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.66.Final.jar:4.1.66.Final]",
clearml-elastic         | "at java.lang.Thread.run(Thread.java:1589) [?:?]",
clearml-elastic         | "Caused by: org.elasticsearch.action.NoShardAvailableActionException",
clearml-elastic         | "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:532) ~[elasticsearch-7.17.7.jar:7.17.7]",
clearml-elastic         | "at org.elasticsearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:479) [elasticsearch-7.17.7.jar:7.17.7]",
clearml-elastic         | "... 81 more"] }
  
  
Posted one year ago

@<1523701087100473344:profile|SuccessfulKoala55> here is the log file, thanks!

  
  
Posted one year ago

I also exposed the Elasticsearch port and checked /_cluster/health/?level=shards. The status is red, and this is the red shard:
"events-training_stats_scalar-d1bd92a3b039400cbafc60a7a5b1e52b": {
  "status": "red",
  "number_of_shards": 1,
  "number_of_replicas": 0,
  "active_primary_shards": 0,
  "active_shards": 0,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 1,
  "shards": {
    "0": {
      "status": "red",
      "primary_active": false,
      "active_shards": 0,
      "relocating_shards": 0,
      "initializing_shards": 0,
      "unassigned_shards": 1
    }
  }
},
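For a shard stuck like this, Elasticsearch can report why it is unassigned via the allocation explain API. A hedged sketch (assuming Elasticsearch on `localhost:9200`; the index name is taken from the health output above):

```shell
# Ask Elasticsearch why the primary shard of the red index is unassigned.
# Assumes Elasticsearch is reachable on localhost:9200.
curl -s -X GET 'http://localhost:9200/_cluster/allocation/explain?pretty' \
  -H 'Content-Type: application/json' \
  -d '{
        "index": "events-training_stats_scalar-d1bd92a3b039400cbafc60a7a5b1e52b",
        "shard": 0,
        "primary": true
      }'
```

The `unassigned_info.reason` and `allocate_explanation` fields in the response usually point at the root cause (e.g. a corrupted shard file or low disk watermark).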

  
  
Posted one year ago

The container is running. How can I get a more detailed status?
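One way to get more detail from a container that is up but unhealthy (hypothetical commands, assuming the container is named `clearml-elastic` and the Elasticsearch port is exposed on `localhost:9200`):

```shell
# Tail recent Elasticsearch logs and filter for shard/corruption errors
docker logs --tail 200 clearml-elastic 2>&1 | grep -iE 'corrupt|shard|exception'

# List only the red indices (health column per index)
curl -s 'http://localhost:9200/_cat/indices?v&health=red'
```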

  
  
Posted one year ago

Found a corrupted file in the index; deleting it resolved the issue. Now everything is back to normal. Thanks a lot for your help!
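For future readers: if a red index is caused by a corrupted file and the data in it is expendable, deleting the whole index also clears the red status (ClearML recreates it, but the scalar events it held are lost). A hedged sketch, not an official ClearML procedure; back up your data directory first:

```shell
# WARNING: permanently deletes the scalar events stored in this index.
# Assumes Elasticsearch on localhost:9200 and that the data is expendable.
curl -s -X DELETE \
  'http://localhost:9200/events-training_stats_scalar-d1bd92a3b039400cbafc60a7a5b1e52b'

# Verify the cluster health returns to green/yellow afterwards
curl -s 'http://localhost:9200/_cluster/health?pretty'
```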

  
  
Posted one year ago

Hi @<1582904448076746752:profile|TightGorilla98> , can you check on the status of the elastic container?

  
  
Posted one year ago

@<1582904448076746752:profile|TightGorilla98> can you please share the entire log of the clearml-elastic container?

  
  
Posted one year ago

not sure what that means

  
  
Posted one year ago