ResponsiveKoala38
Moderator
0 Questions, 55 Answers
  Active since 11 July 2024
  Last activity one year ago

Reputation: 0
0 Rolling Back To 1.15.0 Seemed To Fix The Error For Now. Is There Something One Should Be Aware Of Between Server Versions 1.15 And 1.16 Related To Versions Of The

Hi @<1523701601770934272:profile|GiganticMole91> , what is the exact version of Elasticsearch that is running now in your 1.15.0 installation? You can see it in the output of 'sudo docker ps'
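
For example, something like this should list only the container names and images (the ES version is part of the image tag); the --format template is standard docker, and the grep filter is just an assumption about how your containers are named:

sudo docker ps --format '{{.Names}}  {{.Image}}' | grep -i elastic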

one year ago
0 Anyone Faced An Issue With Elasticsearch Before

This seems like something different, not connected to ES. Where do you get these logs?

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

In ES container please run "curl -XGET localhost:9200/_cat/indices"

4 months ago
0 Hi

Hi @<1734020208089108480:profile|WickedHare16> , what is the image of the apiserver that you are running?

one year ago
0 Hi

@<1734020208089108480:profile|WickedHare16> Do you mean that you see the plots now? Are there still any _attempt_serialize_numpy errors in the apiserver logs?
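
One way to check, assuming the default clearml-apiserver container name:

sudo docker logs clearml-apiserver 2>&1 | grep -i _attempt_serialize_numpy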

one year ago
0 Hi

Please take a look here:
None
Does it match your scenario? Can you try the suggested workaround?

one year ago
0 Anyone Faced An Issue With Elasticsearch Before

And are you still getting exactly this error?

<500/100: events.add_batch/v1.0 (General data error: err=1 document(s) failed to index., extra_info=[events-log-d1bd92a3b039400cbafc60a7a5b1e52b][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[events-log-d1bd92a3b039400cbafc60a7a5b1e52b][0]] containing [index {[events-log-d1bd92a3b039400cbafc60a7a5b1e52b][f3abecd0f46f4bd289e0ac39662fd850], source[{"timestamp":1747654820464,"type":"log","task":"fd3d00d99d88427bbc57...
4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

No, it says that it does not detect any problematic shards. Given that output and the absence of errors in the logs, I would expect that you will not get the error anymore

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

What is the status that you get for the "events-log-d1bd92a3b039400cbafc60a7a5b1e52b" index?
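
For reference, the health and status columns for that single index can be checked with something like:

curl -XGET "localhost:9200/_cat/indices/events-log-d1bd92a3b039400cbafc60a7a5b1e52b?v"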

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

Did you try to run your job again?

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

Probably the 9200 port is not mapped from the ES container in the docker compose
The easiest would be to perform "sudo docker exec -it clearml-elastic /bin/bash" and then run the curl command from inside the ES docker
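
If curl is available inside the ES image, the two steps can also be combined into a single command, e.g.:

sudo docker exec clearml-elastic curl -XGET localhost:9200/_cat/indices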

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

Hi @<1835488771542355968:profile|PerplexedShells66> , please inspect your Elasticsearch logs. Any errors or warnings there?

4 months ago
0 Hi There, Our. Self-Hosted Server Is Periodically Very Slow To React In The Web Ui. We'Ve Been Debugging For Quite Some Time, And It Would Seem That Elastisearch Might Be The Culprit. Looking At The Elastisearch Index, We Have An Index Of Around 80G Of Tr

Hi @<1523701601770934272:profile|GiganticMole91> , each scalar document in ES has a "task" field that is a task ID. The below query will show you the first 10 documents for the task ID:

curl -XGET "localhost:9200/<the scalar index name>/_search?q=task:<task ID>&pretty"
12 months ago
0 Hello Everyone! I Tried To Remove Models From Clearml Using

Hi @<1578555761724755968:profile|GrievingKoala83> , the DELETED prefix in the model id means that the original model was already deleted. The reference that you see, "__DELETED__63e920aeacb247c890c70e525576474c", does not point to any model; it is instead a reminder that there was a reference to the model 63e920aeacb247c890c70e525576474c here, but the model was removed

8 months ago
0 Hey, Everyone! Recently I Tried To Restore Clearml From Backup And Encountered Elastic Error. I Decided To Rule Out That Problem Is In Backup And Just Did Fresh Installation Of Clearml Without Backup Files. Problem Persisted. Apiserver Container Logs Read

The path then would be as follows:

  • Upgrade the old deployment to the latest clearml server according to the clearml server upgrade procedure. This will automatically upgrade the data
  • Backup your data folders (mongo and elastic); see the example command after this list
  • Deploy the latest clearml server on another machine and restore the data from the backup
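
For the backup step, assuming a default docker-compose installation with the data under /opt/clearml/data (both the data path and the compose file location are assumptions to adjust to your setup), something like this should work:

# stop the server so the data files are consistent, then archive the data directory
sudo docker-compose -f /opt/clearml/docker-compose.yml down
sudo tar czvf ~/clearml_backup_data.tgz -C /opt/clearml/data .
sudo docker-compose -f /opt/clearml/docker-compose.yml up -d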
one year ago
0 Hey, Everyone! Recently I Tried To Restore Clearml From Backup And Encountered Elastic Error. I Decided To Rule Out That Problem Is In Backup And Just Did Fresh Installation Of Clearml Without Backup Files. Problem Persisted. Apiserver Container Logs Read

Hi @<1526734383564722176:profile|BoredBat47> , it seems that your Elasticsearch version is out of sync with what the latest version of the apiserver requires (7.17.18). Can you please follow the instructions here to make sure that you use the latest images for the ClearML Server?
None
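
To double-check which version is actually running, the ES root endpoint reports it:

curl -XGET "localhost:9200/?pretty"

The "number" field under "version" in the response should read 7.17.18 with the latest server images.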

one year ago
0 Hey, Everyone! Recently I Tried To Restore Clearml From Backup And Encountered Elastic Error. I Decided To Rule Out That Problem Is In Backup And Just Did Fresh Installation Of Clearml Without Backup Files. Problem Persisted. Apiserver Container Logs Read

Can you please describe what working deployments you currently have and what your final goal is?
Do you have an old deployment working, or was it corrupted?
Do you want to upgrade that old deployment to a new one? Or do you want to have a new deployment in some other place, based on the data from the old deployment?

one year ago
0 Anyone Faced An Issue With Elasticsearch Before

One of the most likely reasons for this issue would be insufficient free disk space for Elasticsearch. This may happen if less than 10% of free space is left on the ES storage location. But there may also be other reasons
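
To check how much space ES sees on its storage, the _cat/allocation API shows the used and available disk per node:

curl -XGET "localhost:9200/_cat/allocation?v"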

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

It depends on your usage. ES has default watermarks that are activated when the amount of used space goes above 85% and 90% of the storage (these thresholds can be overridden). At some point it may move the index to a read-only state.
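
If an index did get switched to read-only after the disk filled up, the block can be cleared once enough space has been freed; this is the standard ES setting for it:

curl -XPUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'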

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

I do not see any issues in the log. Do you still get errors in the task due to the failure in events.add_batch?

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

Then possibly it is another reason. We need to search for it in the ES logs
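
For example, to scan the recent ES output for errors and warnings (using the clearml-elastic container name from the docker compose):

sudo docker logs --tail 500 clearml-elastic 2>&1 | grep -iE "error|warn"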

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

On what host did you run the curl command?

4 months ago
0 Anyone Faced An Issue With Elasticsearch Before

While on the host you can run some ES commands to check the shard health and allocations. For example this:

curl -XGET "localhost:9200/_cluster/allocation/explain?pretty"

It may give more clues to the problem
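
A couple of other standard commands that may also help narrow it down:

curl -XGET "localhost:9200/_cluster/health?pretty"
curl -XGET "localhost:9200/_cat/shards?v"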

4 months ago
0 Hello, I Am Having A Problem That Debug Images Are Not Shown After Clearml Server Migration. I Found A Solution On This Page:

About the prefix part, I think it should not matter. Just put your prefix instead of ' None .<ADDRESS>'

one year ago
0 Hello, I Am Having A Problem That Debug Images Are Not Shown After Clearml Server Migration. I Found A Solution On This Page:

No. It is actually string concatenation. What you actually get is the original string broken into several parts that are concatenated as follows:
-d'{....' + ' + '....}'
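
As a generic shell illustration of the same idea (the variable name is just an example), adjacent quoted pieces are joined by the shell into a single argument:

PREFIX=http://example.com
echo 'before-'"$PREFIX"'-after'   # prints: before-http://example.com-after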

one year ago
0 Hi Everyone, I Am Updating The Self-Hosted Server To A Public Ip. However, All My Datasets Cannot Be Downloaded Anymore. I Followed Instructions From

I see now. It seems that the instructions that we provided updated only the model urls, and there are some more artifacts that need to be handled. Please try running the attached python script from inside your apiserver docker container. The script should fix all the task artifact links in mongo. Copy it to any place inside the running clearml-apiserver container and then run it as follows:

python3 fix_mongo_urls.py --mongo-host  --host-source  --host-target http:...
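
For copying the script into the container and running it there, something like this should work (the /tmp path and the <...> values are placeholders to replace with your own hosts):

# copy the script into the running apiserver container, then execute it inside
sudo docker cp fix_mongo_urls.py clearml-apiserver:/tmp/fix_mongo_urls.py
sudo docker exec -it clearml-apiserver python3 /tmp/fix_mongo_urls.py --mongo-host <mongo host> --host-source <old address> --host-target <new address>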
one year ago