SubstantialBaldeagle49 Well, I see. Elasticsearch does not support putting such a large number into max_buckets. From the error message that I see in the apiserver log I am not sure that the original problem is connected to the buckets number at all. Can you please revert the max_buckets change, reproduce the original problem and share the Elasticsearch log?
Hi @<1523707653782507520:profile|MelancholyElk85> , what version of the apiserver are you using?
Hi SarcasticSparrow10 , I am trying to understand whether we have some gaps in the instructions. During the upgrade, did you perform steps 3-10 of the instruction below? Were there any errors when performing these steps?
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_mongo44_migration
Hi SteadyFox10 , how many unique metrics and variants do you have in this task? We may be hitting some limit here
We found the issue. It will be fixed in the upcoming patch for the open-v1.14 release
Thanks! In this log it mentions that the source elastic 5 has failed during the reindex process. Can you also share the logs from the 'elastic-upgrade' service?
Are you running your dockers on Linux or Windows?
Hi MortifiedDove27 , you can run the following commands on the ClearML server host to get the docker logs for the apiserver and Elasticsearch:
sudo docker logs clearml-apiserver > apiserver.logs 2>&1
sudo docker logs clearml-elastic > elastic.logs 2>&1
What can be seen in the logs is that for some reason Elasticsearch had an internal failure when trying to perform the plots query. I will send you the instructions on how to check the health of the ES nodes. It may provide us with some clues
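In the meantime, a quick first check you can run yourself (assuming ES is exposed on the default localhost:9200, as in the standard docker compose) is:
curl -XGET "http://localhost:9200/_cluster/health?pretty"
A "red" status means that at least one index has unassigned primary shards, which would explain failing queries.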
Sure, you can delete it with the following command:
curl -XDELETE "http://localhost:9200/events-plot-d1bd92a3b039400cbafc60a7a5b1e52b"
Once deleted, it will be automatically recreated by the apiserver and you should see the plots from the new tasks that you run afterwards
SubstantialBaldeagle49 This should collect the logs: 'sudo docker logs trains-apiserver >& apiserver.logs'
If you run the following command 'sudo chown -R 1000:1000 /opt/trains' does it change anything?
ReassuredTiger98 What are the memory settings for Elasticsearch in your docker compose? If it is 2GB and you have enough memory on your server, then you can try to increase it to 4GB like this: ES_JAVA_OPTS: -Xms4g -Xmx4g
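A minimal sketch of the relevant part of docker-compose.yml, assuming the service is named elasticsearch as in the default ClearML compose file:
  elasticsearch:
    environment:
      ES_JAVA_OPTS: -Xms4g -Xmx4g
After changing it, restart the stack (docker-compose down and then docker-compose up -d) so the new heap size takes effect.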
Hi VexedPeacock35 , I suspect that Elasticsearch works too hard and periodically misses timeouts on recording events. How much memory and CPU is it using? Can you increase the memory that is allocated to it and see whether this helps?
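To get a quick idea of the current usage you can run something like this on the server host (assuming the default container name): sudo docker stats clearml-elastic --no-stream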
Hi @<1523701868901961728:profile|ReassuredTiger98> , what version of the apiserver are you using?
Hi MassiveHippopotamus56
Can you please open the browser developer tools, navigate to the Scalars tab for one of the experiments that shows the wrong iteration, and copy here the request payload and response for the events.scalar_metrics_iter_histogram call?
Hi ResponsiveCamel97 , the shards and indices stats look fine. Can you please try the async delete of the task data? You can run the following line in the shell inside the apiserver container, just replace <task_id> with your actual task id:
curl -XPOST -H "Content-Type: application/json" "" -d'{"query": {"term": {"task": "<task_id>"}}}'
You should get in response something like this: {"task":"p6350SG7STmQALxH-E3CLg:1426125"}
Then you can periodically ping ES on the status of the r...
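For example, with the task id from the response above, a status check would look like this (assuming ES on the default localhost:9200):
curl -XGET "http://localhost:9200/_tasks/p6350SG7STmQALxH-E3CLg:1426125"
The reply includes a "completed" field that turns true once the deletion is done.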
Hi IdealPanda97 , can you share the logs for the 'elastic-upgrade-7' docker container? According to the upgrade log there was some problem with Elasticsearch during indices copy.
Another option that should work for the upgrade script is to pass an environment variable that disables xpack (the feature that requires licensing) for the ES5 docker container. It can be done as follows:
python elastic_upgrade.py --extra-source-env xpack.security.enabled=false
Hi DefeatedCrab47 , the ES docker requires that its data folder belongs to the 1000:1000 user and group. If you want to transfer your existing data from trains 15.1 then please follow the guide https://allegro.ai/docs/deploying_trains/trains_server_es7_migration/
The script that is run in this guide should create elastic_7 folder with the correct permissions and transfer all your existing data
@<1673863788857659392:profile|HomelyRabbit25> What happens when you delete the files from UI? Can you please share the logs from the async_delete service? This is the service that is actually responsible for the files deletion and the s3 configuration that you prepared should be mapped into that service (not the apiserver)
This one is indeed dynamic but can be set as follows: "plot_len":{"type":"long"}
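If you prefer to set it explicitly on an existing index, something along these lines should work (a sketch only, replace <index-name> with the actual plots index name):
curl -XPUT "http://localhost:9200/<index-name>/_mapping" -H "Content-Type: application/json" -d'{"properties": {"plot_len": {"type": "long"}}}'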
If it returns an OK result then rerun the upgrade process again.
Do you mean the "search_phase_execution" error? Yes, stopping the containers, deleting the data folder and running the containers again would bring you to a "clean install" state. But then you would lose all your data, not only the task scalar results
Yes, the command would be like this: curl -XDELETE "http://localhost:9200/queue_metrics_d1bd92a3b039400cbafc60a7a5b1e52b_2020-08"
If you decide to delete the "red" indices then you can proceed with the command above issuing it for each problematic index. The queue metrics index is not very important but the second one "events-logs" contains all the log messages produced by your tasks in August. You will still have debug images and scalar metrics reported by these tasks but the log messages ...
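To list only the problematic indices you can run something like: curl -XGET "http://localhost:9200/_cat/indices?health=red&v"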
Enjoy the new version!
Hi @<1673863788857659392:profile|HomelyRabbit25> , yes it should include the support for async_delete service. Please provide the storage_credentials configuration to this service instead of the apiserver. For the details of whether the deletion works or it has any issues with the provided configuration please inspect the logs from the async_delete pod.
Hi SoggyBeetle95 , from what version of clearml did you upgrade? About the tasks that disappeared: you do not see these tasks at all or you see these tasks with no results?
We can compare with the table that you sent yesterday. Unless a lot of new events were written since then