Answered

Hi! I'm having some problems, could you help me? I have been working with version 0.15.0 of trains-server for a month, but yesterday I lost access to the logs. When I tried to go to the project's Task/Results/Scalars page, I got the error: "Error 100 : General data error(Transport Error (503, 'search_phase_execution_exception'))". I tried restarting the server, but then all the projects disappeared. I made a data backup and updated trains-server to 0.15.1, but the problem remained. In the attached files, I included my "docker-compose.yml" file, the log after running "docker-compose -f docker-compose.yml up", a screenshot after running "docker ps", and a screenshot from the server itself. I would be glad of any help!

  
  
Posted 3 years ago

Answers 8


IdealPanda97 Is your user id 1000? If not, that may be the reason, and chown -R 1000:1000 may help; Elasticsearch inside the docker container runs as user 1000. Another possible reason is another Elasticsearch process or docker container running on your machine and holding the lock on the data folder. If there are any, please try stopping them. If neither of the above helps, there is the option of manually deleting the .lock files from the elastic data folder. Of course, the data should be backed up before doing this. https://stackoverflow.com/questions/28932178/elasticsearch-failed-to-obtain-node-lock-is-the-following-location-writable
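The two causes suggested above can be checked from a terminal. This is a minimal sketch; the /opt/trains/data/elastic path is the default mentioned later in this thread, and the privileged commands are left as comments since they need root:

```shell
# Cause 1: your uid is not 1000, the uid Elasticsearch runs under
# inside the container. Check your current uid:
uid=$(id -u)
echo "current uid: $uid"

# If it is not 1000, the fix suggested above is (needs root, path assumed):
#   sudo chown -R 1000:1000 /opt/trains/data/elastic

# Cause 2: a stale node lock. Locate lock files before touching anything,
# and back up the data folder first:
#   sudo find /opt/trains/data/elastic -name '*.lock'
```

Deleting the lock files is a last resort; prefer stopping whichever process holds the lock.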

  
  
Posted 3 years ago

Hi IdealPanda97, can you please check your available disk space and available RAM? According to the logs, all the services (Elasticsearch, MongoDB, Redis) fail to start.
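The resource check asked for above can be done with two standard commands. A quick sketch (Elasticsearch in particular stops working when the disk crosses its flood-stage watermark, and each service needs free memory to start):

```shell
# Free disk space on the root filesystem (check the filesystem that
# actually holds your trains data directory):
df -h /

# Available memory and swap:
free -h
```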

  
  
Posted 3 years ago

SuccessfulKoala55 Sure, I can attach an archive with the elastic folder if necessary, but it is about 500 MB.

  
  
Posted 3 years ago

IdealPanda97 It seems Elasticsearch fails to start. Can you list the contents of /opt/trains/data/elastic and /opt/trains/data/elastic/nodes? According to the logs, this looks like either a permission issue or a lock issue...

  
  
Posted 3 years ago

AppetizingMouse58 I made my user the owner (sudo chown liz:liz -R /opt/trains/), but the problem still remains.

  
  
Posted 3 years ago

AppetizingMouse58 There should be enough space on the disks.

  
  
Posted 3 years ago

What helped was running as user 1000 with the command "http_proxy= docker-compose -f /opt/trains/docker-compose.yml up"
AppetizingMouse58 SuccessfulKoala55 Thank you so much!

  
  
Posted 3 years ago

IdealPanda97 Ok, I see. Can you please run the following command, then restart the docker-compose and see if it makes any difference?
sudo chown -R 1000:1000 /opt/trains
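For readers hitting the same symptom, the full recovery sequence from this thread can be sketched as below. It is printed as a dry run rather than executed, since the real commands need root and a docker host; /opt/trains and docker-compose.yml are the defaults used in this thread, so adjust them to your install:

```shell
# Dry-run sketch of the fix discussed in this thread: re-own the data
# folder for uid 1000 (the in-container Elasticsearch user), then
# restart the stack. Printed, not executed.
TRAINS_DIR=${TRAINS_DIR:-/opt/trains}
cat <<EOF
sudo chown -R 1000:1000 $TRAINS_DIR
docker-compose -f $TRAINS_DIR/docker-compose.yml down
docker-compose -f $TRAINS_DIR/docker-compose.yml up -d
EOF
```

The `-d` flag simply runs the stack detached; drop it if you want to watch the startup logs as the original poster did.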

  
  
Posted 3 years ago