Trying to enqueue a task through the UI, getting this error - what could it be? (Running on AWS, on the official Trains AMI)

Trying to enqueue a task through the UI, getting this error - what could it be?

(Running on AWS, on the official trains AMI)

` Error 100 : General data error:
err=('1 document(s) failed to index.',
  [{'index': {'_index': 'queue_metrics_d1bd92a3b039400cbafc60a7a5b1e52b_2020-12',
              '_type': '_doc', '_id': 'Qu1DW3YBl-ZxV1F9pW4R', 'status': 403,
              'error': {'type': 'cluster_block_exception',
                        'reason': 'index [queue_metrics_d1bd92a3b039400cbafc60a7a5b1e52b_2020-12] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];'},
              'data': {'timestamp': 1607848928527, 'queue': '2cf9c066daaa4184ab54271a212bdad7',
                       'average_waiting_time': 0, 'queue_length': 0}}}]),
extra_info=index [queue_metrics_d1bd92a3b039400cbafc60a7a5b1e52b_2020-12] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]; `

Posted 4 years ago

23 Answers


what should I paste here to diagnose it?

Posted 4 years ago

Depends on the state of your hard drive

Posted 4 years ago

Well, you can inspect the ES logs to find out why ES still locks up when there's 1.5GB free even though the watermark is set to 0.5GB. However, 8GB of storage is really the absolute minimum for the machine, and I suggest increasing it. The current price on AWS is $0.08 per GB, so personally I think 50GB is a very reasonable number.

Posted 4 years ago

And it depends on what takes up the most space

Posted 4 years ago

I mean, I barely have 20 experiments

Posted 4 years ago

sudo ?

Posted 4 years ago

🙂

Posted 4 years ago

How large is your EBS disk?

Posted 4 years ago

It would be useful to create a disk-usage tree detailing the usage under the /opt/trains folder, just so you get a feel for what takes up the most space (uploaded files, experiment statistics, etc.)

Posted 4 years ago

When spinning up the AMI I just went with the Trains recommended settings

Posted 4 years ago

but I can't seem to run docker-compose down

Posted 4 years ago

SuccessfulKoala55 AppetizingMouse58

` [ec2-user@ip-10-0-0-95 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  880K  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/nvme0n1p1  8.0G  6.5G  1.5G  82% /
tmpfs           790M     0  790M   0% /run/user/1000 `

Posted 4 years ago

Increased to 20, let's see how long it will last 🙂

Posted 4 years ago

(it works now, with 20 GB)

Posted 4 years ago

what should I paste here to diagnose it?

Well, you can find a Linux command that lists the X largest folders/files and see what's taking the most disk space

Posted 4 years ago

Now I see the watermarks are 2GB

Posted 4 years ago

I guess the AMI auto-updated

Posted 4 years ago

I get this
` [ec2-user@ip-10-0-0-95 ~]$ docker-compose down
WARNING: The TRAINS_HOST_IP variable is not set. Defaulting to a blank string.
WARNING: The TRAINS_AGENT_GIT_USER variable is not set. Defaulting to a blank string.
WARNING: The TRAINS_AGENT_GIT_PASS variable is not set. Defaulting to a blank string.
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?

If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable. `

Posted 4 years ago

why does it deplete so fast?

Posted 4 years ago

I know, but we really provide the bare minimum since people usually want to try it out and I assume most are price-conscious... I guess we can explain that in the documentation 🙂

Posted 4 years ago

Hi Elior, chances are that you do not have enough space for Elasticsearch on your storage. Please check the ES logs and increase the available disk space.

Posted 4 years ago

You can disable the auto-update feature if you'd like to keep your own custom docker-compose.yml file

Posted 4 years ago

This error just keeps coming back... I already made the watermarks like 0.5GB

Posted 4 years ago