Answered

Hey there, I would like to increase the ulimit for the number of files opened at the same time in an EC2 instance. According to this https://stackoverflow.com/questions/11342167/how-to-increase-ulimit-on-amazon-ec2-instance , I would simply need to execute in the EC2 instance:
sudo echo "\n* soft nofile 65535\n* hard nofile 65535" >> /etc/security/limits.conf
sudo reboot
I guess I would need to put this in the extra_vm_bash_script param of the auto-scaler, but it will reboot in a loop, right? Isn’t there an easier way to achieve that?
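A note on the quoted commands: in `sudo echo ... >> /etc/security/limits.conf` the `>>` redirection is performed by the caller's shell, not by the root process, so it fails with "Permission denied". A minimal sketch of a form that does work (same limit values; the actual append is shown as a comment since it needs root):

```shell
# Build the two limits lines; printf is used because echo's handling
# of \n escapes varies between shells
LIMITS='* soft nofile 65535
* hard nofile 65535'
# The append must run inside a root shell; on the instance you would do:
#   sudo sh -c "printf '%s\n' \"$LIMITS\" >> /etc/security/limits.conf"
printf '%s\n' "$LIMITS"
```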

  
  
Posted 3 years ago

Answers 18


BTW: for future reference, if you set the ulimit in the bash, all processes created after that should have the new ulimit
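That inheritance is easy to see in a shell: a limit set with `ulimit` applies to every child process started afterwards (using a lowered soft limit as the illustration, since lowering never needs extra privileges):

```shell
# Lower the soft nofile limit in this shell; children inherit it
ulimit -S -n 512
# A child shell started afterwards reports the inherited value
sh -c 'ulimit -S -n'   # prints 512
```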

  
  
Posted 3 years ago

Sure thing 🙂

  
  
Posted 3 years ago

thanks for your help anyway AgitatedDove14 !

  
  
Posted 3 years ago

So actually I don’t need to play with this limit, I am OK with the default for now

  
  
Posted 3 years ago

because at some point it introduces too much overhead I guess

  
  
Posted 3 years ago

it actually looks like I don’t need such a high number of files opened at the same time

  
  
Posted 3 years ago

by replacing the pid with $PID ?

  
  
Posted 3 years ago

now how to adapt to do it from extra_vm_bash_script ?

  
  
Posted 3 years ago

that works from within the ssh session

  
  
Posted 3 years ago

Set it on the PID of the agent process itself (i.e. the clearml-agent python process)
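For reference, on Linux the limits of an already-running process can be changed with `prlimit` from util-linux, so no restart of the process is required. A sketch (the `pgrep` pattern for locating the agent PID is an assumption):

```shell
# Read the current nofile limit of this shell's own process
prlimit --pid $$ --nofile
# Hypothetical: raise it for a running clearml-agent (needs root):
#   PID=$(pgrep -f clearml-agent)
#   sudo prlimit --pid "$PID" --nofile=65535:65535
```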

  
  
Posted 3 years ago

Give me a minute

  
  
Posted 3 years ago

yes please, I think indeed that’s the problem

  
  
Posted 3 years ago

I think you cannot change it for a running process, do you want me to check for you if this can be done ?

  
  
Posted 3 years ago

mmmh it fails, but if I connect to the instance and execute ulimit -n , I do see 65535, while the tasks I send to this agent fail with:
OSError: [Errno 24] Too many open files: '/root/.commons/images/aserfgh.png'
and from the task itself, I run:
import subprocess
print(subprocess.check_output("ulimit -n", shell=True))
which gives me in the logs of the task:
b'1024'
So nofile is still 1024, the default value, but not when I ssh, damn. Maybe rebooting would work
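The same check can be done from inside the task without spawning a shell, via Python's stdlib `resource` module:

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors;
# getrlimit returns the (soft, hard) pair for the current process
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)
```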

  
  
Posted 3 years ago

I think this should work 🤞

  
  
Posted 3 years ago

I will try adding
sudo sh -c "echo '\n* soft nofile 65535\n* hard nofile 65535' >> /etc/security/limits.conf"
to the extra_vm_bash_script, maybe that’s enough actually

  
  
Posted 3 years ago

I guess I would need to put this in the extra_vm_bash_script param of the auto-scaler, but it will reboot in loop right? Isn’t there an easier way to achieve that?

You can edit the extra_vm_bash_script, which means the next time an instance is booted the bash script will be executed.
In the meantime, you can ssh to the running instance and change the ulimit manually, wdyt?

  
  
Posted 3 years ago