I also removed 'sudo' from all the commands, as suggested in https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html , but that wasn't the cause of the problem.
LovelyHamster1 Now I see... Interesting credentials ability. Specifically, all S3 access in trains is derived from the ~/clearml.conf credentials section:
https://github.com/allegroai/clearml/blob/ebc0733357ac9ead044d0ed32d41447763f5797e/docs/clearml.conf#L73
( or the AWS S3 environment variables )
I'm not sure how this AWS feature works; I suspect it is setting the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variables on the EC2 instance. If that is the case, it should work out of the box 🙂
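If that is the mechanism, a quick way to confirm it from the instance itself is a short boto3 check (a sketch, not from the thread; it assumes boto3 is installed and no explicit keys are set, so the default credential chain is used):

import boto3

# With no keys in ~/clearml.conf or the environment, boto3 falls back to the
# instance-profile credentials served by the EC2 metadata service
session = boto3.Session()
creds = session.get_credentials()
print("credential source:", creds.method)  # "iam-role" when an instance profile is used
print([b["Name"] for b in session.client("s3").list_buckets()["Buckets"]])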
Nice, I'll try also with the extra_bash_script, thank you!
FriendlySquid61 Your solution seems to have solved the problem, but only after I removed the
export CLEARML_API_HOST={api_server}
export CLEARML_WEB_HOST={web_server}
export CLEARML_FILES_HOST={files_server}
commands from the bash script executed when the instance is launched.
Hi AgitatedDove14 , what I meant is: is it possible to associate the EC2 instances of the autoscaler with an IAM role, in order to grant permissions to the applications running on those instances, for example access to S3 buckets that can only be accessed with a certain IAM role's permissions? I'm not completely sure that what I'm saying makes sense, but I'm referring to something similar to what is described here https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
Hey LovelyHamster1 ,
This means that for some reason the agent fails to run on the newly created instances, and the instances are then terminated.
The credentials could definitely cause that.
Can you try adding the credentials as they appear in your clearml.conf?
To do so, create new credentials from your profile page in the UI, and add the entire section to the extra_trains_conf section in the following way:

extra_trains_conf = """
api {
    web_server: "<webserver>"
    api_server: "<apiserver>"
    files_server: "<fileserver>"
    credentials {"access_key": "<KEY>", "secret_key": "<SECRET>"}
}
"""
Hi Sapir, no, that didn't solve the problem unfortunately. I ssh'd into the machine (after removing the shutdown so that it doesn't terminate) and in the log I saw the error: "clearml_agent: ERROR: Connection Error: it seems *api_server* is misconfigured. Is this the ClearML API server http://apiserver:8008 ?"
So it is a credentials problem
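One quick thing to rule out plain connectivity (a sketch, not from the thread; it assumes the ClearML API server exposes the standard debug.ping endpoint on port 8008, and the server address is a placeholder):

import requests

api_host = "http://<your-server-ip>:8008"  # placeholder: your ClearML API server address
resp = requests.get(api_host + "/debug.ping", timeout=5)
print(resp.status_code, resp.text)  # anything other than a 200 points at network/host config rather than credentials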
Hey LovelyHamster1 ,
If S3 is what you're interested in, then the above would do the trick.
Note that you can attach the IAM role using instance profiles. You can read about those here:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
Once you have an instance profile, you can add it to the autoscaler using the extra_configurations section in the autoscaler: under your resource_configurations -> some resource name, add an extra_configurations section, which is a dict in the boto3 format.
See IamInstanceProfile in https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.run_instances and use the same types as in boto3.
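For illustration, such an entry might look like the following (a sketch; "clearml-s3-access" is a placeholder instance-profile name, and the assumption is that the dict is forwarded as-is to boto3's run_instances):

# Placed under resource_configurations -> <your resource name> in the autoscaler config
extra_configurations = {
    "IamInstanceProfile": {
        # boto3's run_instances() accepts either "Arn" or "Name" here
        "Name": "clearml-s3-access",  # placeholder instance-profile name
    }
}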
LovelyHamster1 what do you mean by "assume the permissions of a specific IAM Role" ?
In order to spin up an EC2 instance (AWS autoscaler) you have to have valid credentials; to pass those credentials you must create a key/secret pair for the autoscaler. There is no direct support for IAM Roles. Makes sense?
Great.
Note that instead of removing those lines you can override them using the extra_vm_bash_script
For example:

extra_vm_bash_script = """
export CLEARML_API_HOST=<api_server>
export CLEARML_WEB_HOST=<web_server>
export CLEARML_FILES_HOST=<files_server>
"""
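As a quick check on the new instance, you can verify the variables are actually set before the agent starts (a sketch; the expectation, hedged, is that these CLEARML_* variables take precedence over the values in clearml.conf):

import os

# Each of these should print the address of your own server, not "<not set>"
for var in ("CLEARML_API_HOST", "CLEARML_WEB_HOST", "CLEARML_FILES_HOST"):
    print(var, "=", os.environ.get(var, "<not set>"))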
Hi AgitatedDove14 , FriendlySquid61 ! I managed to grant permission to the AWS autoscaler to spin up instances using the instance profile as suggested by FriendlySquid61 . The instances are created and terminated correctly, however the new instances don't execute the queued task and shut down immediately. I noticed that the clearml credentials at

self.web_server = Session.get_app_server_host()
self.api_server = Session.get_api_server_host()
self.files_server = Session.get_files_server_host()

in the autoscaler code return the default configuration ( http://apiserver:8008 , http://apiserver:8080 , http://apiserver:8081 ) instead of my server IP. Could this be the problem? I created the ClearML server using the suggested AWS AMI and updated the docker-compose to the version on GitHub.
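A quick way to check which hosts the SDK resolves on the machine running the autoscaler (a sketch using the same getters quoted above; it assumes the clearml package is installed and that the clearml.conf / CLEARML_* variables are the ones you expect, and the import path is my assumption):

from clearml.backend_api.session import Session

# These should print your server addresses, not the http://apiserver:80xx defaults
print("api_server  :", Session.get_api_server_host())
print("web_server  :", Session.get_app_server_host())
print("files_server:", Session.get_files_server_host())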