Before injecting anything into the instances you need to spin them up somehow. That is done by the autoscaler application that is running, using the credentials it was given. So the credentials need to be provided to the AWS autoscaler application somehow.
Does that make sense CostlyOstrich36? Any thoughts on how to treat this? For the time being I'm also perfectly happy to include something specific to extra_clearml_conf, but I'm not sure how to set sdk.aws.s3.credentials to be a list of dictionaries as needed.
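For reference, this is what that section looks like in a plain clearml.conf, i.e. a list of objects (host/key/secret below are placeholders):
```
sdk {
    aws {
        s3 {
            # one entry per endpoint; placeholder values
            credentials: [
                {
                    host: "ip:9000"
                    key: "xxx"
                    secret: "xxx"
                    multipart: false
                    secure: false
                },
                {
                    host: "ip2:9000"
                    key: "xxx"
                    secret: "xxx"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}
```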
So basically what I'm looking for and what I have now is something like the following:
(Local) I have a well-defined aws_autoscaler.yaml that is used to run the AWS autoscaler. That same autoscaler is also run with CLEARML_CONFIG_FILE=....
(Remotely) The autoscaler launches, listens to the predefined queue, and is able to launch instances as needed. I enqueue a remote-execution task to the autoscaler queue; the autoscaler picks it up and launches a new instance. The requirements are installed and the git credentials are copied from the aws_autoscaler.yaml. ---here is the error--- my task then fails, since the instance does not have the additional sdk.aws.s3.credentials required to access specific buckets.
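One way to get those onto the instances with the current autoscaler, assuming your aws_autoscaler.yaml exposes an extra_clearml_conf field like the repo example does (the exact key layout is an assumption), is a multi-line YAML string:
```
# somewhere in aws_autoscaler.yaml; placement of the key is an assumption
extra_clearml_conf: |
  sdk.aws.s3.credentials = [
      {
          host: "ip:9000"
          key: "xxx"
          secret: "xxx"
          multipart: false
          secure: false
      },
      {
          host: "ip2:9000"
          key: "xxx"
          secret: "xxx"
          multipart: false
          secure: false
      }
  ]
```
The block-scalar form also avoids cramming everything into a single \n-escaped string, which is where the quoting trouble further down comes from.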
Hi UnevenDolphin73, so it all works now? With the multiple credentials?
UnevenDolphin73, I think I might have skipped a beat. Are you running the autoscaler through the code example in the repo?
TimelyPenguin76 CostlyOstrich36 It seems a lot of manual configuration is required to get the EC2 instances up and running.
Would it not make sense to update the autoscaler (and example script) so that the config.yaml that's used for the autoscaler service is implicitly copied to the EC2 instances, and then any extra_clearml_conf is applied on top of it?
I would expect the service to actually implicitly inject it into new instances prior to applying the user's extra configuration 🤔
UPDATE: Apparently the quotation type matters for furl? I switched the single quotes (') to escaped double quotes (\") and it seems to work now.
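For anyone hitting the same thing, the working form is roughly (same xxx placeholders as before):
```
"extra_clearml_conf": "sdk.aws.s3.credentials = [\n{\nhost: \"ip:9000\"\nkey: \"xxx\"\nsecret: \"xxx\"\nmultipart: false\nsecure: false\n},\n{\nhost: \"ip2:9000\"\nkey: \"xxx\"\nsecret: \"xxx\"\nmultipart: false\nsecure: false\n}\n]"
```
My guess (not verified) is that single quotes are not string delimiters in the conf format, so the host value was being kept as 'ip:9000' with the quotes included, which would explain the trailing quote in the port error further down.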
Hi UnevenDolphin73, I think you need to launch multiple instances to use multiple creds.
That doesn't make sense? 🤔
Maybe I was not clear, but it's a simple part of the config file.
Which config file? The one sitting locally on your computer? You would still need to transmit that data to the application that is spinning the instances up and down. Maybe a CLI? But that would be adding more complexity on top of it. What do you think?
If I set the following:
```
"extra_clearml_conf": "sdk.aws.s3.credentials = [\n{\nhost: 'ip:9000'\nkey: 'xxx'\nsecret: 'xxx'\nmultipart: false\nsecure: false\n},\n{\nhost: 'ip2:9000'\nkey: 'xxx'\nsecret: 'xxx'\nmultipart: false\nsecure: false\n}\n]"
```
I run into a weird furl error: ValueError: Invalid port '9000''.
Since the additional credentials are available to the autoscaler when it boots up (via the config file), I thought it could use those natively?
-ish, still debugging some weird stuff. Sometimes ClearML picks ip and sometimes ip2, and I can't tell why 🤔
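No idea if this is related, but if the host-based matching is ever ambiguous, one thing that might be worth trying is pinning each entry to a bucket as well; a sketch only, the bucket names are made up and I haven't verified that this is how the lookup breaks ties:
```
sdk.aws.s3.credentials = [
    {
        # hypothetical bucket name, to make the match explicit
        bucket: "bucket-on-ip"
        host: "ip:9000"
        key: "xxx"
        secret: "xxx"
        multipart: false
        secure: false
    },
    {
        bucket: "bucket-on-ip2"
        host: "ip2:9000"
        key: "xxx"
        secret: "xxx"
        multipart: false
        secure: false
    }
]
```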
That's up and running and is perfectly fine.