Removing the AWS credentials from aws_autoscaler.yaml and setting them as environment variables seems to work, at least for the local version using the --run parameter. It took me a while because I had to pass the subnet ID in via the extra_configurations field, which is not documented... 😄
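In case it helps, here is roughly what I ended up with. The values are placeholders, and the SubnetId key under extra_configurations is just my assumption about what gets forwarded to the EC2 launch call, so take it as a sketch rather than the documented layout:

```
# Credentials only via environment now (removed from aws_autoscaler.yaml):
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

# Relevant excerpt of the resource entry in aws_autoscaler.yaml
# (placeholder values; SubnetId pass-through is my assumption):
resource_configurations:
  AutoscalerTest:
    instance_type: t3.medium
    ami_id: ami-xxxxxxxx
    availability_zone: eu-central-1a
    extra_configurations:
      SubnetId: subnet-xxxxxxxx
```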
But now I have run into some odd behaviour. A worker node is scheduled, and according to the autoscaler logs it is assigned to the correct queue:
2022-11-18 14:34:34,590 - clearml.auto_scaler - INFO - Idle for 120.00 seconds
ClearML Monitor: Could not detect iteration reporting, falling back to iterations as seconds-from-start
2022-11-18 14:36:35,106 - clearml.auto_scaler - INFO - Found 1 tasks in queue 'autoscaler_test_machines'
2022-11-18 14:36:35,207 - clearml.auto_scaler - INFO - resources: {'AutoscalerTest': 'autoscaler_test_machines'}
2022-11-18 14:36:35,208 - clearml.auto_scaler - INFO - idle worker: {}
2022-11-18 14:36:35,208 - clearml.auto_scaler - INFO - up machines: defaultdict(<class 'int'>, {'AutoscalerTest': 1})
However, the worker does not show up in the Web UI and the task does not get picked up. Any idea what went wrong?