Answered
Is there a way to hand over the credentials (e.g. AWS secret access key) to the aws_autoscaler script in an encrypted way, or at least mask them in the Web UI?

Hi,
I am currently playing around with the aws_autoscaler (script) in the open-source version. I was wondering if there is a way to hand over the credentials (e.g. AWS secret access key) in an encrypted way, or at least mask them in the Web UI?

  
  
Posted one year ago

Answers 7


VexedStork84 I think this is planned for one of the next versions (masking in the UI).
You can also make sure the credentials are set on the machine running the autoscaler using env vars; boto3 should be able to pick them up, I think.
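For example (just a sketch; the AWS_* names are the standard variables boto3's default credential chain reads, and the values are placeholders):

import os
import boto3

# Standard AWS environment variables picked up by boto3's default credential chain.
# Values here are placeholders - set the real ones on the machine running the autoscaler.
os.environ["AWS_ACCESS_KEY_ID"] = "AKIA..."
os.environ["AWS_SECRET_ACCESS_KEY"] = "..."
os.environ["AWS_DEFAULT_REGION"] = "eu-west-1"  # assumed region, adjust as needed

# Quick check that boto3 resolves credentials from the environment:
session = boto3.Session()
creds = session.get_credentials()
print("Resolved access key:", creds.access_key)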

  
  
Posted one year ago

Looks like the bootstrap is broken: the AMI in the documentation is deprecated, but there are some hard constraints on the image (I just used a basic Amazon AMI, which failed with Docker missing, etc.).

  
  
Posted one year ago

Which AMI are you referring to?

  
  
Posted one year ago

Removing the AWS credentials from the aws_autoscaler.yaml and setting them as env variables seems to work, at least for the local version using the --run parameter. Took me a while because I needed to fiddle in the SubnetId using the extra_configurations field, which is not documented... 😄
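For reference, roughly how I understand extra_configurations to be applied - a sketch only, assuming its entries are merged into the boto3 run_instances call (the instance type and subnet ID below are placeholders, not the autoscaler's actual internals):

import boto3

# Sketch of a resource entry; extra_configurations carries raw EC2 run_instances
# kwargs (here SubnetId). Values are placeholders.
resource = {
    "ami_id": "ami-0b920b0594b5288fb",
    "instance_type": "m5.xlarge",  # assumed instance type
    "extra_configurations": {"SubnetId": "subnet-0123456789abcdef0"},
}

launch_kwargs = {
    "ImageId": resource["ami_id"],
    "InstanceType": resource["instance_type"],
    "MinCount": 1,
    "MaxCount": 1,
}
# Assumption: extra_configurations is merged on top of the autoscaler's own kwargs,
# so any run_instances parameter (like SubnetId) can be injected this way:
launch_kwargs.update(resource.get("extra_configurations", {}))

ec2 = boto3.client("ec2")
# ec2.run_instances(**launch_kwargs)  # left commented out; this would launch a real instance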

But now I have encountered some funny behaviour. A worker node is scheduled and according to the autoscaler logs I would say it is assigned to the correct queue:

2022-11-18 14:34:34,590 - clearml.auto_scaler - INFO - Idle for 120.00 seconds
ClearML Monitor: Could not detect iteration reporting, falling back to iterations as seconds-from-start
2022-11-18 14:36:35,106 - clearml.auto_scaler - INFO - Found 1 tasks in queue 'autoscaler_test_machines'
2022-11-18 14:36:35,207 - clearml.auto_scaler - INFO - resources: {'AutoscalerTest': 'autoscaler_test_machines'}
2022-11-18 14:36:35,208 - clearml.auto_scaler - INFO - idle worker: {}
2022-11-18 14:36:35,208 - clearml.auto_scaler - INFO - up machines: defaultdict(<class 'int'>, {'AutoscalerTest': 1})

However, in the Web UI the worker does not show up and the task does not get picked up. Any idea what went wrong?

  
  
Posted one year ago

The https://github.com/allegroai/clearml/blob/master/examples/services/aws-autoscaler/aws_autoscaler.py example references AMI "ami-04c0416d6bd8e4b1f", which does not exist anymore (referencing AMIs by ID might not be the best idea anyway, but that is a different story). So I used a plain amazon-linux-2 AMI (ami-0b920b0594b5288fb), which led to errors because of missing dependencies like Docker.
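If hard-coded IDs are the problem, one alternative (just a sketch, not what the example currently does) is resolving the current Amazon Linux 2 AMI at runtime from AWS's public SSM parameter:

import boto3

# AWS publishes the latest Amazon Linux 2 AMI ID under a public SSM parameter,
# so the ID does not have to be hard-coded; the result depends on the region.
ssm = boto3.client("ssm", region_name="eu-west-1")  # assumed region
param = ssm.get_parameter(
    Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
)
ami_id = param["Parameter"]["Value"]
print("Latest Amazon Linux 2 AMI:", ami_id)

That still leaves the missing dependencies like Docker to be installed, e.g. in the instance's init/user-data script.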

  
  
Posted one year ago

Thanks for the fast response.
If I understood you correctly, I could set the credentials using env variables on e.g. the ClearML Server (if I use the "service" queue there) and omit them in the aws_autoscaler.yaml file? Wouldn't that make the autoscaler complain about missing credentials, if I don't mess around in the code?

  
  
Posted one year ago

I'm not sure, but you can check - you can also fix that and submit a PR 🙂

  
  
Posted one year ago