Answered
[Injecting secrets into a ClearML Agent / accessing clearml.conf from within a Task]
Hi everyone, we are using the ClearML AWS Autoscaler (still awesome 😉), but we need to access some AWS S3 access credentials from within that task to stream training data.
What would be the best way to inject these secrets into an agent? (Without just putting them in the code, of course.)
Can I somehow access the clearml.conf content from within the Task (which runs inside a Docker container)? Is there a way to inject environment variables into a Task or into its container? There is the extra_vm_bash_script option, but if I create variables there, they won’t make it into the container, right?

  
  
Posted 2 years ago

Answers 16


SuccessfulKoala55 AgitatedDove14 So I’ve tried the approach and it does work. However, this of course results in the credentials being visible in the ClearML web interface output, which comes close to just hard-coding them in…
Is there any way to send the secrets safely?
Is there any way to access the clearml.conf file contents from within code? (AFAIK, the file does not get sent over to the container; otherwise I could just read it in as YAML myself…)

  
  
Posted 2 years ago

Won’t they be printed out as well in the Web UI? That shows the full Docker command for running the task, right?

  
  
Posted 2 years ago

Yes, for example, or some other way to get the credentials over to the container safely, without them showing up in the checked-in code or the web UI.

  
  
Posted 2 years ago

when I duplicate the experiment and run it remotely, the call is ignored and the recorded values are used?

Yes ScantChimpanzee51, exactly.
Think of it as the initial value you want to put on the Task when you are running the code on your machine. Later, when you clone the Task, you can edit the base docker image in the UI (or with the API), and the new value is used when the agent spins up the Task. To avoid the actual docker image (the one you changed in the UI) being overwritten by this set_base_docker call, the call is essentially skipped when running remotely.
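A minimal sketch of that flow, assuming hypothetical project, task, and image names (the docker_arguments call mirrors the ones quoted elsewhere in this thread):

```python
from clearml import Task

# Hypothetical project/task names for illustration.
task = Task.init(project_name="examples", task_name="secrets-demo")

# Recorded as the initial value when this script runs locally.
# When the task is cloned and executed by an agent, this call is skipped,
# so whatever was edited in the UI (or via the API) is what the agent uses.
task.set_base_docker(
    docker_image="python:3.10",  # hypothetical base image
    docker_arguments="-e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=...",
)
```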

  
  
Posted 2 years ago

Sorry to ask again, but the values are still showing up in the WebUI console logs this way (see screenshot.)
Here is the config that I paste into the EC2 Autoscaler Setup:
```
agent {
    extra_docker_arguments: ["-e AWS_ACCESS_KEY_ID=XXXXXX", "-e AWS_SECRET_ACCESS_KEY=XXXXXX"]

    hide_docker_command_env_vars {
        enabled: true
        extra_keys: ["AWS_SECRET_ACCESS_KEY"]
        parse_embedded_urls: true
    }
}
```
Never mind, it came from setting the options wrong: it has to be ["-e", "key=...", ...]. Everything working now ✔
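For reference, a sketch of the corrected agent section with each flag and its value passed as separate list items, as described above (the credential names mirror the ones used in this thread; values are placeholders):

```
agent {
    extra_docker_arguments: ["-e", "AWS_ACCESS_KEY_ID=XXXXXX", "-e", "AWS_SECRET_ACCESS_KEY=XXXXXX"]

    hide_docker_command_env_vars {
        enabled: true
        extra_keys: ["AWS_SECRET_ACCESS_KEY"]
        parse_embedded_urls: true
    }
}
```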

  
  
Posted 2 years ago

Hi SuccessfulKoala55 , thanks for getting back to me!
In the docs of Task.set_base_docker() it says “When running remotely the call is ignored”. Does that mean that this function call is executed when running locally to “record” the arguments, and then, when I duplicate the experiment and run it remotely, the call is ignored and the recorded values are used?

  
  
Posted 2 years ago

Although, one correction here: while the secret is indeed hidden in the logs, it is still visible in the “execution” tab of the experiment, see the two screenshots below.
Once again, I set them with
task.set_base_docker(docker_arguments=["..."])

  
  
Posted 2 years ago

That was the missing piece - thank you!
It’s awesome how many details you have considered in ClearML 😉

  
  
Posted 2 years ago

ScantChimpanzee51 if you set these in the agent's configuration under agent.extra_docker_arguments and list the keys in agent.hide_docker_command_env_vars, they should appear hidden in the console log. If you set them by code ( task.set_base_docker(docker_arguments=["..."] ...) ), they will still appear in the WebUI. (BTW, we have a feature coming up that will hide them in the UI until you actually want to edit them, to prevent some passer-by from seeing them 🙂)

  
  
Posted 2 years ago

Ahhh, ok got it! Thanks 👍

  
  
Posted 2 years ago

Hi ScantChimpanzee51 , you can set the task container's arguments, which can include any docker run command-line arguments. For your case, you can use -e KEY=VALUE as many times as you need. These will be added to the docker command and will be available inside the container as environment variables.
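Inside the container, the injected values are then just ordinary environment variables the task code can read directly (the variable names below mirror the AWS ones used in this thread):

```python
import os

# Each "-e KEY=VALUE" passed to docker run becomes a normal environment
# variable inside the container, so the task code can read it directly.
aws_access_key = os.environ.get("AWS_ACCESS_KEY_ID")
aws_secret_key = os.environ.get("AWS_SECRET_ACCESS_KEY")
```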

  
  
Posted 2 years ago

You could use the agent.extra_docker_arguments configuration section in the clearml.conf file to get the same result you've reached before... this should work, I think 🙂 - see here https://github.com/allegroai/clearml-agent/blob/26e62da1a8fb4b66361a5d6a4d852d50b1e0a158/docs/clearml.conf#L155

  
  
Posted 2 years ago

Won’t they be printed out as well in the Web UI?

They would in the log, but it will not be stored back on the Task (the idea is these are "agent specific" additions no need for them to go with the Task)

So I’ve tried the approach and it does work,

ScantChimpanzee51 What do you mean it does not work? What exactly are you trying with task.connect that does not work?

Is there a way to inject environment variables into a Task or into its container?

Yes you can with:
task.set_base_docker(docker_image="mycontainer", docker_arguments="-e KEY=VAL")
What am I missing?

  
  
Posted 2 years ago

Hey guys, really appreciating the help here!
So what I meant by “it does work” is that the environment variables go through to the container; I can use them there and everything runs.

The remaining problem is that this way, they are visible in the ClearML web UI, which is potentially unsafe / bad practice (see screenshot below).

  
  
Posted 2 years ago

You mean getting them somehow into the container by specifying them in the clearml.conf file you provide to the agent?

  
  
Posted 2 years ago

The remaining problem is that this way, they are visible in the ClearML web UI, which is potentially unsafe / bad practice (see screenshot below).

Ohhh that makes sense now, thank you 🙂
Assuming these are one-time credentials for every agent, you can add these arguments in the "extra_docker_arguments" section in clearml.conf.
Then make sure they are also listed in hide_docker_command_env_vars, which should cover the console log as well:
https://github.com/allegroai/clearml-agent/blob/26e62da1a8fb4b66361a5d6a4d852d50b1e0a158/docs/clearml.conf#L239

  
  
Posted 2 years ago