No, it doesn't; the agent has its own clearml.conf file.
I'm not too familiar with clearml on docker, but I do remember there are config options to pass some environment variables to docker.
You can then set your environment variables in any way you'd like before the container starts.
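If it helps, a rough sketch of the kind of agent-side setting I mean (assuming the agent's extra_docker_arguments option; MY_SECRET is just a placeholder name):
```
# in the agent's own clearml.conf
agent {
    # extra arguments appended to the `docker run` command; `-e NAME` without a
    # value forwards NAME from the agent's own environment into the container
    extra_docker_arguments: ["-e", "MY_SECRET"]
}
```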
yes, a lot of moving pieces here as we're trying to migrate to AWS and set up autoscaler and more 😅
Thanks AgitatedDove14, I'll give it a try. Perhaps additional documentation is needed for that extra_layout
Sorry, not necessarily RBAC (although that is tempting 😉), but for now I was just wondering whether an average Joe user has access to see the list of "registered users"?
Hey AgitatedDove14 🙂
Finally managed it; you kept saying "all projects", but you meant the "All Experiments" project instead. That's a good start 👍 Thanks!
A couple of thoughts from this experience:
- Could we add a comparison feature directly from the search results (Dashboard view -> search -> highlight some experiments for comparison)?
- Could we add a filter on the project name in the "All Experiments" project?
- Could we add the project for each of the search results? (see above picture)
Answering myself for future interested users (at least GrumpySeaurchin29 I think you were interested):
You can "hide" (explained below) secrets directly in the agent 😁 :
When you start the agent listening to a specific queue (i.e. the services worker), you can specify additional environment variables by prefixing them to the execution, e.g. FOO='bar' clearml-agent daemon ....
Modify the example AWS autoscaler script: after the driver = AWSDriver.from_config(conf) line, inject ... (rough sketch of the idea below)
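Something along these lines (a simplified sketch, not the exact code; the variable names and the extra_vm_bash_script field name are assumptions):
```python
import os

# Rough sketch: read the secret values from the agent's own environment
# (they were set by prefixing them to `clearml-agent daemon ...`) and turn
# them into export lines that can be appended to the startup script of the
# instances the autoscaler spins up (e.g. its extra_vm_bash_script field).
SECRET_NAMES = ["FOO", "MY_API_TOKEN"]  # placeholder variable names


def build_exports(names):
    # values come from the agent process environment, so they never have to be
    # written into the autoscaler configuration that is visible in the WebUI
    return "\n".join('export {}="{}"'.format(n, os.environ.get(n, "")) for n in names)


extra_vm_bash_script = build_exports(SECRET_NAMES)
```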
Yes, using this extra_clearml_conf parameter you can add configuration
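To illustrate roughly what that means (keys and values below are placeholders): whatever text you put in extra_clearml_conf gets appended to the clearml.conf of the instances the autoscaler launches, e.g.:
```python
# Placeholder sketch: this string would be appended to the clearml.conf of the
# EC2 instances the autoscaler spins up. Note it lives in the autoscaler
# configuration, i.e. anyone who can open it in the WebUI can read the values.
extra_clearml_conf = """
sdk.aws.s3.key: "AKIA-PLACEHOLDER"
sdk.aws.s3.secret: "PLACEHOLDER-SECRET"
"""
```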
This is again exposing the environment variables on the WebUI for everyone to see.
The idea was to specify just the names of the environment variables, so that they would be exposed automatically to the EC2 instance without specifying what values they should have (the value is taken from the agent running the scaler)
CostlyOstrich36 I'm not sure what you mean by "through the apps", but AFAICS any script would expose the values of these environment variables; or am I missing something?
AFAIK that's the only way right now (see my comment here - https://clearml.slack.com/archives/CTK20V944/p1657720159903739?thread_ts=1657699287.630779&cid=CTK20V944 )
Or, if you have the ClearML paid service, I believe there is a "vaults" service, right AgitatedDove14?
I just ran into this too recently. Are you passing these also in the extra_clearml_conf for the autoscaler?
I just set the git credentials in the clearml.conf and it works out of the box
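For reference, roughly what I mean, in the agent's clearml.conf (placeholder values):
```
agent {
    # credentials the agent uses to clone private repositories
    git_user: "my-git-username"
    git_pass: "my-personal-access-token"
}
```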
I will TIAS (try it and see), but it might be worthwhile to also mention whether it has to be an absolute path or if a relative path is fine too!
I think so, it was just missing from the official documentation 🙂 Thanks!
It could be related to the ClearML agent or server then. We temporarily upload a given .env file to an internal S3 bucket (cache), then switch to remote execution. When the remote execution starts, it first looks for this .env file, downloads it using StorageManager, loads it with dotenv, and then continues execution normally
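Roughly like this (a simplified sketch, not the exact code; project, queue, and bucket names are placeholders):
```python
from clearml import StorageManager, Task
from dotenv import load_dotenv

ENV_URL = "s3://some_ip:9000/clearml/cache/.env"  # placeholder bucket/key

task = Task.init(project_name="examples", task_name="dotenv-demo")

if Task.running_locally():
    # push the local .env to the cache bucket, then requeue for remote execution
    StorageManager.upload_file(local_file=".env", remote_url=ENV_URL)
    task.execute_remotely(queue_name="default")

# on the remote worker: pull the .env file back down and load it before continuing
local_env = StorageManager.get_local_copy(remote_url=ENV_URL)
load_dotenv(local_env)
```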
StorageManager.download_folder(remote_url='s3://some_ip:9000/clearml/my_folder_of_interest', local_folder='./') yields a new folder structure, ./clearml/my_folder_of_interest, rather than just ./my_folder_of_interest
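In other words (placeholder endpoint and folder names):
```python
from clearml import StorageManager

StorageManager.download_folder(
    remote_url="s3://some_ip:9000/clearml/my_folder_of_interest",
    local_folder="./",
)
# expected: ./my_folder_of_interest/...
# actual:   ./clearml/my_folder_of_interest/...  (the bucket name becomes a path component)
```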
That's weird; the concept of a "root directory" is defined relative to a bucket. There is no "root dir" in S3, is there? It only exists within a bucket itself.
And since the documentation states: if we have a remote file s3://bucket/sub/file.ext, then StorageManager.download_folder('s3://bucket/', '~/folder/') will create ~/folder/sub/file.ext
Then I would have expected the same outcome from MinIO as I do with S3, or Azure, or any other blob container
Sounds like incorrect parsing on the ClearML side then, doesn't it? At least, it does not fully support MinIO.
I don't imagine AWS users get a new folder named aws-key-region-xyz-bucket-hostname when they download_folder(...) from an AWS S3 bucket, or do they? 🤔
I see that the GUI AutoScaler is only in the paid version; I wonder why the GCP driver is not open source?
Hmmm, maybe 🤔 I thought that was expected behavior on the poetry side, actually
That's probably in the newer ClearML server pages then; I'll still have to wait 😅
Running a self-hosted server indeed. It's part of some code that simply adds or uploads an artifact 🤔