I would love to help add the functionality to do either, but adding the functionality to clearml-task sounds very attractive!
Is there a preferred way of passing environment variables to a task? I'm happy to do it however works best; the best way I've found so far has been:
1. Have a local .env file on the client machine.
2. Start the script running on my local laptop; connect_configuration pulls from the .env file.
3. Now, in the ClearML UI, the configuration_objects dropdown includes the environment variables from my local machine that are needed to run the script.
4. I abort the run running from my laptop, and copy it onto ...
But instead I'd like to connect the configuration ahead of time while I'm submitting the task from the command line.
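Roughly, the flow I'm picturing is something like this (just a sketch; the project/queue names are placeholders, and I'm assuming execute_remotely keeps the connected configuration when the task gets enqueued):

```python
from clearml import Task
from dotenv import load_dotenv

task = Task.init(project_name='examples', task_name='env-from-cli')

# Attach the local .env file as a configuration object before anything runs remotely
env_file = task.connect_configuration('.env', name='env_file')
load_dotenv(dotenv_path=env_file, override=True)

# Stop executing locally and enqueue the task instead of running it on my laptop
task.execute_remotely(queue_name='default', exit_process=True)

# ...the actual script code below only runs on the remote worker...
```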
Just referring to the first step on the GitHub page, `Shut down the docker containers: docker-compose down`, before pulling the new file. I seem to be in the weird situation where the docker containers are all running (I assume, since the AMI seems to be functioning properly and I can access the web UI), but I don't have the compose file, or at least can't seem to find it in `/opt`.
Not intentional! When I launched the AMI it was running an older version of the Trains software, so I just want the old docker-compose file so I can shut all of that down, then pull and compose the new one.
If that's not necessary and I can just kill all the Docker containers currently running in the AMI and pull the new ones, I'm happy to do that!
If the answer to number 2 is no, I'd love to write a plugin.
For my own clarification: if I wanted to write a plugin that listens for events, notices when a model is set to is_ready and is a PyTorch model, runs some code to attempt to serialize it, and then stores the new, serialized model in the model repository, would that be a Model Registry Store plugin?
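Very roughly, the behavior I'm imagining is something like this, just sketched against the SDK rather than a real plugin hook (the project name, the polling approach, and the TorchScript step are all my own assumptions):

```python
import torch
from clearml import Model, OutputModel, Task

def serialize_published_pytorch_models(project_name='examples'):
    # Look for models that have been published ("ready") and report the PyTorch framework
    for model in Model.query_models(project_name=project_name, only_published=True):
        if (model.framework or '').lower() != 'pytorch':
            continue
        weights_path = model.get_local_copy()   # download the raw checkpoint
        module = torch.load(weights_path)       # assumes the checkpoint is a full nn.Module
        scripted = torch.jit.script(module)     # attempt TorchScript serialization
        scripted_path = weights_path + '.scripted.pt'
        scripted.save(scripted_path)

        # Register the serialized artifact as a new model in the repository
        task = Task.init(project_name=project_name, task_name='serialize ' + model.name)
        out = OutputModel(task=task, name=model.name + ' (torchscript)', framework='PyTorch')
        out.update_weights(weights_filename=scripted_path)
        task.close()
```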
Yes, I would like the contents of the .env file to end up in the configuration_objects
Currently, I can do this if I first run the script locally, and then enqueue it to a remote machine through the UI:
from dotenv import load_dotenv

env_file = task.connect_configuration('.env', name='env_file')
load_dotenv(dotenv_path=env_file, override=True)
Yeah! I just wanted to make sure that it made sense to tag the models for production use and then have them loaded straight out of the model repository into the production service. From looking around at the API, it definitely seems to support that use case. Really stoked to start using it and introduce a saner MLOps workflow at my workplace.
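On the serving side, the flow I have in mind is roughly this (a sketch only; the 'production' tag and project name are placeholders, and I'm assuming the stored weights load with plain torch.load):

```python
import torch
from clearml import Model

# Pull a model tagged for production straight out of the model repository
models = Model.query_models(
    project_name='examples',
    tags=['production'],
    only_published=True,
    max_results=1,  # assuming results come back newest-first
)
weights_path = models[0].get_local_copy()
model = torch.load(weights_path)  # assumes the stored weights are a loadable torch module
model.eval()
```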