But how do you specify the data, hyperparameters, and input/output models to use when the agent runs the experiment?
They are auto-detected if you are using argparse / Hydra / python-fire / etc.
The first time you run the code (either locally or with an agent), it will add the hyperparameter section for you.
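As a minimal sketch of that auto-detection (script and argument names here are illustrative, not from the thread), any plain argparse script works; the ClearML calls are commented out so the sketch runs standalone:

```python
# Sketch: a training script whose argparse arguments ClearML picks up
# automatically as hyperparameters once Task.init is called.
from argparse import ArgumentParser


def main(argv=None):
    parser = ArgumentParser()
    parser.add_argument("--batch_size", type=int, default=64)
    parser.add_argument("--epochs", type=int, default=1)
    args = parser.parse_args(argv)

    # Assumes the clearml package is installed; with these two lines
    # enabled, batch_size and epochs appear in the task's
    # CONFIGURATION > HYPERPARAMETERS > Args section automatically.
    # from clearml import Task
    # task = Task.init(project_name="keras_examples", task_name="remote_test")

    return args
```

When the agent later re-runs the task, the values edited in the UI (or passed via `--args`) are injected back into `args` in place of the defaults.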
That said, you can also provide it as part of the clearml-task command with --args.
(BTW: clearml-task --help will list all the options, https://clear.ml/docs/latest/docs/apps/clearml_task#command-line-options )
clearml-task --project keras_examples --name remote_test --repo --branch master --script /webinar-0620/keras_mnist.py --args batch_size=64 epochs=1 --queue default
Thank you for your solution! I have an idea: deploy ClearML Server on the storage server, then upload multiple networks such as UNet (including code, running environment, hyperparameters, weights, etc.) that have been modified and tested locally to the ClearML Server. Then install ClearML Agent on the training server to run the network training tracked by the ClearML Server. Is this feasible?
Hi WearyChicken64 ,
I'm not sure why you refer to the server as a storage server - it sounds like you have a server machine and a training machine. The simplest solution is to install the ClearML Server on the server machine ("storage server") and install the ClearML Agent on the training machine ("training server").
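As a rough sketch of that two-machine setup (queue name and layout are illustrative; clearml-agent init and clearml-agent daemon are the standard agent commands):

```shell
# On the server machine ("storage server"):
# run the ClearML Server (e.g. its docker-compose deployment) and note
# the web / api / files URLs it exposes.

# On the training machine ("training server"):
clearml-agent init                    # interactive: enter server URLs + credentials
clearml-agent daemon --queue default  # pull queued tasks and execute them
```

Experiments enqueued from anywhere (e.g. via clearml-task --queue default) are then picked up and trained on the training machine, with results reported back to the server.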