Hi SmarmyDolphin68
You have two options:
Automatically upload the models when training by passing output_uri to Task.init. For example, output_uri=True will upload to the clearml-server, output_uri='s3://bucket/folder' will upload to S3, etc.
Manually upload a model that you have locally: https://github.com/allegroai/clearml/blob/9ff52a8699266fec1cca486b239efa5ff1f681bc/examples/reporting/model_config.py#L37
AgitatedDove14 , Let me clarify what I meant: let's say I have all the data, like checkpoints, test and train logdirs, and the scripts that were used to train a model. How would I upload all of that to the ClearML server without retraining the model, so that the 'Scalars', 'Debug Samples', 'Hyperparameters', everything shows up on the ClearML server like they generally do?
SmarmyDolphin68 sadly, if this was not executed with trains (i.e. the offline option of trains), this is not really doable (I mean it is, if you write some code and parse the TB 😉 but let's assume this is way too much work)
A few options:
On the next run, use the clearml OFFLINE option (i.e. in your code call Task.set_offline(), or set the env variable CLEARML_OFFLINE_MODE=1).
You can compress and upload the checkpoint folder manually, by passing the checkpoint folder, see https://github.com/allegroai/clearml/blob/fa77d0f380739c9d56e9e0bce8a8a9bfafb339f4/examples/reporting/model_config.py#L37
wdyt?
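Roughly, the two options could look like this (project/task names and paths are placeholders, not from the thread):
```
from clearml import Task

# Option 1: offline mode. Nothing is sent to the server; the run is written to a
# local session zip that can be imported into the server afterwards.
Task.set_offline(offline_mode=True)   # or: export CLEARML_OFFLINE_MODE=1
task = Task.init(project_name='examples', task_name='offline run')  # placeholders
# ... training, TB logging, checkpoint saving ...
# Later, from a machine with server access, import the printed session zip:
# Task.import_offline_session('/path/to/offline_session.zip')
```
```
from clearml import Task, OutputModel

# Option 2: register an existing checkpoint folder on a regular (online) task.
task = Task.init(project_name='examples', task_name='register checkpoints')  # placeholders
model = OutputModel(task=task)
model.update_weights_package(weights_path='/path/to/checkpoint_dir')  # packaged and uploaded
```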
Right, parsing the TB is too much work, I'll look into the material you sent. Thanks!