EcstaticBaldeagle77 , Actually, these scalars and configurations are not saved locally to a file, but they can be retrieved and saved manually. If you want the metrics, call task.get_reported_scalars(), and if you want a configuration, call task.get_configuration_object() with the configuration section name as it appears in the web application.
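For example, something like this should work (a minimal sketch; the task ID and output file name are placeholders):

import json
from clearml import Task

# fetch an existing task by its ID (you can copy the ID from the web UI)
task = Task.get_task(task_id='<your_task_id>')

# scalars come back as a nested dict of titles/series with lists of x/y values
scalars = task.get_reported_scalars()
with open('scalars.json', 'w') as f:
    json.dump(scalars, f, indent=2)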
Hi EcstaticBaldeagle77 ,
The comment says “Connecting ClearML with the current process, from here on everything is logged automatically.”
This comment means that every framework is now patched and will report to ClearML too. This can be configured (per task) with the auto_connect_frameworks argument of your Task.init call (an example can be found here - https://clear.ml/docs/latest/docs/faq#experiments )
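For example (the project and task names here are just placeholders):

from clearml import Task

# disable auto-logging for a specific framework while keeping the rest,
# or pass auto_connect_frameworks=False to turn the patching off entirely
task = Task.init(
    project_name='examples',
    task_name='my_experiment',
    auto_connect_frameworks={'pytorch': False},
)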
Q2: Can I dump these logged keys & values as local files?
Not sure what you mean here, but you can connect parameters to your task with the task.connect() method and configuration files with the task.connect_configuration() method.
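Something like this (a minimal sketch; the parameter values and the config file name are placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='connect_demo')

# hyper-parameters: connect() logs the dict and returns it,
# so the values can be overridden from the web UI on remote runs
params = {'lr': 0.001, 'batch_size': 32}
params = task.connect(params)

# configuration file: connect_configuration() logs the file content
# under the task's CONFIGURATION section in the web UI
config_path = task.connect_configuration('model_config.yaml', name='model config')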
EcstaticBaldeagle77 , Can you share an example of:
self.log("key_name", value) that you save? (not 100% sure what self is 🙂 )
Hi AnxiousSeal95 , thanks for your help.
self.log("key_name", value) just means self.log("train_loss", loss) or self.log("valid_loss", loss) in the example source code 😅
As for your second question, you can connect files as TimelyPenguin76 suggested. It's also possible to retrieve configurations from ClearML and dump them as a file. Is that what you're looking for?
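For example (a sketch; the task ID, section name, and output path are placeholders - the local path is whatever you choose when writing the file):

from clearml import Task

task = Task.get_task(task_id='<your_task_id>')

# use the configuration section name as it appears in the web UI, e.g. 'General'
config_text = task.get_configuration_object('General')

# write it to any local path you like
with open('/tmp/my_task_config.txt', 'w') as f:
    f.write(config_text or '')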
If the configurations include the values logged with self.log("train_loss", loss) and self.log("valid_loss", loss), then that's what I am looking for. Can I manually set the local path they are dumped to?
What the automagic integration provides is that all the parameters of your pl trainer are automatically fetched and populated, as well as when you call this function:

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    loss = F.cross_entropy(y_hat, y)
    self.log('valid_loss', loss)
The call to "self.log()" is fetched and reported as a metric (with the name "valid_loss")