hardware monitoring etc.
This is averaged and sent only every 30 seconds, so not a lot of calls.
I just saw that I went through the first 200k API calls rather fast, so that is how I rationalized it.
Yes, that kind of makes sense.
Once every 2000 steps, which is every few seconds. So in theory those ~20 scalars should be batched since they are reported more or less at the same time. It's a bit odd that the API calls added up so quickly anyway.
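To make the "calls added up quickly" point concrete, here is a back-of-the-envelope sketch using only the numbers mentioned in this thread (~20 scalars, a 2-second flush, a 200k-call quota). The assumption that one flush maps to one batched API call (or, in the worst case, one call per scalar) is mine, not confirmed by ClearML:

```python
# Rough estimate of how fast scalar reporting burns through an API-call quota.
# Assumptions (hypothetical, from the discussion): ~20 scalars reported
# together, flushed every `flush_period_sec` seconds while data is pending.

def flushes_per_hour(flush_period_sec: float) -> float:
    """Upper bound on flushes per hour at a given flush period."""
    return 3600 / flush_period_sec

def hours_to_exhaust(quota: int, calls_per_flush: int, flush_period_sec: float) -> float:
    """Hours until `quota` API calls are used up at a steady reporting rate."""
    return quota / (flushes_per_hour(flush_period_sec) * calls_per_flush)

# Default 2-second flush, all ~20 scalars batched into one call per flush:
print(hours_to_exhaust(200_000, 1, 2))    # ~111 hours
# Worst case: each of the ~20 scalars is a separate API call:
print(hours_to_exhaust(200_000, 20, 2))   # ~5.6 hours
# With the report period raised to 600 seconds:
print(hours_to_exhaust(200_000, 1, 600))  # ~33,000 hours
```

The worst-case line is roughly consistent with burning through 200k calls "rather fast"; batching and a longer flush period each cut the rate by orders of magnitude.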
The default flush is every 2 seconds, so "real time" but the assumption is most of the time nothing to be seen.
I'll try to decrease the flush frequency (once a minute or even every few minutes is plenty for my use case) and see if it reduces the API calls. Thank you for your help!
Sure thing. Please let me know if it helps.
Is there some way to configure this without using the CLI to generate a client config? I'm currently using the environment-variables based setup to avoid leaving state on the client.
I think that's due to the fact that the actual data is being sent in a background process (not a thread) once the Task is created, so those settings have a smaller effect (we should somehow fix that, but currently there is no way to configure it).
You can hack it though:
```python
from clearml.backend_interface.task.development.worker import DevWorker
DevWorker.report_period_sec = 600
```
Let me know if it has any effect.