BoredPigeon26 , Hi 🙂
Basically, 'group' is based on the "title" of the graph. When TB is auto-connected, this is the first section before the '/', meaning reporting "my graph title/series" allows grouping by "my graph title". How are you reporting the metrics to TB? Somehow (I'm not sure how with SageMaker) you have to pass the previously used Task ID, so ClearML will know which Task we are continuing. Any idea how you could pass this information? Maybe store something on the SageMaker job?
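A minimal sketch of one way to pass the previous task ID between SageMaker jobs, assuming you store it in an environment variable on the new job (`CLEARML_PREVIOUS_TASK_ID` is a hypothetical name, not a ClearML built-in; `Task.init` does accept a `continue_last_task` argument):

```python
# Sketch: resume reporting into a previously created ClearML task.
# Assumption: the launcher of the new SageMaker job sets the env var
# CLEARML_PREVIOUS_TASK_ID (hypothetical name) to the old task's ID.
import os

def get_previous_task_id(env_var: str = "CLEARML_PREVIOUS_TASK_ID"):
    """Return the stored task ID, or None on a fresh run."""
    return os.environ.get(env_var) or None

def init_task(project: str, name: str):
    # Imported here so the helper above is usable without clearml installed.
    from clearml import Task
    prev = get_previous_task_id()
    # continue_last_task can take a task ID string, so reporting continues
    # into that task instead of creating a new one; False starts fresh.
    return Task.init(project_name=project, task_name=name,
                     continue_last_task=prev or False)
```

The env var is just one option; anything the next job can read (a SageMaker hyperparameter, a file on S3) would work the same way.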
As for 1-
They all start with the same name scope - since TB then shows them in a nicer view.
```python
with tf.name_scope("accumulated"):
    for metric in self.metrics:
        tf.summary.scalar(metric.name, metric.result(), step=step)
```
I want to group them by metric.name, but they are all grouped by the name_scope.
2. Again - can a task id be changed after it was created?
CostlyOstrich36 - I don't want to change the TB behavior.
How can I check what the "title" is? I guess I have multiple '/'s separating tf.name_scope, metric.name, and something like summary_name.
TB knows to put all graphs from the same scope in the same section; in that section it has a graph for each metric.name, and in each graph it has a series for each summary_name.
I guess I could change the scope name to be the metric name and abandon all the scoping, but it is very useful to me to separate the metrics from the losses and from other outputs.
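A small sketch of that idea, assuming ClearML takes everything before the first '/' in the TB tag as the graph title: drop the name_scope and compose the tag yourself so each metric becomes its own title (`phase` is a hypothetical series name for train/test):

```python
# Sketch: build TB tags as "metric_name/phase" so ClearML titles each graph
# by the metric rather than by the surrounding name_scope.
# Assumption: ClearML uses the text before the first '/' as the title and
# the remainder as the series.

def scalar_tag(metric_name: str, phase: str) -> str:
    """Compose 'title/series' so grouping is per metric, not per scope."""
    return f"{metric_name}/{phase}"

def report_metrics(metrics, step, phase="train"):
    # Requires TensorFlow; imported here to keep the helpers above light.
    import tensorflow as tf
    for metric in metrics:
        tf.summary.scalar(scalar_tag(metric.name, phase),
                          metric.result(), step=step)

def clearml_title(tag: str) -> str:
    """The title ClearML would derive from a tag, under the assumption above."""
    return tag.split("/", 1)[0]
```

This keeps train and test separable as series while each metric gets its own graph; whether that beats the scope-based grouping for separating metrics from losses is a layout trade-off.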
BoredPigeon26 , Please try the following setting in your ~/clearml.conf
sdk.metrics.tensorboard_single_series_per_graph: true
and see if it helps 🙂
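For reference, a sketch of how that setting would look in the nested form ~/clearml.conf uses (the file is HOCON, so the flat dotted key and this nested block are equivalent):

```
sdk {
  metrics {
    # Show each TensorBoard series in its own graph in the UI
    tensorboard_single_series_per_graph: true
  }
}
```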
Can I see the clearml.conf in the server view?
I think I changed it in the right place, but I don't see it in the graph.
BoredPigeon26 , regarding 2, no, I'm afraid not, since the task ID is the unique task identifier
Regarding 1, can you name them yourself somehow and thus get the wanted result?
BoredPigeon26 , what do you mean in the server view?
BoredPigeon26 , do you run them manually or with the agent? If you run manually then I'm afraid it doesn't show the config currently. However if you run with the agent, then it will also print out the entire config (excluding the secrets, of course) at the start of the run and it will be shown in the console output in the UI 🙂
Or do you mean the different parameters you've changed about in the task itself?
My question was: can I see the ~/clearml.conf in the web view? If I run two experiments with different configs, where can I see them? It is less important now; I didn't see any changes since it didn't get to the test phase yet.
CostlyOstrich36 It splits the data, but not in the way I intended.
It has the 4 train metrics in the same graph,
and the 4 test metrics in the same graph.
Regarding the scalar visualization - if you have another solution, it would be nice to try it
~/clearml.conf on the side of the agent/clearml that runs the script 🙂
- This could work.
Run them manually, but it is not that important