ok so:
you recommend just saving the dataset ID as part of the task configuration? I think I was a bit unclear - my question is how I should report them from the code. They are not caught automatically because they are custom parameters I calculate myself, not part of any framework, so I wonder if I should report them as artifacts, or maybe as scalars. My issue with scalars is that I only have one of each type, and the API seems to be oriented toward a series of results of the same type
Hi, regarding your questions:
If you create and finalize the dataset, it should upload the file contents to the fileserver (or any other storage you configure). The dataset is an object similar to a task - it has a unique ID.
You can add metric columns to the experiments table by clicking the little cog wheel at the top right of the table. You can also select multiple experiments and compare them (bottom left on the bar that appears after selecting more than one experiment)
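As a rough sketch of that dataset flow (the project/dataset names and the local path are just placeholders), it would look something like:
```python
from clearml import Dataset

# Create a new dataset object - like a task, it gets its own unique ID
ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")

# Add local files, upload them to the fileserver (or whatever storage is configured), and finalize
ds.add_files(path="data/")
ds.upload()
ds.finalize()

# The dataset ID can then be stored, e.g. in the consuming task's configuration
print(ds.id)
```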
That's an option. This really depends on your usage - if you want those 'custom parameters' to be accessible by other tasks, save them as artifacts. If you only want visibility, save them as scalars. There is a nice usage example here: https://github.com/allegroai/clearml/blob/master/examples/reporting/scalar_reporting.py
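For a single custom value, a minimal sketch of both options (names and values here are placeholders, not from your code) could look like:
```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="custom_metrics_example")

my_custom_value = 0.87  # a value you compute yourself, outside any framework

# As a scalar: it is one point in a series, so the iteration can simply be 0
task.get_logger().report_scalar(
    title="custom_metrics", series="my_metric", value=my_custom_value, iteration=0
)

# As an artifact: stored on the task and retrievable by other tasks later
task.upload_artifact(name="my_metric", artifact_object=my_custom_value)
```
Another task could then read the artifact back with something like `Task.get_task(task_id="...").artifacts["my_metric"].get()`.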