@<1559349204206227456:profile|BeefyStarfish55> try checking out the general overview on pipelines here, and the info on the pipelines UI here.
Each step's arguments (and results) should appear in the step's details panel (which you could then follow to the underlying task for complete, in-depth details). A minimal sketch follows.
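For reference, here's a minimal sketch of a function-based pipeline (the project, step, and argument names here are illustrative, not from your setup) whose step arguments would then show up in that details panel:
```python
from clearml.automation import PipelineController

# Illustrative step function - its arguments/results appear in the step's details panel
def preprocess(data_url, test_size=0.2):
    # ... load and split the data ...
    return data_url, test_size

pipe = PipelineController(name="example pipeline", project="examples", version="1.0.0")
pipe.add_function_step(
    name="preprocess",
    function=preprocess,
    function_kwargs=dict(data_url="https://example.com/data.csv", test_size=0.25),
    function_return=["data_url", "test_size"],
)
# run everything locally for this sketch
pipe.start_locally(run_pipeline_steps_locally=True)
```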
ExcitedFish86 You can add custom columns to the experiment table (https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#adding-metrics-and--or-hyperparameters) to include any parameter/metric column that helps your analysis (and subsequently filter the table on those columns).
There's no equivalent yet of a parameter-importance visualization, though such insight visualizations are definitely in our sights.
We'd sure appreciate it if you could open a GitHub issue (https://github.com/allegroai/clearml/issues/new) on the subject :)
HappyDove3 Note that in https://github.com/allegroai/clearml/issues/400 the goal is to see a table plot in the UI scalars tab for a specific experiment (with additional discussion on how such plots will be addressed when comparing experiments).
Note that once you take the approach you suggested of logging your metrics as single values, you can configure your experiment comparison's scalars view to show single values instead of the time-series graph, which I think will provide you with the matrix c...
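For example, a minimal sketch of reporting metrics as single values via the SDK's Logger.report_single_value (project and metric names here are illustrative):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="single value metrics")
logger = task.get_logger()

# Report each metric once as a single value (rather than a per-iteration scalar)
logger.report_single_value(name="accuracy", value=0.92)
logger.report_single_value(name="f1", value=0.88)
```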
MysteriousBee56 Would providing Trains with an "import mode" (say, via an environment or command-line variable) address your use case? In this mode it would create a draft server entry, populate all the execution/environment info, and exit before actually engaging the ML infrastructure.
JitteryCoyote63 Great idea. We'd appreciate it if you could open a GitHub issue: https://github.com/allegroai/clearml/issues/new/choose
IrateDolphin19 ClearML supports saving files generated during your code's execution through Task.upload_artifact (https://clear.ml/docs/latest/docs/references/sdk/task#upload_artifact). For your use case, you can have your code create the artifact as it runs, and set the specific storage location through the task's output_uri field when you edit your configuration.
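As a minimal sketch (the bucket URI and artifact contents are illustrative; output_uri is set in code here, though you can also edit it in the task's configuration):
```python
from clearml import Task

# output_uri sets where artifacts/models are uploaded (hypothetical bucket below)
task = Task.init(
    project_name="examples",
    task_name="artifact upload",
    output_uri="s3://my-bucket/artifacts",
)

# Create and upload an artifact as the code runs
task.upload_artifact(name="results", artifact_object={"accuracy": 0.92})
```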
Does this help?
SmarmySeaurchin8 Following up on ColossalDeer61's hint, notice this not-too-old thread on reusing globally installed packages: https://allegroai-trains.slack.com/archives/CTK20V944/p1597248476076700?thread_ts=1597248298.075500&cid=CTK20V944
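For reference, a sketch of the relevant agent setting (in your trains.conf / clearml.conf, assuming you want the virtual environments the agent creates to inherit globally installed packages):
```
agent {
    package_manager {
        # let the venv created for the task access globally installed packages
        system_site_packages: true
    }
}
```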
Thanks for clarifying @<1523705301990117376:profile|WickedCat12> .
As I mentioned originally, plotting an arbitrary metric against another is further down the ClearML roadmap.
It'd be great if you could open a GitHub issue to help push it through :)
@<1523701157564780544:profile|TenseOstrich47> Seems like the ClearML website is temporarily down 😞 . Should be resolved soon though.
DepressedChimpanzee34 Apologies for missing your previous comment.
Totally agree that the global selection indicator should maintain its 'clear selection' behaviour even if some/all of the selection is off-screen.
@<1523705301990117376:profile|WickedCat12> ClearML Scalars explicitly show a metric's progression over time (the x-axis can display iteration or wall-time).
Plotting one metric against another is a feature that lies further down ClearML's roadmap.
If your metric is reported only once per epoch, you can still use the existing scalars functionality by having the iteration parameter reflect the epoch when reporting your metric, as sketched below.
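A minimal sketch (the project name and loss values are placeholders; the point is passing iteration=epoch to Logger.report_scalar):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="per-epoch metrics")
logger = task.get_logger()

num_epochs = 10  # illustrative
for epoch in range(num_epochs):
    val_loss = 1.0 / (epoch + 1)  # placeholder for your real validation loss
    # report with iteration=epoch so the scalar's x-axis reflects epochs
    logger.report_scalar(title="validation", series="loss", value=val_loss, iteration=epoch)
```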
Does this make sense?