DepressedChimpanzee34 ClearML tries to conserve storage by limiting the history length for debug images (see sdk.metrics.file_history_size in https://clear.ml/docs/latest/docs/configs/clearml_conf#sdk-section ), though the history can indeed grow large if you set a large value or use a metric/variant naming scheme that circumvents this limit.
Does your use case call for accessing a specific iteration for all images, or only when looking at a specific image? Note that the debug image viewer (wh...
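For reference, a minimal sketch of both behaviors (the project/task names are illustrative, and the per-iteration series naming is the circumvention trick mentioned above, not a recommendation):

```python
from clearml import Task
import numpy as np

task = Task.init(project_name="examples", task_name="debug-image-history")
logger = task.get_logger()

for i in range(100):
    img = (np.random.rand(64, 64, 3) * 255).astype("uint8")
    # Default: only the last `sdk.metrics.file_history_size` iterations
    # are kept per title/series (metric/variant) pair
    logger.report_image("debug", "sample", iteration=i, image=img)
    # Circumvention: encoding the iteration into the series name makes each
    # iteration its own variant, so the stored history grows unbounded
    logger.report_image("debug", "sample_iter_%d" % i, iteration=i, image=img)
```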
OutrageousSheep60 See https://github.com/allegroai/clearml/issues/724 for a discussion on the topic.
TL;DR:
- Currently, the containing project is available in the UI as a tooltip on the dataset name
- An alternate "Project view" for the datasets page is in the works
SmarmySeaurchin8 Following up on ColossalDeer61 's hint, note this not-too-old thread on reusing globally installed packages: https://allegroai-trains.slack.com/archives/CTK20V944/p1597248476076700?thread_ts=1597248298.075500&cid=CTK20V944
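For convenience, the gist of that approach in configuration form (a sketch, assuming the agent-side trains.conf/clearml.conf):

```
agent {
    package_manager {
        # Create task virtualenvs with access to the system site-packages,
        # so globally installed packages are reused rather than reinstalled
        system_site_packages: true
    }
}
```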
MysteriousBee56 Would providing Trains with an "import mode" (say, via an environment variable or command-line flag) address your use case? In this mode, Trains would create a draft server entry, populate all the execution/environment info, and exit before it actually starts employing the ML infrastructure.
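As a rough sketch of the idea (the TRAINS_IMPORT_MODE variable is hypothetical; Task.execute_remotely(), available in later trains/clearml versions, leaves the task as a draft on the server and exits the local process when called without a queue):

```python
import os
from trains import Task

task = Task.init(project_name="examples", task_name="import-mode-sketch")

# Hypothetical "import mode": create the draft server entry with all the
# execution/environment info populated, then exit before any real work
if os.getenv("TRAINS_IMPORT_MODE"):
    task.execute_remotely()  # no queue: task stays in draft, process exits

# ... actual ML code would only run from here on ...
```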
GentleSwallow91 For more information, look at what ClearML logs for your experiments: https://docs-testing.allegro.ai/docs/latest/docs/fundamentals/task#logging-task-information
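For instance, much of that logged information is also accessible programmatically (the task ID below is a placeholder):

```python
from clearml import Task

task = Task.get_task(task_id="<your_task_id>")  # placeholder ID
print(task.get_parameters())        # hyperparameters / configuration
print(task.get_reported_scalars())  # scalar metrics reported so far
print(task.artifacts)               # registered artifacts
```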
Hi HealthyStarfish45 ,
Since you're discussing the experiment list, I assume that by "fixed view per experiment" you actually mean "per project" (as the list view spans all experiments in the list)?
Under this assumption, note that the view configuration (column sorting, custom columns, filters) is also encoded in the browser URL, so until the Trains UI supports in-app per-project view preferences, you can simply bookmark the URL.
Does this help?
UpsetTurkey67 The single set of online documentation ( https://clear.ml/docs/latest/docs ) denotes OSS/free-SaaS/paid features as such. For example: https://clear.ml/docs/latest/docs/configs/clearml_conf#configuration-vault
If the credentials don't provide access, the calls should fail (there's no fallback, only default values used in place of an empty configuration).
Make sure you explicitly configure all host values, so you don't end up using a specific server for API access but the default demo server for file server access...
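For example, a sketch of setting all three hosts explicitly from code (the host URLs are the clear.ml SaaS defaults and the key/secret are placeholders; the same values can live in clearml.conf instead):

```python
from clearml import Task

# Explicitly configure API, web, and file server hosts so no endpoint
# silently falls back to the default (demo) server
Task.set_credentials(
    api_host="https://api.clear.ml",
    web_host="https://app.clear.ml",
    files_host="https://files.clear.ml",
    key="<access_key>",     # placeholder credentials
    secret="<secret_key>",
)
```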
WittyOwl57 Is that information available for you on each of the compared experiments when you view them individually?
WittyOwl57 The UI shows a detailed repo and package comparison under the "Details"/"Execution" tab (see sample screenshot), whereas auto-logged environment variables are shown under the "HyperParameters" comparison tab.
What do you find missing beyond those?