UnevenDolphin73 Am I missing anything in rephrasing your use case to "Have a single autoscaler service multiple queues" (where the autoscaler resource configuration is, in essence, the pool you mention)?
@<1628927672681762816:profile|GreasyKitten62> When you have specific display considerations, you can implement them through report_table's 'extra_layout' and 'extra_data' parameters.
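A minimal sketch, assuming a pandas DataFrame and Plotly-style overrides (the specific layout/data keys are illustrative):
```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="table layout")

df = pd.DataFrame({"metric": ["precision", "recall"], "value": [0.92, 0.88]})

# 'extra_layout' and 'extra_data' are passed through to the underlying Plotly
# table, so any valid Plotly layout/trace keys can be used; these are illustrative
task.get_logger().report_table(
    title="Results",
    series="summary",
    iteration=0,
    table_plot=df,
    extra_layout={"title": {"text": "Summary metrics"}},
    extra_data={"columnwidth": [2, 1]},
)
```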
DefeatedCrab47 Thanks for pointing it out.
We'll get in touch with the PyTorch Lightning team to better understand the code restructuring they're undertaking (see https://github.com/PyTorchLightning/pytorch-lightning/pull/2384 ).
In the meantime, you can look at the prior version: https://github.com/PyTorchLightning/pytorch-lightning/blob/0.8.1/pytorch_lightning/loggers/trains.py
@<1687643893996195840:profile|RoundCat60> Looks like the docs have not caught up yet with a recent structural change in the repo, which renamed the 'server' folder to 'apiserver'.
So... the correct link would be None
OutrageousSheep60 You can find a discussion on the topic in https://github.com/allegroai/clearml/issues/724 .
TL;DR:
- Currently the containing project is available in the UI as a tooltip on the dataset name
- An alternate "Project view" for the datasets page is in the works
Hi DefeatedCrab47 ,
The examples folder has just been restructured. You can find the example here:
https://github.com/allegroai/trains/blob/master/examples/services/hyper-parameter-optimization/hyper_parameter_optimizer.py
Hi JuicyOtter4
The GUI search returns all experiments in the project that have your search string in their task id, name, description or any of their models' names.
You can use regex with the '.*' button in the search bar.
KindGiraffe71 Have you checked out https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch-lightning/pytorch_lightning_example.py ? The previous discussion at https://clearml.slack.com/archives/CTK20V944/p1616070536033700 provides some insight into how it works under the hood.
DepressedChimpanzee34 Have you noticed the "Show n experiments selected" button on the bottom bar? This effectively toggles your view between whatever is currently sorted/filtered and the current item selection.
To address the scenario you describe: Switch to "Show selected experiments", remove the redundant items, and switch back to the original view: "Show all experiments"
Thoughts?
DepressedChimpanzee34 Apologies for missing your previous comment.
Totally agree that the global selection indicator should maintain its 'clear selection' behaviour even if some/all of the selection is off-screen.
ItchyJellyfish73 Have you looked at dynamic GPU allocation? https://clear.ml/docs/latest/docs/clearml_agent#dynamic-gpu-allocation
DepressedChimpanzee34 ClearML tries to conserve storage by limiting the history length for debug images (see sdk.metrics.file_history_size in
https://clear.ml/docs/latest/docs/configs/clearml_conf#sdk-section ), though the history can indeed grow large if you set a large value or use a metric/variant naming scheme that circumvents this limit.
Does your use case call for accessing a specific iteration for all images, or only when looking at a specific image? Note that the debug image viewer (wh...
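For reference, a minimal sketch of raising that limit in clearml.conf (the value shown is illustrative):
```
sdk {
    metrics {
        # max number of debug samples kept per metric/variant pair (illustrative value)
        file_history_size: 10
    }
}
```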
DepressedChimpanzee34
> a filter similar to one in the scalars page where you can display a subset of the reported debug images can be useful
The scalars page provides a metric hide/show control - Is this the one you mean? The debug images page also provides a filter by metric - Depending on your naming policy this can easily be used to focus on more sparsely appearing images.
Otherwise, an example of the filter you had in mind would be appreciated.
Regardless, direct iteration access cou...
DepressedChimpanzee34 Thanks for clarifying where the current debug images display falls short for your use case - extending the filtering to match the behaviour of the scalars page sounds like a great idea 🙂
DepressedChimpanzee34 Always appreciated
DepressedChimpanzee34 Experience has shown that some mechanisms are required to mitigate the impact of large sets on browser performance.
Your 2nd suggestion, adding an in-app search tool for such sections, seems completely in line with ClearML's behaviour in other UI sections (e.g. console logs) - it'd be great if you could open a GitHub issue: https://github.com/allegroai/clearml/issues/new/choose
Thanks for noticing @<1523708920831414272:profile|SuperficialDolphin93> - ClearML is already there under its legacy "Trains" name; it's indeed past time for an update.
Take a look at https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#running-the-pipeline ;
By default, pipelines are enqueued for execution by a ClearML Agent. You can explicitly change this behaviour in your code.
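A minimal sketch of both modes (the project, task, and queue names are illustrative):
```python
from clearml import PipelineController

# Illustrative pipeline with a single step built from an existing task
pipe = PipelineController(name="my-pipeline", project="examples", version="1.0")
pipe.add_step(
    name="step1",
    base_task_project="examples",
    base_task_name="step1 task",
)

# Default behaviour: enqueue the pipeline logic for a ClearML Agent
pipe.start(queue="services")

# Alternatively, run the pipeline logic locally instead of enqueuing it:
# pipe.start_locally(run_pipeline_steps_locally=True)
```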
GreasyPenguin14 When the project description is empty you get an "Add project overview" button instead of the "Edit" button:
HappyDove3 Notice that in https://github.com/allegroai/clearml/issues/400 the goal is to see a table plot in the UI scalars tab for a specific experiment (with additional discussions on how these will be addressed when comparing experiments).
Note that once you take the approach you suggested of logging your metrics as single values, you can configure your experiment comparison scalars view to show single values instead of the time-series graph, which I think will provide you with the matrix c...
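A minimal sketch of logging such single values (the names and values are illustrative):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="single values")
logger = task.get_logger()

# Report standalone values with no iteration axis; names/values are illustrative
logger.report_single_value(name="accuracy", value=0.91)
logger.report_single_value(name="f1", value=0.87)
```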
Thanks for clarifying @<1523705301990117376:profile|WickedCat12> .
As I mentioned originally, plotting an arbitrary metric against another is further down the ClearML roadmap.
It'd be great if you could open a GitHub issue to help push it through :)
@<1523706095791509504:profile|FiercePenguin76> The "Log" tab has been renamed "Console" in ClearML 0.17.0 - Thanks for pointing out the outdated description.
SharpDove45 You can programmatically control the configured server using https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html?highlight=set_credentials#clearml.task.Task.set_credentials
If the credentials don't provide access, the calls should fail (there's no fallback - just default values in place of empty configuration).
Make sure you explicitly configure all host values, so you don't end up using a specific server for API access and the default demo server for file server access...
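A minimal sketch (all host URLs and credentials are placeholders):
```python
from clearml import Task

# Explicitly configure all three hosts so no default (e.g. the demo file server)
# sneaks in; the URLs and key/secret are placeholders. Call this before Task.init.
Task.set_credentials(
    api_host="https://api.example.com",
    web_host="https://app.example.com",
    files_host="https://files.example.com",
    key="<your-access-key>",
    secret="<your-secret-key>",
)

task = Task.init(project_name="examples", task_name="configured via code")
```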
UpsetTurkey67 There is a single set of online documentation ( https://clear.ml/docs/latest/docs ) which denotes OSS/Free-SaaS/Paid features as such. For example: https://clear.ml/docs/latest/docs/configs/clearml_conf#configuration-vault
UnevenDolphin73 Well... not right now... Currently the ClearML UI only partitions internal artifact types.
That said, having user-defined artifact groups sure sounds worth looking into - care to open a GitHub issue? https://github.com/allegroai/clearml/issues/new/choose
The easy way to do that is to add the desired metrics/params as custom columns, then use the column filters: https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#customizing-the-experiments-table