JitteryCoyote63 Not currently there, but it certainly sounds like something to add to the list - Care to open an issue at https://github.com/allegroai/clearml/issues/new/choose ?
@<1523705301990117376:profile|WickedCat12> ClearML Scalars explicitly show a metric's progression over time (you can display by iteration or wall-time).
Plotting one metric against another is a feature that lies further down ClearML's roadmap.
If your metric is reported only once per epoch, you can make use of the existing scalars functionality by passing the epoch as the iteration parameter when reporting your metric.
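A minimal sketch of the idea (the metric value here is a dummy stand-in for a real validation score):

from clearml import Task
import random

task = Task.init(project_name="examples", task_name="per-epoch metrics")
logger = task.get_logger()

num_epochs = 10
for epoch in range(num_epochs):
    # Stand-in for a real per-epoch validation metric
    val_acc = 0.5 + 0.04 * epoch + random.uniform(-0.01, 0.01)
    # Report the epoch index as the "iteration" so the scalar axis tracks epochs
    logger.report_scalar(title="validation", series="accuracy", value=val_acc, iteration=epoch)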
Does this make sense?
@<1523701157564780544:profile|TenseOstrich47> The storage in question here is what's available on the machine hosting the ClearML server's docker containers (specifically, the ES one).
@<1523701157564780544:profile|TenseOstrich47> This is typically indicative of insufficient server disk space, causing ES to go into read-only mode or to mark active shards as inactive or unassigned (see the FAQ).
The disk watermarks controlling the ES free-disk constraints are defined by default as a percentage of the disk space (so it might look to you like you still have plenty of space, but ES thinks otherwise). You can configure di...
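For reference, once disk space has been freed up, here's a rough sketch of clearing the read-only block and adjusting the watermarks through Elasticsearch's settings APIs (the host/port and threshold values are assumptions - adapt them to your deployment):

import requests

ES = "http://localhost:9200"  # assumed address of the ClearML ES container

# Clear the read-only block ES applies once the flood-stage watermark is reached
requests.put(f"{ES}/_all/_settings", json={"index.blocks.read_only_allow_delete": None})

# Optionally switch the watermarks from percentages to absolute free-space thresholds
requests.put(f"{ES}/_cluster/settings", json={
    "transient": {
        "cluster.routing.allocation.disk.watermark.low": "10gb",
        "cluster.routing.allocation.disk.watermark.high": "5gb",
        "cluster.routing.allocation.disk.watermark.flood_stage": "1gb",
    }
})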
DepressedChimpanzee34 Thanks for clarifying where the current debug images display falls short for your use case - Extending the filtering to resemble the scalars behaviour sounds like a great idea 🙂
AverageRabbit65 Adding to SweetBadger76 's reference, e2e examples are available for the different pipeline implementation methods:
https://clear.ml/docs/latest/docs/guides/pipeline/pipeline_controller
https://clear.ml/docs/latest/docs/guides/pipeline/pipeline_decorator
https://clear.ml/docs/latest/docs/guides/pipeline/pipeline_functions
DepressedChimpanzee34 "a filter similar to one in the scalars page where you can display a subset of the reported debug images can be useful"
The scalars page provides a metric hide/show control - Is this the one you mean? The debug images page also provides a filter by metric - Depending on your naming policy this can easily be used to focus on more sparsely appearing images.
Else, an example of the filter you were thinking of would be appreciated.
Regardless, direct iteration access cou...
WittyOwl57 The UI shows repo and package detailed comparison under the "Details"/"Execution" (See sample screenshot), whereas auto-logged environment variables are shown under the "HyperParameters" comparison tab.
What do you find missing beyond those?
UnevenDolphin73 Am I missing anything in rephrasing your use case to "Have a single autoscaler service multiple queues" (where the autoscaler resource configuration is, in essence, the pool you mention)?
Thanks for clarifying @<1523705301990117376:profile|WickedCat12> .
As I mentioned originally, plotting an arbitrary metric against another is further down the ClearML roadmap.
It'd be great if you could open a GitHub issue to help push it through :)
Take a look at https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#running-the-pipeline ;
By default pipelines are enqueued for execution by a ClearML Agent. You can explicitly change this behaviour in your code.
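For example, a rough sketch with the PipelineController SDK (project, task and queue names below are placeholders):

from clearml import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")
pipe.add_step(name="stage_data", base_task_project="examples", base_task_name="data prep")

# Default behaviour: enqueue the pipeline controller for a ClearML Agent (services queue)
pipe.start(queue="services")

# Alternative: run the controller (and optionally its steps) locally instead of enqueuing
# pipe.start_locally(run_pipeline_steps_locally=True)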
ItchyJellyfish73 Have you looked at the https://clear.ml/docs/latest/docs/clearml_agent#dynamic-gpu-allocation ?
The easy way to do that is to add the desired metrics/params as custom columns, then use the column filters: https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#customizing-the-experiments-table
GreasyPenguin14 That's an annoying bug indeed - Thanks for spotting it. If you need to circumvent it before a fix comes out in one of the near releases, you can programmatically use the https://clear.ml/docs/latest/docs/references/api/endpoints#post-projectsupdate endpoint, e.g.:
from clearml.backend_api.session.client import APIClient
client = APIClient()
client.projects.update(project='<project ID>', description='My new description')
Note you can get your project's ID either from the webapp URL...
SharpDove45 you can programmatically control the configured server using https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html?highlight=set_credentials#clearml.task.Task.set_credentials
If the credentials don't provide access, the calls should fail (there's no fallback - just default values in place of empty configuration).
Notice you should explicitly configure all host values, so you don't end up using a specific server for API access and the default demo server for file server access...
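A minimal sketch (host addresses and credentials are placeholders):

from clearml import Task

# Must be called before Task.init() - note all three hosts are configured explicitly
Task.set_credentials(
    api_host="https://api.my-clearml-server.example",
    web_host="https://app.my-clearml-server.example",
    files_host="https://files.my-clearml-server.example",
    key="<access key>",
    secret="<secret key>",
)

task = Task.init(project_name="examples", task_name="programmatic credentials")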
KindGiraffe71 Have you checked out the https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch-lightning/pytorch_lightning_example.py ? https://clearml.slack.com/archives/CTK20V944/p1616070536033700 previous discussion provides some insight into how it works under the hood.
GreasyPenguin14 When the project description is empty you get an "Add project overview" instead of the "Edit" button:
UnsightlySeagull42 The upgrade process is slightly different depending on the environment in which you've deployed your ClearML server (e.g. for a https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_linux_mac.html#upgrading ).
Note the document you are referring to only applies when you're moving from the older pre-0.16 versions, in which case DB migration is required.
If your server is more up to date (0.16 and newer) you should be OK with the link above.
@<1523706095791509504:profile|FiercePenguin76> The "Log" tab has been renamed "Console" in ClearML 0.17.0 - Thanks for pointing out the outdated description.
Thanks for letting us know @<1784392065820397568:profile|SplendidFox3> - The signup for app.clear.ml had indeed broken down, but we should be back on track - Can you now complete the registration?
@<1628927672681762816:profile|GreasyKitten62> When you have specific display considerations, you can implement them through report_table's 'extra_layout' and 'extra_data' parameters.
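A rough sketch of the idea (the specific Plotly layout/data keys used here are illustrative, not an exhaustive reference):

import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="custom table layout")
logger = task.get_logger()

df = pd.DataFrame({"metric": ["accuracy", "loss"], "value": [0.93, 0.12]})
logger.report_table(
    title="results",
    series="summary",
    iteration=0,
    table_plot=df,
    extra_layout={"title": {"text": "Run summary"}},  # merged into the Plotly figure layout
    extra_data={"columnwidth": [200, 80]},            # merged into the Plotly table data
)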
DefeatedCrab47 Thanks for pointing it out.
We'll get in touch with the PyTorch Lightning team to better understand the code restructure they're effecting (see https://github.com/PyTorchLightning/pytorch-lightning/pull/2384 ).
In the meantime, you can look at the prior version: https://github.com/PyTorchLightning/pytorch-lightning/blob/0.8.1/pytorch_lightning/loggers/trains.py
@<1785841629471444992:profile|CluelessSheep59> You can find the latest ClearML server AMIs here
@<1580367723722969088:profile|SmoothDuck83> CSV export is only available for table plots
DepressedChimpanzee34 Have you noticed the "Show n experiments selected" button on the bottom bar? This effectively toggles your view between whatever is currently sorted/filtered and the current item selection.
To address the scenario you describe: Switch to "Show selected experiments", remove the redundant items, and switch back to the original view: "Show all experiments"
Thoughts?