@<1687643893996195840:profile|RoundCat60> Looks like the docs have not yet caught up with a recent structure change in the repo, which renamed the 'server' folder to 'apiserver'.
So... the correct link would be None
DepressedChimpanzee34 Apologies for missing your previous comment.
Totally agree that the global selection indicator should maintain its 'clear selection' behaviour even if some/all of the selection is off-screen.
GreasyPenguin14 That's an annoying bug indeed - Thanks for spotting it. If you need to circumvent it before a fix comes out in one of the upcoming releases, you can programmatically use the https://clear.ml/docs/latest/docs/references/api/endpoints#post-projectsupdate endpoint, e.g.:
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
client.projects.update(project='<project ID>', description='My new description')
```
Note you can get your project's ID either from the webapp URL...
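If you'd rather look it up programmatically, here's a minimal sketch using the same APIClient (the project name is a placeholder):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# look up the project by name ("My Project" is a placeholder)
projects = client.projects.get_all(name="My Project")
print(projects[0].id)
```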
Take a look at https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#running-the-pipeline
By default pipelines are enqueued for execution by a ClearML Agent. You can explicitly change this behaviour in your code.
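For example, a minimal sketch of running the pipeline logic locally instead of enqueuing it (the pipeline name and project are placeholders):
```python
from clearml.automation import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")
# ... add pipeline steps here ...

# Default behaviour: enqueue the controller for a ClearML Agent
# pipe.start(queue="services")

# Run the controller (and its steps) on this machine instead
pipe.start_locally(run_pipeline_steps_locally=True)
```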
GreasyPenguin14 When the project description is empty you get an "Add project overview" button instead of the "Edit" button:
AverageRabbit65 Adding to SweetBadger76 's reference, e2e examples are available for the different pipeline implementation methods:
https://clear.ml/docs/latest/docs/guides/pipeline/pipeline_controller
https://clear.ml/docs/latest/docs/guides/pipeline/pipeline_decorator
https://clear.ml/docs/latest/docs/guides/pipeline/pipeline_functions
UnevenDolphin73 Well... not right now... Currently the ClearML UI only partitions artifacts by their internal types.
That said, having user-defined artifact groups sure sounds worth looking into - Care to open an issue at https://github.com/allegroai/clearml/issues/new/choose ?
@<1523701157564780544:profile|TenseOstrich47> This is typically indicative of insufficient server disk space, causing ES to go into read-only mode or to mark active shards as inactive or unassigned (see the FAQ).
The disk watermarks controlling the ES free-disk constraints are defined by default as a percentage of total disk space (so it might look to you like you still have plenty of space, but ES thinks otherwise). You can configure di...
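For example, a sketch of temporarily relaxing the watermarks through the ES REST API (assuming the ClearML server's Elasticsearch container is reachable on localhost:9200; the values are placeholders):
```python
import requests

# switch the watermarks from percentage-based to absolute free-space values
resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={
        "transient": {
            "cluster.routing.allocation.disk.watermark.low": "10gb",
            "cluster.routing.allocation.disk.watermark.high": "5gb",
            "cluster.routing.allocation.disk.watermark.flood_stage": "1gb",
        }
    },
)
print(resp.json())
```
Note that once disk space is freed up, you may also need to clear the read-only block ES placed on its indices.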
@<1523701157564780544:profile|TenseOstrich47> The storage in question here is what's available on the machine hosting the ClearML server's docker containers (specifically, the ES one).
@<1628927672681762816:profile|GreasyKitten62> When you have specific display considerations, you can implement them through report_table's 'extra_layout' and 'extra_data' parameters
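For instance, a minimal sketch passing a plotly layout override through extra_layout (the table contents here are placeholders; extra_data similarly overrides the plotly data section):
```python
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="table report")

# placeholder table data
df = pd.DataFrame({"metric": ["accuracy", "loss"], "value": [0.92, 0.08]})

# extra_layout takes plotly layout overrides applied on top of the default table
task.get_logger().report_table(
    title="Results",
    series="summary",
    iteration=0,
    table_plot=df,
    extra_layout={"title": {"text": "Final results"}},
)
```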
IrateDolphin19 ClearML lets you save files generated by your code execution through https://clear.ml/docs/latest/docs/references/sdk/task#upload_artifact . For your use case, have your code create the artifact as it runs; you can set the specific storage location through the task's output_uri field when you edit your configuration.
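A minimal sketch (the file name, project and bucket are placeholders):
```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="artifact upload",
    # assumption: your own storage target; can also be set in the UI/config
    output_uri="s3://my-bucket/artifacts",
)

# a placeholder file standing in for your code's output
with open("my_results.csv", "w") as f:
    f.write("metric,value\naccuracy,0.92\n")

# register the file as an artifact of this task
task.upload_artifact(name="results", artifact_object="my_results.csv")
```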
Does this help?
DepressedChimpanzee34 Thanks for clarifying where the current debug images display falls short for your use case - Extending the filtering to match the behaviour of the scalars sounds like a great idea 🙂
OutrageousSheep60 You can see a discussion on the topic at https://github.com/allegroai/clearml/issues/724 .
TL;DR:
- Currently the containing project is available in the UI as a tooltip on the dataset name
- An alternate "Project view" for the datasets page is in the works
Thanks for clarifying @<1523705301990117376:profile|WickedCat12> .
As I mentioned originally, plotting an arbitrary metric against another is further down the ClearML roadmap.
It'd be great if you open a GitHub issue to help push it through :)
RotundHedgehog76 Have you tried `clearml-data add --files .` ? (Probably best to try on a smaller subset first)
The easy way to do that is to add the desired metrics/params as custom columns, then use the column filters: https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#customizing-the-experiments-table
From the https://github.com/allegroai/trains-server/releases/tag/0.13.0 release notes:
- Reports average load metrics per day (CPU/memory)
- Reports average workload per day (amount and average duration of queues, agents and experiments)
SharpDove45 you can programmatically control the configured server using https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html?highlight=set_credentials#clearml.task.Task.set_credentials
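A minimal sketch (hosts and credentials are placeholders; call it before Task.init):
```python
from clearml import Task

# point the SDK at your server and authenticate, without a clearml.conf file
Task.set_credentials(
    api_host="https://api.clear.ml",
    web_host="https://app.clear.ml",
    files_host="https://files.clear.ml",
    key="<access key>",
    secret="<secret key>",
)

task = Task.init(project_name="examples", task_name="configured in code")
```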
JitteryCoyote63 Not currently there, but certainly sounds like something to add to the list - Care to open an issue at https://github.com/allegroai/clearml/issues/new/choose ?
HappyDove3 Notice that in https://github.com/allegroai/clearml/issues/400 the goal is to see a table plot in the UI scalars tab for a specific experiment (with additional discussions on how these will be addressed when comparing experiments).
Note that once you take the approach you suggested of logging your metrics as single values, you can configure your experiment comparison scalars view to show single values instead of the time-series graph, which I think will provide you with the matrix c...
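For reference, a sketch of reporting single values (metric names and values are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="single values")
logger = task.get_logger()

# reported once, these show up as single values rather than time series
logger.report_single_value(name="final_accuracy", value=0.92)
logger.report_single_value(name="final_loss", value=0.08)
```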
@<1580367723722969088:profile|SmoothDuck83> CSV export is only available for table plots
Hi JuicyOtter4
The GUI search returns all experiments in the project that have your search string in their task id, name, description or any of their models' names.
You can use regex with the '.*' button in the search bar.
UnevenDolphin73 I think it'd be easier to track as a separate one.
DepressedChimpanzee34
> a filter similar to one in the scalars page where you can display a subset of the reported debug images can be useful
The scalars page provides a metric hide/show control - Is this the one you mean? The debug images page also provides a filter by metric - Depending on your naming policy this can easily be used to focus on more sparsely appearing images.
Otherwise, an example of the filter you had in mind would be appreciated.
Regardless, direct iteration access cou...
DefeatedCrab47 Thanks for pointing it out.
We'll get in touch with the PyTorch Lightning team to better understand the code restructure they're effecting (see https://github.com/PyTorchLightning/pytorch-lightning/pull/2384 ).
In the meantime, you can look at the prior version: https://github.com/PyTorchLightning/pytorch-lightning/blob/0.8.1/pytorch_lightning/loggers/trains.py
@<1523701157564780544:profile|TenseOstrich47> Seems like the ClearML website is temporarily down 😞 . Should be resolved soon though.
@<1580367723722969088:profile|SmoothDuck83> Not every plot can be trivially formed as a table (i.e. CSV) - that's why the JSON export is available for all plots.
What were you considering?
@<1523705301990117376:profile|WickedCat12> ClearML scalars explicitly show a metric's progression over time (you can display it by iteration or wall-time).
Plotting one metric against another is a feature that lies further down ClearML's roadmap.
If your metric is reported only once per epoch, you can make use of the existing scalars functionality by passing the epoch as the iteration parameter when reporting your metric.
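For example, a minimal sketch (train_one_epoch is a hypothetical stand-in for your training loop):
```python
from clearml import Task


def train_one_epoch(epoch):
    # hypothetical: your per-epoch training step returning a validation loss
    return 1.0 / (epoch + 1)


task = Task.init(project_name="examples", task_name="per-epoch metric")
logger = task.get_logger()

for epoch in range(10):
    val_loss = train_one_epoch(epoch)
    # use the epoch index as the reported iteration
    logger.report_scalar(title="validation", series="loss", value=val_loss, iteration=epoch)
```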
Does this make sense?