MelancholyElk85 Thanks for calling attention to this. What do you think would have made it easier for you to notice the available extended list content?
I would assume that a "type to match" option would also have helped?
Appreciate it if you could open an issue ( https://github.com/allegroai/clearml/issues/new/choose ) so this can be pushed forward.
WittyOwl57 Is that information available for you on each of the compared experiments when you view them individually?
DepressedChimpanzee34 ClearML tries to conserve storage by limiting the history length for debug images (see sdk.metrics.file_history_size in https://clear.ml/docs/latest/docs/configs/clearml_conf#sdk-section ), though the history can indeed grow large if you set a large value, or use a metric/variant naming scheme that circumvents this limit (see the configuration sketch below).
Does your use case call for accessing a specific iteration for all images or when looking at a specific image? Note that the debug image viewer (wh...
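For reference, here's a sketch of the relevant clearml.conf section (the value shown is, to my understanding, the default; raising it trades storage for history length):
```
sdk {
    metrics {
        # maximum history length kept per image metric/variant
        file_history_size: 100
    }
}
```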
UnsightlySeagull42 The upgrade process is slightly different depending on the environment in which you've deployed your ClearML server (e.g. for a Linux/macOS deployment, see https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_linux_mac.html#upgrading ).
Note that the document you are referring to only applies when you're moving from the older, pre-0.16 versions, in which case a DB migration is required.
If your server is more up to date (0.16 or newer), you should be OK with the link above.
Take a look at https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#running-the-pipeline .
By default, pipelines are enqueued for execution by a ClearML Agent. You can explicitly change this behaviour in your code (a sketch follows).
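A minimal sketch of the explicit alternatives (the project/queue names here are illustrative, not from the docs page):
```python
from clearml import PipelineController

def prepare_data():
    return [1, 2, 3]

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")
pipe.add_function_step(name="prepare", function=prepare_data)

# Run the controller and its steps locally instead of enqueuing them:
pipe.start_locally(run_pipeline_steps_locally=True)

# The default behaviour - enqueue for execution by a ClearML Agent:
# pipe.start(queue="services")
```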
Thanks for noticing @<1523708920831414272:profile|SuperficialDolphin93> - ClearML is already there under its legacy "Trains" name, and it's indeed past time for an update.
ExcitedFish86 You can add custom columns ( https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#adding-metrics-and--or-hyperparameters ) to include any parameter/metric column that helps your analysis (and subsequently filter the table on those columns).
There's not yet the equivalent of a parameter importance visualization, though such insight visualizations are definitely in our sights.
Sure, appreciate it if you can open a GitHub issue ( https://github.com/allegroai/clearml/issues/new ) on the subject :)
DefeatedCrab47 Thanks for pointing it out.
We'll get in touch with the PyTorch Lightning team to better understand the code restructure they're effecting (see https://github.com/PyTorchLightning/pytorch-lightning/pull/2384 ).
In the meantime, you can look at the prior version: https://github.com/PyTorchLightning/pytorch-lightning/blob/0.8.1/pytorch_lightning/loggers/trains.py
UnevenDolphin73 Am I missing anything in rephrasing your use case to "Have a single autoscaler service multiple queues" (where the autoscaler resource configuration is, in essence, the pool you mention)?
@<1523701157564780544:profile|TenseOstrich47> This is typically indicative of insufficient server disk space, causing ES to go into read-only mode or to turn active shards inactive or unassigned (see the FAQ).
The disk watermarks controlling the ES free-disk constraints are defined by default as a percentage of the disk space (so it might look to you like you still have plenty of space, but ES thinks otherwise). You can configure di...
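If ES has already flipped your indices to read-only, a common recovery once disk space has been freed is clearing the read-only block - a sketch, assuming ES is reachable on the host's default port:
```python
import requests

# Clear the read-only block ES applies when the flood-stage watermark is hit
# (run this only after freeing disk space, otherwise ES will re-apply it).
requests.put(
    "http://localhost:9200/_all/_settings",
    json={"index.blocks.read_only_allow_delete": None},
)
```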
@<1523701157564780544:profile|TenseOstrich47> The storage in question here is what's available on the machine hosting the ClearML server's docker containers (specifically, the ES one).
AverageRabbit65 Adding to SweetBadger76 's reference, e2e examples are available for the different pipeline implementation methods:
https://clear.ml/docs/latest/docs/guides/pipeline/pipeline_controller
https://clear.ml/docs/latest/docs/guides/pipeline/pipeline_decorator
https://clear.ml/docs/latest/docs/guides/pipeline/pipeline_functions
DepressedChimpanzee34 Always appreciated
WittyOwl57 I just used a couple of the experiments in this project ( https://app.community.clear.ml/projects/764d8edf41474d77ad671db74583528d/ ) on the free tier server.
RotundHedgehog76 Have you tried clearml-data add --files . ? (Probably best to try on a smaller subset first.) The SDK equivalent is sketched below.
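In case it's easier from code, a sketch of the SDK equivalent (project/dataset names are illustrative):
```python
from clearml import Dataset

ds = Dataset.create(dataset_project="examples", dataset_name="my-dataset")
ds.add_files(path=".")   # same as `clearml-data add --files .`
ds.upload()              # upload the added files
ds.finalize()            # close this dataset version
```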
@<1523706095791509504:profile|FiercePenguin76> The "Log" tab has been renamed "Console" in ClearML 0.17.0 - Thanks for pointing out the outdated description.
SmarmySeaurchin8 Following up on ColossalDeer61 's hint, notice this not-too-old thread on reusing globally installed packages: https://allegroai-trains.slack.com/archives/CTK20V944/p1597248476076700?thread_ts=1597248298.075500&cid=CTK20V944
Hi JuicyOtter4
The GUI search returns all experiments in the project that have your search string in their task id, name, description or any of their models' names.
You can use regex with the '.*' button in the search bar.
@<1687643893996195840:profile|RoundCat60> Looks like the docs have not yet caught up with a recent structural change in the repo, which renamed the 'server' folder to 'apiserver'.
So... the correct link would be None
Thanks for clarifying @<1523705301990117376:profile|WickedCat12> .
As I mentioned originally, plotting an arbitrary metric against another is further down the ClearML roadmap.
It'd be great if you could open a GitHub issue to help push it through :)
Hi HealthyStarfish45 ,
Since you're discussing the experiment list, I assume that by "fixed view per experiment" you actually mean "per project" (as the list view is across all experiments in the list)?
Under this assumption, note that the view configuration (column sort, custom columns, filters) is also encoded in the browser URL. So, until the Trains UI supports in-app per-project view preferences, you can simply bookmark the URL.
Does this help?
GreasyPenguin14 That's an annoying bug indeed - thanks for spotting it. If you need to circumvent it before a fix comes out in one of the near releases, you can programmatically use the https://clear.ml/docs/latest/docs/references/api/endpoints#post-projectsupdate endpoint, e.g.:
from clearml.backend_api.session.client import APIClient
client = APIClient()
client.projects.update(project='<project ID>', description='My new description')
Note you can get your project's ID either from the webapp URL...
GreasyPenguin14 When the project description is empty you get an "Add project overview" button instead of the "Edit" button:
WittyOwl57 No worries 🙂 happens to the best of us!
UnevenDolphin73 Well... not right now... Currently the ClearML UI only partitions internal artifact types.
That said, having user-defined artifact groups sure sounds worth looking into - Care to open a feature request ( https://github.com/allegroai/clearml/issues/new/choose )?
From the v0.13.0 release notes ( https://github.com/allegroai/trains-server/releases/tag/0.13.0 ):
- Reports average load metrics per day (CPU/memory)
- Reports average workload per day (amount and average duration of queues, agents and experiments)
The easy way to do that is to add the desired metrics/params as custom columns, then use the column filters: https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#customizing-the-experiments-table
DepressedChimpanzee34 "a filter similar to one in the scalars page where you can display a subset of the reported debug images can be useful"
The scalars page provides a metric hide/show control - Is this the one you mean? The debug images page also provides a filter by metric - Depending on your naming policy this can easily be used to focus on more sparsely appearing images.
Otherwise, an example of the filter you were thinking of would be appreciated.
Regardless, direct iteration access cou...