HappyDove3 you can get some more insight on the different configuration methods and how to use them here: https://clear.ml/docs/latest/docs/fundamentals/hyperparameters
RotundHedgehog76 Thanks for the spot - seems like docs are wrong, and CLI help is correct: '--skip-docker-network' will NOT pass '--network host' to the docker.
@<1523709410411548672:profile|NuttyFox2> Since the default server user configuration does not require authentication, I'm assuming your use case calls for some users being authenticated where others are not?
Such a mixed access mode is currently not on the near-term roadmap for the OSS server - you should create a feature request to help push it into the development plan.
ExcitedFish86 You can add custom columns ( https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#adding-metrics-and--or-hyperparameters ) to include any parameter/metric that helps your analysis (and subsequently filter the table on those columns).
There's not yet the equivalent of a parameter importance visualization, though such insight visualizations are definitely in our sights.
Sure appreciate if you can open an issue ( https://github.com/allegroai/clearml/issues/new ) on the subject :)
BattyLion34 Adding to AgitatedDove14 hint. See the following docs page: https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_config_for_clearml_server.html
TightElk12 This makes a lot of sense - should make it into one of the coming releases
WittyOwl57 Is that information available for you on each of the compared experiments when you view them individually?
From the https://github.com/allegroai/trains-server/releases/tag/0.13.0 :
Reports average load metrics per day (CPU/memory)
Reports average workload per day (amount and average duration of queues, agents and experiments)
SmarmySeaurchin8 Following up on ColossalDeer61 's hint, notice this not-too-old thread on reusing globally installed packages: https://allegroai-trains.slack.com/archives/CTK20V944/p1597248476076700?thread_ts=1597248298.075500&cid=CTK20V944
DepressedChimpanzee34 ClearML tries to conserve storage by limiting the history length for debug images (see sdk.metrics.file_history_size in https://clear.ml/docs/latest/docs/configs/clearml_conf#sdk-section ), though you can indeed keep a longer history by setting a larger value or by using a metric/variant naming scheme to circumvent this limit.
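As a minimal sketch of both approaches (project/task names and values below are illustrative; the conf key is the one linked above):
```python
import numpy as np
from clearml import Task

# Hypothetical project/task names
task = Task.init(project_name="examples", task_name="debug samples history")
logger = task.get_logger()

for iteration in range(100):
    img = (np.random.rand(64, 64, 3) * 255).astype("uint8")

    # Same title/series on every iteration: only the last
    # sdk.metrics.file_history_size samples are kept (set in clearml.conf)
    logger.report_image(title="samples", series="random", iteration=iteration, image=img)

    # Naming-scheme workaround: a per-iteration series means each sample is a
    # distinct series, so nothing is overwritten (at the cost of more storage)
    logger.report_image(title="samples_full", series=f"iter_{iteration}", iteration=iteration, image=img)
```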
Does your use case call for accessing a specific iteration for all images or when looking at a specific image? Note that the debug image viewer (wh...
IrateDolphin19 ClearML provides for saving files generated as part of your code execution as artifacts ( https://clear.ml/docs/latest/docs/references/sdk/task#upload_artifact ). For your use case, you can have your code create the artifact as it runs, and set the specific storage location through the task's output_uri field when you edit its configuration.
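For instance, a minimal sketch (project/task names and the bucket path are placeholders; output_uri can also be set by editing the task's output destination in the UI):
```python
from clearml import Task

# Hypothetical project/task names and storage location
task = Task.init(
    project_name="examples",
    task_name="artifact upload",
    output_uri="s3://my-bucket/artifacts",  # where artifacts/models get uploaded
)

# Register an artifact created by the code as it runs
results = {"accuracy": 0.93, "loss": 0.17}
task.upload_artifact(name="run_summary", artifact_object=results)
```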
Does this help?
Hi HealthyStarfish45 ,
Since you're discussing the experiment list, I assume that by "fixed view per experiment" you actually mean "per project" (as the list view is across all experiments in the list)?
Under this assumption, note that the view configuration (column sorting, custom columns, filters) is also specified in the browser URL, so until the Trains UI supports in-app per-project view preferences, you can simply bookmark the URL.
Does this help?
MelancholyElk85 Thanks for calling this to attention. What do you think would have made it easier for you to notice the available extended list content?
I would assume that a "type to match" option would also have helped?
Appreciate if you could open an issue ( https://github.com/allegroai/clearml/issues/new/choose ) so this can be pushed forward.
GentleSwallow91 For more information, look at what ClearML logs for your experiments: https://docs-testing.allegro.ai/docs/latest/docs/fundamentals/task#logging-task-information
UnevenDolphin73 Well... not right now... Currently the ClearML UI only partitions internal artifact types.
That said, having user-defined artifact groups sure sounds worth looking into - care to open a feature request ( https://github.com/allegroai/clearml/issues/new/choose )?
UnevenDolphin73 I think it'd be easier to track as a separate one.
Thanks for noticing @<1523708920831414272:profile|SuperficialDolphin93> - ClearML is already there under its legacy "Trains" name; it's indeed past time for an update.
@<1687643893996195840:profile|RoundCat60> Looks like the docs have not caught up yet with a recent structural change in the repo, which renamed the 'server' folder to 'apiserver'.
So... the correct link would be None
Hi DefeatedCrab47 ,
The examples folder has just been restructured: Find the example here:
https://github.com/allegroai/trains/blob/master/examples/services/hyper-parameter-optimization/hyper_parameter_optimizer.py
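If it helps, here's a condensed sketch of what that example does, written against the newer clearml package naming (the linked file uses the trains imports); the base task ID, parameter names, and queue are placeholders:
```python
from clearml.automation import (
    DiscreteParameterRange,
    HyperParameterOptimizer,
    UniformIntegerParameterRange,
)
from clearml.automation.optuna import OptimizerOptuna

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",  # the experiment to optimize (placeholder)
    hyper_parameters=[
        UniformIntegerParameterRange("General/batch_size", min_value=16, max_value=64, step_size=16),
        DiscreteParameterRange("General/learning_rate", values=[1e-4, 1e-3, 1e-2]),
    ],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=OptimizerOptuna,
    max_number_of_concurrent_tasks=2,
    execution_queue="default",
)

optimizer.start()
optimizer.wait()
optimizer.stop()
```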
DepressedChimpanzee34 Always appreciated
@<1559349204206227456:profile|BeefyStarfish55> try checking out the general overview on pipelines here, and info on the pipelines UI here.
Each step's arguments (and results) should appear in the step's details panel (which you can then follow to the underlying task for complete, in-depth details).
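As a rough sketch of where those step arguments come from (project/task names here are placeholders, and each step clones a pre-existing task):
```python
from clearml import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")

pipe.add_step(
    name="preprocess",
    base_task_project="examples",
    base_task_name="preprocess data",
    # these overrides are what you'll see as the step's arguments in the UI
    parameter_override={"General/dataset_url": "https://example.com/data.csv"},
)
pipe.add_step(
    name="train",
    parents=["preprocess"],
    base_task_project="examples",
    base_task_name="train model",
    # a step's arguments can reference a parent step, e.g. its task ID
    parameter_override={"General/dataset_task_id": "${preprocess.id}"},
)

pipe.start(queue="services")
```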
MysteriousBee56 Would providing Trains with an "import mode" (say, via an environment variable or command-line flag), in which it creates a draft server entry, populates all the execution/environment info, and exits before it actually starts employing the ML infrastructure, address your use case?
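To make the proposal concrete, a hypothetical sketch of how such a gate might look from user code (the environment variable is made up; this is written against the current clearml SDK, where Task.execute_remotely already provides similar register-and-exit behaviour):
```python
import os
from clearml import Task

# Hypothetical project/task names
task = Task.init(project_name="examples", task_name="imported experiment")

# Made-up "import mode" switch - when set, register the draft entry
# (code, environment, parameters) on the server and exit immediately
if os.getenv("TRAINS_IMPORT_MODE"):
    task.execute_remotely(queue_name=None, exit_process=True)

# ... the actual training code would only run when not in "import mode" ...
```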
@<1580367723722969088:profile|SmoothDuck83> Not every plot can be trivially formed as a table (i.e. CSV) - that's why the JSON export is available for all plots.
What were you considering?
WittyOwl57 I just used a couple of the experiments in https://app.community.clear.ml/projects/764d8edf41474d77ad671db74583528d/ on the free tier server.
WittyOwl57 No worries 🙂 happens to the best!
HappyDove3 Notice that in https://github.com/allegroai/clearml/issues/400 the goal is to see a table plot in the UI scalars tab for a specific experiment (with additional discussions on how these will be addressed when comparing experiments).
Note that once you take the approach you suggested of logging your metrics as single values, you can configure your experiment comparison scalars view to show single values instead of the time-series graph, which I think will provide you with the matrix c...
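A minimal sketch of the single-value approach, assuming a recent clearml SDK where Logger.report_single_value is available (names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="single value metrics")
logger = task.get_logger()

# Standalone values (no iteration axis) - these appear in the scalars
# "single value" view and can be shown side by side when comparing experiments
logger.report_single_value(name="accuracy", value=0.91)
logger.report_single_value(name="f1", value=0.87)
```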
JitteryCoyote63 Great idea. Appreciate if you could open a feature request ( https://github.com/allegroai/clearml/issues/new/choose ).
Hi JuicyOtter4
The GUI search returns all experiments in the project that have your search string in their task id, name, description or any of their models' names.
You can use regex with the '.*' button in the search bar.
DefeatedCrab47 Happy you're finding Trains useful 🙂
but it definitely has its advantages if TRAINS would support it (early stage Data Science infrastructure).
No doubt, and I definitely see such a usable example in the cards for Trains' upcoming versions...
DepressedChimpanzee34 Experience has shown that some mechanisms for mitigating the impact of large sets on browser performance are required.
Your 2nd suggestion for adding an in-app search tool for such sections seems to be completely in line with ClearML's behaviour in other UI sections (e.g. console logs) - it'd be great if you could open a feature request ( https://github.com/allegroai/clearml/issues/new/choose ).