@<1523701157564780544:profile|TenseOstrich47> Seems like the ClearML website is temporarily down 😞 . Should be resolved soon though.
@<1523701157564780544:profile|TenseOstrich47> The storage in question here is what's available on the machine hosting the ClearML server's docker containers (specifically, the ES one).
There's an example here to get you going @<1645597514990096384:profile|GrievingFish90> .
We'll definitely look into finding a place for this info in the ClearML docs.
@<1523709410411548672:profile|NuttyFox2> Since the default server configuration does not require user authentication, I'm assuming your use case calls for some users being authenticated while others are not?
Such a mixed access mode is currently not on the near-term roadmap for the OSS server - you should create a feature request to help push it into the development plan.
@<1523706095791509504:profile|FiercePenguin76> The "Log" tab has been renamed "Console" in ClearML 0.17.0 - Thanks for pointing out the outdated description.
DepressedChimpanzee34 Have you noticed the "Show n experiments selected" button on the bottom bar? This effectively toggles your view between whatever is currently sorted/filtered and the current item selection.
To address the scenario you describe: Switch to "Show selected experiments", remove the redundant items, and switch back to the original view: "Show all experiments"
Thoughts?
DepressedChimpanzee34 Apologies for missing your previous comment.
Totally agree that the global selection indicator should maintain its 'clear selection' behaviour even if some/all of the selection is off-screen.
JitteryCoyote63 Great idea - we'd appreciate it if you could open a feature request: https://github.com/allegroai/clearml/issues/new/choose
JitteryCoyote63 Not currently there, but it certainly sounds like something to add to the list - care to open a feature request? https://github.com/allegroai/clearml/issues/new/choose
ExcitedFish86 You can add custom columns ( https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#adding-metrics-and--or-hyperparameters ) to include any parameter/metric column that helps your analysis (and subsequently filter the table on those columns).
There's no parameter importance visualization yet, though such insight visualizations are definitely in our sights.
We'd sure appreciate it if you could open a GitHub issue on the subject: https://github.com/allegroai/clearml/issues/new :)
IrateDolphin19 ClearML supports saving files generated by your code execution through Task.upload_artifact ( https://clear.ml/docs/latest/docs/references/sdk/task#upload_artifact ). For your use case, have your code create the artifact as it runs; you can set the specific storage location through the task's output_uri field when you edit your configuration.
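For illustration, a minimal sketch (the project/task names and storage URI below are placeholders):
```python
from clearml import Task

# output_uri sets where uploaded artifacts are stored - the bucket below is
# a placeholder; the same field can also be edited in the task configuration.
task = Task.init(
    project_name="examples",                # placeholder
    task_name="artifact upload demo",       # placeholder
    output_uri="s3://my-bucket/artifacts",  # placeholder destination
)

# Upload a file your code generated during the run as an artifact
task.upload_artifact(name="predictions", artifact_object="predictions.csv")
```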
Does this help?
UnsightlySeagull42 The upgrade process differs slightly depending on the environment in which you've deployed your ClearML server (e.g. for Linux/macOS: https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_linux_mac.html#upgrading ).
Note that the document you're referring to only applies when you're moving from older, pre-0.16 versions, in which case DB migration is required.
If your server is more up to date (0.16 and newer) you should be OK with the link above.
@<1523701157564780544:profile|TenseOstrich47> This is typically indicative of insufficient server disk space, causing ES to go into read-only mode or to turn active shards inactive or unassigned (see the FAQ).
The disk watermarks controlling the ES free-disk constraints are defined by default as a percentage of the disk space (so it might look to you like you still have plenty of space, but ES thinks otherwise). You can configure di...
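Should you need to change those thresholds, here's a sketch of setting absolute (rather than percentage-based) watermarks through the standard ES cluster settings API - assuming the ClearML server's ES container is reachable on localhost:9200:
```python
import requests

# Assumption: the ClearML server's Elasticsearch container listens on
# localhost:9200. These are standard ES disk watermark settings; absolute
# values replace the default percentage-based thresholds.
requests.put(
    "http://localhost:9200/_cluster/settings",
    json={
        "transient": {
            "cluster.routing.allocation.disk.watermark.low": "10gb",
            "cluster.routing.allocation.disk.watermark.high": "5gb",
            "cluster.routing.allocation.disk.watermark.flood_stage": "1gb",
        }
    },
)
```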
MysteriousBee56 Would providing Trains with an "import mode" (say, via an environment or command-line variable) address your use case? In this mode, Trains would create a draft server entry, populate all the execution/environment info, and exit before actually employing the ML infrastructure.
DepressedChimpanzee34 Thanks for clarifying where the current debug images display falls short for your use case - extending the filtering to match the behaviour of the scalars sounds like a great idea 🙂
UnevenDolphin73 Am I missing anything in rephrasing your use case to "Have a single autoscaler service multiple queues" (where the autoscaler resource configuration is, in essence, the pool you mention)?
GreasyPenguin14 When the project description is empty you get an "Add project overview" button instead of the "Edit" button:
ScrawnyLion96 Looks like a case of broken links - Check out https://clear.ml/docs/latest/docs/references/api/definitions#tasksexecution and https://clear.ml/docs/latest/docs/references/api/definitions#tasksconfiguration_item
DepressedChimpanzee34 Experience has shown that some mechanisms are required for mitigating the impact of large sets on browser performance.
Your 2nd suggestion of adding an in-app search tool for such sections seems completely in line with ClearML's behaviour in other UI sections (e.g. console logs) - it'd be great if you could open a feature request: https://github.com/allegroai/clearml/issues/new/choose
Hi DefeatedCrab47 ,
The examples folder has just been restructured: Find the example here:
https://github.com/allegroai/trains/blob/master/examples/services/hyper-parameter-optimization/hyper_parameter_optimizer.py
From the trains-server 0.13.0 release notes ( https://github.com/allegroai/trains-server/releases/tag/0.13.0 ):
- Reports average load metrics per day (CPU/memory)
- Reports average workload per day (amount and average duration of queues, agents and experiments)
DefeatedCrab47 Thanks for pointing it out.
We'll get in touch with the PyTorch Lightning team to better understand the code restructure they're effecting (see https://github.com/PyTorchLightning/pytorch-lightning/pull/2384 ).
In the meantime, you can look at the prior version: https://github.com/PyTorchLightning/pytorch-lightning/blob/0.8.1/pytorch_lightning/loggers/trains.py
SharpDove45 You can programmatically control the configured server using Task.set_credentials: https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html?highlight=set_credentials#clearml.task.Task.set_credentials
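A minimal sketch (the hosts and keys below are placeholders) - note that set_credentials must be called before Task.init:
```python
from clearml import Task

# Placeholder server addresses and credentials - substitute your own.
Task.set_credentials(
    api_host="https://api.clear.ml",
    web_host="https://app.clear.ml",
    files_host="https://files.clear.ml",
    key="YOUR_ACCESS_KEY",
    secret="YOUR_SECRET_KEY",
)

# Only tasks initialized after this point will use the above configuration
task = Task.init(project_name="examples", task_name="credentials demo")
```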
ItchyJellyfish73 Have you looked at dynamic GPU allocation? https://clear.ml/docs/latest/docs/clearml_agent#dynamic-gpu-allocation
UpsetTurkey67 The single set of online documentation ( https://clear.ml/docs/latest/docs ) denotes OSS/Free-SaaS/Paid features as such. For example: https://clear.ml/docs/latest/docs/configs/clearml_conf#configuration-vault
Hi HealthyStarfish45 ,
Since you're discussing the experiment list, I assume that by "fixed view per experiment" you actually mean "per project" (as the view applies to the whole experiment list)?
Under this assumption, note that the view configuration (column sort, custom columns, filters) is also encoded in the browser URL. So, until the Trains UI supports in-app per-project view preferences, you can simply bookmark the URL.
Does this help?
Thanks for clarifying @<1523705301990117376:profile|WickedCat12> .
As I mentioned originally, plotting an arbitrary metric against another is further down the ClearML roadmap.
It'd be great if you could open a GitHub issue to help push it through :)
@<1523705301990117376:profile|WickedCat12> ClearML scalars explicitly show metric progression over time (you can display by iteration or wall-time).
Plotting one metric against another is a feature that lies further down ClearML's roadmap.
If your metric is reported only once per epoch, you can use the existing scalars functionality by passing the epoch as the iteration parameter when reporting your metric.
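For example, something along these lines (train_one_epoch is a hypothetical stand-in for your training step):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="per-epoch scalars")
logger = task.get_logger()

for epoch in range(10):
    # train_one_epoch() is a hypothetical stand-in for your training loop
    val_accuracy = train_one_epoch()
    # Pass the epoch index as the iteration so the scalar advances per epoch
    logger.report_scalar(
        title="validation", series="accuracy",
        value=val_accuracy, iteration=epoch,
    )
```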
Does this make sense?
WittyOwl57 Is that information available for you on each of the compared experiments when you view them individually?