JitteryCoyote63 Great idea. We'd appreciate it if you could open an issue: https://github.com/allegroai/clearml/issues/new/choose
RotundHedgehog76 Have you tried clearml-data add --files .? (Probably best to try on a smaller subset first)
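For reference, the equivalent flow via the Python SDK might look something like this minimal sketch (project/dataset names here are placeholders):
from clearml import Dataset

# create a new dataset and add a local folder's contents to it
ds = Dataset.create(dataset_project="examples", dataset_name="my dataset")
ds.add_files(path=".")
ds.upload()    # upload the files to the default storage
ds.finalize()  # close this dataset version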
DefeatedCrab47 For the most part, mlflow can serve basic ML models using scikit-learn. In contrast, Trains was designed with more general-purpose ML/DL workflows in mind, for which there's no "generic" way to serve models: different scenarios can use different input encodings, model results can be represented in a variety of forms, etc.
Consider also, that creating an HTTP endpoint for model inference is quite a breeze: there are multiple examples of Flask on top of any DL/ML framework w...
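Just as an illustration (this is generic Flask, not Trains functionality), a minimal sketch of such an endpoint, assuming a scikit-learn model previously saved with pickle as model.pkl:
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# load the previously pickled scikit-learn model
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]  # e.g. [[5.1, 3.5, 1.4, 0.2]]
    return jsonify(predictions=model.predict(features).tolist())

if __name__ == "__main__":
    app.run(port=8080)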
DefeatedCrab47 Happy you're finding Trains useful 🙂
but it definitely has its advantages if TRAINS would support it (early stage Data Science infrastructure).
No doubt, and I definitely see such a usable example in the cards for Trains' upcoming versions...
DefeatedCrab47 Thanks for pointing it out.
We'll get in touch with the PyTorch Lightning team to better understand the code restructuring they're doing (see https://github.com/PyTorchLightning/pytorch-lightning/pull/2384 ).
In the meantime, you can look at the prior version: https://github.com/PyTorchLightning/pytorch-lightning/blob/0.8.1/pytorch_lightning/loggers/trains.py
GentleSwallow91 For more information, look at what ClearML logs for your experiments: https://docs-testing.allegro.ai/docs/latest/docs/fundamentals/task#logging-task-information
UnevenDolphin73 I think it'd be easier to track as a separate one.
HappyDove3 Notice that in https://github.com/allegroai/clearml/issues/400 the goal is to see a table plot in the UI scalars tab for a specific experiment (with additional discussions on how these will be addressed when comparing experiments).
Note that once you take the approach you suggested of logging your metrics as single values, you can configure your experiment comparison scalars view to show single values instead of the time-series graph, which I think will provide you with the matrix c...
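For instance, a minimal sketch of logging a metric as a single value (assuming a recent ClearML SDK; project/task names are placeholders):
from clearml import Task

task = Task.init(project_name="examples", task_name="single value demo")
# single values appear in the scalars summary rather than as a time series
task.get_logger().report_single_value(name="accuracy", value=0.92)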
MelancholyElk85 Thanks for calling attention to this. What do you think would have made it easier for you to notice the available extended list content?
I would assume that a "type to match" option would also have helped?
We'd appreciate it if you could open an issue ( https://github.com/allegroai/clearml/issues/new/choose ) so this can be pushed forward.
UnevenDolphin73 Well... not right now... Currently the ClearML UI only partitions artifacts by their internal types.
That said, having user-defined artifact groups sure sounds worth looking into - care to open a feature request? https://github.com/allegroai/clearml/issues/new/choose
Hi DefeatedCrab47 ,
The examples folder has just been restructured; you can find the example here:
https://github.com/allegroai/trains/blob/master/examples/services/hyper-parameter-optimization/hyper_parameter_optimizer.py
DepressedChimpanzee34 Have you noticed the "Show n experiments selected" button on the bottom bar? This effectively toggles your view between whatever is currently sorted/filtered and the current item selection.
To address the scenario you describe: Switch to "Show selected experiments", remove the redundant items, and switch back to the original view: "Show all experiments"
Thoughts?
DepressedChimpanzee34 Apologies for missing your previous comment.
Totally agree that the global selection indicator should maintain its 'clear selection' behaviour even if some/all of the selection is off-screen.
The easy way to do that is to add the desired metrics/params as custom columns, then use the column filters: https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#customizing-the-experiments-table
UnsightlySeagull42 The upgrade process is slightly different depending on the environment in which you've deployed your ClearML server (e.g. for a Linux/macOS deployment: https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_linux_mac.html#upgrading ).
Note that the document you're referring to only applies when you're moving from the older pre-0.16 versions, in which case DB migration is required.
If your server is more up to date (0.16 and newer) you should be OK with the link above.
WittyOwl57 No worries 🙂 happens to the best!
ExcitedFish86 You can add custom columns ( https://clear.ml/docs/latest/docs/webapp/webapp_exp_table#adding-metrics-and--or-hyperparameters ) to include any parameter/metric that helps your analysis (and subsequently filter the table on those columns).
There's not yet the equivalent of a parameter importance visualization, though such insight visualizations are definitely in our sights.
We'd sure appreciate it if you could open an issue on the subject: https://github.com/allegroai/clearml/issues/new :)
WittyOwl57 The UI shows a detailed repo and package comparison under "Details"/"Execution" (see sample screenshot), whereas auto-logged environment variables are shown under the "HyperParameters" comparison tab.
What do you find missing beyond those?
WittyOwl57 I just used a couple of the experiments in this project on the free-tier server: https://app.community.clear.ml/projects/764d8edf41474d77ad671db74583528d/
TightElk12 This makes a lot of sense - it should make it into one of the coming releases.
UnevenDolphin73 Am I missing anything in rephrasing your use case to "Have a single autoscaler service multiple queues" (where the autoscaler resource configuration is, in essence, the pool you mention)?
ScrawnyLion96 Looks like a case of broken links - Check out https://clear.ml/docs/latest/docs/references/api/definitions#tasksexecution and https://clear.ml/docs/latest/docs/references/api/definitions#tasksconfiguration_item
DepressedChimpanzee34 Experience has shown that some mechanisms for mitigating the impact of large sets on browser performance are required.
Your 2nd suggestion of adding an in-app search tool for such sections seems completely in line with ClearML's behaviour in other UI sections (e.g. console logs) - it'd be great if you could open an issue: https://github.com/allegroai/clearml/issues/new/choose
@<1628927672681762816:profile|GreasyKitten62> When you have specific display considerations, you can implement them through report_table's 'extra_layout' and 'extra_data' parameters
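For example, a minimal sketch (project/task names and layout values are placeholders):
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="table demo")
df = pd.DataFrame({"metric": ["precision", "recall"], "value": [0.91, 0.87]})
task.get_logger().report_table(
    title="Results",
    series="summary",
    iteration=0,
    table_plot=df,
    extra_layout={"title": {"font": {"size": 18}}},  # plotly layout overrides
)
'extra_data' similarly accepts plotly data-level overrides.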
HappyDove3 You can get some more insight on the different configuration methods and how to use them here: https://clear.ml/docs/latest/docs/fundamentals/hyperparameters
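For instance, connecting a parameter dictionary looks something like this (names/values are placeholders):
from clearml import Task

task = Task.init(project_name="examples", task_name="hyperparameter demo")
params = {"lr": 0.001, "batch_size": 32}
# connected parameters are logged and can be overridden from the UI when cloning
params = task.connect(params)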
OutrageousSheep60 You can see a discussion on the topic in https://github.com/allegroai/clearml/issues/724 .
TL;DR:
- Currently the containing project is available in the UI as a tooltip on the dataset name
- An alternate "Project view" for the datasets page is in the works
WittyOwl57 Is that information available for you on each of the compared experiments when you view them individually?
GreasyPenguin14 That's an annoying bug indeed - thanks for spotting it. If you need to circumvent it before a fix comes out in one of the near releases, you can programmatically use the projects.update endpoint ( https://clear.ml/docs/latest/docs/references/api/endpoints#post-projectsupdate ), e.g.:
from clearml.backend_api.session.client import APIClient

client = APIClient()
client.projects.update(project='<project ID>', description='My new description')
Note you can get your project's ID either from the webapp URL...
SharpDove45 you can programmatically control the configured server using https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/task_module/task_task.html?highlight=set_credentials#clearml.task.Task.set_credentials
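For example, a minimal sketch (hosts and credentials are placeholders):
from clearml import Task

# point the SDK at a specific server before calling Task.init
Task.set_credentials(
    api_host="https://api.clear.ml",
    web_host="https://app.clear.ml",
    files_host="https://files.clear.ml",
    key="<access key>",
    secret="<secret key>",
)
task = Task.init(project_name="examples", task_name="demo")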