@<1580367723722969088:profile|SmoothDuck83> CSV export is only available for table plots
@<1785841629471444992:profile|CluelessSheep59> find the latest ClearML server AMIs here
@<1523705301990117376:profile|WickedCat12> ClearML Scalars explicitly show a metric's progression over time (you can display it by iteration or by wall-time).
Plotting one metric against another is a feature that lies further down ClearML's roadmap.
If your metric is reported only once per epoch, you can use the existing scalars functionality by passing the epoch as the iteration parameter when reporting the metric.
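For example, a minimal sketch (the project/task names and the dummy per-epoch value are just placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="per-epoch metric")
logger = task.get_logger()

for epoch in range(10):
    val_loss = 1.0 / (epoch + 1)  # stand-in for your real per-epoch metric
    # Report the epoch index as the "iteration" so the scalar advances once per epoch
    logger.report_scalar(title="validation", series="loss", value=val_loss, iteration=epoch)
```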
Does this make sense?
KindGiraffe71 Have you checked out the https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch-lightning/pytorch_lightning_example.py example? This previous discussion https://clearml.slack.com/archives/CTK20V944/p1616070536033700 provides some insight into how it works under the hood.
BattyLion34 Adding to AgitatedDove14's hint, see the following docs page: https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_config_for_clearml_server.html
UnevenDolphin73 Am I missing anything in rephrasing your use case to "Have a single autoscaler service multiple queues" (where the autoscaler resource configuration is, in essence, the pool you mention)?
UnsightlySeagull42 The upgrade process is slightly different depending on the environment in which you've deployed your ClearML server (e.g. for a https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_linux_mac.html#upgrading ).
Note that the document you are referring to only applies when you're moving from older, pre-0.16 versions, in which case a DB migration is required.
If your server is more up to date (0.16 and newer) you should be OK with the link above.
IrateDolphin19 ClearML lets you save files generated by your code as it runs through https://clear.ml/docs/latest/docs/references/sdk/task#upload_artifact . For your use case, your code can create the artifact during the run, and you can set the specific storage location through the task's output_uri field when you edit your configuration.
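As a rough sketch (the bucket URI, project/task names and file name are placeholders):
```python
from clearml import Task

# output_uri determines where artifacts (and models) are uploaded
task = Task.init(
    project_name="examples",
    task_name="artifact upload",
    output_uri="s3://my-bucket/clearml",  # placeholder storage destination
)

# Register a file created during the run as a task artifact
with open("results.csv", "w") as f:
    f.write("epoch,loss\n0,0.5\n")
task.upload_artifact(name="results", artifact_object="results.csv")
```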
Does this help?
DefeatedCrab47 For the most part, mlflow can serve basic ML models using scikit-learn. In contrast, Trains was designed with more general-purpose ML/DL workflows in mind, for which there's no "generic" way to serve models: different scenarios can use different input encodings, model results can be represented in a variety of forms, etc.
Consider also that creating an HTTP endpoint for model inference is quite a breeze: there are multiple examples of Flask on top of any DL/ML framework w...
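To illustrate, a minimal Flask sketch on top of a scikit-learn model; the model file, input format, and numeric output are all assumptions:
```python
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.pkl")  # placeholder path to your trained model

@app.route("/predict", methods=["POST"])
def predict():
    # Expecting a JSON body such as {"features": [1.0, 2.0, 3.0]}
    features = request.get_json()["features"]
    prediction = model.predict([features])
    return jsonify({"prediction": float(prediction[0])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```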
DefeatedCrab47 Thanks for pointing it out.
We'll get in touch with the PyTorch Lightning team to better understand the code restructure they're effecting (see https://github.com/PyTorchLightning/pytorch-lightning/pull/2384 ).
In the meantime, you can look at the prior version: https://github.com/PyTorchLightning/pytorch-lightning/blob/0.8.1/pytorch_lightning/loggers/trains.py
@<1523706095791509504:profile|FiercePenguin76> The "Log" tab has been renamed "Console" in ClearML 0.17.0 - Thanks for pointing out the outdated description.