FrothyDog40 (Moderator)
0 Questions, 71 Answers
Active since 10 January 2023 · Last activity 2 years ago
Reputation: 0
0 Happy New Year Everyone!

@<1785841629471444992:profile|CluelessSheep59> find the latest ClearML server AMIs here

10 months ago
0 Hello. Is there any possibility to change the horizontal axis in Scalars to some chosen metric (e.g. an epochs metric)? As far as I have seen, it's possible to use wall time. Is that a planned feature?

@<1523705301990117376:profile|WickedCat12> ClearML scalars explicitly show a metric's progression over time (you can display it against iteration or wall-time).
Plotting one metric against another is a feature that lies further down ClearML's roadmap.

If your metric is reported only once per epoch, you can make use of the existing scalars functionality by setting the iteration parameter to the epoch number when reporting your metric.
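A minimal sketch of what that could look like with the ClearML Logger (the project/task names and the per-epoch value are placeholders for illustration):

```python
from clearml import Task

# Hypothetical project/task names, for illustration only
task = Task.init(project_name="examples", task_name="per-epoch scalars")
logger = task.get_logger()

for epoch in range(10):
    val_loss = 1.0 / (epoch + 1)  # placeholder for your once-per-epoch metric
    # Pass the epoch number as the iteration so the scalar's x-axis reflects epochs
    logger.report_scalar(title="loss", series="validation", value=val_loss, iteration=epoch)
```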

Does this make sense?

2 years ago
0 Can I run an autoscaler listening to a regular queue (i.e. combine autoscaling with on-premise machines)? Or does it run the danger of creating an EC2 instance while an on-premise machine takes the job?

UnevenDolphin73 Am I missing anything in rephrasing your use case to "Have a single autoscaler service multiple queues" (where the autoscaler resource configuration is, in essence, the pool you mention)?

3 years ago
0 Hello, I want to update Allegro. I found the link:

UnsightlySeagull42 The upgrade process is slightly different depending on the environment in which you've deployed your ClearML server (e.g. for a Linux/macOS deployment, see https://allegro.ai/clearml/docs/docs/deploying_clearml/clearml_server_linux_mac.html#upgrading ).
Note that the document you are referring to only applies when you're moving from the older, pre-0.16 versions, in which case DB migration is required.
If your server is more up to date (0.16 and newer), you should be OK with the link above.

4 years ago
0 Hi, I was wondering if there is a proper way to integrate my model config file (JSON) into the workflow using the UI. I've managed to connect it as a configuration object, but it would be preferable if I could load it into a table like the other params (in

IrateDolphin19 ClearML lets you save files generated by your code as task artifacts via Task.upload_artifact ( https://clear.ml/docs/latest/docs/references/sdk/task#upload_artifact ). For your use case, you can have your code create the artifact as it runs, and you can set the specific storage location through the task's output_uri field when you edit your configuration.
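A minimal sketch, assuming an S3 bucket as the storage target (the bucket, file name, and config values are placeholders):

```python
import json
from clearml import Task

# output_uri controls where artifacts/models are uploaded (placeholder bucket)
task = Task.init(project_name="examples", task_name="config artifact",
                 output_uri="s3://my-bucket/clearml")

# Write a model config file and upload it so it appears under the task's ARTIFACTS tab
with open("model_config.json", "w") as f:
    json.dump({"hidden_units": 128, "dropout": 0.2}, f)
task.upload_artifact(name="model_config", artifact_object="model_config.json")
```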

Does this help?

3 years ago
0 Hey, I hope this is the right place to ask. We're a small data science team that wants to log everything about our ML models. Looking around on the internet, mostly MLflow is being recommended, but occasionally the name Trains pops up. According to you,

DefeatedCrab47 For the most part, MLflow can serve basic ML models using scikit-learn. In contrast, Trains was designed with more general-purpose ML/DL workflows in mind, for which there's no "generic" way to serve models, as different scenarios can use different input encodings, model results can be represented in a variety of forms, etc.
Consider also that creating an HTTP endpoint for model inference is quite a breeze: there are multiple examples of Flask on top of any DL/ML framework w...
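For example, a minimal Flask sketch of such an endpoint (the prediction logic is a placeholder, not tied to any specific framework):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                   # e.g. {"features": [0.1, 0.2, 0.3]}
    features = payload.get("features", [])
    score = sum(features) / max(len(features), 1)  # placeholder for real model inference
    return jsonify({"prediction": score})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```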

5 years ago
0 The links to PyTorch Lightning are broken in the

DefeatedCrab47 Thanks for pointing it out.
We'll get in touch with the PyTorch Lightning team to better understand the code restructuring they're carrying out (see https://github.com/PyTorchLightning/pytorch-lightning/pull/2384 ).
In the meantime, you can look at the prior version: https://github.com/PyTorchLightning/pytorch-lightning/blob/0.8.1/pytorch_lightning/loggers/trains.py

5 years ago
0 “You can view the reported text in the

@<1523706095791509504:profile|FiercePenguin76> The "Log" tab has been renamed "Console" in ClearML 0.17.0 - Thanks for pointing out the outdated description.

4 years ago