DashingAlligator35
Moderator
1 Question, 11 Answers
Active since 21 March 2023
Last activity one year ago

Reputation: 0
Badges: 1
10 × Eureka!
0 Votes
18 Answers
1K Views
one year ago
0 Hi Everyone, I Was Working With Model Serving And Monitoring, And Wanted To Know About Monitoring Aspects/Usage In Serving. I Actually Wanted To Know About Exactly What All Queries Related To The Serving Can Be Done, Like What All Are Important Metric Mon

Well, I read this, but it is the same as what I had done before.
The query here gives the percentage of input data in each bucket over a period of time.
But my previous question and the other query are still not figured out.
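
Roughly something like the following, if I read the example right; the endpoint:x_bucket / endpoint:x_count names are just how I am assuming the metrics appear, following the naming used elsewhere in this thread:

# share of observations falling into each bucket of feature x over the last hour
# (note: standard Prometheus _bucket series are cumulative in the le label)
sum by (le) (increase(endpoint:x_bucket[1h]))
  / ignoring(le) group_left
sum(increase(endpoint:x_count[1h]))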

one year ago
0 Hi Everyone, I Was Working With Model Serving And Monitoring, And Wanted To Know About Monitoring Aspects/Usage In Serving. I Actually Wanted To Know About Exactly What All Queries Related To The Serving Can Be Done, Like What All Are Important Metric Mon

I understood this, but I still have a few doubts, such as what the exact query would be for requests per second, given an endpoint.
Also, for the example you gave, I got the query up and running. Let's say I want a query to get the feature value (x and y in your example) distribution over some duration of time; what should that query be? I tried endpoint:x_bucket{"+inf"}[$duration]/endpoint:x_sum{"+inf"}[$duration] and some other variations, but couldn't get it right. Can you help?
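
Edit: a sketch of where I got to, in case it helps anyone reading later. The endpoint:x_count series is an assumption on my part (inferred from the endpoint:x_bucket / endpoint:x_sum names above), so the exact names may differ:

# requests per second for the endpoint, using the histogram's _count series
rate(endpoint:x_count[5m])

# bucket fractions over $duration: wrap the range vectors in increase() before
# dividing, and divide by _count (number of observations) rather than _sum
increase(endpoint:x_bucket[$duration])
  / ignoring(le) group_left
increase(endpoint:x_count[$duration])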

one year ago
0 Hi Everyone, I Was Working With Model Serving And Monitoring, And Wanted To Know About Monitoring Aspects/Usage In Serving. I Actually Wanted To Know About Exactly What All Queries Related To The Serving Can Be Done, Like What All Are Important Metric Mon

So, this allows us to define buckets for the histogram distribution, as given in the example docs for monitoring, but apart from that, what exactly can we add? E.g. I want to view the feature value distribution over an interval, and the baseline distribution of the training and test sets; how can I do that with the CLI tool, or do I need to make changes in the original serving code?
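
For the first part, what I have in mind is a per-bucket view over time, something a Grafana heatmap panel could show; the metric name is again an assumption following the endpoint:x_bucket naming above:

# per-bucket counts of feature x over $duration, e.g. for a Grafana heatmap panel
sum by (le) (increase(endpoint:x_bucket[$duration]))

The training/test baseline is, as far as I can tell, not something Prometheus scrapes on its own, which is why I am asking whether it needs changes in the serving code.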

one year ago
0 Hi Everyone, I Was Working With Model Serving And Monitoring, And Wanted To Know About Monitoring Aspects/Usage In Serving. I Actually Wanted To Know About Exactly What All Queries Related To The Serving Can Be Done, Like What All Are Important Metric Mon

Agreed with your answer. I mistook the example query given in the tutorial for something else rather than the feature distribution over time.
My next question is: what other relevant queries can we visualize (in Grafana) that will help in monitoring the served model and the end user? So I wanted the queries for that; for instance, can we have a query for K-L divergence from the available metrics (that Prometheus scraped from clearml-serving-statistics), and if yes, then what is t...
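
To make the K-L divergence part concrete, the closest I can sketch from PromQL alone is the query below. It assumes a baseline fraction per bucket is also available in Prometheus as a hypothetical series baseline_x_fraction{le=...} (pushed separately or defined via recording rules; as far as I know it is not among the metrics scraped from clearml-serving-statistics), and the endpoint:x_bucket / endpoint:x_count names are assumptions as before:

# sum over buckets of p * ln(p / q), where p is the live per-bucket fraction
# and q is the (hypothetical) baseline fraction with matching le labels;
# with cumulative _bucket series this compares cumulative fractions,
# not a true per-bin distribution
sum(
    (
      sum by (le) (increase(endpoint:x_bucket[1h]))
        / ignoring(le) group_left
      sum(increase(endpoint:x_count[1h]))
    )
  * on (le)
    ln(
        (
          sum by (le) (increase(endpoint:x_bucket[1h]))
            / ignoring(le) group_left
          sum(increase(endpoint:x_count[1h]))
        )
      / on (le) baseline_x_fraction
    )
)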

one year ago
0 Hi Everyone, I Was Working With Model Serving And Monitoring, And Wanted To Know About Monitoring Aspects/Usage In Serving. I Actually Wanted To Know About Exactly What All Queries Related To The Serving Can Be Done, Like What All Are Important Metric Mon

The one where I asked about the query for the feature value distribution over time, which can be executed and shown in Prometheus and Grafana with the metrics that are currently being scraped by Prometheus from clearml-statistics.

one year ago
0 Hi Team,In My Dl Project Im Using Lstm But Model Logging Isn'T Happening In Artifacts . Does Clearml Supports Lstm?

Hi @<1542316991337992192:profile|AverageMoth57>,
were you able to resolve this issue? I am also facing the same problem for all types of models/frameworks except Keras, even after saving the model to disk.

one year ago