Yup, so the multiple endpoints differ only in the weight values but use the same preprocessing script, and each preprocessing script uses a different threshold. Okay, I will test it on my friend's PC then. I will give an update as soon as this problem is solved. I'm not familiar with the term 'PR' since I've never joined a community before; is it a pull request? If yes, then I will open a PR and post the update in this thread.
Okay thank you Sonckie.
Yup, I tried that method. In my application's use case it needs to connect to multiple model endpoints with the same preprocess method. So I need to change the task id in the preprocess.py code every time I deploy the model? My docker-compose version is 1.29.2. I am running the Docker Desktop app on Windows 10 with WSL. My laptop uses an AMD APU; I haven't looked into configuring Docker to run with a GPU. Thank you for the answer. It seems the error is the same as with the skl...
from multiprocessing.sharedctypes import Value
from typing import Any
import numpy as np
import pandas as pd
from clearml import Task
import pickle
from sklearn.linear_model import LinearRegression, RANSACRegressor

# Notice: the Preprocess class must be named "Preprocess"
class Preprocess(object):
    def __init__(self):
        # set internal state, this will be called only once (i.e. not per request)
        self.task = Task.get_task(project_name='serving examples', task_id='bfc1ae4d242b4d5...
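For the multi-endpoint case mentioned above, one way to avoid editing the hardcoded task id in preprocess.py for every deployment would be to read it from the environment instead. A minimal sketch, assuming a hypothetical MODEL_STATE_TASK_ID variable set per deployed container (not a ClearML built-in):

```
import os
from clearml import Task


class Preprocess(object):
    def __init__(self):
        # Called once per serving instance, not per request.
        # MODEL_STATE_TASK_ID is an assumed variable name, not part of ClearML;
        # each endpoint/container would set it to its own state task id.
        task_id = os.environ.get("MODEL_STATE_TASK_ID")
        self.task = Task.get_task(task_id=task_id)
```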
The trend step artifact is used to keep track of the time index of the data so we know the expected trend of the input data. For example, for the first data point (trend_step = 1) the trend value is 10; then when trend_step = 10 (the tenth data point) our regressor will predict the trend value for the selected trend_step. This method is still being researched to make it more efficient so it doesn't need to upload an artifact on every request.
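As a rough illustration of that bookkeeping (the task lookup, the 'trend_step' artifact name, and the toy history are all placeholders, and it assumes the state task still accepts artifact updates):

```
import numpy as np
from clearml import Task
from sklearn.linear_model import LinearRegression

# Assumed "state" task that holds the trend step between requests.
task = Task.get_task(project_name='serving examples', task_name='trend state')

# Read the last known trend step (start at 0 if the artifact is missing).
trend_step = task.artifacts['trend_step'].get() if 'trend_step' in task.artifacts else 0
trend_step += 1

# Toy history: trend value observed at each previous step (step 1 -> 10, step 10 -> 100).
steps = np.arange(1, trend_step + 1).reshape(-1, 1)
values = 10.0 * steps.ravel()

# Fit a regressor on the history and predict the expected trend at this step.
reg = LinearRegression().fit(steps, values)
expected_trend = reg.predict([[trend_step]])[0]

# Persist the updated step so the next request continues the sequence.
task.upload_artifact('trend_step', trend_step)
```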
X is the sequence generated from df. df contains 2 columns (date and value). S...
The last dimension is the value dimension: if the time series is univariate the last dimension is 1, and if it is multivariate the last dimension depends on the number of features fed to the model.
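To make the shape concrete (the column names, window length of 60, and random values are purely illustrative), a univariate df would be windowed into a (samples, 60, 1) array roughly like this:

```
import numpy as np
import pandas as pd

# Illustrative univariate time series with the two columns described above.
df = pd.DataFrame({
    "date": pd.date_range("2022-01-01", periods=200, freq="h"),
    "value": np.random.rand(200),
})

window = 60
values = df["value"].to_numpy()

# Slide a window of length 60 over the series, then add the feature axis.
X = np.stack([values[i:i + window] for i in range(len(values) - window)])
X = X[..., np.newaxis]   # last dimension == 1 for univariate data
print(X.shape)           # (140, 60, 1)
```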
I think this usage pattern will be greatly appreciated
BTW, as an optimization I would use Task scalars (they are sent in the background, and you can relatively easily get the latest value, etc.). Do you also need it to be atomic?
Yes, it needs to be atomic. Okay, I will research more about Task scalars.
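A minimal sketch of the scalar-based alternative (the task lookup and the 'state'/'trend_step' title/series names are placeholders, and it assumes the task is still accepting reports):

```
from clearml import Task

# Assumed "state" task used to hold the running trend step.
task = Task.get_task(project_name='serving examples', task_name='trend state')

# Report the current trend step as a scalar (scalars are sent in the background).
task.get_logger().report_scalar(title='state', series='trend_step', value=42, iteration=0)

# Later, read back the reported scalars for this task.
scalars = task.get_reported_scalars()
print(scalars.get('state', {}).get('trend_step'))
```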
So in theory [1,60,-1] should work as a size for Triton. Are you getting an error?
(BTW: if you were to manually run the model inference I'm assuming you would have created a 3d matrix where the dim...
I have tried changing the dimension to [-1,60,1] but I got this error:
{"detail":"Error processing request: <_InactiveRpcError of RPC that terminated with:\n\tstatus = StatusCode.INVALID_ARGUMENT\n\tdetails = "All input dimensions should be specified for input 'lstm_input' for model 'MTkyLjE2OC4wLjEwMXxTaXRlc2NvcGV8dXRpbGl6YXRpb258Q1BVfFBWRTEzUDEwMQ', got [-1,60,1]"\n\tdebug_error_string = "{"created":"@1660528548.049544414","description":"Error received from peer ipv4:172.18.0.7:80...
BTW: this seems like a triton LSTM configuration issue, we might want to move the discussion to the Triton server issue, wdyt?
Definitely!
Also, I'd like to ask about an alternative approach to this issue. Is there any reference on integrating Kafka data streaming directly into clearml-serving? Right now this issue arises because I'm using a Node.js service to mediate data from Kafka to my clearml-serving.
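A rough Python sketch of what that mediator does today: consume records from Kafka and forward each one to the clearml-serving REST endpoint. The broker address, topic name, endpoint URL, and message layout are all assumptions for illustration:

```
import json
import requests
from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "sensor-metrics",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    payload = message.value  # e.g. {"date": "...", "value": 0.42}
    requests.post("http://localhost:8080/serve/test_model_lstm", json=payload)
```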
Oh, so the way it currently works, clearml-serving will push the data in real time into Prometheus (you can control the stats/input/output), then you can build the anomaly detection in Grafana (for example, alerts on histograms over time are out-of-the-box, and clearml creates the histograms over time).
Would you also need access to the stats data in Prometheus? Or are you saying you need to process it before it gets there?
Are there any references on how to set this up? I'm not familiar with advanc...
There is already a Kafka server in clearml-serving; I think I'm lost on what you are looking to build here?
I want to build a real-time data streaming anomaly detection service with clearml-serving.
Okay, I will research the features first. Thank you AgitatedDove14 for the support. If there is a new issue, I will let you know in a new thread.
It said the --aux-config argument got invalid input.
ZanyPelican5 Yes, I am using Keras for serving (Triton).