Hi AgitatedDove14, thanks for your reply; here is the full log. Also, when I try to test my Keras model it gives me this error (censored image; the code is in the last image). Thank you :D
Hi AgitatedDove14, I already have the .conf file at C:/Users/[name]/clearml.conf, and I can run the train file in local execution. But it doesn't work in the GitHub Actions execution.
Actually, AgitatedDove14, let me try to explain my problem more clearly.
When I serve my model with clearml-serving, the expected input size for my model is always [1, 60, 1]. What I need is for the model served by clearml-serving to accept the input size dynamically. Is there any solution so the model can accept a dynamic input size (especially a dynamic first dimension), like [10, 60, 1] or [23000, 60, 1]?
Here are some diagrams to help me explain this.
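A possible direction, sketched here and not verified against this model: Triton generally uses -1 to mark a dimension as variable (the `model add` command later in this thread already uses -1 in its --output-size), so registering the endpoint with -1 as the first input dimension might give a dynamic batch dimension. The service id below is a placeholder:

```shell
# Hypothetical variant of the `clearml-serving model add` call from this
# thread: -1 marks the first (batch) dimension as variable, following
# Triton's convention for dynamic shapes. <service-id> is a placeholder.
clearml-serving --id <service-id> model add \
    --engine triton --endpoint "test_model_lstm2" \
    --preprocess "preprocess.py" \
    --input-size -1 60 1 --input-name "lstm_input" --input-type float32 \
    --output-size -1 60 1 --output-name "dense" --output-type float32
```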
Yes, that makes sense. Thank you for your help, AgitatedDove14.
Hi CostlyOstrich36, thank you for replying.
Here is the ClearML version I'm currently using; it is a self-hosted server running locally.
At step 7, I'm actually trying to pass temporary data to another preprocessing method. Step 6 executes successfully, but after that it won't go into step 7 and gives the error. Here are the screenshots.
I'm using a pipeline from decorators, by the way.
Hello AgitatedDove14, based on the picture below, I think it's stream processing, not batch.
And the executor does the preprocessing and creates the data to fit to the model.
Perfect, I already set the env variables in the YAML file and now I can run train model.py and Task.init in GitHub Actions. Thank you for your help, SuccessfulKoala55 and CostlyOstrich36 😁
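For anyone hitting the same issue: the environment variables that let Task.init authenticate without a clearml.conf file look roughly like this (a sketch; the hosts assume the default clear.ml endpoints, and the keys are placeholders to be injected from CI secrets):

```shell
# Placeholder ClearML credentials -- in a real CI job these come from secrets.
export CLEARML_WEB_HOST="https://app.clear.ml"
export CLEARML_API_HOST="https://api.clear.ml"
export CLEARML_FILES_HOST="https://files.clear.ml"
export CLEARML_API_ACCESS_KEY="YOUR_ACCESS_KEY"
export CLEARML_API_SECRET_KEY="YOUR_SECRET_KEY"
```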
Hi TimelyPenguin76, I specify the repo for each step using the 'repo' argument of PipelineDecorator.component.
Here is my reference
https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_function_decorators#arguments-1
Hi CostlyOstrich36, thank you for your reply. Can you explain more about "inject the file itself into ~/clearml.conf"? How do I do that?
And FYI, the train model.py file cannot detect the clearml.conf file when executed in GitHub Actions, but in local execution train model.py runs just fine.
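For reference, "injecting the file" can be read as writing a clearml.conf into the runner's home directory before train model.py runs. A minimal sketch, assuming the default clear.ml endpoints and placeholder credentials:

```shell
# Write a minimal clearml.conf into the home directory of the CI runner.
# Endpoints assume a default clear.ml setup; the keys are placeholders
# that a real CI job would substitute from repository secrets.
cat > "$HOME/clearml.conf" <<'EOF'
api {
    web_server: "https://app.clear.ml"
    api_server: "https://api.clear.ml"
    files_server: "https://files.clear.ml"
    credentials {
        access_key: "YOUR_ACCESS_KEY"
        secret_key: "YOUR_SECRET_KEY"
    }
}
EOF
```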
CostlyOstrich36 Where can I check the versions of the SDK Python package and the ClearML Agent?
Well, I'm using clearml==1.4.1, I think.
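One way to answer the version question from the command line, sketched here with importlib.metadata and assuming the distribution names are clearml and clearml-agent:

```shell
# Print the installed ClearML SDK and agent versions, if any.
python3 - <<'EOF'
from importlib import metadata

for pkg in ("clearml", "clearml-agent"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")
EOF
```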
GG AgitatedDove14, IT WORKS!!! I changed from "dense" to "time_distributed".
THANK YOU SO MUCH!!
Okay, here is the full docker-compose log, AgitatedDove14. Thank you.
Hi CostlyOstrich36, here is my ClearML version.
clearml-serving --id cd4c615583394719b9019667068954bd model add \
    --engine triton --endpoint "test_model_lstm2" \
    --preprocess "preprocess.py" \
    --name "train lstmae model - serving_model" --project "serving examples" \
    --input-size 1 60 1 --input-name "lstm_input" --input-type float32 \
    --output-size -1 60 1 --output-name "dense" --output-type float32
This is how I add my model and set the endpoint. Is it right, AgitatedDove14?