Okay thank you Sonckie.
Yup, I tried that method. In my application's use case it needs to connect to multiple model endpoints with the same preprocess method. So do I need to change the task id in the preprocess.py code every time I deploy a model? My docker-compose version is 1.29.2. I am running the Docker Desktop app on Windows 10 with WSL. My laptop uses an AMD APU; I haven't looked into configuring Docker to run with a GPU. Thank you for the answer. It seems the error is the same as in the sklearn documentation command.
Hi William!
1 So if I understand correctly, you want to get an artifact from another task into your preprocessing. You can do this using the Task.get_task() call. So imagine your anomaly detection task is called anomaly_detection, produces an artifact called my_anomaly_artifact, and is located in the my_project project, then you can do:
```
from clearml import Task

anomaly_task = Task.get_task(project_name='my_project', task_name='anomaly_detection')
threshold = anomaly_task.artifacts['my_anomaly_artifact'].get()
```
You can do this anywhere to get details from any task! So also in preprocessing 🙂 In this case I use the project name and task name to get the task you need, but you can also use the id:
```
Task.get_task(task_id='...')
```
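For completeness, here is what the producing side could look like. This is just a sketch: `compute_threshold`, the quantile choice, and the task/artifact names are illustrative, and the actual anomaly detection logic is up to you.

```python
def compute_threshold(scores, quantile=0.95):
    """Pick a detection threshold as a quantile of the anomaly scores
    (simple nearest-rank method, no external dependencies)."""
    ordered = sorted(scores)
    idx = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[idx]


def publish_threshold(threshold):
    """Upload the threshold as a task artifact so other tasks can fetch it.
    Requires a running ClearML setup, so it is shown here for illustration only."""
    from clearml import Task
    task = Task.init(project_name='my_project', task_name='anomaly_detection')
    task.upload_artifact(name='my_anomaly_artifact', artifact_object=threshold)
```

Once uploaded this way, the `anomaly_task.artifacts['my_anomaly_artifact'].get()` call above will return the stored value.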
2 I will take a look. Which docker-compose version do you have? Are you running Linux, Windows, or Mac? Which GPU are you running on, and have you configured Docker to allow access to your GPU?
3 Can you give the exact command you ran and also a screenshot of the error? 🙂 You should not have to upload it manually, the add model command should work!
Ok, I checked 3: the command
```
clearml-serving --id <your_id> model add --engine triton --endpoint "test_model_keras" --preprocess "examples/keras/preprocess.py" --name "train keras model" --project "serving examples" --input-size 1 784 --input-name "dense_input" --input-type float32 --output-size -1 10 --output-name "activation_2" --output-type float32
```
should be
```
clearml-serving --id <your_id> model add --engine triton --endpoint "test_model_keras" --preprocess "examples/keras/preprocess.py" --name "train keras model - serving_model" --project "serving examples" --input-size 1 784 --input-name "dense_input" --input-type float32 --output-size -1 10 --output-name "activation_2" --output-type float32
```
It seems to have been a mistype in the docs 🙂
1 Can you give a little more explanation about your use case? It seems I don't fully understand it yet. So you have multiple endpoints, but always the same preprocessing script to go with them? And you need to fetch a different threshold for each of the models?
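If I'm guessing your setup right, one way could be to key the threshold lookup on the endpoint name inside the shared preprocess script, instead of hardcoding a task id. A rough sketch (the endpoint names, task names, and the mapping are all made up for illustration):

```python
# Hypothetical mapping from serving endpoint to the training task
# that holds its threshold artifact.
ENDPOINT_TO_TASK = {
    'test_model_keras_a': 'anomaly_detection_a',
    'test_model_keras_b': 'anomaly_detection_b',
}


def task_name_for_endpoint(endpoint):
    """Resolve which task holds the threshold for a given endpoint."""
    return ENDPOINT_TO_TASK[endpoint]


def load_threshold(endpoint):
    """Fetch the per-endpoint threshold artifact. Needs access to a
    ClearML server, so only a sketch here."""
    from clearml import Task
    task = Task.get_task(project_name='my_project',
                         task_name=task_name_for_endpoint(endpoint))
    return task.artifacts['my_anomaly_artifact'].get()
```

That way one preprocess.py can serve all endpoints and you would only extend the mapping when deploying a new model, but whether this fits depends on how your endpoints are named.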
2 Not completely sure about this, but I think an AMD APU simply won't work. ClearML Serving uses Triton as the inference engine for GPU-based models, and Triton is written by NVIDIA specifically for NVIDIA hardware. I don't think Triton will run on an AMD APU.
3 Well spotted! Indeed, the sklearn documentation has the same problem. Would you mind opening a PR for it? Then you can be a contributor 😄
Yup, so the multiple endpoints differ only in their weight values but share the same preprocessing script, and each preprocessing script uses a different threshold. Okay, I will test it on my friend's PC then, and I will post an update as soon as this problem is solved. I'm not familiar with the term 'PR' since I've never joined a community before; is it 'pull request'? If yes, then I will open a PR and post the update in this thread.