Hi There, Another Triton-Related Question: Are We Able To Deploy
Hi @ExasperatedCrab78, so I've started looking into setting up the Triton backends now, as we first discussed.
I was able to structure the folders correctly and deploy the endpoints. However, when I spin up the containers, I get the following error:
clearml-serving-triton | | detection_preprocess | 1 | UNAVAILABLE: Internal: Unable to initialize shared memory key 'triton_python_backend_shm_region_1' to requested size (67108864 bytes). If you are running Triton inside docker, use '--shm-size' flag to control the shared memory region size. Each Python backend model instance requires at least 64MBs of shared memory. Error: No such file or directory
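As far as I understand, Docker's default /dev/shm size is 64MB, which lines up with the 67108864-byte region the error is trying to allocate. To check what the Triton container actually got, something like this should work (assuming the container name matches the compose service name in the log prefix):

docker exec clearml-serving-triton df -h /dev/shm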
I then wanted to debug this a little further, to see if this is really the issue, so I passed --t-log-verbose=2 in CLEARML_TRITON_HELPER_ARGS to get more logs, but Triton didn't like it:
tritonserver: unrecognized option '--t_log_verbose=2'
Usage: tritonserver [options]
...
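For reference, when running tritonserver directly the verbose-logging flag is --log-verbose, e.g.:

tritonserver --model-repository=/models --log-verbose=2

so from the error above it looks like the t-prefixed helper argument is being forwarded to tritonserver as-is instead of being translated to that name.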
So I'm wondering, is there any way to increase the shared memory size as well? I believe this has to be set when running/starting the container, but I couldn't figure out how clearml-serving brings the container up. Doing it directly would be:
docker run --name triton --gpus=all -it --shm-size=512m -p8000:8000 -p8001:8001 -p8002:8002 -v $(pwd)/model_repository:/models image_path
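If the container is instead brought up via docker-compose (which the clearml-serving-triton | log prefix suggests), I'd guess the equivalent is a service-level shm_size override. A minimal sketch, assuming the service name from the log above and a compose version that supports service-level shm_size:

# docker-compose.override.yml (hypothetical override next to the stock clearml-serving compose file)
services:
  clearml-serving-triton:
    shm_size: '512m'

and then docker compose up -d to recreate the service. Is that the intended way to do it here?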