Hello everyone! I'm encountering an issue when trying to deploy an endpoint for a large model or get inference on a large dataset (both exceeding ~100 MB). It seems that they can only be downloaded up to about 100 MB. Is there a way to increase a time…
AgitatedDove14, this file (clearml.conf) is not getting mounted when using the docker-compose file for the clearml-serving pipeline. Do we also have to mount it there somehow?
The only place I can see this file being used is in the README, like so:
Spin the inference container:
docker run -v ~/clearml.conf:/root/clearml.conf \
    -p 8080:8080 \
    -e CLEARML_SERVING_TASK_ID=<service_id> \
    -e CLEARML_SERVING_POLL_FREQ=5 \
    clearml-serving-inference:latest
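For reference, here is a minimal sketch of how the same mount could be added to the docker-compose file. The service name, image tag, and host path are assumptions carried over from the README command above, not taken from the actual compose file shipped with clearml-serving:

# Hypothetical docker-compose.yml excerpt; the service name and image
# are assumed from the README's docker run example, not from the shipped file.
services:
  clearml-serving-inference:
    image: clearml-serving-inference:latest
    ports:
      - "8080:8080"
    environment:
      - CLEARML_SERVING_TASK_ID=<service_id>
      - CLEARML_SERVING_POLL_FREQ=5
    volumes:
      # Equivalent of `-v ~/clearml.conf:/root/clearml.conf` in the docker run command
      - ~/clearml.conf:/root/clearml.conf

With a volumes entry like that, `docker-compose up` would start the inference container with the same credentials file the docker run command mounts.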