You will have to build your own Docker image based on that Dockerfile, and then update the docker-compose file to point at it.
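For example, something along these lines (a rough sketch only; the Triton base tag and the `clearml-serving-triton` service name are assumptions based on the clearml-serving repo, so match them to your checkout):

```dockerfile
# Dockerfile -- start from the Triton release that ships the CUDA version you need,
# then keep the remaining steps from the original clearml-serving Triton Dockerfile
FROM nvcr.io/nvidia/tritonserver:23.08-py3
# ... rest of the original Dockerfile ...
```

```yaml
# docker-compose.override.yml -- point the Triton service at your custom image
services:
  clearml-serving-triton:
    image: my-registry/clearml-serving-triton:custom-cuda
```

Build it with `docker build`, push it somewhere your compose host can pull from, and bring the stack up with both compose files.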
So I would be editing this if, e.g., I wanted a specific CUDA version?
Actually, it seems I should be editing these: [snippet not included]
I am exploring your latest video on ClearML onboarding, part 3: model serving and monitoring. The example in the video is very simple: deploying an XGBoost model to the Triton engine. What if I need to deploy a custom solution with, let's say, two models and some custom logic outside of the models as well? I would need a custom container and code for it to be served. Can ClearML Serving hot-reload and monitor custom containers?
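To make it concrete, here is roughly the shape of what I want to serve (a sketch only; the model IDs and the averaging logic are placeholders, and I'm assuming the `Preprocess`-style interface from the clearml-serving custom-engine examples):

```python
from typing import Any, Callable, Optional

import numpy as np
import xgboost as xgb
from clearml import Model


class Preprocess:
    """Serves two XGBoost models plus custom combination logic on one endpoint."""

    def __init__(self):
        # Placeholder: ID of the second model in the ClearML model registry
        self._second_model_id = "REPLACE_WITH_MODEL_ID"
        self._model_a = None
        self._model_b = None

    def load(self, local_file_name: str) -> Any:
        # Model A is the model attached to the endpoint; the serving
        # instance hands us its local path (custom engine).
        self._model_a = xgb.Booster()
        self._model_a.load_model(local_file_name)
        # Model B is pulled manually from the ClearML model registry.
        second_path = Model(model_id=self._second_model_id).get_local_copy()
        self._model_b = xgb.Booster()
        self._model_b.load_model(second_path)
        return self._model_a

    def preprocess(self, body: dict, state: dict,
                   collect_custom_statistics_fn: Optional[Callable] = None) -> Any:
        # Turn the request JSON into something both models can score.
        return xgb.DMatrix(np.atleast_2d(body["features"]))

    def process(self, data: Any, state: dict,
                collect_custom_statistics_fn: Optional[Callable] = None) -> Any:
        # The "custom logic outside of the models": here, simply average them.
        return 0.5 * (self._model_a.predict(data) + self._model_b.predict(data))

    def postprocess(self, data: Any, state: dict,
                    collect_custom_statistics_fn: Optional[Callable] = None) -> dict:
        return {"prediction": data.tolist()}
```

If something like that can be registered as a custom-engine endpoint and still get model auto-update and the monitoring statistics, that would cover most of my case; otherwise the custom-container question stands.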