I think I failed to explain myself. I meant that instead of installing multiple CUDA versions on the same host/docker, wouldn't it make sense to just select a different out-of-the-box docker image with the right CUDA version, directly from the public NVIDIA Docker Hub offering? (This is just another argument on the Task that you can adjust.) Wouldn't that be easier for users?
Absolutely aligned with you there, AgitatedDove14. I understood you correctly.
My default is to work with native VM images and conda environments, so when I wanted a VM with multiple CUDA versions, I created an image with multiple CUDA versions installed, plus Conda for environment and package management and JupyterHub for serving Notebook and Lab.
However, I now realise that serving containers with the specific version of CUDA is the way to go.
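For reference, a minimal sketch of what that could look like per task, assuming this is the ClearML SDK and its Task.set_base_docker() method (the project/task names and the specific image tag below are just illustrative examples from the public NVIDIA Docker Hub, not from the conversation):

```python
# Minimal sketch: pin a specific out-of-the-box CUDA image to a task,
# instead of maintaining one VM image with multiple CUDA versions installed.
from clearml import Task

# Hypothetical project/task names, purely for illustration.
task = Task.init(project_name="examples", task_name="cuda-specific-container")

# Point the task at a ready-made image from the public NVIDIA Docker Hub;
# the agent running in docker mode would then execute the task inside it.
task.set_base_docker("nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04")
```

The idea being that each task just names the CUDA image it needs, and the environment itself stays a stock NVIDIA container rather than a hand-built multi-CUDA image.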