Hi ObnoxiousStork61,
In general, I think setting up the server on the GPU machine that runs the experiments is not a good idea - the server is supposed to run in a stable, always-on environment, whereas the GPU machine is more dynamic in nature (it may be rebooted or fully loaded by training jobs).
Regarding data: since the data is stored on network storage, and assuming that storage is local (i.e. within your own network), I don't think data fetching will be an issue...
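If you want to sanity-check that, a quick way is to pull a file through the SDK and compare the first (network) fetch with a second (cached) fetch - a minimal sketch, where the URL is a hypothetical location on your network storage:
```python
from time import time
from clearml import StorageManager

# Hypothetical path on your network storage - replace with your own
# (file://, s3://, etc. are all handled by StorageManager).
remote_url = "file:///mnt/network_storage/datasets/train_images.zip"

# The first call downloads from the network storage into the local ClearML
# cache (~/.clearml/cache by default); later calls reuse the cached copy.
start = time()
local_path = StorageManager.get_local_copy(remote_url=remote_url)
print(f"fetched to {local_path} in {time() - start:.1f}s")
```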
So ideally I should set up one machine with dataset storage for the clearml-server, and one GPU machine running clearml-agent with enough local storage to fetch the data?
Yes, one machine for the server and storage, and the GPU machine with local storage for data fetching and caching :)
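On the GPU/agent machine the caching is handled for you when you pull data through the SDK - a minimal sketch, assuming a hypothetical dataset project/name:
```python
from clearml import Dataset, Task

# Hypothetical project/dataset names - adjust to your own setup.
task = Task.init(project_name="examples", task_name="train")

# Downloads the dataset into the GPU machine's local cache; repeated runs on
# the same machine reuse the cached copy instead of re-fetching it from the
# network storage.
dataset_path = Dataset.get(
    dataset_project="examples",
    dataset_name="my_dataset",
).get_local_copy()

print("training data available at:", dataset_path)
```
If the GPU machine's local disk is limited, you can point the cache somewhere else via `sdk.storage.cache.default_base_dir` in clearml.conf.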