This is the simplest version I could get working for the inference request. The model, input, and output names are the ones the server expected.
Thanks VexedCat68 !
This is a great example, maybe PR it to the clearml-serving repo? wdyt?
For anyone who's struggling with this, here's how I solved it. I hadn't personally worked with gRPC, so I looked at the HTTP docs instead, and that API was much simpler to use.
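Since the original snippet isn't shown here, a minimal sketch of what an HTTP inference request like this might look like in Python, assuming a JSON-over-POST endpoint (the base URL, `/serve/<endpoint>` route, and payload field names below are all assumptions, not the actual values from the thread, so adjust them to whatever your serving instance reports):

```python
import json
import urllib.request


def infer(base_url: str, endpoint: str, payload: dict) -> dict:
    """POST a JSON payload to a serving endpoint and return the parsed JSON reply.

    The route layout (base_url + /serve/ + endpoint) is an assumption; check
    your server's docs for the actual path, model name, and input/output keys.
    """
    req = urllib.request.Request(
        f"{base_url}/serve/{endpoint}",  # assumed route layout
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Hypothetical usage -- names and shapes depend entirely on your deployed model:
# result = infer("http://localhost:8080", "my_model", {"x": [[1.0, 2.0]]})
```

The main thing that tripped me up was matching the input/output names the server wanted, so whatever keys you put in `payload` have to match what the serving instance registered for that model.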