Yes, but how do you plan to run the inference?
I am running the example code, so I guess it's running on my local machine?
@<1523701070390366208:profile|CostlyOstrich36> ClearML offers a free-tier server, right? My questions are:
- Can I deploy to this server? I.e., use hardware from this server instead of from my machine.
- If so, when I deploy on the ClearML server, how can I get a public URL to run inference?
Hi @<1580367711848894464:profile|ApprehensiveRaven81> , I'm not sure what you mean. Can you please elaborate?
@<1523701087100473344:profile|SuccessfulKoala55> I use the free-tier hosting
If so, yes. Which example code exactly?
@<1580367711848894464:profile|ApprehensiveRaven81> do you mean clearml-serving? Where do you run the serving deployment?
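For reference, here is a minimal sketch of what inference against a clearml-serving endpoint typically looks like once a serving deployment is running. The base URL, endpoint name, and input payload below are assumptions for illustration (loosely based on the sklearn example in the clearml-serving repo), not values from this thread:
```python
# Minimal sketch: querying a clearml-serving REST endpoint.
# The URL, endpoint name, and payload are hypothetical --
# substitute the values from your own deployment.
import requests

# clearml-serving exposes deployed models under /serve/<endpoint-name>
base_url = "http://127.0.0.1:8080"   # assumed address of the serving inference container
endpoint = "test_model_sklearn"      # assumed endpoint name
payload = {"x0": 1.0, "x1": 2.0}     # assumed feature names expected by the model

response = requests.post(f"{base_url}/serve/{endpoint}", json=payload)
response.raise_for_status()
print(response.json())               # model prediction, e.g. {"y": 1}
```
Note that the serving containers themselves run on infrastructure you provide; the ClearML server (hosted or self-hosted) acts as the control plane for tracking and the model repository, not as compute for inference.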