I've been having this issue for a while now :((
wrong image. lemme upload the correct one.
Big thank you though.
let me check
Also, is clearml open source and accepting contributions or is it just a limited team working on it? Sorry for an off topic question.
The only issue is that even though it's a bool, it's stored as "False", since ClearML stores the args as strings.
Ok this worked. Thank you.
Basically if I pass an arg with a default value of False, which is a bool, it'll run fine originally, since it just accepted the default value.
I'll create a github issue. Overall I hope you understand.
And casting it to bool converts it to True
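The casting behavior described above is plain Python string truthiness, not anything ClearML-specific; a quick sketch:

```python
# bool() on a string only checks emptiness, so the stored string
# "False" casts back to True, not False.
stored = "False"         # how the default ends up after being stored as a string
restored = bool(stored)  # non-empty string -> True
print(restored)          # True, even though the original value was False
```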
when you connect to the server properly, you're able to see the dashboard like this with menu options on the side.
I've also mentioned it on the issue I created but I had the issue even when I set the type to bool in parser.add_argument(type=bool)
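The `type=bool` behavior mentioned here is an argparse quirk rather than a ClearML bug: argparse simply calls `bool()` on the raw command-line string, and any non-empty string is truthy. A minimal reproduction, with a common explicit-parsing workaround (the `str2bool` helper name is my own, not from the thread):

```python
import argparse

def str2bool(value):
    # Hypothetical helper: parse common truthy/falsy spellings explicitly
    # instead of relying on bool(), which treats any non-empty string as True.
    if value.lower() in ("true", "1", "yes"):
        return True
    if value.lower() in ("false", "0", "no"):
        return False
    raise argparse.ArgumentTypeError(f"expected a boolean, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--broken", type=bool, default=False)     # the pitfall
parser.add_argument("--fixed", type=str2bool, default=False)  # the workaround

args = parser.parse_args(["--broken", "False", "--fixed", "False"])
print(args.broken)  # True -- bool("False") is truthy
print(args.fixed)   # False -- parsed explicitly
```

Another common alternative is `action="store_true"`, which avoids passing a string value at all.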
However, when I reset or clone the task, it won't just accept the default value; ClearML will pass the arg directly.
Thanks, I went through it and this seems easy
Basically the environment/container the agent is running in needs to have specific cuda installed. Is that correct CostlyOstrich36 ?
For anyone who's struggling with this, this is how I solved it. I'd personally not worked with gRPC, so I looked at the HTTP docs instead, and that one was much simpler to use.
This is the simplest I could get for the inference request. The model and input and output names are the ones that the server wanted.
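Since the original snippet isn't shown in this thread, here is a sketch of what a request against Triton's HTTP (KServe v2) inference endpoint looks like; the host, port, model name, and tensor names (`INPUT__0`, `OUTPUT__0`) are placeholders you'd replace with the ones your server actually reports:

```python
import json
import urllib.request

def build_infer_payload(input_name, output_name, data, shape, datatype="FP32"):
    # KServe v2 HTTP request body: one input tensor plus one requested output.
    return {
        "inputs": [{
            "name": input_name,
            "shape": shape,
            "datatype": datatype,
            "data": data,  # flat list of values
        }],
        "outputs": [{"name": output_name}],
    }

def triton_infer(host, model, payload, port=8000):
    # POST the payload to Triton's v2 inference endpoint and decode the JSON reply.
    req = urllib.request.Request(
        f"http://{host}:{port}/v2/models/{model}/infer",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Usage (placeholder names/host, assuming a running Triton server):
# payload = build_infer_payload("INPUT__0", "OUTPUT__0", [0.1, 0.2, 0.3, 0.4], [1, 4])
# result = triton_infer("localhost", "my_model", payload)
# print(result["outputs"][0]["data"])
```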
I've finally gotten the triton engine to run. I'll be going through nvidia triton docs to find how to make an inference request. If you have an example inference request, I'll appreciate if you can share it with me.
I'm currently installing nvidia-docker on my machine, where the agent resides. I was also getting an error about the GPU not being available in Docker, since the agent was running in docker mode. I'll share an update in a bit. Trying to re-run the whole setup.
I want to serve using Nvidia Triton for now.
Also, the tutorial mentioned serving-engine-ip as a variable, but I have no idea what the IP of the serving engine is.