You can see there's no task bar on the left. Basically I can't get any credentials for the server or check queues or anything.
There's a whole task bar on the left in the server. I only get this page when I use the IP 0.0.0.0
I feel like they need to add this in the documentation 😕
Thanks for the help.
I think I get what you're saying, yeah. I don't know how I would give each server a different cookie name. I can see this problem being resolved by clearing cookies or by manually appending /login to the end of the URL
I think maybe it does this because of caching or something. Maybe it keeps a record of an older login, and when you restart the server it keeps trying to use the older details
Shouldn't I get redirected to the login page if I'm not logged in, instead of the dashboard? 😞
I've been having this issue for a while now :((
wrong image. lemme upload the correct one.
Big thank you though.
let me check
Also, is ClearML open source and accepting contributions, or is it just a limited team working on it? Sorry for the off-topic question.
Only issue is that even though it's a bool, it's stored as "False", since ClearML stores the args as strings.
Ok this worked. Thank you.
Basically, if I pass an arg with a default value of False, which is a bool, it'll run fine originally, since it just accepts the default value.
I'll create a GitHub issue. Overall I hope you understand.
And casting it to bool converts it to True.
When you connect to the server properly, you're able to see the dashboard like this, with menu options on the side.
I've also mentioned it in the issue I created, but I had the problem even when I set the type to bool in parser.add_argument(type=bool)
However, when I reset or clone the task, it won't just accept the default value; ClearML will pass the arg directly
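A minimal sketch of the pitfall described above: argparse's `type=bool` just calls `bool()` on the incoming string, and any non-empty string (including "False") is truthy. A common workaround is a small string-to-bool converter; the `str2bool` helper and the `--flag` argument name here are my own, not something from ClearML.

```python
import argparse

def str2bool(value):
    # Interpret common textual spellings of booleans;
    # plain bool() would treat any non-empty string as True.
    if isinstance(value, bool):
        return value
    if value.lower() in ("true", "1", "yes"):
        return True
    if value.lower() in ("false", "0", "no"):
        return False
    raise argparse.ArgumentTypeError(f"invalid bool value: {value!r}")

# The trap: bool("False") is True, because "False" is a non-empty string.
assert bool("False") is True

parser = argparse.ArgumentParser()
parser.add_argument("--flag", type=str2bool, default=False)
args = parser.parse_args(["--flag", "False"])
assert args.flag is False  # str2bool parses the string correctly
```

With `type=bool`, the same `--flag False` invocation would have produced `args.flag == True`.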
Thanks, I went through it and this seems easy
Basically, the environment/container the agent is running in needs to have the specific CUDA version installed. Is that correct, CostlyOstrich36?
For anyone who's struggling with this, this is how I solved it. I hadn't personally worked with gRPC, so I looked at the HTTP docs instead, and that API was much simpler to use.
This is the simplest I could get the inference request. The model, input, and output names are the ones the server expected.
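As a hedged sketch of what such an HTTP inference request can look like: serving backends like Triton expose a v2-style REST endpoint of the form `/v2/models/<model>/infer` that accepts a JSON body with named inputs. The model name, input name, shape, and port below are placeholders, not the ones from this thread; the helper only builds the payload so it stays self-contained (the actual POST is shown commented out).

```python
import json

def build_infer_request(model_name, input_name, data, shape, datatype="FP32"):
    """Build a v2-protocol-style inference payload (assumed endpoint shape)."""
    return {
        "url_path": f"/v2/models/{model_name}/infer",
        "body": {
            "inputs": [
                {
                    "name": input_name,
                    "shape": shape,
                    "datatype": datatype,
                    "data": data,
                }
            ]
        },
    }

# Placeholder model/input names -- substitute the ones your server expects.
req = build_infer_request("my_model", "input__0", [0.1, 0.2, 0.3], [1, 3])
print(json.dumps(req["body"]))

# Sending it would look something like:
# import requests
# resp = requests.post(f"http://localhost:8000{req['url_path']}", json=req["body"])
# outputs = resp.json()["outputs"]
```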