In the open source version you don't have users & groups; user management is done via fixed users - None
What errors are you seeing in the apiserver pod?
Hi @<1715175986749771776:profile|FuzzySeaanemone21> , what if you try to register them as https?
@<1529271085315395584:profile|AmusedCat74> , what happens if you try to run it with clearml 1.8.0?
Hi @<1715900788393381888:profile|BitingSpider17> , you can run the agent in --debug mode, and the debug setting should be passed over to the internal agent running the code
TimelyPenguin76 , MammothGoat53 , I think you shouldn't call Task.init() more than once inside a script
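For reference, here is a minimal sketch of the intended pattern (project/task names are placeholders): call Task.init() exactly once at the top of the script and reuse the existing task everywhere else via Task.current_task().

```python
from clearml import Task

# Initialize experiment tracking exactly once, at the top of the script.
task = Task.init(project_name="examples", task_name="single-init-run")


def train():
    # Anywhere else in the script, reuse the already-initialized task
    # instead of calling Task.init() a second time.
    current = Task.current_task()
    current.get_logger().report_scalar(
        title="loss", series="train", value=0.1, iteration=1
    )


if __name__ == "__main__":
    train()
```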
Ah I see. I'm guessing the UI is summing up the runtimes of the experiments in the project.
Please check what you get for events.debug_images in the network section of the developer tools (F12) when trying to view the preview in the dataset
Hi @<1529271085315395584:profile|AmusedCat74> , thanks for reporting this, I'll ask the ClearML team to look into this
Hi TrickyFox41 , how did you save the debug samples? What is the URL of the image?
Hi WackyHorse2 ,
What happens if you rename your model to 'u2net-ne1' instead and try reloading it into Triton?
It's totally possible, but I think you'll need to do some research on it. There are probably a few ways to do it too. I see CLEARML_API_ACCESS_KEY & CLEARML_API_SECRET_KEY in the docker compose - None
You should do some more digging around. One option is to see how you can generate a key/secret pair and inject them via your script into MongoDB, where the credentials are stored. Another way is to see how the UI ...
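If you go the scripted route, here is a minimal sketch of generating a random key/secret pair with Python's secrets module. The lengths and character sets here are my assumption; the exact format ClearML expects, and where/how the pair would be stored in MongoDB, would need to be verified against the server code.

```python
import secrets
import string


def generate_credentials(key_len=30, secret_len=50):
    # Assumed format: uppercase alphanumeric access key and a longer
    # mixed-case alphanumeric secret - verify against what the server generates.
    key_chars = string.ascii_uppercase + string.digits
    secret_chars = string.ascii_letters + string.digits
    access_key = "".join(secrets.choice(key_chars) for _ in range(key_len))
    secret_key = "".join(secrets.choice(secret_chars) for _ in range(secret_len))
    return access_key, secret_key


access_key, secret_key = generate_credentials()
print("CLEARML_API_ACCESS_KEY =", access_key)
print("CLEARML_API_SECRET_KEY =", secret_key)
```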
no, it's an environment variable
Hi @<1576381444509405184:profile|ManiacalLizard2> , I don't think such a capability currently exists. I would suggest opening a GitHub feature request for this. As a workaround you could zip them up together and then bind them to an output model.
What do you think?
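Roughly what I had in mind (file names are placeholders), assuming the files can simply be bundled into one archive and registered as the task's output model:

```python
import zipfile

from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="bundle-model-files")

# Zip the separate files together (paths are placeholders).
bundle_path = "model_bundle.zip"
with zipfile.ZipFile(bundle_path, "w") as zf:
    zf.write("model.pt")
    zf.write("tokenizer.json")

# Register the zip as the task's output model.
output_model = OutputModel(task=task, name="bundled-model")
output_model.update_weights(weights_filename=bundle_path)
```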
By the way, how do I set up a shell script?
It's under the 'Container' section in the 'Execution' tab of the experiment
can we do it there as well?
Yes, I think you can pass extra configurations through the shell init script
Hi @<1594863230964994048:profile|DangerousBee35> , I'm afraid that the self-hosted version and the PRO version are entirely disconnected. There are many more advanced features in the Scale/Enterprise licenses, where you can have a mix of all the features you might be looking for. You can see the different options here - None
Are you getting some errors? Did you run an agent?
Hi @<1529633475710160896:profile|ThickChicken87> , do you mean via the API? I suggest taking a look at what the UI is doing when scrolling through metrics and replicating those calls
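Alternatively (not what the UI itself does, just an SDK-side shortcut that may cover the same need), the Python SDK can pull a task's reported scalars directly; the task ID below is a placeholder.

```python
from clearml import Task

# Placeholder task ID - replace with the experiment you want to inspect.
task = Task.get_task(task_id="<task_id>")

# Returns a nested dict, roughly {metric: {series: {"x": [...], "y": [...]}}}.
scalars = task.get_reported_scalars()
for metric, series_dict in scalars.items():
    for series, points in series_dict.items():
        print(metric, series, len(points["y"]), "points")
```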
Hi @<1593413673383104512:profile|MiniatureDragonfly17> , no. The assumption is that serving runs on a dedicated machine. Of course you can edit the docker compose to use different ports
Hi @<1523701099620470784:profile|ElegantCoyote26> , what happens if you define the cache size to be -1?
I think I've encountered something related to this. Let me take a look at the docs
Hi @<1840199805821784064:profile|EnviousMouse30> , I'm guessing you want to enable login via user/pass? This is the relevant configuration - None
One note though: there are no admin roles in the open source version. All users are the same and have access to everything.
JitteryCoyote63 , Hi 🙂
Why do you expect to see the enqueued experiments on top of the 'started' ones if they haven't started yet and are only in the enqueued state? You can sort by 'updated' to get this result.
I don't think there is a specific API call for that, but you can fetch all the running experiments and then check which users are running them
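As a rough sketch using the API client (the 'status' and 'only_fields' parameter names for tasks.get_all are my assumption, so check them against the API reference):

```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# Fetch all tasks currently in progress, returning only the fields we need.
running = client.tasks.get_all(
    status=["in_progress"],
    only_fields=["id", "name", "user"],
)

for t in running:
    # 'user' is the user ID; map it to a display name via the users endpoint if needed.
    print(t.id, t.name, t.user)
```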
Yes, you'll need to connect them via code
In the UI, you can edit the docker image you want to use. You can then choose an image with the needed python pre-installed