VexedCat68 hi!
Did you spin up a serving engine?
I think the serving engine IP depends on how you set it up
VexedCat68 , what do you mean by trigger? You want some indication that a dataset was published so you can move to the next step in your pipeline?
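If so, the TriggerScheduler from clearml.automation might be what you're after - a rough sketch below (project, queue and task id are placeholders, and it's worth double checking the exact argument names against the docs):
```
from clearml.automation import TriggerScheduler

# Poll the server every few minutes for newly published datasets
scheduler = TriggerScheduler(pooling_frequency_minutes=3)

scheduler.add_dataset_trigger(
    name="on-dataset-published",            # trigger name (placeholder)
    trigger_project="my_datasets_project",  # dataset project to watch (placeholder)
    trigger_on_publish=True,                # fire when a dataset is published
    schedule_task_id="<task_id_to_clone>",  # task to clone + enqueue when triggered (placeholder)
    schedule_queue="default",               # queue for the cloned task (placeholder)
)

# Run the scheduler itself as a task on the services queue
scheduler.start_remotely(queue="services")
```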
Yes, for an enqueued task to run you need an agent running against that queue 🙂
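For example, starting an agent on the worker machine and pointing it at a queue (the queue name is just an example):
```
clearml-agent daemon --queue default
```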
VexedCat68 , what errors are you getting? What exactly is not working, the webserver or apiserver? Are you trying to access the server from the machine you set it up on or remotely?
VexedCat68 , can you try accessing it as
192.168.15.118:8080/login first?
VexedCat68 , It appears to be a bug of sorts, we'll sort it out 🙂
That's a good question. If you're not running in docker mode, the agent machine that runs the experiment needs to have CUDA/cuDNN installed. If you're running in docker mode you need to select a docker image that already has those installed 🙂
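For example, you can pin a CUDA-ready image on the task itself - a small sketch (project/task names and the image tag are just examples, any image with the right CUDA/cuDNN versions works):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="gpu training")  # placeholder names

# When an agent runs this task in docker mode it will use this image,
# so pick one that already ships with CUDA/cuDNN
task.set_base_docker("nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04")
```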
Did anything change in your configuration? Was there no such issue in the previous version? Is the agent version the only change?
Do you have a log of the Triton server?
VexedCat68 Hi 🙂
Please try with pip install clearml==1.1.4rc0
What is still being sent to the fileserver?
try with pip install -U clearml==1.7.2rc1
Hi @<1578555761724755968:profile|GrievingKoala83> , there is no such capability in the open source. To add new users you need to edit the users file.
In the Scale/Enterprise licenses you have full user management, including role-based access control
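For reference, on a self-hosted server the fixed users are defined in the apiserver config, roughly like this (the path assumes a default docker-compose install, names and passwords are placeholders) - restart the server after editing:
```
# /opt/clearml/config/apiserver.conf
auth {
    fixed_users {
        enabled: true
        users: [
            { username: "jane", password: "change_me", name: "Jane Doe" },
            { username: "john", password: "change_me", name: "John Doe" }
        ]
    }
}
```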
This is exactly what the build command is for. I suggest reviewing the documentation
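Something along these lines should bake a task's environment into a docker image (task id and target image name are placeholders):
```
clearml-agent build --id <task_id> --docker --target my-prebuilt-image
```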
Hi @<1523701949617147904:profile|PricklyRaven28> , are you using the docker argument and it's not working? Are you sure the agent is running in docker mode?
Hi @<1523701949617147904:profile|PricklyRaven28> , note that steps in a pipeline are special tasks with a hidden system tag, I think you might want to enable that in your search
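If it helps, here is a rough way to pull them from the SDK (the project name is a placeholder and the filter keys are from memory, so worth verifying against the docs):
```
from clearml import Task

# Pipeline steps are regular tasks carrying the "hidden" system tag,
# so ask the server to include hidden tasks in the search
steps = Task.get_tasks(
    project_name="My Pipeline Project",   # placeholder
    task_filter={
        "system_tags": ["hidden"],        # only tasks tagged as hidden
        "search_hidden": True,            # include hidden tasks in the results
    },
)
for t in steps:
    print(t.id, t.name)
```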
In the UI, check under the execution tab in the experiment view and scroll to the bottom - you will see a field called "OUTPUT". What is in there? Select an experiment that is giving you trouble.
Did you try the workaround provided in https://clearml.slack.com/archives/CTK20V944/p1664887550256279 by AgitatedDove14 ?
Hi @<1739455977599537152:profile|PoisedSnake58> , in the log you have the location of the cloned repo printed out.
For CLEARML_AGENT_EXTRA_PYTHON_PATH you need to provide it with a path
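e.g. something like this on the agent machine (the path is just an example):
```
export CLEARML_AGENT_EXTRA_PYTHON_PATH=/home/user/extra_modules
```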
You can always add the relevant configurations to the docker image itself as well. From my understanding a new version should be released towards the end of the month, and with it the ability to run on the autoscaler without requiring a docker image
Yes, this will cause the code to run inside the container.
if so it won't work as my environment is in the host Linux
Not sure I understand this part, can you please elaborate?
If you shared an experiment to a colleague in a different workspace, can't they just clone it?
Hmmmm, I couldn't find anything in the SDK, however you can use the API to do it
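Something like the Python APIClient could work - a minimal sketch (the projects.get_all call is just an example endpoint, swap in whatever you actually need):
```
from clearml.backend_api.session.client import APIClient

client = APIClient()

# Example call - list all projects; replace with the endpoint you need
projects = client.projects.get_all()
for p in projects:
    print(p.id, p.name)
```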
lol! Can you hit F12 and see what the server returns for the call projects.get_all_ex