Hi ScantCrab97, what is the flow you and your colleague follow? Do you both simply clone and enqueue from the UI, just using different accounts on the same workspace? Are the queues exactly the same? (Checking so that no different ~/clearml.conf files are involved.)
Hi @<1840924589736071168:profile|EmbarrassedCrab49> , the AI Gateway is part of the Enterprise version only; it's not available in the open source version
Hi @<1580367711848894464:profile|ApprehensiveRaven81> , I'm not sure what you mean. Can you please elaborate?
Hi @<1734020208089108480:profile|WickedHare16> , what issues are you facing?
Hi ConvolutedSealion94 , you should use the InputModel/OutputModel modules for working with models:
https://clear.ml/docs/latest/docs/references/sdk/model_inputmodel
This makes it very easy to get models directly by their IDs (models have unique IDs). For example:
https://clear.ml/docs/latest/docs/references/sdk/model_inputmodel#get_local_copy
Or:
https://clear.ml/docs/latest/docs/references/sdk/model_inputmodel#get_weights_package
They do look identical. I think the same issue (if it is indeed an issue) also affects https://clear.ml/docs/latest/docs/references/sdk/dataset/#list_added_files
Then you should use this to set up packages (and versions) that will work with an Ubuntu-appropriate version of Python - None
What version of Python do you need to run on?
Yeah, what is the version of the ClearML server? You can see it on the bottom right if you go into Settings
As part of the docker compose setup, there is a container with a special agent that works specifically for managing services for the system, such as the pipeline controllers
Hi FierceRabbit20, I don't think there is such an option out of the box, but you can simply add it to your machine's startup or create a cron job
Hi NastySeahorse61 ,
It looks like deleting smaller tasks didn't make much of a dent. Do you have any tasks that ran for very long or were very intensive on reporting to the server?
Hi JitteryCoyote63 , can I assume you can ssh into the machine directly?
Hi RoughTiger69 ,
Have you considered maybe cron jobs or using the task scheduler?
Another option is running a dedicated agent just for that - I'm guessing you can make it require very little compute power
SarcasticSquirrel56, you're right. I think you can use the following setting in ~/clearml.conf : sdk.development.default_output_uri: <S3_BUCKET> . Tell me if that works
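That setting could look something like this in ~/clearml.conf (a sketch; the bucket path is a placeholder):

```
# ~/clearml.conf (sketch - replace the bucket with your own)
sdk {
    development {
        # task outputs (artifacts, models) default to this destination
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```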
You can also just delete the installed packages section from the web UI, which will force it to use the requirements.txt
AbruptWorm50 , it seems the issue is fixed and applications are running again for me 🙂
GreasyLeopard35, what happens if you try running the command the agent is trying to run yourself?
Hi @<1556812506238816256:profile|LargeCormorant97> , I think you would need to go deeper and investigate each container's environment: what runs inside each container and what its entrypoint is, since there are several containers, each in charge of something different.
Is there a specific reason you need to deploy it without docker?
Hi SwankyCrab22 ,
Regarding Task.init(): did you try passing docker_bash_setup_script and it didn't work? According to the docs it should be available with Task.init() as well. Also, after the Task.init() call you can use the following method:
https://clear.ml/docs/latest/docs/references/sdk/task#set_base_docker
to also add a docker_setup_bash_script in its args.
Regarding running the script after the repository is downloaded - I'm not sure. But certainly...
What are the exact steps you are currently doing now? Is the folder/script in a repo?
In the UI, check under the Execution tab in the experiment view and scroll to the bottom - you will find a field called "OUTPUT". What is in there? Check an experiment that is giving you trouble.
Hi @<1665891247245496320:profile|TimelyOtter30> , not sure I follow. It looks like a misconfiguration. I think you need to see the correct settings here: None , also note the direct reference to minio 🙂
Yes, it can be controlled via the local clearml.conf file
I see. Makes sense. Maybe open a GitHub issue for this to follow up on the request 🙂
MelancholyElk85, I think the upload() function has the parameter you need: output_uri