What if you set the default_output_uri to false?
@<1523701083040387072:profile|UnevenDolphin73>, basically, it scales to as many pods as you like. Very similar to the autoscaler, but on top of K8s.
I'm afraid that's all there is. I think security integrations are in the Scale & Enterprise versions.
The ValueError is happening because there doesn't appear to be a queue called services.
I see. I don't think it's supported, but I think it would be a great idea for a feature. Maybe open a GitHub feature request?
What about Task.unregister_artifact?
https://clear.ml/docs/latest/docs/references/sdk/task#unregister_artifact
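For example, a rough sketch (the project, task and artifact names are just placeholders I made up):
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact demo")  # placeholder names
df = pd.DataFrame({"a": [1, 2, 3]})
# register_artifact keeps the DataFrame synced to the task as a dynamic artifact
task.register_artifact(name="my_artifact", artifact=df)
# unregister_artifact removes it from the task's artifact list and stops the sync
task.unregister_artifact(name="my_artifact")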
VexedCat68, what errors are you getting? What exactly is not working, the webserver or the apiserver? Are you trying to access the server from the machine you set it up on, or remotely?
@<1734020208089108480:profile|WickedHare16>, many different things: RBAC, users & groups, dedicated K8s support with advanced features, HyperDatasets, SSO/LDAP integration, dedicated support, dynamic GPU allocation, advanced GPU fractioning on top of K8s and much more.
You can see a more detailed list here - None
I would suggest contacting sales@clear.ml for more information 🙂
Hi @<1734020162731905024:profile|RattyBluewhale45>, from the error it looks like there is no space left on the pod. Are you able to run this code manually?
Does any exit code appear? What is the status message and status reason in the 'INFO' section?
This is done via clearml-agent. This is the second link I provided - None
TartLeopard58, I think you need to mount apiserver.conf to the API server. This is an API configuration 🙂
This is what I just tested now for a task with a commit in the webUI:
from clearml import Task
task = Task.get_task(task_id="<TASK_ID>")
print(task.data.script.version_num)
This returned the commit ID I see in the webUI.
Are you sure there is a commit ID in the UI? Are you sure you're fetching the correct task?
I think you would need to contact the sales department for this  🙂
None
Can you please run ls -la /opt/clearml and send the output + your docker compose file?
MammothParrot39, yes it is available. This is part of the Dataset module of clearml.
Hi @<1856507252714770432:profile|VirtuousStork5>, what if you set the output_uri to point directly to the S3 bucket?
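For reference, something along these lines (bucket name and prefix are just placeholders):
from clearml import Task

# all model/artifact uploads for this task would go straight to the bucket
task = Task.init(
    project_name="examples",              # placeholder
    task_name="s3 output demo",           # placeholder
    output_uri="s3://my-bucket/clearml",  # placeholder bucket/prefix
)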
Hi GiganticMole91, what version of ClearML server are you using?
Also, can you take a look inside the elastic container to see if there are any errors there?
Hmmm this is strange. Still not working for you?
You can provide it in the extra configurations section.
And are you running inside a repository or is it a stand alone script?
Yep, just make sure you show some activity in a task once every 2 hours so it won't be detected as inactive 🙂
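Something like this would keep it alive, e.g. reporting a heartbeat scalar periodically (names and interval are just placeholders):
import time
from clearml import Task

task = Task.init(project_name="examples", task_name="keep alive demo")  # placeholder names
logger = task.get_logger()

for i in range(24):
    # any reporting counts as activity, so the task won't be marked inactive
    logger.report_scalar(title="heartbeat", series="alive", value=1, iteration=i)
    time.sleep(60 * 60)  # one hour, well within the 2 hour window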
CumbersomeParrot30, try setting the following environment variable to true: CLEARML_SKIP_PIP_VENV_INSTALL
The sample script you posted runs fine on server 1.6.0. I did, however, comment out from machine_learning.clearml_client import Task and used from clearml import Task instead.
Can you please try with the regular import?
Hi FierceHamster54, is this an old autoscaler instance? What is the version? You can see the version when you're in the application and click on 'More' in the top left text area.
Yes, but this data is managed by MongoDB. Also, since you have full visibility into the users/passwords, you can probably generate a token similar to how the UI does it when you log in.
Can you try with auto_connect_streams=True? Also, what version of the clearml SDK are you using?
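For reference, this is roughly what I mean (project/task names are placeholders):
from clearml import Task

# auto_connect_streams controls capturing stdout/stderr into the task's console log
task = Task.init(
    project_name="examples",        # placeholder
    task_name="stream capture",     # placeholder
    auto_connect_streams=True,
)
print("this line should show up in the task's CONSOLE tab")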
