You need to also spin up the ClearML server...
Hi CurvedHedgehog15 ,
Can you please provide a short snippet to reproduce this?
Which section of the comparison are we currently looking at?
DefiantLobster38, please try the following - change verify_certificate to False
https://github.com/allegroai/clearml/blob/aa4e5ea7454e8f15b99bb2c77c4599fac2373c9d/docs/clearml.conf#L16
Tell me if it helps 🙂
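For reference, a minimal sketch of what that change could look like in ~/clearml.conf, based on the linked default config (adjust to your existing api section):
```
api {
    # skip SSL certificate verification - only if you trust the server
    verify_certificate: false
}
```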
Hi EnormousCormorant39 ,
is there a way to enqueue the dataset add command on a worker
Can you please elaborate a bit on this? Do you want to create some sort of trigger action to add files to a dataset?
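If the idea is just to run the dataset creation/add step on a worker, one rough sketch (project, queue and path names below are placeholders, not taken from your setup) would be to wrap it in a task and enqueue it:
```python
from clearml import Task, Dataset

# placeholder project/task/queue names
task = Task.init(project_name="datasets", task_name="add files to dataset")
# stop the local run and enqueue this task on an agent listening to "default"
task.execute_remotely(queue_name="default", exit_process=True)

# from here on the code runs on the worker
ds = Dataset.create(dataset_name="my_dataset", dataset_project="datasets")
ds.add_files(path="/data/new_files")  # placeholder path on the worker machine
ds.upload()
ds.finalize()
```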
Hi @<1608271575964979200:profile|GiddyRaccoon10>, ClearGPT is a separate enterprise product 🙂
@<1722786138415960064:profile|BitterPuppy92> , we are more than happy to accept pull requests to our free open source version 🙂
Please open developer tools (F12), go to the network tab and refresh the page
It can run Docker containers and it can run over K8s
Hi @<1574931891478335488:profile|DizzyButterfly4> , not sure what you mean. Can you elaborate on what you see vs what you expect to see?
Then just use export
I don't think it's possible in the open version. I think this is because the users are loaded into the server when the server loads all the config files, which usually happens on server startup.
However, even if you simply restart the apiserver, any running experiments should continue running and resume communication with the backend once the apiserver is back up.
There is also the option of manually creating the documents directly in MongoDB - but that is inadvisable.
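For context, this is roughly where those users are usually defined - a fixed-users section in the apiserver config (the path and values below are just an example, adjust to your deployment):
```
# e.g. /opt/clearml/config/apiserver.conf
auth {
    fixed_users {
        enabled: true
        users: [
            { username: "jane", password: "12345678", name: "Jane Doe" }
        ]
    }
}
```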
Alright. What OS are you on? Also, what is the status of this deployment - is it a clean install, a version upgrade, or did it just stop working after a restart? 🙂
Hi @<1710827340621156352:profile|HungryFrog27> , can you provide a full log of the task?
@<1734020208089108480:profile|WickedHare16> , many different things: RBAC, users & groups, dedicated K8s support with advanced features, HyperDatasets, SSO/LDAP integration, dedicated support, dynamic GPU allocation, advanced GPU fractioning on top of K8s, and much more.
You can see a more detailed list here - None
I would suggest contacting sales@clear.ml for more information 🙂
OutrageousSheep60, it looks like it's not a bug. Internally x is stored as an int, however get_user_properties() casts it back to a string. You could open a GitHub issue with a feature request for this 🙂
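To illustrate, a small sketch of what I mean (assuming the usual set/get user properties calls; the property name and value are made up):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="user properties demo")

# store an integer user property
task.set_user_properties(x=42)

# reading it back - the value comes back as a string, e.g. {'name': 'x', 'value': '42', ...}
props = task.get_user_properties()
print(props["x"])
```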
Hi CostlyFox64 ,
Can you try configuring your ~/clearml.conf with the following?
agent.package_manager.extra_index_url = ["https://<USER>:<PASSWORD>@packages.<HOSTNAME>/<REPO_PATH>"]
Hi ScrawnyLion96 ,
I think it handles some data like worker stats, and it's required for the server to run. What do you mean by Redis getting fuller and fuller?
Hi @<1523701295830011904:profile|CluelessFlamingo93> part of the server is a service that kills such tasks, I think this is what you're looking for - None
Are you running in docker mode? You could maybe use another docker image that has python in it.
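As a sketch, the agent's default image can be set in its clearml.conf (the image name here is just an example - any image with Python in it should do):
```
agent {
    default_docker {
        image: "python:3.9-bullseye"
    }
}
```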
You can add it to your pip configuration so it will always be taken into account
Can you provide a task id for such a task?
What actions did you take exactly to get to this state?
Edit clearml.conf on the agent side and add the extra index URL there - https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L78
Not from the top of my head, let me take a look 🙂
Hi HungryArcticwolf62 ,
from what I understand you simply want to access models afterwards - correct me if I'm wrong.
What I think would solve your problem is the following:
task = Task.init(..., output_uri=True)
This should upload the model to the server and thus make it accessible by other entities within the system.
Am I on track?
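And if you prefer the model to go to your own storage rather than the files server, output_uri can also take a destination URI (the bucket name below is just an example):
```python
from clearml import Task

# upload output models/artifacts to an object store instead of the files server
task = Task.init(
    project_name="examples",
    task_name="train",
    output_uri="s3://my-bucket/models",  # example destination
)
```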
Is it your own server installation or are you using the SaaS?
Also I think it should start with None
If you go into the settings, at the bottom right you will see the version of the server