I understand. That's strange, column ordering etc. should be stored in cookies per project. Maybe @<1523703436166565888:profile|DeterminedCrab71> might have an idea
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , why not just run it as a python script?
Hi @<1748153283605696512:profile|GreasyPenguin24> , you certainly can. CLEARML_CONFIG_FILE is the environment variable that allows you to use different configuration files
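For example, something like this - the file path is just a placeholder, set it before importing clearml:
```
import os

# Point the SDK at an alternate config file *before* importing clearml
os.environ["CLEARML_CONFIG_FILE"] = "/path/to/clearml-staging.conf"

from clearml import Task

task = Task.init(project_name="examples", task_name="staging run")
```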
Hi @<1573119955400921088:profile|CloudyPelican46> , what do you mean by active users? People currently logged in to the web UI, people currently running experiments, or just all users registered on the server?
I'm not sure. Try to see if it's an attribute of the Task object and whether you can change it during runtime - I'm sure this wasn't intended though 🙂
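Just as a quick way to poke around (a sketch, not an official workflow):
```
from clearml import Task

task = Task.current_task()  # the task currently running
# list the public attributes/methods to see if what you need is exposed
print([name for name in dir(task) if not name.startswith("_")])
```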
Hi @<1548839979558375424:profile|DelightfulFrog42> , you can use tasks.set_requirements to provide specific packages or a requirements.txt.
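If you're working from the SDK rather than the raw API, I believe Task.add_requirements covers the same ground - just call it before Task.init:
```
from clearml import Task

# Pin a single package...
Task.add_requirements("numpy", "1.23.0")
# ...or point at a full requirements file instead
# Task.add_requirements("/path/to/requirements.txt")

task = Task.init(project_name="examples", task_name="pinned requirements")
```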
Hi @<1544853721739956224:profile|QuizzicalFox36> , from the error you're getting it looks like a permissions issue with the credentials. Check if your credentials have read/write/delete permissions
Hi @<1635088270469632000:profile|LividReindeer58> , you should separate the two: the pipeline controller should run on the services queue, while pipeline steps run on other queues. That's why the steps are sitting in pending - there is no free worker to pick them up.
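Roughly like this (queue and project names are just examples):
```
from clearml import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")

# Steps go to a queue that has actual workers attached
pipe.add_step(
    name="step_one",
    base_task_project="examples",
    base_task_name="step one template",
    execution_queue="default",
)

# The controller itself runs on the services queue
pipe.start(queue="services")
```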
Reports is a separate section - it sits between the 'Pipelines' and 'Workers & Queues' buttons in the bar on the left 🙂
That sounds like a fairly large team already. I would suggest considering the Scale version. It would take a lot of DevOps work & maintenance off your plate, and provides direct support for users & admins, RBAC, SSO, configuration vaults and many other features.
Also, please expand on the 500 errors you're seeing - there should be some accompanying log output.
JitteryCoyote63 , thanks for the heads up, we'll look into it 🙂
REMOTE MACHINE:
- git ssh key is located at ~/.ssh/id_rsa
Is this also mounted into the docker container itself?
Hi DangerousDragonfly8 , can you please elaborate on your use case? If you want only a single instance to exist at any time how do you expect to update it?
BoredPigeon26 , it's a feature in our next release 🙂
By the way, I don't suggest ever using the _ex suffix on any of the API calls you see in the UI - it's reserved for the UI and can cause unintended results in automations.
Can you share the entire log of the run?
Yeah I see it too! I'll ask someone to take a look at it
Can you try it with clearml==1.6.0 please?
Also, can you list the exact commands you ran?
Since the "grand" dataset will inherit from the child versions you wouldn't need to have data duplications
Hi SuperiorCockroach75 , can you please elaborate? What is taking so long to execute?
Hi GloriousPenguin2 , how did you try to modify it? From the code it looks like it's expecting a configuration and it will sample it once every few minutes
Hi NastySeahorse61 , it looks like deleting the smaller tasks didn't make much of a dent. Do you have any tasks that ran for a very long time or reported very heavily to the server?
The worker checks the backend for new tasks in the queue every 5 seconds by default. While running a task, I think it basically sends whatever API calls a regular local task sends.