I see some keys from the apiserver that are a couple of hundred MB
But why does it try to run “docker run”?
Some way that’s more k8s-native, like a Job with a base image and the exported task
Or my own computer
Ok thanks, I’m looking for a way to not be dependent on the AWS autoscaler script but rather use a k8s Job that will run and die after it’s finished, thus saving resources
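Something like this minimal sketch, using the official kubernetes Python client; the image, command, namespace and names are just placeholders, not anything clearml-agent actually generates:
```
# Sketch: a one-shot Kubernetes Job that runs the exported task and gets cleaned up afterwards.
# Image, command, namespace and names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="exported-task-job"),
    spec=client.V1JobSpec(
        backoff_limit=0,
        ttl_seconds_after_finished=300,  # let k8s garbage-collect the finished Job
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="task",
                        image="python:3.10",            # base image (placeholder)
                        command=["python", "task.py"],  # the exported task (placeholder)
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```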
Also added the “CLEARML_SUPPRESS_UPDATE_MESSAGE” env var, which didn’t work…
Yes that would be awesome
One of the keys:
b"<class 'apiserver.database.model.base.GetMixin.GetManyScrollState'>/f4010ba7df0f45dbbea10a71fe568a94"
Thanks for the reply 👍
maybe change the session cookie to 24 hours?
SuccessfulKoala55 Seems like the apiserver uses Redis as a cache for UI pagination, with a long TTL
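For reference, this is roughly how I checked the key sizes and TTLs (a redis-py sketch; host/port and the match pattern are assumptions based on the key pasted above):
```
# Sketch: list the apiserver scroll-state keys with their memory usage and TTL.
import redis

r = redis.Redis(host="localhost", port=6379)

for key in r.scan_iter(match="*GetManyScrollState*"):
    size_mb = (r.memory_usage(key) or 0) / (1024 * 1024)
    ttl = r.ttl(key)  # -1 means the key never expires
    print(f"{key!r}: {size_mb:.1f} MB, ttl={ttl}s")
```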
cloned the repo “clearml-helm-charts”
users.get_preferences
users.get_current_user
POST requests fail…
When I deployed it without cloning the repo, the menu was visible, but I need the charts locally
I don’t want to be dependent on the agents
Maybe this?
It’s the entrypoint file for the Docker image
I believe this is the problem, but what should the value be?
Hi SuccessfulKoala55 ,
On the agent pod, the task logs are written locally to /tmp/.clearml_agent_out.<random num>.txt
I want those logs to also be bound to the host’s stdout, with a console handler for example
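Roughly what I have in mind as a workaround (just a sketch, not a clearml-agent feature; the glob pattern mirrors the path above):
```
# Sketch: follow the newest agent output file and echo it to stdout,
# so `kubectl logs` on the pod picks it up. Run it as a wrapper/sidecar.
import glob
import os
import sys
import time

def tail(path):
    # behaves like `tail -f`: stream every new line to stdout
    with open(path, "r") as f:
        while True:
            line = f.readline()
            if line:
                sys.stdout.write(line)
                sys.stdout.flush()
            else:
                time.sleep(0.5)

candidates = glob.glob("/tmp/.clearml_agent_out.*.txt")
if candidates:
    tail(max(candidates, key=os.path.getmtime))  # newest file
```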
Yeah, I tried it, but it tries to do a docker run for some reason…
Thanks! SuccessfulKoala55 I’ll take a look
But it also has the --create-queue flag, which didn’t create the queue; I needed to create it manually
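For reference, creating it programmatically would look something like this (a sketch using the clearml backend APIClient; the queue name is a placeholder):
```
# Sketch: create the queue explicitly via the ClearML API instead of relying on --create-queue.
from clearml.backend_api.session.client import APIClient

client = APIClient()
if "k8s-jobs" not in [q.name for q in client.queues.get_all()]:
    client.queues.create(name="k8s-jobs")
```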
