
@<1523701994743664640:profile|AppetizingMouse58> Hi! Thank you for the quick response! Should I edit the configuration file directly in the apiserver container, as in the screenshot? Is there a better, more consistent way to do it (I don't want to edit this config again if I recreate the container for some reason)?
@<1523701994743664640:profile|AppetizingMouse58> @<1523701087100473344:profile|SuccessfulKoala55> It worked as intended! Thank you once again :hugging_face:
@<1523701087100473344:profile|SuccessfulKoala55> Thank you! I think I've done something wrong (it doesn't delete now). Please correct me:
- I've created the file /opt/clearml/config/services.conf on the ClearML host machine with the content shown in the pic
- Restarted the container with
docker restart async-delete
Hi! I've faced the same problem and solved it as described in this thread.
In short: you have to map the config folder into the async_delete container, the same way it is mapped into the apiserver container.
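The mapping in docker-compose.yml would look roughly like this (a sketch based on the default ClearML docker-compose layout; verify the service name and paths against your own file):

```yaml
  async_delete:
    # ... keep the existing image/depends_on/env settings ...
    volumes:
      # same mapping the apiserver service already has,
      # so /opt/clearml/config/services.conf gets picked up
      - /opt/clearml/config:/opt/clearml/config
```

After editing the compose file, recreate the container (e.g. docker-compose up -d) rather than just restarting it, so the new volume mapping takes effect.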
@<1523703097560403968:profile|CumbersomeCormorant74> Thank you! I'll try this and let you know if that helped
It worked! Thank you once again!
Thank you @<1523701070390366208:profile|CostlyOstrich36> Do you mean that I should edit clearml.conf and add something like this in the agent section?
Thanks, I figured out that it's enough to mount the configmap into the ClearML pods
Here's my custom-values.yaml

# -- Private image registry configuration
imageCredentials:
  # -- Use private authentication mode
  enabled: true
  # -- If this is set, chart will not generate a secret but will use what is defined here
  existingSecret: ""
  # -- Registry name
  registry: gitlab.my-company
  # -- Registry username
  username: gitlab+deploy-token
  # -- Registry password
  password: token
...
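If you'd rather manage the registry credentials yourself, the secret could be created manually and referenced via existingSecret (a sketch; the secret name and namespace here are placeholders):

```shell
kubectl create secret docker-registry my-registry-secret \
  --docker-server=gitlab.my-company \
  --docker-username=gitlab+deploy-token \
  --docker-password=token \
  --namespace clearml
```

Then set existingSecret: "my-registry-secret" in the values above and leave the credential fields empty, so the chart won't generate its own secret.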
@<1570583237065969664:profile|AdorableCrocodile14> Hi, you probably have a dataset or pipeline in this project
@<1523701994743664640:profile|AppetizingMouse58> Thanks a lot! That worked ☺
@<1523701087100473344:profile|SuccessfulKoala55> Mate, any news on my issue from the team?
If you're facing problems pulling the Elasticsearch image, I'd recommend using a VPN to download it and then pushing it to your own Docker Hub repo (in case you don't have a VPN on the server machine)
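As a sketch (the exact Elasticsearch tag depends on your ClearML server version, and my-user is a placeholder for your Docker Hub account):

```shell
# on a machine that can reach docker.elastic.co (through the VPN)
docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.18
docker tag docker.elastic.co/elasticsearch/elasticsearch:7.17.18 my-user/elasticsearch:7.17.18
docker push my-user/elasticsearch:7.17.18
# then on the server machine, reference my-user/elasticsearch:7.17.18 instead
```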
/mnt/data/k8s-pvs/clearml-agent-pv is a mounted folder. The pip-download-cache folder is created but stays empty. My current agent config for the worker now looks like the following:
agent {
  worker_id: "k8s-service-worker"
  worker_name: "k8s-service-worker"
  git_user: "gitlab+deploy-token-4"
  git_pass: "xxx"
  docker_force_pull: true
  package_manager.system_site_packages: false
  venvs_dir: ~/.cache/venvs-builds
  venvs_cache: {
    max_entries: 5
    free_space_threshold_gb: 10.0
    path: /mnt/data/k8s-pvs/clearml-agent-pv/venvs-cache
  },
...
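On the empty pip-download-cache folder: the agent keeps its pip cache under its own default path unless told otherwise. A sketch of pointing it at the mounted PV (key names taken from the stock clearml.conf; worth double-checking against your agent version):

```
agent {
  # pip cache path used when tasks run inside docker
  docker_pip_cache: /mnt/data/k8s-pvs/clearml-agent-pv/pip-download-cache
  pip_download_cache {
    enabled: true
    path: /mnt/data/k8s-pvs/clearml-agent-pv/pip-download-cache
  }
}
```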
@<1523701070390366208:profile|CostlyOstrich36> Could you please suggest something?
Hi! Try excluding "http/https" from the URI in the config. Have you configured your client correctly? Try adding "output_uri" (or something like that) when initializing your Task.
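If you want the same destination for every task without touching code, it can also be set in clearml.conf (a sketch; the bucket URI is a placeholder for your own storage):

```
sdk {
  development {
    # default upload destination for artifacts/models of all tasks
    default_output_uri: "s3://my-bucket/clearml"
  }
}
```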
Also worth mentioning that I'm using helm chart 7.11.4
@<1523701070390366208:profile|CostlyOstrich36> Hello, mate!
Here's my values.yaml, cleaned of keys.
Alright, I've solved the issue myself. The problem was that my agent used internal k8s domain names. I changed them (in the agent's helm values) to real ones, added certificates, and set the REQUESTS_CA_BUNDLE env var to point to my custom CA cert bundle, and voilà: debug samples are shown
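In the agent's helm values, that env var can be injected like this (a sketch; agentk8sglue.extraEnvs is the key used by recent clearml-agent chart versions, and the cert path assumes the CA bundle is already mounted into the pod):

```yaml
agentk8sglue:
  extraEnvs:
    - name: REQUESTS_CA_BUNDLE
      value: /certs/ca-bundle.crt  # path where the custom CA bundle is mounted
```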
Here's what I see. The URL of the image is a k8s-internal URL, like None
If I substitute the internal address part ( None ) with the actual address, I can see the image ( None )