
Hi @<1523701070390366208:profile|CostlyOstrich36>
Thanks for looking at this.
- Pasted the body below
{
    "aws": {
        "key": "emEijEB2wZtj1rgaUN3y",
        "secret": "oTxr3w3nlygv85oULOBWkaJi6Zj41OFBLB1e1m0L",
        "region": "",
        "token": "",
        "use_credentials_chain": false,
        "buckets": [{
            "bucket": "clearml",
            "host": "
",
            "key": "emEijEB2wZtj1rgaUN3y",
            "secret": "oTxr3w3nlygv85oULOBWkaJi6Zj41OFBLB1e1m0L",
            "token": "",
            "secure": false,
            "region": "",
            "ver...
Just to make sure the domain resolves, I ran:
jovyan@hub-54bbb78ff4-bphnj:/srv/jupyterhub$ dig api-clearml.domain.duckdns.org
; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> api-clearml.domain.duckdns.org
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1149
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 3ac65d14c9554b0f (echoed)
;; QUESTION SECTION:
;ap...
It’s a homelab k8s cluster that I tear down for laughs and giggles 🙂
It’s running behind an ingress so the port is there.
I also tried
Task.set_credentials(
    api_host="
",
    web_host="
",
    files_host="
",
    key='E5BYXIM0JXX5N9MRZSHEE2ACKXPTY2',
    secret='govErSIYdtu-67EBGVPhriMOOB0QCT_OZ_B2073rGjYO14uYP802dMuOk1_oVV4STxY'
)
Same result
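As a quick sanity check on the API endpoint itself, a minimal sketch (assuming the api-clearml.domain.duckdns.org host from the dig query above, served over HTTPS through the ingress, and the apiserver's /debug.ping health endpoint):
import requests

# Host taken from the dig query above; the actual api_host value was elided in the post.
API_HOST = "https://api-clearml.domain.duckdns.org"

# The ClearML apiserver exposes a /debug.ping health endpoint.
resp = requests.get(f"{API_HOST}/debug.ping", timeout=10)
print(resp.status_code, resp.text)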
I also couldn’t see any example credentials in the documentation, but the bucket is not public, so should I assume the credentials will be used if they’re set?
Also, can I skip setting the bucket name in the UI, since it’s part of the URL?
Is there a way to test if the connection is working?
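One way to test the bucket connection outside of ClearML is to hit the same endpoint with boto3 directly; a minimal sketch, assuming a hypothetical http://my-minio-host:9000 endpoint (the real host value was elided in the pasted config) and the key/secret from the aws section above:
import boto3
from botocore.exceptions import ClientError

# Hypothetical endpoint; replace with the real (non-AWS) host and port.
MINIO_ENDPOINT = "http://my-minio-host:9000"

s3 = boto3.client(
    "s3",
    endpoint_url=MINIO_ENDPOINT,
    aws_access_key_id="emEijEB2wZtj1rgaUN3y",
    aws_secret_access_key="oTxr3w3nlygv85oULOBWkaJi6Zj41OFBLB1e1m0L",
)

try:
    # head_bucket only succeeds if the bucket exists and these credentials can reach it.
    s3.head_bucket(Bucket="clearml")
    print("bucket reachable with these credentials")
except ClientError as err:
    print("connection or credentials problem:", err)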
Thanks for checking though 🙂
No problem. This is all very ephemeral and will die imminently.
Highly unlikely, as those are all local DNS records. 🙂
Oh. I see what you mean now.
“To force usage of a non-AWS endpoint, port declaration is always needed (e.g. host: "my-minio-host:9000"), even for standard ports like 443 for HTTPS (e.g. host: "my-minio-host:443").”
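Following that note, the explicit port (and the bucket name) also ends up in the s3:// URI wherever the bucket is referenced, for example in output_uri; a sketch with a hypothetical my-minio-host:9000, since the real host isn't shown above:
from clearml import Task

# Hypothetical MinIO endpoint: note the explicit :9000 port and the bucket name in the URI.
task = Task.init(
    project_name="examples",
    task_name="minio output test",
    output_uri="s3://my-minio-host:9000/clearml",
)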
Spun up a JupyterLab locally and was able to connect with ClearML, so this is obviously a networking problem on my k8s cluster.