And yes these appear in the dropdown menu when I want to enqueue an experiment
The workaround that works for me is:
1. clone the experiment that I ran on my laptop
2. in the newly cloned experiment, modify the hyperparameters and configurations to my needs
3. in user properties, set "k8s-queue" to "cpu" (or the name of the queue I want to use)
4. enqueue the experiment to the same queue I just set...
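The steps above can be sketched with the ClearML SDK roughly as below (the `template_task_id` argument and the `cloned-experiment` name are placeholders, and the exact `set_user_properties` call should be checked against your SDK version):

```python
def clone_and_enqueue(template_task_id, queue_name="cpu"):
    """Clone a finished task, set the "k8s-queue" user property,
    and enqueue it on that same queue (sketch of the workaround above)."""
    from clearml import Task  # imported lazily so the sketch loads without a server

    cloned = Task.clone(source_task=template_task_id, name="cloned-experiment")
    # user properties appear in the UI under "User Properties";
    # the k8s glue reads "k8s-queue" to pick the target Kubernetes queue
    cloned.set_user_properties(**{"k8s-queue": queue_name})
    Task.enqueue(cloned, queue_name=queue_name)
    return cloned
```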
When I do like that in the K8sGlue pod for the cpu queue I can see that it has been correctly picked up:
```
No tasks in queue 54d3edb05a89462faaf51e1c878cf2c7
No tasks in Queues, sleeping fo...
```
Ah sorry, I thought you were asking what the names of the queues I created were (in case I used some weird character or something like that)
And this is the list of variables defined in the K8SGlue pod:
```
CLEARML_REDIS_MASTER_PORT_6379_TCP_PROTO
CLEARML_REDIS_MASTER_SERVICE_HOST
CLEARML_REDIS_MASTER_PORT
CLEARML_MONGODB_PORT_27017_TCP
CLEARML_ELASTIC_MASTER_PORT_9300_TCP_PROTO
CLEARML_WEBSERVER_SERVICE_HOST
K8S_GLUE_EXTRA_ARGS
CLEARML_ELASTIC_MASTER_PORT_9300_TCP_PORT
CLEARML_FILESERVER_PORT_8081_TCP_PROTO
HOSTNAME
CLEARML_MONGODB_PORT_27017_TCP_PORT
CLEARML_MONGODB_PORT
CLEARML_ELASTIC_MASTER_SERVICE_PORT
CLEARML_FILESERVER_PORT_...
```
the queues already exist, I created them through the UI.
Now, I go to Experiments and clone an experiment that I previously executed on my laptop. In the newly created experiment, I modify some parameters and enqueue the experiment in the CPU queue.
Hi Martin, thanks. My doubt is:
if I configure the pods for the different nodes manually, how do I make the ClearML server aware that those agents exist? This step is really not clear to me from the documentation (it talks about users, and it uses interactive commands, which would mean entering the agents manually). I will also try the k8s glue, but first I would like to understand how to configure a fixed number of agents manually.
Exactly that :) if I go in the queue tab, I see a new queue name (that I didn't create),
with a name like "4gh637aqetc"
Thanks SuccessfulKoala55 . Any idea why going to the address https://allegroai.github.io/clearml-helm-charts
returns a 404 error?
For other repositories used in Argo CD examples (e.g. https://bitnami-labs.github.io/sealed-secrets , which is also hosted on GitHub), instead of returning a 404, the index.yaml page is loaded.
I suspect this might be the reason why I can't make it work with ClearML.
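For reference, the chart index normally resolves when the URL is added as a Helm repo rather than opened in a browser (the `clearml` repo alias below is arbitrary):

```shell
# add the ClearML chart repo; the URL serves index.yaml for Helm clients,
# so opening it directly in a browser may still show a 404 page
helm repo add clearml https://allegroai.github.io/clearml-helm-charts
helm repo update
helm search repo clearml
```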
I think it's because the proxy env vars are not passed to the container (I thought they were the same as the extraArgs from the agentservice, but it doesn't look like that's the case)
By the way, after fixing the agentservice issue, and having the pod configured correctly, now I see an error in the agentgroup-cpu pod, because it says that the token is not the correct one:
`http://:8081 http://:8080`
```
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7fa4b00438d0>, 'Connection to pypi.org timed out. (connect timeout=15)')':...
```
Hi Jake, sorry I left the office yesterday. On my laptop I have clearml==1.6.4
the same version that is available in the agent: `clearml==1.6.4`
Oh I see... for some reason I thought that all the dependencies of the environment would be tracked by ClearML, but it's only the ones that actually get imported...
If locally one detects that pandas is installed and can be used to read the csv, wouldn't it be possible to store this information in the clearml server so that it can be implicitly added to the requirements?
but I can confirm that adding the requirement with `Task.add_requirements()` does the trick
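A minimal sketch of that fix (the project and task names are made up; note that `Task.add_requirements` has to be called before `Task.init`):

```python
def start_task_with_pandas():
    """Force pandas into the recorded requirements even though it is never
    imported directly in the script (e.g. it is only used to read a csv)."""
    from clearml import Task  # imported lazily so the sketch loads without a server

    Task.add_requirements("pandas")  # must be called before Task.init
    return Task.init(project_name="demo-project", task_name="with-pandas")
```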
actually there are some network issues right now, I'll share the output as soon as I manage to run it
sure, give me a couple of minutes to make the changes
And if instead I want to force "get()" to return me the path (e.g. I want to read the csv with a library that is not pandas) do we have an option for that?
Thanks Martin! If I end up having sometime I'll dig into the code and check if I can bake something!
About .get_local_copy... would that then work in the agent though?
Because I understand that there might not be a local copy in the Agent?
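One option for getting the path is `get_local_copy()`, which should also work in the agent, since it downloads the dataset into the local cache when no copy exists yet (the dataset name and project below are placeholders):

```python
def dataset_folder(name, project):
    """Return the local folder of a ClearML dataset; downloads it into the
    cache first if this machine (e.g. an agent) has no copy yet."""
    from clearml import Dataset  # imported lazily so the sketch loads without a server

    ds = Dataset.get(dataset_name=name, dataset_project=project)
    return ds.get_local_copy()  # path to a read-only cached folder
```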
Hi Martin, I'll try to get the logs on Monday, though the K8s configuration doesn't "scare" me, I can solve that with my colleagues.
But I'll share it if it helps debug the issue
AgitatedDove14 I used the default configuration from the helm chart for the k8s glue.
The way I understand it is that the K8s glue agent is enabled by default (and I do see a Deployment for clearml-k8sagent).
After trying Gaspard's changes to the helm chart values, I do now see that a pod for the agentservice is also deployed,
and some of the logs point to a misconfiguration on my side (the fact that it can't access resources externally),
some others I don't understand:

```
Err:1 bionic InRelease
  Could not connect to archive.ubuntu.com:80 (185.125.190.36), connection timed out
  Could not connect to archive.ubuntu.com:80 (185.125.190.39), connection timed out
  Could not connect to archive.ubuntu...
```
OK. In the pod spawned by the K8s Glue Agent, clearml.conf is the same as in the K8s Glue Agent itself.
Hi Jake thanks for your answer!
So I just have a very simple file "project.py" with this content:

```
from clearml import Task
task = Task.init(project_name='project-no-git', task_name='experiment-1')
import pandas as pd
print("OK")
```

If I run `python project.py` from a folder that is not in a git repository, I can clone the task and enqueue it from the UI, and it runs in the agent with no problems.
If I copy the same file to a folder that is in a git repository, when I enqueue the ex...
Hi SuccessfulKoala55 I can confirm that the "id-like" queue created by ClearML
actually corresponds to the id of the "k8s_scheduler" queue. So it looks like, instead of submitting the experiment to the scheduler to be enqueued to the right queue, a new queue whose name corresponds to the id of the k8s_scheduler queue is created instead.
Hope this helps 🙂