BTW, why are you using API calls and not the ClearML SDK?
Because the training part is only a subsystem of our whole system.
And the Python side is not facing the web, where the training requests come from.
Related to that, but another question.
With that task, which is running under an agent, task.connect_label_enumeration does not seem to work.
For the agent run, I posted only the following params: name, project, script, type to the tasks.create endpoint and let an agent pick it up.
The log says the repository is:
ssh://github.com/allegroai/clearml
I confirmed that it worked when it is not started by an agent.
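To be concrete, the request we post looks roughly like this (a minimal sketch; the server address, token handling, and script values are placeholders, not our exact setup):
```python
import requests

# Rough sketch of the tasks.create call our web server sends.
# The API server address, auth header and script values are placeholders.
api_server = "http://clearml-api.internal:8008"
payload = {
    "name": "training-request-1234",
    "project": "<project-id>",
    "type": "training",
    "script": {
        "repository": "ssh://github.com/allegroai/clearml",  # the repo the agent later tries to clone
        "branch": "master",
        "entry_point": "train.py",
        "working_dir": ".",
    },
}
resp = requests.post(
    f"{api_server}/tasks.create",
    json=payload,
    headers={"Authorization": "Bearer <token>"},
)
resp.raise_for_status()
task_id = resp.json()["data"]["id"]
```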
Which clearml and clearml-agent versions are you using?
Can you run this one - https://github.com/allegroai/clearml/blob/master/examples/reporting/model_config.py ?
Do you get the labels for both local and clearml-agent run?
We have a web server which accepts various requests and manages database resources.
This web server handles the request and creates a task on the ClearML API server, which is running on an internal network.
Even though I called task.connect_label_enumeration, no label data shows up on the output model.
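Inside the training script the call is essentially this (a minimal sketch; project, task and label names are placeholders):
```python
from clearml import Task

# Minimal sketch of how the training script attaches the label enumeration.
# Project/task names and the label mapping are placeholders.
task = Task.init(project_name="examples", task_name="training-request-1234")
task.connect_label_enumeration({"background": 0, "cat": 1, "dog": 2})

# Sanity check: read the enumeration back from the task object.
print(task.get_labels_enumeration())
```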
I will set the repository URL to https and retry.
Okay, I did the example.
For the local run, I got the labels.
For the agent run, I did not get the labels.
As for the versions:
root@120eb0cddb60:~# pip list | grep clearml
clearml          0.17.5
clearml-agent    0.17.1
By the way, we found that when I added the labels param to the tasks.create request, it worked.
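Concretely, the workaround was to include the enumeration directly in the tasks.create payload, roughly like this (the label mapping and other values are placeholders; the field is simply the labels param mentioned above):
```python
import requests

# Sketch of the workaround: post the label enumeration together with tasks.create
# instead of relying on connect_label_enumeration inside the script.
# All values (and the server address) are placeholders.
payload = {
    "name": "training-request-1234",
    "project": "<project-id>",
    "type": "training",
    "labels": {"background": 0, "cat": 1, "dog": 2},  # label enumeration posted up front
}
requests.post(
    "http://clearml-api.internal:8008/tasks.create",
    json=payload,
    headers={"Authorization": "Bearer <token>"},
)
```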
Um..
and if you clone the local task run and enqueue it to the agent?
It failed.
Saying: Could not read from remote repository.
Maybe I should have cloned the repo with https instead of ssh.
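That would mean changing the script section of the tasks.create payload from the ssh form to https, something like this (branch and entry point are placeholders):
```python
# Sketch: use an https repository URL in the tasks.create script section,
# so the agent can clone without ssh keys. Branch/entry point are placeholders.
script = {
    "repository": "https://github.com/allegroai/clearml",  # was ssh://github.com/allegroai/clearml
    "branch": "master",
    "entry_point": "train.py",
    "working_dir": ".",
}
```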
I mean, the output model comes with the labels that were posted.
Can you see it in the model? Click on the model link to get into the model.
Does this task
(started by an agent) have some limitation?
Like being unable to connect labels?
No, I have checked it on the web frontend, following the model link and the LABELS tab.
and if you clone the local task run and enqueue it to the agent?
The labels are attached to that cloned task's output model.
and the LABELS section is empty for running with the agent? Running locally works?