Hm, clearml-data looks very much like git.
Does it handle data simply as regular files?
Yeah, now that I know that, the CLI looks much more familiar to me.
Um... and what if you clone the local task run and enqueue it to the agent?
It failed.
Saying: Could not read from remote repository.
Even though I called task.connect_label_enumeration, there is no label data shown on the output model.
BTW, why use the API calls and not the ClearML SDK?
Because the training part is only a subsystem of our whole system, and the Python side is not exposed to the web, where the training requests come from.
By the way, we found that when I added the labels param and posted a tasks.create request, it worked.
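The request described above can be sketched roughly as below. This is a hedged sketch, not the confirmed API shape: the exact field names (especially where labels go), the project identifier format, and the auth are assumptions to verify against the ClearML REST API docs for your server version.

```python
# Hedged sketch of the tasks.create request body described above.
# Field names are assumptions -- check them against your server's API docs.
def build_task_create_payload(name, project, entry_point, labels):
    """Assemble a request body for a tasks.create call, including labels."""
    return {
        "name": name,
        "project": project,  # may need to be a project id rather than a name
        "type": "training",
        "script": {"entry_point": entry_point},
        # name -> integer id mapping, as reported to make the labels stick
        "labels": labels,
    }

payload = build_task_create_payload(
    "train-task", "my-project", "train.py", {"cat": 0, "dog": 1}
)
# POST it with e.g. requests.post(f"{api_server}/tasks.create", json=payload, ...)
```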
I confirmed that it works when the task is not started by an agent.
Can you run this one - ?
Do you get the labels for both the local and the clearml-agent run?
Okay, I did the example.
For the local run, I got the labels.
For the agent run, I did not get the labels.
Related to it, but another question.
With that task, which is running under an agent, task.connect_label_enumeration does not seem to work.
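For reference, the mapping that task.connect_label_enumeration receives is a plain name-to-id dict. A minimal sketch of building it (the label names here are placeholders, and the actual SDK call, shown in comments, needs a reachable ClearML server to run):

```python
# Hedged sketch: connect_label_enumeration expects {label_name: integer_id}.
def make_label_enumeration(names):
    """Build a name -> id mapping suitable for task.connect_label_enumeration."""
    return {name: idx for idx, name in enumerate(names)}

# In a real run (requires a ClearML server):
#   from clearml import Task
#   task = Task.init(project_name="demo", task_name="label-demo")
#   task.connect_label_enumeration(
#       make_label_enumeration(["background", "cat", "dog"])
#   )
```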
Maybe I should have cloned the repo with HTTPS instead of SSH.
Does this task (started by an agent) have some limitation, like being unable to connect labels?
No, I have checked it on the web frontend, following the model link and the LABELS tab.
I mean, the output model comes with the labels that were posted.
The labels are attached to that cloned task's output model.
For the agent run, I posted only the following params: name, project, script, type to the tasks.create endpoint and let an agent pick it up.
I will set the repository URL to HTTPS and retry.
As for the versions:
root@120eb0cddb60:~# pip list | grep clearml
clearml          0.17.5
clearml-agent    0.17.1
Hi AgitatedDove14
Thanks, that is it!
Yeah, I have noticed the --id option.
What I want is to automate creating a dataset from some set of files.
And that requires the dataset id after running clearml-data create ....
Reading ~/.clearml_data.json looks much better than parsing the command output.
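A small sketch of that approach, reading the dataset id out of the CLI's state file. The key name used here ("latest_id") is an assumption about the file's schema; inspect your own ~/.clearml_data.json to confirm the actual key before relying on it.

```python
import json
import os

# Hedged sketch: pull the dataset id the CLI is currently focused on from
# ~/.clearml_data.json. The key name ("latest_id") is an assumption --
# inspect your own state file to confirm the real schema.
def current_dataset_id(state_path=None, key="latest_id"):
    """Return the dataset id recorded in the clearml-data state file."""
    path = state_path or os.path.expanduser("~/.clearml_data.json")
    with open(path) as f:
        state = json.load(f)
    return state.get(key)
```

A script can then call current_dataset_id() right after clearml-data create, instead of scraping the command's stdout.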
Now I am going AFK.
Thanks for your support!
I think it would be nicer if the CLI had a subcommand to show the content of ~/.clearml_data.json.
That way, users could more confidently query the dataset id the CLI is currently focused on.
My scripts would keep working even if the CLI changes how it stores the dataset id in the future.
But maybe we should have a command line that just outputs the current dataset id; that would make it easier to grab and pipe.
That sounds good.
It definitely helps!