No, I have checked it on the web frontend, following the model link and the LABELS tab.
BTW, why use API calls and not the ClearML SDK?
Because the training part is only a subsystem of our whole system.
And the Python code is not facing the web, where the training requests come from.
Related to that, but another question.
With a task running under an agent, task.connect_label_enumeration
does not seem to work.
For the agent run, I posted only the following params to the tasks.create
endpoint: name, project, script, type, and let an agent pick it up.
I confirmed that it works if the task is not started by an agent:
the labels are attached to that cloned task's output model.
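For reference, here is a minimal sketch of how I call it from inside the training script (the project/task names and the label map are placeholder assumptions; this is the call that appears not to take effect when the task runs under the agent):

```python
def connect_labels():
    """Minimal sketch: attach a label enumeration to the current task.

    Assumes the clearml SDK is installed; the project/task names and
    the label map below are hypothetical placeholders.
    """
    from clearml import Task  # imported lazily so the sketch stays self-contained

    task = Task.init(project_name="demo", task_name="train")
    # Map label names to integer ids; this is expected to propagate
    # to the task's output models when they are registered.
    task.connect_label_enumeration({"background": 0, "person": 1, "car": 2})
    return task
```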
Well, yeah, it would be cleaner if we could go fully python.
But our system is already built and running, and now we are planning to add some training functionality.
The training part can be written in Python, but the sample-collecting part will be deeply connected to the existing system, which is not written in Python.
For now, using the CLI looks much more reasonable for that part.
I think it would be nicer if the CLI had a subcommand to show the content of ~/.clearml_data.json.
That way, users could be more confident when querying the dataset id the CLI is currently focused on.
My scripts would keep working even if the CLI changes how it stores the dataset id in the future.
But maybe we should have a command line that just outputs the current dataset id; this means it will be easier to grab and pipe.
That sounds good.
It definitely helps!
GrumpyPenguin23 Hi, thanks for your instructions!
Putting some metadata into the model sounds nice.
I was wondering exactly how to take care of labels, and I was afraid of having to handle them as a dataset even when inferring.
I tried clearml.model.InputModel and successfully downloaded a model.
Is this the expected way to consume a trained model for inference?
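Roughly what I tried, as a hedged sketch (the model id is a placeholder; I am assuming the label enumeration stored on the model is readable this way):

```python
def fetch_model_for_inference(model_id: str):
    """Download a trained model's weights and read its label enumeration.

    A minimal sketch; assumes the clearml SDK is installed and that
    model_id comes from the training task's output model.
    """
    from clearml import InputModel  # imported lazily so the sketch stays self-contained

    model = InputModel(model_id=model_id)
    weights_path = model.get_weights()  # downloads the weights to the local cache
    labels = model.labels               # label enumeration dict, e.g. {"cat": 0}
    return weights_path, labels
```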
Does this task (started by an agent) have some limitation, like being unable to connect labels?
Hi AgitatedDove14
Thanks, that is it!
Yeah, I have noticed the --id
option.
What I wanted is to automate making a dataset from a set of files,
and that requires the dataset id after running clearml-data create ...
.
Reading ~/.clearml_data.json
looks much better than parsing the command output.
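A minimal sketch of that file read, assuming the CLI state lives at ~/.clearml_data.json and stores the current dataset id under an "id" key (the key name is an assumption about an internal format that may change between versions):

```python
import json
from pathlib import Path

def current_dataset_id(state_file: str = "~/.clearml_data.json") -> str:
    """Return the dataset id the clearml-data CLI is currently focused on.

    NOTE: this reads an internal state file; the "id" key is an assumption
    and the format may change in future CLI versions.
    """
    data = json.loads(Path(state_file).expanduser().read_text())
    return data["id"]
```

Since it is one JSON read, it is also easy to pipe from a shell script when chaining clearml-data commands.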
Yeah, what I have done is upload this:
https://github.com/kayhide/PyTorch-YOLOv3/tree/clearml
This is a fork of the well-known PyTorch YOLO sample, adapted to ClearML.
Does it handle data just in the form of regular files?
We have a web server which accepts various requests and manages database resources.
This web server arranges the request and creates a task on the ClearML API server, which is running on an internal network.
Hm, clearml-data looks very much like git.
Is there any example of how to use clearml-data?
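To illustrate the git-like flow I mean, a hedged sketch of the basic cycle as I understand it (project/dataset names and paths are placeholders, and the exact flags may differ by CLI version):

```shell
# Create a new dataset version and make it the "current" one
clearml-data create --project "Demo" --name "dataset-v1"

# Stage files into it, much like `git add`
clearml-data add --files ./samples

# Finalize and upload, roughly playing the role of commit + push
clearml-data close
```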
Hi JitteryCoyote63 ,
Oh, you have something. Nice!
I will look into that document, thanks!
Even though I called task.connect_label_enumeration, there is no label data shown on the output model.
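One thing I may try as a workaround (a hedged sketch, not verified under an agent run; the label map and weights path are placeholders) is setting the enumeration directly on the output model object instead of on the task:

```python
def register_model_with_labels(weights_path: str):
    """Sketch: attach the label enumeration to the output model directly.

    Assumes the clearml SDK is installed and runs inside a live task;
    the label map and weights file path are hypothetical placeholders.
    """
    from clearml import Task, OutputModel  # lazy import keeps the sketch self-contained

    task = Task.current_task()
    output_model = OutputModel(
        task=task,
        label_enumeration={"background": 0, "person": 1},
    )
    # Register the trained weights file on this model entry
    output_model.update_weights(weights_filename=weights_path)
    return output_model
```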
Are you talking about the public demo server?
If so, it says: "This server is reset daily at 24:00 PST."
I will set the repository URL to HTTPS and retry.