SoggyFrog26
Moderator
4 Questions, 40 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges (1): 38 × Eureka!

0 Votes · 9 Answers · 925 Views
Hi, I have a question about clearml-data . It looks the CLI remembers "previously created/accessed dataset". Where is that information saved? Or how can I re...
3 years ago
0 Votes · 30 Answers · 1K Views
Hi everyone, I have a question when running a task under a clearml-agent. When the script is executed by an agent, that should be bound to some task. How can...
3 years ago
0 Votes · 4 Answers · 916 Views
Hi, is there any example which demonstrates a typical workflow of model handling? Especially, I would like to know how to specify and download a model, which...
3 years ago
0 Votes · 13 Answers · 1K Views
3 years ago
0 Hi, Is There Any Example Which Demonstrates A Typical Workflow Of Model Handling? Especially, I Would Like To Know How To Specify And Download A Model, Which Is Trained In Former Experiments. Thanks!

GrumpyPenguin23 Hi, thanks for your instructions!

Putting some metadata into the model sounds nice.
I was wondering exactly how to take care of labels, and I was afraid I would have to handle them as a dataset even when inferring.

3 years ago
0 Hi, I Have A Question About

I think it would be nicer if the CLI had a subcommand to show the content of ~/.clearml_data.json .
That way, users could be more confident about which dataset id the CLI is currently focused on,
and my scripts would keep working even if the CLI changes how it stores the dataset id in the future.
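
For illustration, a minimal Python sketch of that idea, assuming nothing about the file's internal layout (it simply pretty-prints whatever the CLI has stored):

import json
from pathlib import Path

# Dump the clearml-data CLI state file to see which dataset id it is
# currently focused on. Only the path comes from the discussion above;
# the key layout inside the file is a CLI implementation detail.
state_file = Path.home() / ".clearml_data.json"
with state_file.open() as f:
    print(json.dumps(json.load(f), indent=2))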

3 years ago
0 Hi, I Have A Question About

Well, yeah, it would be cleaner if we could go fully Python.
But our system is already built and running, and now we are planning to add some training functionality.
The training part can be written in Python, but the sample-collecting part will be deeply connected to the existing system, which is not written in Python.
For now, using the CLI looks much more reasonable for that part.
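
For comparison, a minimal sketch of what the fully-Python route looks like for the dataset part, using the clearml Dataset SDK; the project, dataset and folder names below are placeholders:

from clearml import Dataset

# Create a dataset, add local files, upload and finalize it.
# The dataset id is available directly, with no need to read the CLI state file.
ds = Dataset.create(dataset_project="examples", dataset_name="collected-samples")
ds.add_files(path="./samples")
ds.upload()
ds.finalize()
print(ds.id)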

3 years ago
0 Hi, I Have A Question About

"But maybe we should have a cmd line that just outputs the current dataset id, this means it will be easier to grab and pipe"

That sounds good.
It definitely helps!

3 years ago
0 Hi, Is There Any Example Which Demonstrates A Typical Workflow Of Model Handling? Especially, I Would Like To Know How To Specify And Download A Model, Which Is Trained In Former Experiments. Thanks!

I tried clearml.model.InputModel and successfully downloaded a model.
Is this the expected way to consume a trained model for inference?
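
For reference, a minimal sketch of that flow; the model id is a placeholder for one produced by a former experiment, and the project/task names are only examples:

from clearml import InputModel, Task

task = Task.init(project_name="examples", task_name="inference")
# Reference a model trained in a former experiment by its id and fetch
# a local copy of its weights for inference.
model = InputModel(model_id="<model-id-from-a-former-experiment>")
weights_path = model.get_weights()
print(weights_path)
print(model.labels)  # label enumeration stored with the model, if any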

3 years ago
0 Can I Ask How Often Does The Hosted Clearml Reset? I'M In A Hackathon And Thought Of Using It.

Are you talking about the public demo server?

If so, it says: "This server is reset daily at 24:00 PST."

3 years ago
0 Hi Everyone, I Have A Question When Running A Task Under A Clearml-Agent. When The Script Is Executed By An Agent, That Should Be Bound To Some Task. How Can I Get That Task From The Script?

Related to this, another question.

With that task, which is running under an agent, task.connect_label_enumeration does not seem to work.
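
For context, a minimal sketch of the call being discussed. Under a clearml-agent, Task.init() does not create a new task but picks up the task the agent is executing, so the same script runs locally and remotely; the names and labels below are placeholders:

from clearml import Task

task = Task.init(project_name="examples", task_name="agent-run")
# Attach a label enumeration to the task, then read it back to verify.
task.connect_label_enumeration({"background": 0, "cat": 1, "dog": 2})
print(task.get_labels_enumeration())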

3 years ago
0 Hi Everyone, I Have A Question When Running A Task Under A Clearml-Agent. When The Script Is Executed By An Agent, That Should Be Bound To Some Task. How Can I Get That Task From The Script?

For the agent run, I posted only the following params: name, project, script, type to the tasks.create endpoint and let an agent pick it up.
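
For illustration, a hedged sketch of that kind of raw REST call, written in Python only to keep the example short; the server URL, auth token, project id and script fields are placeholders or assumptions:

import requests

api_server = "https://api.clear.ml"  # or your self-hosted API server
headers = {"Authorization": "Bearer <token from auth.login>"}  # assumed auth setup

payload = {
    "name": "remote-training-task",
    "project": "<project-id>",
    "type": "training",
    "script": {
        "repository": "git@example.com:org/repo.git",
        "entry_point": "train.py",
        "branch": "main",
    },
}
resp = requests.post(f"{api_server}/tasks.create", json=payload, headers=headers)
resp.raise_for_status()
print(resp.json()["data"]["id"])  # new task id; enqueue it so an agent can pick it up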

3 years ago
0 Hi Everyone, I Have A Question When Running A Task Under A Clearml-Agent. When The Script Is Executed By An Agent, That Should Be Bound To Some Task. How Can I Get That Task From The Script?

Um..

"and if you clone the local task run and enqueue it to the agent?"

It failed, saying: "Could not read from remote repository."
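
"Could not read from remote repository" is the standard git error for access problems, so the agent machine most likely cannot authenticate against the repository recorded in the task. For reference, a minimal sketch of the clone-and-enqueue suggestion via the SDK; the task id and queue name are placeholders:

from clearml import Task

# Clone an existing (e.g. locally executed) task and send the copy to a
# queue served by a clearml-agent.
source = Task.get_task(task_id="<id-of-the-local-run>")
cloned = Task.clone(source_task=source, name="cloned for agent run")
Task.enqueue(cloned, queue_name="default")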

3 years ago
0 Hi Everyone, I Have A Question When Running A Task Under A Clearml-Agent. When The Script Is Executed By An Agent, That Should Be Bound To Some Task. How Can I Get That Task From The Script?

"Can you run this one -

 ?

Do you get the labels for both local and clearml-agent run?"

Okay, I did the example.
For the local run, I got the labels.
For the agent run, I did not get the labels.

3 years ago
0 Hi Everyone, I Have A Question When Running A Task Under A Clearml-Agent. When The Script Is Executed By An Agent, That Should Be Bound To Some Task. How Can I Get That Task From The Script?

"BTW why using the api calls and not clearml sdk?"

Because the training part is only a subsystem of our whole system,
and the Python part does not face the web, which is where the training requests come from.

3 years ago
0 Hi, I Have A Question About

Sounds good, thanks!

3 years ago
0 Hi, I Have A Question About

Hi AgitatedDove14
Thanks, that is it!

Yeah, I had noticed the --id option.
What I want is to automate making a dataset from some set of files,
and that requires the dataset id after running clearml-data create ... .
Reading ~/.clearml_data.json looks much better than parsing the command output.
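
For illustration, a minimal sketch of that automation: create the dataset with the CLI, take the id from ~/.clearml_data.json rather than from the command output, and pass it explicitly with --id afterwards. The state-file key layout is an implementation detail, so the id lookup below is left as a placeholder:

import json
import subprocess
from pathlib import Path

# Create the dataset via the CLI; the project and dataset names are examples.
subprocess.run(
    ["clearml-data", "create", "--project", "examples", "--name", "my-dataset"],
    check=True,
)

# Inspect the CLI state file once to see where the current dataset id is stored.
state = json.loads((Path.home() / ".clearml_data.json").read_text())
print(state)
dataset_id = "<dataset id taken from the state file>"  # placeholder

# Add files to the dataset and close it, addressing it explicitly by id.
subprocess.run(["clearml-data", "add", "--id", dataset_id, "--files", "./samples"], check=True)
subprocess.run(["clearml-data", "close", "--id", dataset_id], check=True)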

3 years ago