ExasperatedCrab78
Moderator
2 Questions, 221 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges (1): 2 × Eureka!
0 Votes 0 Answers 1K Views
A little something else: Using ClearML, an OAK-1 AI camera and a raspberry pi to create a pushup counter that locks my PC every hour and only unlocks again w...
2 years ago
0 Votes 5 Answers 1K Views
We're working on ClearML serving right now and are very interested in what you all are searching for in a serving engine, so we can make the best serving eng...
2 years ago
0 Hello, I’m Using The Free Self-Hosted Version Of ClearML On Our K8s Cluster (The Latest Chart Version). I’m Trying To Deploy And Undeploy The Server Several Times But Each Time It Keeps Deleting The Data Associated With The Experiments (It Keeps Deleting

Hi there! There are several services that need persistent storage; check here for an overview diagram.

If I'm not mistaken, there's the fileserver, elastic, mongo and redis. All info is scattered over these (e.g. model files on fileserver, logs on elastic) so there is no one server holding everything.

I'm not a k8s expert, but I think that even a dynamic PVC should not delete itself. Just to be sure though, you can indee...

one year ago
0 Hey, Trying To Figure Out How To Create An

FierceHamster54 I saw you saying the YOLOv5 project and name are hardcoded in there. Fixed that for ya 😉 https://github.com/ultralytics/yolov5/pull/10100

2 years ago
0 Hey Everyone, Is It Possible To Use The

Yes you can! The filter syntax can be quite confusing, but for me it helps to print task.__dict__ on an existing task object to see what options are available. You can reference values in a nested dict by joining the keys into a single dot-separated string.

Example code:

from clearml import Task

task = Task.get_task(task_id="17cbcce8976c467d995ab65a6f852c7e")
print(task.__dict__)

list_of_tasks = Task.query_tasks(task_filter={
    "all": dict(fields=['hyperparams.General.epochs.value'], p...

one year ago
0 [Pipeline] Am I Right In Saying A Pipeline Controller Can’t Include A Data-Dependent For-Loop? The Issue Is Not Spinning Up The Tasks, It’s Collecting The Results At The End. I Was Trying To Append The Outputs Of Each Iteration Of The For-Loop And Pass Th

Not exactly sure what is going wrong without an exact error or reproducible example.

However, passing around the dataset object is not ideal, because passing info from one step to another in a pipeline requires ClearML to pickle said object and I'm not exactly sure a Dataset obj is picklable.

Next to that, running get_local_copy() in the first step does not guarantee that you can access that data from the other step. Both might be executed in different docker containers or even on different...
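As a rough sketch of the alternative (assuming a pipeline built from functions; the dataset name is a placeholder), you can pass only the dataset ID string between steps and re-fetch the data inside each step, so nothing heavyweight needs to be pickled:

from clearml import Dataset

def step_one(dataset_name: str) -> str:
    # Return only the dataset ID (a plain string), which passes safely between steps.
    dataset = Dataset.get(dataset_name=dataset_name)  # placeholder dataset name
    return dataset.id

def step_two(dataset_id: str) -> None:
    # Re-fetch inside this step; get_local_copy() downloads the data onto
    # whatever container/machine this step actually runs on.
    local_path = Dataset.get(dataset_id=dataset_id).get_local_copy()
    print(f"Data available at {local_path}")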

one year ago
0 Hey Everyone, I Have Been Trying To Get The PyTorch Lightning CLI To Work With Remote Task Execution, But It Just Won't Work. I Took The

I'm able to reproduce, but your workaround seems to be the best one for now. I tried launching with clearml-task command as well, but we have the same issue there: only argparse arguments are allowed.
AgitatedDove14 any better workaround for this, other than waiting for the jsonargparse issue to be fixed?

2 years ago
0 Hey Everyone, I Have Been Trying To Get The PyTorch Lightning CLI To Work With Remote Task Execution, But It Just Won't Work. I Took The

HomelyShells16 Thanks for the detailed write-up and minimal example. I'm running it now too

2 years ago
0 When Dumping Model Via Clearml Serving, What Are The Things That The Clearml Will Look At To Populate The Input_Size And Output_Size? I Tried To Dump An Sklearn Model, And The Input_Size And Output_Size Is Null. I Prefer Not To Update It Separately Using

Unfortunately no, ClearML Serving does not infer input or output shapes from the saved models as of today. Maybe you could open an issue on the ClearML Serving GitHub to request it? Preferably with a clear, minimal example, that would be awesome! We'd take it into account for the next releases.

one year ago
0 When Dumping Model Via Clearml Serving, What Are The Things That The Clearml Will Look At To Populate The Input_Size And Output_Size? I Tried To Dump An Sklearn Model, And The Input_Size And Output_Size Is Null. I Prefer Not To Update It Separately Using

No inputs and outputs are ever set automatically 🙂 For e.g. Keras you'll have to specify them using the CLI when creating the endpoint, so Triton knows how to optimise, and set them correctly in your preprocessing so Triton receives the format it expects.

one year ago
0 When Dumping Model Via Clearml Serving, What Are The Things That The Clearml Will Look At To Populate The Input_Size And Output_Size? I Tried To Dump An Sklearn Model, And The Input_Size And Output_Size Is Null. I Prefer Not To Update It Separately Using

Just to be sure I understand you correctly: you're saving/dumping an sklearn model in the ClearML experiment manager, then want to serve it using ClearML Serving, but you do not wish to specify the model input and output shapes in the CLI?

one year ago
0 Hello Everyone. When Pressing The "Copy Embed Code" Button In Scalar Plots, I Don't Get To Choose The Embedding Type Like In The Video, It Seems That I Get Only Code For ClearML Reports. How To Get The Code For Embedding Plots Into External Tools?

Most likely you are running a self-hosted server. External embeds are not available for self-hosted servers due to difficult network routing and safety concerns (the embeds need to be reachable from the public internet). The free hosted server at app.clear.ml does have them.

one year ago
0 HPO App Question: My Config Includes 11 Parameter Values (0 - 1, Step 0.1). I'll Expect To See 11 Experiments, But In Fact It Was "52 Iterations". What I'm Missing (Last Time I Asked A Similar Question, But This Time There Is No Issue With HPO-App Integratio

In the meantime, it might help to limit the number of jobs using the advanced settings. If you know the exact number of experiments and want to run every one for sure, just set it that way 🙂
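For reference, if you drive the optimization from the SDK instead of the HPO app, the same cap can be set there too. A minimal sketch, assuming a placeholder base task ID, parameter name and metric:

from clearml.automation import (
    HyperParameterOptimizer, GridSearch, DiscreteParameterRange,
)

optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",  # placeholder template experiment
    hyper_parameters=[
        DiscreteParameterRange("General/threshold", values=[i / 10 for i in range(11)]),
    ],
    objective_metric_title="validation",  # placeholder metric
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=GridSearch,            # run each combination exactly once
    max_number_of_concurrent_tasks=2,
    total_max_jobs=11,                     # hard cap on the number of experiments
)
optimizer.start()
optimizer.wait()
optimizer.stop()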

one year ago
0 Hi Guys, I'm Currently Working With ClearML-Serving For Deployment Of My Model, But I Have A Few Questions And Errors: 1. In The Preprocess Class, I Need To Get Some Value That I Got From Training Process. For Example, In My Time Series Anomaly Detection I Save

1 Can you give a little more explanation about your use case? It seems I don't fully understand yet. So you have multiple endpoints, but always the same preprocessing script to go with them? And you need to gather a different threshold for each of the models?

2 Not completely sure of this, but I think an AMD APU simply won't work. ClearML Serving uses Triton as the inference engine for GPU-based models, and that is written by NVIDIA, specifically for NVIDIA hardware. I don't think Triton will ...

2 years ago
0 Hey, Is There An Easy Way To Retrieve The Code Used To Run An Experiment? Without Recreating The Whole Environment Etc. The Problem: I Have Run A

You can apply git diffs by copying the diff to a file and then running git apply <file_containing_diff>

But check this thread and make sure to dry-run first, to see what it will do, before you overwrite anything:
https://stackoverflow.com/questions/2249852/how-to-apply-a-patch-generated-with-git-format-patch

3 years ago
0 Hey, Is There An Easy Way To Retrieve The Code Used To Run An Experiment? Without Recreating The Whole Environment Etc. The Problem: I Have Run A

If you didn't use git, then ClearML saves your .py script completely in the uncommitted changes section, like you say. You should be able to just copy-paste it to get the code. In what format are your uncommitted changes logged? Can you paste a screenshot or the contents of the uncommitted changes?
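If you'd rather pull it programmatically than copy it from the UI, something like this sketch should also work (the task ID is a placeholder; export_task() returns the raw task record as a dict, and the stored script/diff sits under its 'script' section):

from clearml import Task

task = Task.get_task(task_id="<your_task_id>")  # placeholder ID

# The uncommitted changes / stored script end up in the 'diff' field.
script_section = task.export_task().get("script", {})
print(script_section.get("diff", ""))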

3 years ago
0 Hi All

Check your agent logs (through the ClearML console tab) and see if any error is thrown.

What is probably happening is that your agent tries to upload the model but fails due to some kind of networking/firewall/port issue. For example: make sure your self-hosted server is bound to host 0.0.0.0 so it can accept external connections other than localhost.

one year ago
0 Hi, Is There A Way To Log Sklearn Metrices (Like Accuracy/Precision) In A Tabular Way Rather Than Plot ?

I agree, I came across the same issue too. But your post helps make it clear, so hopefully it can be pushed! 🙂

2 years ago
0 Tasks Can Be Put In Draft State - If We Will Execute:

It depends on how complex your configuration is, but if config elements are all that will change between versions (i.e. not the code itself) then you could consider using parameter overrides.

A ClearML Task can have a number of "hyperparameters" attached to it. But once that task is cloned and in draft mode, one can EDIT these parameters and change them. If the task is then queued, the new parameters will be injected into the code itself.

A pipeline is no different, it can have pipeline par...
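A minimal sketch of that flow through the SDK (task ID, parameter name and queue name are placeholders):

from clearml import Task

# Clone an existing task; the clone starts out as a draft.
template = Task.get_task(task_id="<template_task_id>")
cloned = Task.clone(source_task=template, name="config-only change")

# Override just the configuration elements you care about, then enqueue.
cloned.set_parameters({"General/epochs": 20})  # placeholder parameter
Task.enqueue(cloned, queue_name="default")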

2 years ago
0 Tasks Can Be Put In Draft State - If We Will Execute:

RoundMosquito25 it is true that the TaskScheduler requires a task_id, but that does not mean you have to run the pipeline every time 🙂

When setting up, you indeed need to run the pipeline once, to get it into the system. But from that point on, you should be able to just use the task_scheduler on the pipeline ID. The scheduler should automatically clone the pipeline and enqueue it. It will basically use the one existing pipeline as a "template" for subsequent runs.
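A sketch of what that could look like (the pipeline task ID, queue and schedule are placeholders):

from clearml.automation import TaskScheduler

scheduler = TaskScheduler()

# Use the existing pipeline (controller) task as the template;
# the scheduler clones and enqueues it on every trigger.
scheduler.add_task(
    schedule_task_id="<pipeline_task_id>",  # placeholder
    queue="services",
    minute=0,
    hour=3,  # e.g. run daily at 03:00
)
scheduler.start_remotely(queue="services")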

2 years ago
0 Tasks Can Be Put In Draft State - If We Will Execute:

That's what happens in the background when you click "new run". A pipeline is simply a task in the background. You can find the task using querying, and you can clone it too! It is placed in a "hidden" folder called .pipelines, as a subfolder of your main project. Check out the settings: you can enable "show hidden folders".
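For example, something along these lines should list the pipeline controller tasks (project and pipeline names are placeholders; the "<project>/.pipelines/<pipeline>" path is the convention I believe ClearML uses for those hidden runs):

from clearml import Task

pipeline_runs = Task.get_tasks(
    project_name="MyProject/.pipelines/MyPipeline",  # placeholder names
)
for run in pipeline_runs:
    print(run.id, run.name, run.get_status())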

2 years ago
0 Hi Team, I’m Trying To Generate A GCP Autoscaler, And Received The Following Error:

Is it not filled in by default?

projects/debian-cloud/global/images/debian-10-buster-v20210721

one year ago
0 Hi Team, I’m Trying To Generate A GCP Autoscaler, And Received The Following Error:

Are you running a self-hosted/enterprise server or on app.clear.ml? Can you confirm that the field in the screenshot is empty for you?

Or are you using the SDK to create an autoscaler script?

one year ago
0 It Would Be Nice To Group Experiments Within Projects Use Cases:

Could you use tags for that? In that case you can easily filter on which group you're interested in, or do you have a more impactful UI change in mind to implement groups? 🙂
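A quick sketch of how that grouping could work from the SDK (the task ID, project and tag names are placeholders):

from clearml import Task

# Tag an existing experiment so it belongs to a "group".
task = Task.get_task(task_id="<experiment_id>")  # placeholder
task.add_tags(["group-A"])

# Later: pull only the experiments from that group.
group_a = Task.get_tasks(project_name="MyProject", tags=["group-A"])  # placeholder project
print([t.name for t in group_a])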

2 years ago
0 Hi, I Saw This Announcement From NVIDIA On TAO's Integration With ClearML. How Can We Use It?

Hi Jax! We have a blogpost explaining how to use it almost ready to go. I'll ping you here when it's out.

In the meantime you can check out the TAO getting-started resources at https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/resources/tao-getting-started . Download the zipfile with examples, and under notebooks>tao_launcher_starter_kit>detectnet_v2 you'll find a notebook with an example of how to use the integration.

2 years ago
0 Hi All! I Recently Started Working With Clearml Serving. I Got This Example Working

Hmm, I think we might have to make it clearer in the documentation then? How would you have been helped before you figured it out? (Great job BTW, thanks for the updates on it :))

one year ago