EnthusiasticShrimp49
Moderator
0 Questions, 96 Answers
Active since 18 February 2023
Last activity one year ago

Reputation: 0
Hi! I Have A Dataset Like This: V1.0.0

You can create a new dataset and specify the parent datasets as all the previous ones. Is that something that would work for you?
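
A minimal sketch of that approach, assuming your earlier versions are ClearML datasets (the ids and names here are hypothetical):

```python
from clearml import Dataset

# Hypothetical ids/names: create the new version on top of all previous ones.
new_ds = Dataset.create(
    dataset_name="my_dataset_v2",
    dataset_project="my_project",
    parent_datasets=["<previous_dataset_id_1>", "<previous_dataset_id_2>"],
)
new_ds.add_files("path/to/new_or_changed_files")  # only the delta is needed
new_ds.upload()
new_ds.finalize()
```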

one year ago
Hello, I Am Trying To Modify My Clearml-Agent Running On An AWS Autoscaler (From Clearml Applications). I Want To Be Able To Clone My Repo (Working), And Install My Poetry Dependencies From

Do you know whether the agent VM/image has python 3.9 installed? Also, you emphasised that this happens when setting the package manager to poetry; does it mean this issue doesn't happen when leaving the package manager settings at their default values?

one year ago
Hello Everyone! I Have A Pipeline That Stores A Metric X (A Single Number) At The End And I Want To Display A Graph Of This Metric In Project Dashboard, So That I Can See How It Changes With Each Pipeline Run And Get A Slack Notification If The Value Of T

Yes, metrics can be saved in both steps and pipelines. As for project dashboards, I don't think the UI currently supports them for pipelines. But what you can do instead is run a special "reporting" Task that queries all the pipeline runs from a specific project; with it you can then manually plot all the important information yourself.

To get the pipeline runs, please see the documentation here: https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelineco...
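
A rough sketch of such a "reporting" Task, assuming the metric was reported with report_single_value (single values appear under the "Summary" scalar title); the project path and metric name are hypothetical:

```python
from clearml import Task

task = Task.init(project_name="reports", task_name="pipeline metric report")

# Fetch the pipeline runs from the (hypothetical) pipeline project path.
runs = Task.get_tasks(
    project_name="my_project/.pipelines/my_pipeline",
    task_filter={"order_by": ["-last_update"]},
)

xs, ys = [], []
for i, run in enumerate(runs):
    metrics = run.get_last_scalar_metrics()
    # Single values reported via report_single_value live under "Summary".
    value = metrics.get("Summary", {}).get("metric_x", {}).get("last")
    if value is not None:
        xs.append(i)
        ys.append(value)

# Plot metric_x across pipeline runs inside this reporting task.
task.get_logger().report_scatter2d(
    title="metric_x over runs", series="metric_x",
    scatter=list(zip(xs, ys)), iteration=0, mode="lines+markers",
)
```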

4 months ago
Hello, I Saw, That Clearml Data Was Integrated Into Yolov5

Hey Pawel, thanks for opening the PR on Ultralytics’ side. The full support should come from them, so if it’s missing for YOLOv8 it means they didn’t enable it. Still, you can try clearml-task for auto-logging support in the case of remote execution.

Also, I’d say you could easily use a ClearML dataset id as input to YOLOv8 with a few lines of code, by basically downloading/getting the dataset by id yourself and passing the path to it as input to the ultralytics...
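
For illustration, a small sketch of that idea (the dataset id is hypothetical, the Ultralytics call is simplified, and the data.yaml layout is an assumption):

```python
from clearml import Dataset
from ultralytics import YOLO

# Hypothetical dataset id: download a local copy of the ClearML dataset.
dataset_path = Dataset.get(dataset_id="<your_dataset_id>").get_local_copy()

# Assuming the dataset contains a YOLO-style data.yaml at its root.
model = YOLO("yolov8n.pt")
model.train(data=f"{dataset_path}/data.yaml", epochs=10)
```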

one year ago
Hello, I Have Been Exhausting The Metrics Quota Way Too Fast For The Current Use That I Am Making Of Clearml. Is The Quota Cumulative? I.E. Do We Get 1G Per Month? I Am Concerned Because If We Upgrade And Need To Pay

Hey @<1644147961996775424:profile|HurtStarfish47> , you can use S3 for debug images specifically, see here: https://clear.ml/docs/latest/docs/references/sdk/logger/#set_default_upload_destination but the metrics (everything you report, like scalars, single values, histograms, and other plots) are stored in the backend. The fact that you are almost running out of storage could be because of either t...
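
A minimal sketch of the S3 setup, with a hypothetical bucket and path:

```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="my_task")

# Debug images and other logger uploads now go to your S3 bucket
# (hypothetical bucket/path) instead of the ClearML file server.
task.get_logger().set_default_upload_destination("s3://my-bucket/clearml-debug/")
```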

4 months ago
Hi

Hey @<1678212417663799296:profile|JitteryOwl13> , just to make sure I understand, you want to make your imports inside the pipeline step function, and you're asking whether this will work correctly?

If so, then the answer is yes, it will work fine if you move the imports inside the pipeline step function
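
For example, a sketch with the decorator-based pipeline API (project, pipeline, and file names are hypothetical):

```python
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["df"], cache=False)
def load_data(csv_path):
    # Imports inside the step: each step may run in its own process/machine,
    # so importing here guarantees the module is available where it executes.
    import pandas as pd
    return pd.read_csv(csv_path)

@PipelineDecorator.pipeline(name="my_pipeline", project="my_project")
def run(csv_path="data.csv"):
    df = load_data(csv_path)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # or remove this line to enqueue for agents
    run()
```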

4 months ago
Quick Question - Does ClearML's Task Support Subprocesses Launched Within A Script? I Have This Scenario

Hey @<1535069219354316800:profile|PerplexedRaccoon19> , yes it does. Take a look at this example, and let me know if there are any more questions: None

one year ago
Hi People! I Think The Clearml

Hello @<1523710243865890816:profile|QuaintPelican38> , could you try Dataset.get-ing an existing dataset and tell me whether there are any errors or not?
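
For example (the id, project, and name placeholders are hypothetical):

```python
from clearml import Dataset

# Fetch either by id, or by project + name.
ds = Dataset.get(dataset_id="<existing_dataset_id>")
# ds = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
print(ds.id, len(ds.list_files()))
```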

one year ago
Hello Everyone! I Have A Pipeline That Stores A Metric X (A Single Number) At The End And I Want To Display A Graph Of This Metric In Project Dashboard, So That I Can See How It Changes With Each Pipeline Run And Get A Slack Notification If The Value Of T

Hey @<1661904968040321024:profile|SpotlessOwl43> that's a great question!

how the metric should be saved, via report_single_value?

That's correct

what should I enter into the title and series fields in Project Dashboard?

The title should be "Summary" and series is the name of the single value you reported
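
So, at the end of the pipeline you would do something like this (the metric name is hypothetical):

```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="final pipeline step")

# Shows up in the UI under scalar title "Summary", series "metric_x":
# exactly the values to enter in the Project Dashboard widget.
task.get_logger().report_single_value(name="metric_x", value=0.923)
```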

4 months ago
Hello Guys, I Have 4 Workers (2 In Default And 2 In Service Queue On Same Machine) And Running A Cron Job Of Data Preparation. It Works Well For About 3 Days But After That Tasks Are Getting Failed On Their Own With Given Below Error. Can Anyone Help Me O

Can you also tell us which OS you are using? And when you mentioned clearml version 1.5.1, did you mean the ClearML package or the clearml-agent package? Because they are different.

one year ago
Hello Everyone, I Would Like To Know What Your Projects Are In Terms Of The Usage Of Clearml Pipelines? What Are Your Most Elaborate Pipelines? So Far, I Am Using "Only" A Pipeline That Looks Like This:

Sounds interesting. But my main concern with this kind of approach is that if the surface of (hparam1, hparam2, objective_fn_score) is non-convex, your method may not reach the best set of hyperparameters. Maybe try using smarter search algorithms, like BOHB or TPE, if you have a large search space; otherwise, you can try a few rounds of manual random search, reducing the search space around the region of the most likely best hyperparameters after every round.

As for why struct...
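
If you want the smarter-search route with ClearML's optimizer, here is a rough sketch; the base task id and parameter ranges are hypothetical, and the Optuna backend is used as a TPE-style sampler:

```python
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna  # TPE-style sampling

task = Task.init(project_name="my_project", task_name="HPO controller")

optimizer = HyperParameterOptimizer(
    base_task_id="<template_task_id>",  # hypothetical: the task cloned per trial
    hyper_parameters=[
        UniformParameterRange("General/hparam1", min_value=0.0, max_value=1.0),
        UniformParameterRange("General/hparam2", min_value=0.0, max_value=1.0),
    ],
    # The metric the optimizer maximizes, as reported by the trial tasks.
    objective_metric_title="Summary",
    objective_metric_series="objective_fn_score",
    objective_metric_sign="max",
    optimizer_class=OptimizerOptuna,
    max_number_of_concurrent_tasks=2,
    total_max_jobs=50,
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```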

one year ago
Hi Guys, I Have A Question Regarding Model Tracking. I Have Pipelines That Use XGBoost Through The Scikit-Learn API To Perform:

This is the method you're looking for: None . But make sure you have a model saved on disk before using it. And if you don't want the model to be deleted from disk afterwards, make sure to set auto_delete_file=False
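
Assuming the elided link refers to manually registering an output model, a hedged sketch using OutputModel.update_weights (the file name is hypothetical):

```python
from clearml import Task, OutputModel

task = Task.init(project_name="my_project", task_name="xgboost training")

# ... train, then save the model to disk first, e.g.:
# booster.save_model("xgb_model.json")

output_model = OutputModel(task=task, framework="xgboost")
output_model.update_weights(
    weights_filename="xgb_model.json",
    auto_delete_file=False,  # keep the local file after uploading it
)
```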

one year ago
Hi

Ah, I see now. There are a couple of ways to achieve this.

  • You can enforce that the pipeline steps execute within a predefined docker image that has all these submodules - this is not very flexible, but doesn't require your clearml-agents to have access to your Git repository
  • You can enforce that the pipeline steps execute within a predefined git repository, where you have all the code for these submodules - this is more flexible than option 1, but will require clearml-agents to have acce...
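
A sketch of both options with PipelineController.add_function_step, assuming a recent clearml version (the image, repo URL, and step function are hypothetical):

```python
from clearml import PipelineController

def preprocess(raw_path):
    # Step code that imports and uses your submodules.
    ...

pipe = PipelineController(name="my_pipeline", project="my_project")

# Option 1: a predefined docker image that already contains the submodules.
pipe.add_function_step(
    name="preprocess_docker",
    function=preprocess,
    function_kwargs={"raw_path": "data/raw"},
    docker="my-registry/my-image:latest",  # hypothetical image
)

# Option 2: a predefined git repository holding the submodule code.
pipe.add_function_step(
    name="preprocess_repo",
    function=preprocess,
    function_kwargs={"raw_path": "data/raw"},
    repo="https://github.com/my-org/my-repo.git",  # hypothetical repo
    repo_branch="main",
)

pipe.start()
```
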
4 months ago
Hello clearml 🙂 I am having a small problem with clearml-agents, mainly related to: private repository, cache and vcs. I am using the latest version clearml-agents==1.5.2rc on python 3.10 (ubuntu:22.04). I am running a scri

Hey @<1574207113163444224:profile|ShallowCoyote86> , what exactly do you mean by "depends on private_repo_b"? Another question: after you push the changes, do you re-run script_a.py?

one year ago
How To Version Models While Training In Production

This sounds like you don't have clearml installed in the ubuntu container. Either that, or the clearml.conf in the container is not pointing to the server, and as a result all information is missing.

I'd rather suggest you change the approach: set up a clearml-agent with docker, and when you want to run YOLOv5 training, execute it remotely on the queue that the agent is listening to.
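
A sketch of that flow (the queue name is hypothetical; the agent command goes in a terminal on the worker machine):

```python
from clearml import Task

# First, on the worker machine, start an agent in docker mode (terminal command):
#   clearml-agent daemon --queue default --docker

task = Task.init(project_name="my_project", task_name="yolov5 training")

# Stop executing locally and enqueue this very script for the agent to run.
task.execute_remotely(queue_name="default")

# Everything below only runs on the agent, inside the docker container.
# ... YOLOv5 training code ...
```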

4 months ago
How To Version Models While Training In Production

Hey @<1639074542859063296:profile|StunningSwallow12> , what exactly do you mean by "training in production"? Maybe you can also elaborate on what kind of models.

ClearML in general assigns a unique Model ID to each model, but if you need some other way of versioning, we have support for custom tags, and you can apply those programmatically on the model
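
For instance, a hedged sketch attaching a version tag to a manually registered model (all names are hypothetical):

```python
from clearml import Task, OutputModel

task = Task.init(project_name="my_project", task_name="train")

# Attach your own versioning scheme as tags on the model.
model = OutputModel(task=task, name="my_model", tags=["v1.2.0", "production"])
model.update_weights(weights_filename="model.pt", auto_delete_file=False)
```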

4 months ago
Hi, I Configured An On-Prem File Server For Clearml Which Is Mounted On My PC.

Ok, then launch an agent using clearml-agent daemon --queue default . That way, your steps will be sent to the agent for execution. Note that in this case, you shouldn't change your code snippet in any way.

one year ago
Hey Guys

Hey, yes, the reason for this issue seems to be our currently limited support for lightning 2.0. We will improve the support in the following releases. Right now, one way I can recommend to circumvent this issue is to use torch.save if possible, because we fully support automatic model capture on torch.save calls.
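
For example, a minimal sketch of the torch.save route (the model here is a stand-in):

```python
import torch
from clearml import Task

task = Task.init(project_name="my_project", task_name="lightning training")

model = torch.nn.Linear(10, 2)  # stand-in for your lightning module's network
# ... training loop ...

# ClearML hooks torch.save, so this checkpoint is captured as an output model.
torch.save(model.state_dict(), "model.pt")
```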

one year ago
Hello Everyone, I Am Having Issues With The GCP Autoscaler. This Is In The Output Logs:

Hey @<1529271085315395584:profile|AmusedCat74> , I may be wrong, but I think you can’t attach a GPU to an e2 instance; it should be at least an n1, no?

8 months ago
Hello Everyone, I Would Like To Know What Your Projects Are In Terms Of The Usage Of Clearml Pipelines? What Are Your Most Elaborate Pipelines? So Far, I Am Using "Only" A Pipeline That Looks Like This:

Hey @<1523704157695905792:profile|VivaciousBadger56> , I was playing around with the Pipelines a while ago and managed to create one where I have a few steps at the beginning creating ClearML datasets like users_dataset , sessions_dataset , prefferences_dataset , then a step which combines all 3, then an independent data-quality step which runs in parallel with the model training. Also, if you want to have some fun, you can try to parametrize your pipelines and run HPO on...

one year ago
Hi Team,

Hello @<1533257278776414208:profile|SuperiorCockroach75> , thanks for asking. It’s actually unsupervised, because modern LLMs are all trained to predict next/missing words, which is an unsupervised method

11 months ago
My Project Pipeline Is Running Well And Good But Instead Of Completed It Is Coming As Aborted After Complete Execution

Is this a jupyter notebook or something? Can you download it properly as either a .ipynb or .py file?

one year ago
I Know At Least One Other Person Has Posted About This Previously, But When I Interact With

It happens due to an internal use of Dataset.get : the larger the dataset, the more verbose it will be. We’ll fix this in the upcoming releases.

one year ago
How Can I Send A Composed Chunk Of Code For Remote Execution

Ah, I think I understand. To execute a pipeline remotely you need to use pipe.start() ( None ), not task.execute_remotely . Do note that you can run tasks remotely without exiting the current process/closing the notebook (see the exit_process argument here: None ), but you won't be able to return any values from this task....
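
A sketch of both calls (queue names hypothetical):

```python
from clearml import PipelineController, Task

# Pipelines: launch the controller itself on a queue with pipe.start().
pipe = PipelineController(name="my_pipeline", project="my_project")
# ... pipe.add_function_step(...) / pipe.add_step(...) ...
pipe.start(queue="services")  # does not require killing the current process

# Plain tasks: execute_remotely() with clone=True keeps the notebook alive
# (exit_process=False is only allowed together with clone=True).
task = Task.init(project_name="my_project", task_name="my_task")
task.execute_remotely(queue_name="default", clone=True, exit_process=False)
```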

6 months ago