AgitatedDove14
Moderator
49 Questions, 8060 Answers
  Active since 10 January 2023
  Last activity 9 months ago

Reputation

0

Badges 1

25 × Eureka!
0 Hey Just Wanting To Know: What Is The Recommended Best Practice To Write Clearml Pipelines Between Controller And Decorators ?

So it seems decorator is simply the superior option?

Kind of yes 😊

In which case would we use add_task() option?

When you have existing Tasks, and the piping is very straightforward (i.e. the input/output in the code is basically referencing other Tasks/artifacts, and there is no real need to do any magic for serializing/deserializing data between steps).
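
For illustration, a minimal sketch of the two styles, assuming the PipelineDecorator / PipelineController APIs (the project, task names, and the override value below are hypothetical):

from clearml import PipelineController
from clearml.automation.controller import PipelineDecorator

# Decorator style: steps are plain functions; return values are serialized
# and passed between steps automatically.
@PipelineDecorator.component(return_values=["processed"])
def step_one(raw: int):
    return raw * 2

@PipelineDecorator.pipeline(name="my_pipeline", project="examples", version="1.0")
def my_pipeline(raw: int = 1):
    print(step_one(raw))

# Controller style: wire up existing Tasks, passing only parameters
# (no return-object serialization between steps).
def build_from_existing_tasks():
    pipe = PipelineController(name="my_pipeline", project="examples", version="1.0")
    pipe.add_step(
        name="train",
        base_task_project="examples",
        base_task_name="existing training task",
        parameter_override={"General/dataset_id": "abc123"},  # hypothetical value
    )
    pipe.start()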

2 years ago
0 Given I Want To Run A Task In A Pipeline Using A Base Task Id. One Of My Steps Just Finds The Latest Model To Use. I Want The Task To Output The Id, And The Next Step To Use It. How Would I Go About Doing This?

but I can't seem to figure out a way to do something similar using a task in add_step

VexedCat68 With "add_step" the assumption is that the Task you are adding is self-contained (i.e. there is no "return object" to serialize). This means you can only pass arguments, or use the artifacts that the Task (i.e. the step) will create, which obviously requires knowing in advance what the step produces. Makes sense?
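
As a rough sketch of what that looks like, assuming the first step's Task registers an artifact when it runs (the task names and the "model_id" artifact are hypothetical):

from clearml import PipelineController

pipe = PipelineController(name="model-selection", project="examples", version="1.0")

# Existing, self-contained Task that registers a "model_id" artifact when it runs.
pipe.add_step(
    name="find_latest_model",
    base_task_project="examples",
    base_task_name="find latest model",
)

# The next step can only override parameters; here it references the artifact
# created by the previous step.
pipe.add_step(
    name="evaluate",
    parents=["find_latest_model"],
    base_task_project="examples",
    base_task_name="evaluate model",
    parameter_override={"General/model_id": "${find_latest_model.artifacts.model_id.url}"},
)

pipe.start()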

3 years ago
0 Hi, I'M Trying To Install A New Server, This Is A Fresh Ubuntu 18.04 Install. When I Try To Run The Docker Composer Up Command I Get Error Messages Like This One:

CourageousLizard33 VM?! I thought we were talking about a fresh install on Ubuntu 18.04?!
Is the Ubuntu in a VM? If so, I'm pretty sure 8GB will do, maybe less, but I haven't checked.
How much did you end up giving it?

4 years ago
0 Is There Any Simple Way To Orchestrate A Batch To Train A Model With Different Features (In Order To Do Feature Selection, For Example) Through A Single .Py File? I Saw The Following Example

using only a subset of the features

ShallowGoldfish8 if you have some parameter that controls it (i.e. selects different features) then you can launch it with two sets of parameters.
Am I missing something?
for example:
from clearml import Task

my_features_select = {"type": "set_a"}
Task.current_task().connect(my_features_select)

if my_features_select["type"] == "set_a":
    # do something
    pass
else:
    # do something else
    pass

wdyt?

2 years ago
0 Hello Everyone. I'M Getting Started With Clearml. I'M Trying Hpo Atm And Have Successfully Run The Base Task. When Running The Clone Of The Base Task In One Of The Agents, I'M Getting Following Error. Any Suggestions? Tia

The base task is self-contained, i.e. it downloads the training/eval data directly and has direct access to it

I think this is the main issue, how come it does not catch it? Are you using argparser ?

2 years ago
0 I Have A Problem With Clearml-Agent, The Agent Is Cloning Repository, But When Executing This Command:

UpsetTurkey67 are you saying there is a sym link in the original repository, and when it copies it, it breaks the symlink ?

2 years ago
0 Hi

It all depends how we store the meta-data on the performance. You could actually retrieve it from, say, the val metric and deduce the epoch based on that.
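
A hedged sketch of that, using Task.get_reported_scalars() (the task id and the "val"/"loss" metric and series names are placeholders):

from clearml import Task

task = Task.get_task(task_id="<task_id>")        # hypothetical task id
scalars = task.get_reported_scalars()            # {title: {series: {"x": [...], "y": [...]}}}

val_loss = scalars["val"]["loss"]                # placeholder metric/series names
best_idx = min(range(len(val_loss["y"])), key=lambda i: val_loss["y"][i])
print("best value:", val_loss["y"][best_idx], "at iteration:", val_loss["x"][best_idx])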

4 years ago
0 <image>

How do I reproduce it ? (all the processes are on the same machine?)

3 years ago
0 Is There Any Way To Clear The Installed Packages Of A Task Programmatically? (I.E. Using The Python Sdk And Not The Ui)

GiddyTurkey39

A flag would be really cool, just in case there's any problem with the package analysis.

Trying to think whether this should be a system-wide flag (i.e. in trains.conf) or a flag in task.init.
What do you think?

4 years ago
0 Hello

FYI: pipeline callbacks are already part of v1.0 πŸ™‚
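
A hedged sketch of attaching them, assuming the pre/post execute callback arguments of PipelineController.add_step (the project and task names are hypothetical):

from clearml import PipelineController

def pre_execute(pipeline, node, param_override):
    # called before the step is launched; return False to skip it
    print("about to launch", node.name, param_override)
    return True

def post_execute(pipeline, node):
    # called after the step completes
    print("finished", node.name)

pipe = PipelineController(name="callbacks-demo", project="examples", version="1.0")
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="train task",        # hypothetical existing Task
    pre_execute_callback=pre_execute,
    post_execute_callback=post_execute,
)
pipe.start()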

3 years ago
0 Hello

Sorry I need the full log ... feel free to DM it to me

one year ago
0 Hey, Here’S A Quickie – Is It Possible To Specify Different “Types” Of Input Parameters (“Args/…“) Such That They Are Handled Nicely On The Front End? Basically, I Have A Task That Needs A Datetime As Input And It Would Be Really Nice To Have A Gui To Do

I basically just mean having a date input like you would in excel where it brings up a calendar and a clock if it’s time – and defaults to β€œnow”

I would love that as well, but I kind of suspect the frontend people will say these things tend to start small and grow into a huge effort. At the moment the UI is basically plain text and the casting is done on the SDK side.
You can however provide type information and help (you can see it when you hover over the arguments on th...
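
For example, a small hedged sketch of passing type and help through argparse so the plain-text UI value is cast on the SDK side (the argument name and date format are illustrative):

import argparse
from datetime import datetime

parser = argparse.ArgumentParser()
parser.add_argument(
    "--start-time",
    type=datetime.fromisoformat,             # the string value from the UI is cast here
    default=datetime.now().isoformat(),      # defaults to "now"
    help="Start time as an ISO date-time, e.g. 2021-01-10T12:00:00",
)
args = parser.parse_args()
print(args.start_time)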

one year ago
0 Has Anyone Had Success Using Clearml With Huggingface Models? I Create My Hf

Hi @<1523702786867335168:profile|AdventurousButterfly15>
I do not think they log more than that ?!
(what happens if you use TB?)

one year ago
0 Is There Possibility Of Using Centralized Authentication For Clearml Web Ui? I Mean

Hi DisgustedDove53
Unfortunately SSO in general is not part of the open-source offering (the integration is way too complex and would cause too many security issues).
On the paid tier there is full SSO integration including SAML. I'm pretty sure it also has a permission system on-top so you can control visibility / access inside the clearml platform.

3 years ago
0 How Can I Remove A Service With Clearml-Serving?

What does spin mean in this context?

This line:
docker-compose --env-file example.env -f docker-compose-triton-gpu.yml up

But these have: different task ids, same endpoints (from looking through the tabs)
So I am not sure why they are here and why not somewhere else

You can safely ignore them for the time being πŸ™‚

but is it true that I can have multiple models on the same docker instance with different endpoints?

Yes! this is exactly the idea (and again I'm not sure ...
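
As for actually removing a served model, a hedged sketch with the clearml-serving CLI (the service id and endpoint name are placeholders):

clearml-serving --id <serving_service_id> model remove --endpoint "<endpoint_name>"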

2 years ago
0 Hi, Can I Run An

I know that there is a possibility to set up some budget - for example, seconds of running after which optimization stops. But is there a possibility to specify a boolean condition for when work should stop?

RoundMosquito25 you mean when you reach a limit of loss<Threshold or something similar ?

2 years ago
0 Hi I Have A Most Probably A Beginer Question Abour Loading The Data In Pycharm And Later On In Google Colab From An Dataset From Clearml. I Used From Page:

If I access the dataset on the same location directly it works fine:

wait, I'm confused, how is it the dataset is there? did it download the dataset?

are you saying this line for example will fail? (assuming you actually have a dataset by that name)

data_path = Dataset.get(dataset_name="002_Datenset_MASAM_for_fintuning", alias="002_Datenset_MASAM_for_fintuning").get_local_copy()
one year ago
0 Hi! I Have Local Minio Setup, Via Minio Browser I Can Upload 50-100 Mb Per Second As Its Local. But When I Try To Use Task.Upload_Artifact It Uploads 500 Kb Per Second. Does Anyone Have An Idea About This?

Do StorageManager.upload and upload_artifact use the same methods?

Yes they both use StorageManager.upload

Is the only difference is task being async?

Two differences:
1. The upload being async.
2. Registering the artifact on the experiment: StorageManager will only upload, whereas upload_artifact will make sure the file is registered as an artifact on the experiment, together with all of the artifact's properties.
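
To make the difference concrete, a hedged sketch (the file path, bucket, and artifact name are placeholders):

from clearml import Task
from clearml.storage import StorageManager

task = Task.init(project_name="examples", task_name="upload demo")

# Plain upload: copies the file to the destination, nothing is registered on the experiment.
StorageManager.upload_file(
    local_file="model.pkl",                        # hypothetical local file
    remote_url="s3://my-bucket/models/model.pkl",  # hypothetical destination
)

# Artifact upload: uploads the file and registers it (with its properties) on the experiment.
task.upload_artifact(name="model", artifact_object="model.pkl")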

4 years ago
0 "5451Af93E0Bf68A4Ab09F654B222Ccae": { "1B790A3Da2E8D6Cd939Cf271694Fe81B": { "Metric": ":Monitor:Gpu", "Variant": "Gpu_0_Utilization", "Value": 0.0, "Min_Value": 0.0,

Is gpu_0_utilization also in % then?

Correct πŸ™‚

I was trying to find what those min and max values are for the above metrics.

Oh that makes sense, notice that you can get the values over time, so you can track the usage over the experiment lifetime (you can of course see it in the Scalar tab of the experiment)

2 years ago
0 Hello! I'M Trying To Make A Simple Eval.Py Script That Will Go Pull The Best Model Of A Given Experiment, Load It Locally And Evaluate It On Whatever Data I Give. Question 1: Is There A Standard Way Documented Somewhere To Do This? Question 2: I'M Loadin

Hi MistakenDragonfly51
Notice that Models are their own entity; you can query them based on tags/projects/names, etc.
Querying and getting Models is done by Model class:
https://clear.ml/docs/latest/docs/references/sdk/model_model#modelquery_models

task.get_models() is always empty.

How come there are no Models on the Task? (in other words how come this is empty?)
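
For reference, a hedged sketch of both ways to look models up (the project, name, tags, and task id are placeholders):

from clearml import Model, Task

# Query models as their own entity, independent of any Task.
models = Model.query_models(
    project_name="examples",
    model_name="my-model",
    tags=["best"],
    only_published=False,
)
for m in models:
    print(m.id, m.name, m.url)

# Models attached to a specific Task; if the training code never registered
# an output model, the "output" list here will be empty.
task = Task.get_task(task_id="<task_id>")
print(task.get_models())    # {'input': [...], 'output': [...]}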

2 years ago
0 Hi, I'M Trying To Upload My Dataset Via

I think the limit is in the free tier hosting ...

3 years ago
0 Hey, I'Ve Spin Up A Worker Using Aws Autoscaler In Clearml Self Hosted Server Running On Kubernetes. However, I Can'T Find The Agent On The Workers Page. Any Idea Why It'S Not Showing Up? Full_Log:

@<1595587997728772096:profile|MuddyRobin9> are you sure it was able to spin the EC2 instance ? which clearml version autoscaler are you running ?

one year ago
0 Hey There, I Would Like To Increase The

Set it on the PID of the agent process itself (i.e. the clearml-agent python process)

3 years ago
0 [Task Gets Interrupted / Aborted / Reset When In Offline Mode] For Local Testing, We Have Added A

Let me try to build a minimal reproducible version

Thank you!

2 years ago