AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 5 months ago

Hello everyone, I'm working on building a training pipeline using ClearML and I'm encountering some challenges in assembling the pipeline.

Hi @<1619505588100665344:profile|GrievingHare27>

My understanding is that initiating a task with

Task.init()

captures the code for the entire notebook. I'm facing difficulties when attempting to build a final training pipeline (in a separate notebook) that uses only certain functions from the other notebooks/tasks as pipeline steps.

Well, this is kind of the limit of working with Jupyter notebooks; referencing code from one notebook in another is not really feasible (of co...
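
To make the idea concrete, here is a minimal sketch (not taken from your notebooks) of turning standalone functions into pipeline steps with PipelineController.add_function_step; the function bodies, project name, and paths are placeholders:

```python
from clearml import PipelineController

# Placeholder step functions - in practice these hold the logic you
# currently keep in the separate notebooks.
def preprocess(raw_path: str):
    # ... load and clean data ...
    return "/tmp/clean.csv"

def train(clean_path: str):
    # ... fit a model on the cleaned data ...
    return "model_id"

pipe = PipelineController(name="training-pipeline", project="examples", version="1.0.0")

# Each plain Python function becomes an independent pipeline step.
pipe.add_function_step(
    name="preprocess",
    function=preprocess,
    function_kwargs={"raw_path": "/data/raw.csv"},
    function_return=["clean_path"],
)
pipe.add_function_step(
    name="train",
    function=train,
    # Reference the previous step's returned value
    function_kwargs={"clean_path": "${preprocess.clean_path}"},
    function_return=["model_id"],
)

# Run everything locally for a quick test; pipe.start(queue=...) runs on agents.
pipe.start_locally(run_pipeline_steps_locally=True)
```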

12 months ago
Hi everyone

Hi MinuteCamel2

Can I disable it from automatically uploading model checkpoints to ClearML servers?

Maybe this one can help :)
https://www.youtube.com/watch?v=etGjxOKG9lo
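
For reference, a minimal sketch of one way to switch off the automatic checkpoint logging in code, assuming the checkpoints come from PyTorch/TensorFlow (adjust the framework keys to whatever you actually use):

```python
from clearml import Task

# Disable automatic model-checkpoint logging for specific frameworks;
# everything else (scalars, console output, etc.) is still captured.
task = Task.init(
    project_name="examples",
    task_name="no-auto-checkpoints",
    auto_connect_frameworks={"pytorch": False, "tensorflow": False},
)
```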

I deleted all of the models from my ClearML project but I still receive this message. Do you know why?

It might take it a few hours to update... 😞

one year ago
Hi, I have a question about queue management of ClearML agents. I am still a beginner to ClearML and still discovering the potential it has, and as of now it has amazed me with its versatile features

Hi UpsetBlackbird87

I might be wrong, but it seems like ClearML does not monitor GPU pressure when deploying a task to a worker, but rather relies only on its configured queues.

This is kind of accurate. The way the agent works is that you allocate a resource for the agent (specifically a GPU), then set the queues (plural) it listens to (by default, priority ordered). Each agent then individually pulls jobs and runs them on its allocated GPU.
If I understand you correctly, you want multiple ...

3 years ago
Hello everyone! First, thanks a lot to everyone that made ClearML possible, I've been looking for a tool like that for years. I just installed the open source server (

Hi MistakenDragonfly51

Hello everyone! First, thanks a lot to everyone that made ClearML possible,


To your questions 🙂
Long story short, no, unless you really want to compile the dockers, and I can't see the real upside there.
Yes, add the following mount /opt/clearml.conf:/root/clearml.conf here
https://github.com/allegroai/clearml-server/blob/5de7c120621c2831730e01a864cc892c1702099a/docker/docker-compose.yml#L154
and configure your host's /opt/clearml.conf with ...

2 years ago
Hello all, good morning! Can you help me better understand the distinction of ClearGPT? How is it different from ChatGPT and what GPT model are we using in ClearML? Thank you in advance!

Hi @<1628565287957696512:profile|AloofBat92>
Yeah, the name is confusing, we should probably change that. The idea is that it is a low-code / high-code platform to train your own LLM and deploy it. Not really a 1:1 comparison with ChatGPT, more like GenAI for enterprises. Makes sense?

11 months ago
I'm using CatBoost for training, but sadly it does not have a native integration with ClearML (XGBoost and LightGBM do have integrations). But CatBoost writes down training logs in TensorBoard format (into a

Yes, but as you mentioned, everything is created inside the lib, which means the Python side is not able to intercept the metrics so that ClearML can send them to the backend.
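
As a possible workaround (not a built-in integration), the per-iteration metrics CatBoost keeps in Python after training can be reported manually; the synthetic data below is only there to make the sketch runnable:

```python
import numpy as np
from catboost import CatBoostClassifier
from clearml import Task

task = Task.init(project_name="examples", task_name="catboost-manual-metrics")

# Tiny synthetic dataset just so the sketch runs end to end
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), rng.integers(0, 2, size=200)
X_val, y_val = rng.normal(size=(50, 4)), rng.integers(0, 2, size=50)

model = CatBoostClassifier(iterations=50, verbose=False)
model.fit(X, y, eval_set=(X_val, y_val))

# CatBoost keeps the per-iteration metrics in Python memory, so they can be
# pushed to ClearML manually even though they are not auto-intercepted.
logger = task.get_logger()
for split, metrics in model.get_evals_result().items():
    for metric_name, values in metrics.items():
        for iteration, value in enumerate(values):
            logger.report_scalar(title=metric_name, series=split,
                                 value=value, iteration=iteration)
```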

3 years ago
Thought I would share this. Something to think about over the new year.

Thanks SubstantialElk6 !
Happy new year 🎉 🍺 🍾 🎇

2 years ago
What is being stored exactly in

So how do I solve the problem? Should I just relaunch the agents? Because they can't execute jobs now

Are you running in docker mode?
If so, you can actually delete the mapped files (they will still be available inside the docker); just make sure you only delete them X hours after they were created, and you should be fine.
wdyt?
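
A rough sketch of the "only delete files older than X hours" idea on the host side; the cache path and age threshold are placeholders:

```python
import time
from pathlib import Path

CACHE_DIR = Path("/path/to/mapped/cache")  # placeholder for the mapped folder
MAX_AGE_HOURS = 24                         # only remove files older than this

cutoff = time.time() - MAX_AGE_HOURS * 3600
for path in CACHE_DIR.rglob("*"):
    # Skip anything modified recently - it may still be in use inside the docker.
    if path.is_file() and path.stat().st_mtime < cutoff:
        path.unlink()
```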

2 years ago
Hi! I have a local MinIO setup; via the MinIO browser I can upload 50-100 MB per second as it's local. But when I try to use Task.upload_artifact it uploads 500 KB per second. Does anyone have an idea about this?

Anyhow, if the StorageManager.upload was fast, upload_artifact is calling that exact function, so I don't think we actually have an issue here. What do you think?
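
If it helps, a quick sketch of that comparison, timing both paths on the same file (the MinIO URL is a placeholder and needs matching credentials in clearml.conf):

```python
import os
import time
from clearml import Task, StorageManager

task = Task.init(project_name="examples", task_name="upload-speed-check")

# Create a ~50 MB test file so both paths upload the same payload
local_file = "/tmp/upload_test.bin"
with open(local_file, "wb") as f:
    f.write(os.urandom(50 * 1024 * 1024))

remote_url = "s3://my-minio:9000/bucket/upload_test.bin"  # placeholder MinIO URL

# Direct upload through StorageManager
t0 = time.time()
StorageManager.upload_file(local_file, remote_url)
print("StorageManager.upload_file: %.2f sec" % (time.time() - t0))

# Same file as a task artifact (uses the same upload call underneath)
t0 = time.time()
task.upload_artifact(name="upload_test", artifact_object=local_file, wait_on_upload=True)
print("upload_artifact: %.2f sec" % (time.time() - t0))
```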

4 years ago
Hello, I'm a bit lost in the docs for the MLOps; I have a script which already integrates ClearML logging. Should I use clearml-task to launch it on an agent? (I already have a clearml-server and a clearml-agent running.)

Could it be the code is not in a git repository?
ClearML supports either a single script or a git repository, but not a collection of standalone files. wdyt?

3 years ago
Hi! I have a local MinIO setup; via the MinIO browser I can upload 50-100 MB per second as it's local. But when I try to use Task.upload_artifact it uploads 500 KB per second. Does anyone have an idea about this?

upload_artifact will actually do two things: upload the file to the trains-server, and register it as an artifact on the experiment.
What did you mean by "register the artifact manually"? You still need to upload the file to the trains-server (so it is later accessible).
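
In code, both steps are covered by the single call, and the registration is what lets you fetch the file back later; the file name and content below are placeholders:

```python
from pathlib import Path
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact-demo")

# A small file to upload (placeholder content)
Path("predictions.csv").write_text("id,score\n1,0.9\n")

# One call covers both steps: the file is uploaded to the server/storage
# and registered as an artifact on this experiment.
task.upload_artifact(name="predictions", artifact_object="predictions.csv",
                     wait_on_upload=True)

# Because it was registered, another process can later pull it back by task id:
fetched = Task.get_task(task_id=task.id)
print(fetched.artifacts["predictions"].get_local_copy())
```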

4 years ago
Hi! I have a local MinIO setup; via the MinIO browser I can upload 50-100 MB per second as it's local. But when I try to use Task.upload_artifact it uploads 500 KB per second. Does anyone have an idea about this?

When I give my MinIO to the output_uri argument, it uploads 500 KB/sec as before.

But it worked well when using StorageManager and uploading to the minio directly, is that correct?

.. I give my Minio to output_uri argument

How long did it take to run the demo code I posted?
(The one you mentioned took 0.16s to run locally)
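
For completeness, a minimal sketch of what pointing output_uri at the MinIO bucket looks like; the endpoint is a placeholder and the matching sdk.aws.s3 credentials must be set in clearml.conf:

```python
from clearml import Task

# Automatically-saved models and artifacts go to the MinIO bucket instead of
# the default files server. The endpoint below is a placeholder.
task = Task.init(
    project_name="examples",
    task_name="minio-output",
    output_uri="s3://my-minio:9000/clearml-bucket",
)
```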

4 years ago
After I finish training a model, I want to call Logger.report_scalars to help monitor inferencing status (we do a lot of batch), but after the model finishes training, scalars are no longer accepted by the task as it is considered completed. Help!

@<1523711619815706624:profile|StrangePelican34> are you saying that after the "with" block the task is marked completed? How is that possible? Is this done manually?

one year ago
Hi there, I've encountered a problematic behavior in Python. When defining an argument a default value of

PompousBeetle71 quick question, will you ever want to pass an empty string? The reason for asking is that it is either one or the other; there is no way for Trains to actually differentiate (from the web UI perspective, this is just an empty string field...)

4 years ago
Hi there, I've encountered a problematic behavior in Python. When defining an argument a default value of

PompousBeetle71 BTW: if you remove the type=str from the argparse argument, it will do what you want: None will stay None (instead of ''), and all other values will be of type str as this is always the default 🙂
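
A quick illustration of the behaviour (the argument names are placeholders):

```python
import argparse

parser = argparse.ArgumentParser()
# With an explicit type=str, a None default re-injected by the agent comes
# back as '' - there is no way to express None as a string value.
parser.add_argument("--checkpoint", type=str, default=None)
# Without type=, the default None is kept as None when nothing is passed,
# while any value given on the command line is still a plain str.
parser.add_argument("--resume-from", default=None)

args = parser.parse_args(["--resume-from", "weights.pt"])
print(args.checkpoint)                  # None (not overridden locally)
print(type(args.resume_from).__name__)  # str - even without type=str
```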

4 years ago
Hi there, I've encountered a problematic behavior in Python. When defining an argument a default value of

PompousBeetle71 If this is argparse and the type is defined, the trains-agent will pass the equivalent in the same type; with str that amounts to ''. Makes sense?

4 years ago