AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 5 months ago

0 Is It Possible To Import User-Defined Modules When Wrapping Tasks/Steps With Functions And Decorators? As Far As I Know, When I Want To Define A Single “Step” In A Pipeline Using Function For Decorator, I Need To Import All Required Libs Inside This Wrapp

Great ascii tree 🙂
GrittyKangaroo27 assuming you are doing:

@PipelineDecorator.component(..., repo='.')
def my_component():
    ...

The function my_component will be running in the repository root, so in theory it could access the packages 1/2
(I'm assuming here directory "project" is the repository root)
Does that make sense ?
BTW: when you pass repo='.' to @PipelineDecorator.component it takes the current repository that exists on the local machine running the pipel...
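For reference, a minimal sketch of what that could look like; the package name "project", the module "utils" and the pipeline/project names below are placeholders, not anything from the original thread:

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=['result'], repo='.')
def my_component():
    # repo='.' means the component runs from the repository root,
    # so the local package should be importable; import inside the
    # function body because it runs standalone on the agent
    from project.utils import do_work  # hypothetical local module
    return do_work()

@PipelineDecorator.pipeline(name='example pipeline', project='examples', version='0.1')
def run_pipeline():
    print(my_component())

if __name__ == '__main__':
    PipelineDecorator.run_locally()
    run_pipeline()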

2 years ago
0 Hi, I’m Getting This Error When I Try To Run Task On A Remote Agent With Docker Mode Web UI:

Hi BurlyRaccoon64
Yes we did, the latest clearml-agent solves the issue, please try:
'pip3 install -U --pre clearml-agent'

2 years ago
0 Hi! Had A Basic Question: I Want To Retrieve All Tasks Created By A ClearML User Id (Using Task.get_tasks() And Filter). Is It Possible To Get User Id Of The Current User Configured In The clearml.config Using The ClearML Python API? Thanks In Advance!

I think it is in the JWT token the session gets from the server,
a bit of a hack but should work 🙂

session = task.session # or Task._get_default_session() 
my_user_id = session.get_decoded_token(session.token)['identity']['user']
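
Putting it together with Task.get_tasks(), a rough sketch; the project/task names are placeholders, and the 'user' entry in task_filter is assumed to be passed through to the server-side tasks.get_all call:

from clearml import Task

task = Task.init(project_name='examples', task_name='who am i')  # placeholder names
session = task.session  # or Task._get_default_session()
my_user_id = session.get_decoded_token(session.token)['identity']['user']

# filter tasks by the current user's id (assumes the server supports the 'user' filter)
my_tasks = Task.get_tasks(task_filter={'user': [my_user_id]})
print([t.name for t in my_tasks])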
7 months ago
0 I Am Using OpenNMT-tf (2.18.1) And ClearML (1.1.2) For Training And Testing My Translation Models. I Am Wanting To Register The Incremental BLEU Scores And Final Test Data With ClearML (For Plotting, Comparison, Etc.), But It Is Not Working. I Cannot Fi

From the docs I think what's going on is that https://opennmt.net/OpenNMT-tf/package/opennmt.Runner.html#opennmt.Runner.train is spinning up a new subprocess, and the training itself happens in the subprocess.
If this is the case, it would explain the lack of automagic, as the subprocess is missing the "Task.init" call.
wdyt, could that be the case?
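
If that is indeed the cause, a workaround sketch is to report the scores manually from the main process; the metric/series names and the evaluation_results list below are placeholders:

from clearml import Task

task = Task.init(project_name='translation', task_name='opennmt training')  # placeholder names
logger = task.get_logger()

evaluation_results = [21.3, 23.8, 24.1]  # placeholder: your per-checkpoint BLEU scores
for step, bleu_score in enumerate(evaluation_results):
    # explicit scalar reporting does not rely on the automagic hooks
    logger.report_scalar(title='BLEU', series='validation', value=bleu_score, iteration=step)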

2 years ago
0 Hello Again, How Can I Use The

Sure thing 🙂

3 years ago
0 By The Way Guys, Your Survey Link Points To An Error.

Thanks TrickyRaccoon92
I think it's about time we remove the survey link anyhow 🙂
I'll make sure it happens...

3 years ago
0 <image>

we need to evaluate the result across many random seeds, so each task needs to log the result independently.

Ohh that kind of makes sense to me 🙂
Yes I'm also getting:

/usr/local/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 74 leaked semaphores to clean up at shutdown
  len(cache))

Not sure about that ...

3 years ago
0 Hello, I’m Trying To Update Our ClearML Server Running On Kubernetes (1.6.0-213) But I Get This Error:

should i only do mongodb

No, you should do all 3 DBs: Elasticsearch (ELK), MongoDB, and Redis

one year ago
0 Hi There, I’ve Been Trying To Play Around With The Model Inference Pipeline Following

Also what do you have in the "Configuration" section of the serving inference Task?

one year ago
0 Hi

Hi SarcasticSparrow10,
1. So yes it does; this is more efficient when using pytorch loaders, and in some other situations. To disable it, add to your clearml.conf:
sdk.development.report_use_subprocess = false
2. Interesting error, maybe we can revert to "thread mode" if running under a daemon. (I have to admit, I'm not sure why python has this limitation, let me check it...)
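
In the file itself that setting lives under the sdk.development section; a minimal sketch of the nested (HOCON) form:

sdk {
    development {
        # report from the main process instead of a forked reporting subprocess
        report_use_subprocess: false
    }
}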

3 years ago
0 Hey, What Is The Exact Difference Between

It should work 🙂 as long as the versions match; if they don't, the venv will install the version you need (which is great, the only penalty is the install; download-wise it will be cached)

4 years ago
0 Hi Folks, I Am Having An Issue I Can't Properly Understand: I Have Tried To Run The "Dataset" Example From The Official ClearML Repository (From My Laptop) For Some Reason It Got Stuck, So I Killed The Process, But In ClearML UI It Still Results As "Runn

You can definitely configure the watchdog to set the timeout to 15 min; it should not have any effect on running processes, they basically send an alive ping every 30 sec

2 years ago
0 Hi I Saw This On The ClearML-Agent Docs But Other Than The Docker Image, I'm Not Sure How To Integrate This With ClearML Py And ClearML-Server. Please Advise.

Hi SubstantialElk6
Yes, this is the queue the glue will pull jobs from and push into the k8s. You can create a new queue from the UI (go to the workers & queues page, then the Queues tab, and press "create new").
Ignore it 🙂 this is if you are using config maps and need TCP routing to your pods.
As you noted, this is basically all the arguments you need to pass for (2). Ignore them for the time being.
This is the k8s overrides to use if launching the k8s job with kubectl (basically --override...

3 years ago
0 Any Idea Why I Would Be Getting The Following Error When Running A Task In A ClearML-Agent? (Python 3.7.9, package_manager.type = conda)

I am using importlib and this is probably why everything's weird.

Yes that will explain a lot 🙂
No worries, glad to hear it worked out

3 years ago
0 Hi, I'm Trying To Run Task.init Inside A Jupyter Notebook For The First Time (Used It A Lot Before In Normal Python Scripts), And I Get A Warning-

I did not start with python -m, as a module. I'll try that

I do not think this is the issue.
It sounds like anything you do on your specific setup will end with the same error, which might point to a problem with the git/folder?

3 years ago
0 Hi! I've Been Trying Out The

(2) yes, weekdays with a specific hour should do exactly that :)
(3) yes I see your point, maybe we should add a boolean allowing you to run immediately?
Back to (1), let me see if I can reproduce; anything specific I need to add to the schedule call?
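
For (2), a rough sketch of what the schedule call could look like, assuming the clearml.automation.TaskScheduler API; the task id, queue names and times are placeholders:

from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
# clone + enqueue the given task every weekday at 08:30
scheduler.add_task(
    schedule_task_id='aabbccdd11223344',  # placeholder task id
    queue='default',
    weekdays=['monday', 'tuesday', 'wednesday', 'thursday', 'friday'],
    hour=8, minute=30,
)
scheduler.start_remotely(queue='services')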

3 years ago
0 I Am Back With Another Question: Is There A File Similar To The

ReassuredTiger98 no, but I might be missing something.
How do you mean project-specific?

3 years ago
0 Is It Possible To Add A Callback For A Pipeline From A Step?

Ephemeral Dataset, I like that! Is this like splitting a dataset, for example, then training/testing, and deleting it when done? Making sure the entire pipeline is reproducible, but without storing the data long term?

3 years ago
0 Hi, I Am Trying To Use The Aws Autoscaler To Assign Instance Profiles To New Machines. This Is A Better Way Than Managing Credentials. I Added The Configuration To The Autoscaler Config Like So:

RoughTiger69

"Apparently, …, doesn't populate that dict with any keys that don't already exist in it."

Are you saying new entries are not added to the dict even if they are on the Task (i.e. only entries that already exist in the dict are populated)?
But you already have all the entries defined here:
https://github.com/allegroai/clearml/blob/721569bb77d89d89e5b4f32a0ed98311c4574650/examples/services/aws-autoscaler/aws_autoscaler.py#L22

Since all this is ha...

2 years ago
0 Hi! I'm Currently Considering Switching To ClearML. In My Current Trials I Am Using Up The API Calls Very Quickly Though. Is There Some Way To Limit That? The Documentation Is A Bit Sparse On What Uses How Many API Calls. Is It Possible To Batch Them For

FlutteringWorm14 an RC is out (1.7.3dc1) with the ability to configure it from clearml.conf:
you can now set sdk.development.worker.report_event_flush_threshold in clearml.conf
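
In the file it sits under the sdk.development.worker section; a small sketch of the nested form (the threshold value here is just an arbitrary example):

sdk {
    development {
        worker {
            # accumulate this many report events before flushing them to the server
            report_event_flush_threshold: 100
        }
    }
}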

one year ago
0 Hi All

This one should work:

path = task.connect_configuration(path, name=name)
if task.running_locally():
    my_params = read_from_path(path)
    my_params = change_params(my_params)  # change some stuff
    # store back the change; my_params is assumed to be the content of the param file (text)
    task.set_configuration_object(name=name, config_text=my_params)

3 years ago
0 Hey, I Would Like My Experiment To Call At Some Point A Cli Program Installed As A Dependency Of The Experiment. Here Is What I Do:

So I'm guessing the cli will be in the folder of the python executable:

import sys
from pathlib2 import Path
(Path(sys.executable).parent / 'cli-util-here').as_posix()
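
And a small sketch of actually invoking it from the experiment with subprocess; 'cli-util-here' and the arguments are placeholders for the real console script:

import sys
import subprocess
from pathlib2 import Path  # or: from pathlib import Path

# resolve the console script that sits next to the python interpreter of the active venv
cli = (Path(sys.executable).parent / 'cli-util-here').as_posix()
subprocess.check_call([cli, '--version'])  # replace with the arguments your tool expects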

3 years ago