AgitatedDove14
Moderator
49 Questions, 8126 Answers
Active since 10 January 2023
Last activity one year ago

Reputation: 0
Badges: 25 × Eureka!
0 Hello All! Quick Question, Do Any Of You Know Of A Clean Way To Access The Clearml Logger Inside Of A

I ended up using task = Task.init(continue_last_task=task_id) to reload a specific task and it seems to work well so far.

Exactly, this will initialize and auto-log the current process into the existing task (task_id). Without the continue_last_task argument it will just create a new Task and auto-log everything into it πŸ™‚
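
For reference, this is the pattern being described (a minimal sketch; the project/task names and the task id are placeholders):

    from clearml import Task

    # Continue logging into an existing task instead of creating a new one.
    # "abc123" is a placeholder for the id of the task you want to continue.
    task = Task.init(
        project_name="examples",        # placeholder project
        task_name="continued run",      # placeholder name
        continue_last_task="abc123",    # existing task id to append to
    )

    # Without continue_last_task, the same call would create a brand new Task
    # and auto-log everything into that new Task instead.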

3 years ago
0 Hi, I Know That Clearml Uses Local Changes For Patching And Running Script. Can It Also Do The Same With Local Commits?

The main issue is that applying the patch requires a git clone, and that would fail on local (not pushed) commits.
What's the use case itself?
(BTW, if you copy the uncommitted changes into a file and git apply it, it will work.)

2 years ago
0 Hello, I Hope You Can Help Me With This:

AgitatedTurtle16 could you check with the latest clearml RC (I remember a similar issue was fixed).
    pip install clearml==0.17.5rc3

Then run again:

    clearml-task ...

4 years ago
0 Hello Everyone I Am Trying To Use Task Scheduler To Make A Cron Job. I Have Used S3 Bucket As My File Server But When This Cron Runs It Gives The Error Not Able To Connect To S3. What Should I Do?

this is the code for task scheduler

So it makes sense that the first "scheduled" job is at epoch time 0 (1970), because "executes_immediately" basically means it sets a date that has already passed, so it triggers immediately. Does that make sense?
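
To illustrate the logic (this is not the ClearML scheduler source, just a toy sketch of the "date in the past" trick described above):

    from datetime import datetime, timezone

    # Toy illustration: a schedule entry that stores its "next run" timestamp.
    # "Execute immediately" is modeled by initializing that timestamp at epoch 0
    # (1970), so the very first poll sees an overdue job and fires it right away.
    class ToySchedule:
        def __init__(self, execute_immediately=False):
            if execute_immediately:
                self.next_run = datetime.fromtimestamp(0, tz=timezone.utc)
            else:
                self.next_run = datetime.now(tz=timezone.utc)

        def is_due(self):
            return datetime.now(tz=timezone.utc) >= self.next_run

    job = ToySchedule(execute_immediately=True)
    print(job.next_run)   # 1970-01-01 00:00:00+00:00
    print(job.is_due())   # True -> triggers right away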

2 years ago
0 Hi There, I Have A Package Called

This looks like a 'feast' error; could it be a missing configuration?

3 years ago
0 Hi All, I'Ve Successfully Run A Task Locally, And Now I'M Trying To Clone It And Send It To A Queue. It Looks Like The Environment Is Built Successfully, But It Hangs Here:

My understanding is that on remote execution Task.init is supposed to be a no-op right?

Not really a no-op, it would sync the ArgParser and the like, start background reporting services, etc.

This is so odd! literally nothing printed
Can you tell me something about the node "mrl-plswh100:0"?
Is this like a SageMaker node? We have seen similar things where Python threads / subprocesses are not supported, and instead of Python crashing it just hangs there.

one year ago
0 So From What I Can Tell Using

Hi ShinyPuppy47,
Yes, that is correct. Use Task.init for automagic logging.
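
A minimal sketch of what that looks like (the project/task names and the argument are placeholders):

    from argparse import ArgumentParser
    from clearml import Task

    # Calling Task.init early in the script is enough for the automagic logging:
    # argparse arguments, framework outputs, console output, etc. are captured.
    task = Task.init(project_name="examples", task_name="automagic demo")

    parser = ArgumentParser()
    parser.add_argument("--lr", type=float, default=0.001)  # placeholder argument
    args = parser.parse_args()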

2 years ago
0 Hello, In The Following Context:

Hi JitteryCoyote63
If you want to refresh the task object, call task.reload(). It will also refresh the artifacts.
The reason for not always doing so when accessing the .artifacts object is speed optimization (it might be slow compared to dict access, and we assume users expect it to behave like a dict).
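
In code, something like this (the task id and artifact name are placeholders):

    from clearml import Task

    task = Task.get_task(task_id="abc123")        # placeholder task id

    # .artifacts is served from the cached task object for speed;
    # call reload() to explicitly refresh it from the server.
    task.reload()
    artifact = task.artifacts["my_artifact"]      # placeholder artifact name
    local_copy = artifact.get_local_copy()        # or artifact.get() to deserialize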

5 years ago
0 Hi Everyone! Is There A Way Or A Trigger To Detect When The Number Of Workers In A Queue Reaches Zero? Sometimes, My Workers Terminate Unexpectedly, Which Causes The Worker Count In The Queue To Drop To Zero And Prevents My Scheduler From Executing. I’D L

Hi @<1523701260895653888:profile|QuaintJellyfish58>

Is there a way or a trigger to detect when the number of workers in a queue reaches zero?

You mean to spin them down? What's the rationale?

I’d like to implement a notification system that alerts me when there are no workers left in the queue.

How are they "dropping" ?

Specifically to your question, let me check; I'm sure there is an API that gets that data, because you can see it in the UI πŸ™‚
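
In the meantime, something along these lines could work for polling it yourself (a sketch only; the exact worker/queue fields may differ between server versions, and "my_queue" is a placeholder):

    from clearml.backend_api.session.client import APIClient

    client = APIClient()
    workers = client.workers.get_all()

    # Assumption: each worker entry lists the queues it serves; adjust the field
    # names to your server version.
    serving = [
        w for w in workers
        if any(getattr(q, "name", None) == "my_queue" for q in (w.queues or []))
    ]
    if not serving:
        print("No workers are serving 'my_queue' - send an alert here")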

one year ago
0 Hi All! Quick Question: Can Clearml-Agent

So you have two options:

  • Build the container from your Dockerfile and push it to your container registry. Notice that if you built it on the machine with the agent, that machine can use it as the Task's base container.
  • Use the FROM container as the Task's base container and put the rest into the docker startup bash script (see the sketch below). Wdyt?
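
For the second option, something along these lines should work (a sketch; the image name and setup commands are placeholders, and the exact set_base_docker arguments should be checked against your clearml version):

    from clearml import Task

    task = Task.init(project_name="examples", task_name="docker demo")

    # Use the FROM image as the Task's base container and move the remaining
    # Dockerfile steps into the setup bash script the agent runs on startup.
    task.set_base_docker(
        docker_image="python:3.10-slim",       # placeholder base image
        docker_setup_bash_script=[
            "apt-get update",
            "apt-get install -y git",          # placeholder setup steps
        ],
    )
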
2 years ago
0 How To Do Continuous Training With Trains? Can Someone Share Examples Or Docs To Get Started With Continuous Learning.

Questions

I want to trigger a retrain task when F1

That means that in inference you are reporting the F1 score, correct?

As part of the retraining I have to train all the models and then have to choose best one and deploy it

Are you passing output_uri to Task.init? Are you storing the model as an artifact?
You can tag your model/task with a "best" tag (and untag the previous one). Then in production, look for the "best" task and get its model.
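
Something along these lines (a sketch; the ids, project name, and model handling are placeholders):

    from clearml import Task

    # After evaluating the retrained candidates, tag the winner
    # (and remove the tag from the previous best).
    best_task = Task.get_task(task_id="abc123")            # placeholder task id
    best_task.add_tags(["best"])

    # In production, look up whichever task currently carries the "best" tag
    # and grab its latest output model.
    candidates = Task.get_tasks(project_name="my_project", tags=["best"])
    if candidates:
        production_model = candidates[0].models["output"][-1]
        weights_path = production_model.get_local_copy()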
Thoughts?

4 years ago
0 Hi Guys, How Does Allegro Keep Track Of The Requirements (I'M Running The Scripts On A Remote Train-Agent With

Back to the feature request: if this is taken care of (both adding a missed package and the S3 upload), do you still believe there is room for this kind of feature?

4 years ago
0 Hi! For

Why can I only call import_model

import_model actually creates a new Model object in the system.
InputModel(id) will "load" a model based on the model id.
Make sense?
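
To make the distinction concrete (the model id and URL below are placeholders):

    from clearml import InputModel

    # Load a model that is already registered in the system, by its id:
    model = InputModel(model_id="abc123")
    weights_path = model.get_local_copy()

    # import_model, by contrast, registers a *new* Model entry from external weights:
    imported = InputModel.import_model(weights_url="https://example.com/model.pt")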

3 years ago
0 Hi Everyone, Thx So Much For This Awesome Tool! I Was Wondering, Is There A Way To Define For Trains, Which Variable In The Project Is The Kpi, And Then Cluster And Plot Experiments With The Same Hyper Parameters?

Hi UptightMouse31
First, thank you 😊
And to your question:

variable in the project is the kpi,

You mean like adding it to the experiment table and getting a kind of leaderboard?

5 years ago
0 Hello! Since Today I Get

Damn, okay I'll make sure we fix the order.
Could you verify the ~= works as intended (if the order is correct)?

4 years ago
0 Hello All , Good Morning ! Can You Help Better Understand The Distinction Of Cleargpt? How Is It Different From Chatgpt And What Gpt Model Are We Using In Clearml ? Thank You In Advance !

Still, it is a ChatGPT interface, correct?

Actually, no. And we will change the wording on the website so it is more intuitive to understand.
The idea is that you actually train your own model (not ChatGPT/OpenAI) and use that model internally, which means everything is done inside your organisation, from data through training and ending with deployment. Does that make sense?

2 years ago
0 Hi

MagnificentSeaurchin79

Can this be solved by using a docker image with the preinstalled packages at a user level?

Yes πŸ™‚
BTW: I think I missed how you managed to install the object_detection API in the first place?
Is it the git repo of the Task? Did you fork it? Is it a submodule of your git repo?

p.s.
Yes, Slack is quite good at reminding you, but generally speaking, always prefer @ mentions; it will send me an email if I miss the message :)

4 years ago
0 Hello, When Running A Task With A Remote Interpreter I Get

hmm DeliciousKoala34
What are you getting if you put this at the top of your code (the one you are running in the remote docker)?

    import os
    print([(k, os.environ[k]) for k in os.environ if k.startswith("CLEARML_")])

3 years ago
0 Hi, When A Step In A Pipeline Is Aborted, It Is Marked As Gracefully Finished (Painted In Blue) And The Other Steps That Depend On It Continue. I Believe This Is Not The Expected Behavior, I'D Expect To To Be Marked As Failed, So Other Tasks That Depend

Why? The task should have completed successfully, how is this aborting?

Early stopping by the HPO process, like Hyperband, e.g. "this training model is going nowhere, let's stop it."

5 years ago
0 Hi, I Have Several Long Running Experiments Failing With

That makes total sense, this is exactly an OS-level scenario for signal 9 (SIGKILL) πŸ™‚

4 years ago
0 Is There An Easy Way To Add A Link To One Of The Tasks Panels? (As An Artifact, Configuration, Info, Etc)? Edit: And Follow Up Regarding The Dataset. As Discussed Somewhere Previously, The Datasets Are Now Automatically Moved To A Hidden "Sub-Project" Pr

Why is it using an OutputModel and an InputModel?

So calling OutputModel will create the new Model entity and upload the data; InputModel will store it as a required input Model.
Basically, on the Task you have input & output sections; when you clone the Task you are copying the input section into the newly created Task, and the assumption is that when you execute it, your code will create the output section.
Here, when you clone the Task you will be cloning the reference to the InputModel (i...
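
A minimal sketch of the two sides (the framework and filenames are placeholders):

    from clearml import Task, OutputModel

    task = Task.init(project_name="examples", task_name="train")

    # OutputModel creates a new Model entity in the task's output section
    # and uploads the weights produced by this run.
    output_model = OutputModel(task=task, framework="PyTorch")
    output_model.update_weights(weights_filename="model.pt")   # placeholder file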

3 years ago