AgitatedDove14
Moderator
49 Questions, 8122 Answers
Active since 10 January 2023
Last activity one year ago
Reputation: 0
Badges: 25 × Eureka!
0 Is There A Way To Set Precedence On Package Managers? If We Set An Agent To Use

Hmmm maybe 

 I thought that was expected behavior from poetry side actually

I think this is the expected behavior, hence bug?!

3 years ago
0 1st: Is It Possible To Make A Pipeline Component Call Another Pipeline Component (As A Substep)? Or Only The Controller Can Do It? 2nd: I Am Trying To Call A Function Defined In The Same Script, But Unable To Import It. I Am Passing The Repo Parameter To The

1st: is it possible to make a pipeline component call another pipeline component (as a substep)

Should work as long as they are in the same file; you can, however, launch and wait for any Task (see pipelines from tasks).

2nd: I am trying to call a function defined in the same script, but unable to import it. I am passing the repo parameter to the component decorator, but no change, it always comes back with "No module named <module>" after my

from module import function

c...
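
A minimal sketch of one way the second point is commonly handled with the PipelineDecorator interface (names and values are illustrative, not taken from this thread): helper functions defined at module level are not packaged with the step automatically, so one option is to hand them to the component via helper_functions.

from clearml import PipelineDecorator


def normalize(value):
    # illustrative helper defined at module level in the same script
    return value / 255.0


@PipelineDecorator.component(return_values=["result"], helper_functions=[normalize])
def preprocess(value):
    # the helper is packaged together with the component, so it resolves remotely
    return normalize(value)


@PipelineDecorator.pipeline(name="helper-example", project="examples", version="0.0.1")
def pipeline_logic():
    print(preprocess(128))


if __name__ == "__main__":
    PipelineDecorator.run_locally()
    pipeline_logic()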

3 years ago
0 I Have A General Question About This Part In Dynamic GPU Allocation. If For Example I Have A Machine That Has 8 GPUs And I Have 3 Queues: Queue1 Will Take 3 GPUs, Queue2 Will Take Another 3 GPUs, So In Queue3 Can I Put 2-4 GPUs?? If There Are Idle GPUs So T

Hi WickedBee96

Queue1 will take 3GPUs, Queue2 will take another 3GPUs, so in Queue3 can I put 2-4 GPUs??

Yes exactly !

if there are idle GPUs so take them to process the task?

Correct. Basically you are saying: this queue needs a minimum of 2 GPUs, but if more are idle, allocate them to the Task it pulled (with a maximum of 4 GPUs).
Make sense ?
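
As an illustration only (the exact flag spelling may differ between clearml-agent versions, and dynamic GPU allocation is an enterprise-tier agent feature), the setup described above roughly corresponds to a single agent managing all 8 GPUs and serving the three queues, with queue3 allowed a 2-4 GPU range:

clearml-agent daemon --dynamic-gpus --gpus 0-7 --queue queue1=3 queue2=3 queue3=2-4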

2 years ago
0 Hi, I Wanted To Try Model Versioning, Suppose That I've A Model And Want To Have Multiple Versions Of The Same Model And To Be Able To Have Inference On These Models (For Example

Also I can’t call the “preprocess” function since there is no valid endpoint to be hitting

Wait now I'm confused, when you are calling " None " you are actually calling the preprocess function running on the inference container, and this one in turn (automatically) calls the Triton container.

Are you calling the Triton manually?
Could you share your preprocess.py, and the command line you have used to register the two model versions ?
(based on ...
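
For reference, a hedged sketch of what a clearml-serving preprocess.py generally looks like; the request field names and the response shape below are illustrative, not taken from the thread.

from typing import Any


class Preprocess(object):
    def __init__(self):
        # called once when the endpoint is loaded by the inference container
        pass

    def preprocess(self, body: dict, state: dict, collect_custom_statistics_fn=None) -> Any:
        # turn the incoming REST payload into the tensor the Triton model expects
        return [[body.get("feature_0"), body.get("feature_1")]]

    def postprocess(self, data: Any, state: dict, collect_custom_statistics_fn=None) -> dict:
        # wrap the raw model output back into a JSON-serializable response
        return {"prediction": data.tolist() if hasattr(data, "tolist") else data}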

one year ago
0 Hi, I Am Trying To Run Experiment From Clearml Web Ui. I Did Experiment Copy, Enqueue, But In The Execution Log I See That It Runs Command

I need to understand what happens when I press "Enqueue" in the web UI and set it to the default queue

The Task ID is pushed into the execution queue (from the UI / backend, that is it). Then you have a clearml-agent running on your machine; the agent listens on one or more queues and pulls jobs from them.
It will pull the Task ID from the queue, set up the environment according to the Task (i.e. either inside a docker container or in a new virtual-env), clone the code / apply uncommitted changes ...
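
The same flow can also be triggered from code instead of the UI; a minimal sketch (the task ID and queue name are placeholders):

from clearml import Task

# the copied/cloned experiment you would otherwise right-click in the UI
task = Task.get_task(task_id="<copied-task-id>")

# equivalent to pressing "Enqueue" and picking the "default" queue:
# the Task ID is pushed into the queue for an agent to pull
Task.enqueue(task, queue_name="default")

# on the worker machine an agent listens on that queue, e.g.:
#   clearml-agent daemon --queue default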

4 years ago
0 Hi All, I Am Having Trouble Using The

What about output_uri?

If you are using StorageManager directly, output_uri is not relevant
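
A short sketch of the distinction, with a hypothetical bucket name: output_uri only affects what the Task uploads automatically (artifacts and models), while StorageManager uploads go exactly where you point them.

from clearml import Task, StorageManager

# output_uri controls where automatically captured artifacts/models are uploaded
task = Task.init(project_name="examples", task_name="upload-demo",
                 output_uri="s3://my-bucket/artifacts")

# direct upload: output_uri is ignored here, the destination is explicit
StorageManager.upload_file(local_file="model.pkl",
                           remote_url="s3://my-bucket/manual/model.pkl")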

3 years ago
0 I’m Trying To Use

I want to keep the above setup, the remote branch that will track my local will be on "fork" so it needs to pull from there. Currently it recognizes "origin" so it doesn’t work because the agent then can’t find the commit.

So you do not want to push the change set ?
You can basically add the entire change set (uncommitted changes from the last pushed commit).
In your clearml.conf, set store_code_diff_from_remote: true
https://github.com/allegroai...
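
For context, the setting mentioned above lives in the SDK section of clearml.conf; a sketch assuming the standard sdk.development layout:

sdk {
    development {
        # store the uncommitted diff against the last pushed commit,
        # so the agent can reproduce changes that only exist locally
        store_code_diff_from_remote: true
    }
}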

4 years ago
0 Any Pointers On Running Gpu Tasks With K8S Glue?

basically use the template 🙂 we will deprecate the override option soon

4 years ago
0 Hi, I Am New Here, Can I Ask Question On Trains-Server Also?

CooperativeFox72 btw, are you guys running those 20 experiments manually or through trains-agent ?

5 years ago
0 Typo: Was Going Crazy For A Short Amount Of Time Yelling To Myself: I Just Installed Clear-Agent Init!

BTW: is this on the community server or self-hosted (aka docker-compose)?

4 years ago
0 Hello, I'm Using Trains For Logging My Training Script. However, While Using The Logger I'm Getting This: Trains.Task - Warning - ### Task Stopped - User Aborted - Status Changed ### And Eventually The Process Is Killed. If I Disable The Logger, The Proc

SoreDragonfly16 notice that if you abort a task in the web UI, it will do exactly what you described: print a message and quit the process. Any chance someone did that?

5 years ago
0 I Am Not Familiar With Pytorch, But Is It Expected That So Many “Models” Are Created? These Are Being Repeated As Well For A Single Task (This Is Training A T5_Model With Transformers):

If Task.init() is called in an already running task, don’t reset auto_connect_frameworks? (if i am understanding the behaviour right)

Hmm we might need to somehow store the state of it ...

Option to disable these in the clearml.conf

I think this will be too general, as this is code specific, no?
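
A minimal sketch of the code-level alternative discussed above, assuming the goal is to stop every torch.save() call from being registered as an output model (project and task names are illustrative):

from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="t5-training",
    # disable only the automatic PyTorch model logging; other integrations stay on
    auto_connect_frameworks={"pytorch": False},
)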

4 years ago
0 Hi All! I Have A Question About Pipelines. My Pipeline Consists Of Several Steps:

But every agent is a different pod so I do not know how to properly share the folder with images.

Can I conclude Kubernetes is running the agents ?

2 years ago
0 Hi, I'm Looking At Clearml As An Option To Automate Our Training Pipelines. However, From Reading The Documentation I'm Confused If Clearml Can Do What We Want. In Essence, I Would Like To Understand The Methods Of Queuing A

Hi GracefulDog98
As UnevenDolphin73 pointed out, you might be looking for https://clear.ml/docs/latest/docs/references/sdk/task#execute_remotely
Which will stop the current local process, and enqueue the task on the "default" queue, for the agent to execute.
Is this what you are looking for ?
The idea is you can run your code once in "development" mode, so you know everything is working, then from the UI (or programmatically) you can clone the experiment, edit the configuration (or anythin...
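
A minimal sketch of the execute_remotely flow referenced above (project, task and queue names are illustrative):

from clearml import Task

task = Task.init(project_name="examples", task_name="train-model")

# stops the local process here and enqueues this Task on the "default" queue
# for a clearml-agent to pick up and run remotely
task.execute_remotely(queue_name="default", exit_process=True)

# anything below this line only runs on the agent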

4 years ago
0 Hi Guys, With The New Venv Caching Available In Clearml, I Have The Following Problem: I Force My Pip Requirements To Be:

Since my deps are listed in the dependencies of my setup.py, I don't want clearml to list the dependencies of the current environment

Make sense 🙂
Okay let me check regarding the "." in the venv cache.

4 years ago
0 Hi All! Is There Any Simple Way To Use

Hi @<1556450111259676672:profile|PlainSeaurchin97>

Is there any simple way to use argparse to pass a clearml task name?

need to call args = task.connect(args).

noooo 🙂 there is no need to do that, the arguments are automatically detected
see for yourself

args = parse_args()
task = Task.init(task_name=args.task_name)
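
A slightly fuller, hedged version of the snippet above, just to show that no task.connect(args) call is needed (the argument name and defaults are illustrative):

import argparse
from clearml import Task

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--task_name", default="my-task")
    return parser.parse_args()

args = parse_args()
# argparse arguments are detected and logged automatically by Task.init
task = Task.init(project_name="examples", task_name=args.task_name)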
2 years ago
0 What Could Be The Reason For Fail Status Of A Task That Seems To Have Completed Correctly? No Information In The Log Whatsoever

This is odd, and it is marked as failed ?
Are all the Tasks marked failed, or is it just this one ?

4 years ago
0 Another Quick Question About Fileservers And Clearml-Agent: Clearml-Agent Seems To Ignore The Output Destination Set In The Task Config

Hi @<1523701868901961728:profile|ReassuredTiger98>
The sdk.development.default_output_uri is used for Artifacts and Models. Debug samples (or anything else the Logger class creates) will use the api.files_server
On the Task itself, you have the "output destination" (in the Execution tab) which would override the "output_uri" on a Task level
Does that make sense ?
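
Putting the two settings side by side in clearml.conf (the URLs below are placeholders, not recommendations):

api {
    # debug samples and anything else the Logger uploads go to the files server
    files_server: "http://my-clearml-server:8081"
}
sdk {
    development {
        # artifacts and models default here, unless the Task's
        # "output destination" (Execution tab) overrides it per Task
        default_output_uri: "s3://my-bucket/artifacts"
    }
}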

2 years ago
0 Hi I Saw This On The Clearml-Agent Docs But Other Than The Docker Image, I'M Not Sure How To Integrate This With Clearml Py And Clearml-Server. Please Advise.

Hi SubstantialElk6
Yes, this is the queue the glue will pull jobs from and push into the k8s. You can create a new queue from the UI (go to the Workers & Queues page, open the Queue tab and press "Create New").
Ignore it 🙂 this is if you are using config maps and need TCP routing to your pods.
As you noted, this is basically all the arguments you need to pass for (2). Ignore them for the time being.
This is the k8s overrides to use if launching the k8s job with kubectl (basically --override...

4 years ago
0 Hi! I Was Wondering If It'S Possible For A Clearml Agent To Create An Environment From A Conda Environment.Yml File Every Time An Experiment Is Run

Hi SmugOx94
Hmm are you creating the environment manually, or is it done by Task.init ?
(Basically Task.init will store the entire conda environment, and if the agent is working with the conda package manager it will use it to restore it)
https://github.com/allegroai/clearml-agent/blob/77d6ff6630e97ec9a322e6d265cd874d0ab00c87/docs/clearml.conf#L50
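
The linked clearml.conf line boils down to the agent's package manager choice; a sketch with only the relevant key shown:

agent {
    package_manager {
        # restore environments with conda instead of pip,
        # so the conda environment recorded by Task.init can be reused
        type: conda
    }
}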

4 years ago
0 Hello! I Add To Inject The Configuration Into Clearml With

I think it would make sense to have one task per run to make the comparison on hyper-parameters easier

I agree. Could you maybe open a GitHub issue on it? I want to make sure we solve this issue 🙂

4 years ago
0 Hi, Guys! I’m Trying To Connect Clearml To My Task And Getting Strange Error: After

DepressedChimpanzee34
What's the hydra version ?
I tested with 1.1.0dev3 and it worked for me

4 years ago