AgitatedDove14
Moderator
49 Questions, 8126 Answers
  Active since 10 January 2023
  Last activity one year ago

0 Hi, I'm Trying To Get An Understanding Of How

Hi GiddyTurkey39 ,

When you say trains agent, are you referring to the trains agent command ...

I mean running the trains-agent daemon on a machine. This means you have a daemon pulling jobs from the execution queue and executing them (either in a virtual environment or inside a docker container).
You can read more about https://github.com/allegroai/trains-agent and https://allegro.ai/docs/concepts_arch/concepts_arch/

Is it sufficient to queue the experiments

Yes there is no ne...
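For reference, a minimal sketch of queuing an experiment from code, assuming the current clearml SDK naming (the older trains package exposed an equivalent Task interface); the project and task names are placeholders:

from clearml import Task  # the older "trains" package exposed an equivalent Task API

# Clone an existing (template) experiment and push the clone into the "default"
# queue; a trains-agent / clearml-agent daemon watching that queue will pull
# the job and execute it (in a virtual environment or inside docker).
template = Task.get_task(project_name="examples", task_name="my_experiment")
cloned = Task.clone(source_task=template, name="my_experiment (queued copy)")
Task.enqueue(task=cloned, queue_name="default")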

5 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

But thanks to you I realized one thing: I use hparams further in the code, not normalize_and_flat_config(hparams).

This is the main issue. Any reason not to use normalize_and_flat_config(hparams) later in the code? Or maybe update the hparams back?
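As a rough illustration of that point (normalize_and_flat_config here is a hypothetical one-level stand-in for the helper discussed in the thread, not its actual implementation):

from clearml import Task
from omegaconf import OmegaConf

def normalize_and_flat_config(cfg):
    # Hypothetical one-level stand-in: flatten a nested OmegaConf config into
    # {"section/param": value} so each leaf shows up as a single hyperparameter.
    flat = {}
    for key, value in OmegaConf.to_container(cfg, resolve=True).items():
        if isinstance(value, dict):
            for sub_key, sub_value in value.items():
                flat[f"{key}/{sub_key}"] = sub_value
        else:
            flat[key] = value
    return flat

task = Task.init(project_name="examples", task_name="hydra-hparams")
hparams = OmegaConf.create({"model": {"lr": 1e-3, "layers": 4}})

# The point above: keep using the flattened (and connected) dict downstream,
# or copy any edits back into the original hparams object.
flat_hparams = task.connect(normalize_and_flat_config(hparams))
learning_rate = flat_hparams["model/lr"]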

3 years ago
0 Hey.

Was trying to figure out how the method knows that the docker image ID belongs to ECR. Do you have any insight into that?

Basically you should have docker log in to the registry (docker login) before running the agent; the agent then uses docker to pull and run the image from ECR.
Make sense?

3 years ago
0 Hi Community! I Have Difficulty Using Clearml Pipeline. I Am Writing The Code Using The Pipeline Decorator, But The Pipeline Does Not Work With The Following Error When Specifying The Docker Image As An Argument Of The Decorator. How Should I Solve It?

Just to make sure, are the first two steps working?
Maybe it has to do with the fact the "training" step specifies a docker image; could you try to remove it and check?
BTW: A few pointers
The return_values argument is used to specify multiple returned objects that are stored individually, not the type of the object. If there is a single returned object, there is no need to specify it.
The parents argument is optional; the pipeline optimizes component execution based on their inputs. For example, in your code, all pipeline comp...
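To illustrate, a minimal sketch with placeholder docker images and names (not the exact code from the question):

from clearml import PipelineDecorator

@PipelineDecorator.component(docker="python:3.9")
def prepare_data():
    # single returned object - no need to list return_values
    return "/tmp/dataset"

@PipelineDecorator.component(docker="nvcr.io/nvidia/pytorch:22.12-py3")
def train(dataset_path):
    # the dependency on prepare_data is inferred from the input, no explicit parents needed
    print(f"training on {dataset_path}")

@PipelineDecorator.pipeline(name="example pipeline", project="examples", version="1.0")
def run_pipeline():
    path = prepare_data()
    train(dataset_path=path)

if __name__ == "__main__":
    PipelineDecorator.run_locally()
    run_pipeline()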

2 years ago
0 I'm Trying To Configure The Glue Agent To Use AWS ECR Via Helm Charts. Below Is My Configuration. It Is Not Pulling The Image Though, It Is Failing With

Shouldn't this be a real value and not a template

You mean the value being pulled to the pod that failed?

3 years ago
0 Hello, I'm Trying To Save A Keras Model As A Task Artifact, And Then Upload It From Another Task. Does Anyone Know The Syntax For That? What I've Seen Is Not Quite Working.

If you are using the latest RC:
pip install clearml==0.17.5rc5
You can pass True and it will use the "files_server" as configured in your clearml.conf.
I used the http link as a filler to point to the files_server.
Make sense?
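A minimal sketch of that flow (project/task names are placeholders; output_uri=True is the "pass True" mentioned above):

from clearml import Task
from tensorflow import keras

# --- Task A: store the trained Keras model as an artifact ---
task_a = Task.init(project_name="examples", task_name="train-keras",
                   output_uri=True)  # True = upload to the files_server from clearml.conf
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.save("my_model.h5")
task_a.upload_artifact(name="keras_model", artifact_object="my_model.h5")
task_a.close()

# --- in another task/script: fetch the artifact uploaded above ---
task_b = Task.init(project_name="examples", task_name="use-keras-model")
source = Task.get_task(project_name="examples", task_name="train-keras")
local_path = source.artifacts["keras_model"].get_local_copy()
restored = keras.models.load_model(local_path)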

4 years ago
0 Is There Any Way To Clear The Installed Packages Of A Task Programmatically? (I.E. Using The Python Sdk And Not The Ui)

Hi GiddyTurkey39
Are you referring to an already executed Task or the current running one?
(Also, what is the use case here? Is it because the installed packages are inaccurate?)
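If it is an already executed (draft/reset) Task, something along these lines should work, assuming a recent clearml SDK that exposes Task.set_packages (the task id is a placeholder):

from clearml import Task

task = Task.get_task(task_id="aabbccddeeff00112233445566778899")  # placeholder id
# The task has to be editable (draft / reset / cloned) for this to apply.
task.set_packages([])  # an empty list should leave "Installed Packages" empty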

4 years ago
0 Hi! Trying To Run The Following Very Basic Code. The First Few Parts Works As They Should:

I cannot reproduce it; I tested with the same matplotlib version and Python against the community server.

4 years ago
0 Hi, Is It Possible To Specify Per Experiment (Task In Clearml) Where The Results (Artifacts) Are Saved?

It is not possible to specify the full output destination, right?

Correct 😞
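What you can set per experiment is where artifacts/models are uploaded, via output_uri; the subfolder layout under that destination is still managed by ClearML. A minimal sketch (the bucket name is a placeholder):

from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="custom-output",
    output_uri="s3://my-bucket/experiments",  # or a shared folder, or True for the files_server
)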

4 years ago
0 Anyway To Make A Job Fail If The Required Python Version (3.7 Vs 3.8 For Example) Is Not Available In The Agent?

then when we triggered an inference deploy it failed

How would you control it? Is it based on a Task? Like a property "match python version"?

4 years ago
0 I Am Trying To Use

pip install "pyjwt<2.0.0"

4 years ago
0 Hello, I Am Getting `ValueError: Could Not Get Access Credentials For '

I'm so glad you mentioned the cron job, it would have taken us hours to figure

5 years ago
0 I Have Code That Does Torch.Load(Path) And Deserializes A Model. I Am Performing This In Package A.B.C, And The Model's Module Is Available In A.B.C.Model Unfortunately, The Model Was Serialized With A Different Module Structure - It Was Originally Pla

Hi RoughTiger69

unfortunately, the model was serialized with a different module structure - it was originally placed in a (root) module called model ...

Is this like a pickle issue?

Unfortunately, this doesn't work inside clear.ml since there is some mechanism that overrides the import mechanism using import_bind.__patched_import3.

What error are you getting? (meaning why isn't it working)
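For context, the usual plain-Python workaround for this kind of pickle/module-path mismatch is to alias the old module name before calling torch.load; whether that plays nicely with the import_bind patching above is exactly what needs checking (a.b.c.model is the package path from the question):

import sys
import torch

# pickle (which torch.load uses) looks classes up by the module path recorded
# at save time, e.g. "model.MyNet". If the class now lives in a.b.c.model,
# aliasing the old name lets the lookup succeed:
import a.b.c.model as _relocated_model
sys.modules["model"] = _relocated_model

checkpoint = torch.load("path/to/checkpoint.pt", map_location="cpu")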

3 years ago
0 Hi! I Noticed A Bug Related To Reusing The Same Component In A Pipeline. I Have Prepared A Mock Example So That You Can Reproduce It:

Oh right, I missed the fact the helper functions are also decorated, yes it makes sense we add the tags as well.
Regarding nested pipelines, I think my main question is: are they independent, or are we generating everything from the same code base?

4 years ago
0 Hi, I Have A Worker On A Machine Using Gpus 0,1 And Another Worker On The Same Machine Using Gpus 0,1,2,3,4,5. A Worker Ran A Task On Gpus 0,1 But For Some Reason The Second Worker Started Additional Task In Queue On Gpus 0,1,2,3,4,5, Which Caused Both Of

you mean in the enterprise

Enterprise has the smarter GPU scheduler. This is an inherent problem of sharing resources; there is no perfect solution. You either have fairness, but then you get idle GPUs, or you have races, where you can get starvation.

5 years ago
0 Question About Pipelines - So The Default For Pipeline Tasks That Are Executed Remotely Is To Execute On The

It's relatively new and it is great: from the usage aspect it is exactly like user/pass, only the pass is the PAT. It really makes life easier.

3 years ago
0 Hello World,

Hi PerplexedGoat65

it appears, in a practical sense, this means to mount the second drive, and then bind them in ClearML’s configuration

Yes, the entire data folder (reason is, if you lose it, you lose all the server storage / artifacts).

Also, thinking about Docker and slower access speed for Docker mounts and such,

If the host OS is Linux, you have nothing to worry about; the speed will be the same.

3 years ago
0 When I Do

Yes, exactly!

3 years ago
0 Question About Pipelines - So The Default For Pipeline Tasks That Are Executed Remotely Is To Execute On The

Hi WackyRabbit7
The services container (or the agent running there) spins multiple Tasks (as opposed to a regular agent, which runs one task at a time).

how can I give this agent git access?

In the docker-compose you can configure the git credentials (user/pass or user/key, it is the same).
https://github.com/allegroai/clearml-server/blob/d0e2313a24eb1248ebf0ddf31bf589de0d675562/docker/docker-compose.yml#L137

3 years ago
0 Hello, Does Clearml Have A Feature Like Wandb's Reports? E.G.

Notice that you can embed links to a specific view of an experiment by copying the full address bar when viewing it.

4 years ago
0 Hey, I've Spun Up A Worker Using AWS Autoscaler In Clearml Self Hosted Server Running On Kubernetes. However, I Can't Find The Agent On The Workers Page. Any Idea Why It's Not Showing Up? Full_Log:

@MuddyRobin9 are you sure it was able to spin up the EC2 instance? Which clearml autoscaler version are you running?

2 years ago