CostlyOstrich36
Moderator
0 Questions, 4181 Answers
Active since 10 January 2023
Last activity 2 years ago

Reputation: 0
0 What Privileges/IAM Role Would The AWS Autoscaler Need?

I think it basically needs the ability to spin up/terminate instances

3 years ago
0 Hey, How Can I Find Which Tasks/Projects Have The Most Metrics?

I'm afraid there is no such capability at the moment. However, I'd suggest opening a GitHub feature request for this 🙂

one year ago
0 Hey, How Can I Find Which Tasks/Projects Have The Most Metrics?

Hi @<1533257411639382016:profile|RobustRat47> , what would you define as most metrics?

one year ago
0 Hey, How Can I Find Which Tasks/Projects Have The Most Metrics?

Are you self-hosted or using the community server?

one year ago
0 In The "Models" Tab Under A Project I Cannot Add A Custom Column Of Metrics Or Metadata. It Is Just Grayed Out. Is This A Bug?

@<1719524641879363584:profile|ThankfulClams64> , there is a difference between models and tasks/experiments. Everything during training is automatically reported to the task/experiment, not the model. If you want to add anything to the models themselves you have to add it manually. (Keep in mind that tasks/experiments are separate entities from models, although there is a connection between the two.)

Once you manually add either metadata or metrics you will be able to add custom columns. This is not...
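A minimal sketch of adding model metadata from the SDK, assuming a recent clearml version where Model.set_metadata is available (the model ID, key and value below are placeholders):

from clearml import Model

model = Model(model_id="<your-model-id>")  # placeholder model ID
# attach a metadata key/value pair to the model; once metadata exists,
# it can be shown as a custom column in the Models tab
model.set_metadata(key="dataset_version", value="v2.1", v_type="str")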

7 months ago
0 Any Idea What I've Missed Here? Thanks

The following command should give you something:
docker logs --follow clearml-elastic

3 years ago
0 Hi All! When I Try To Run Tasks For An Agent On A Machine Without A GPU This Error Occurs:

Hi EnviousPanda91 , are you running in docker mode? It looks like you're trying to use a CUDA image on a machine without a GPU

2 years ago
0 Hello! A Python API-Related Question: Is There A Way To Query The Name Of The Queue A Task Is Running In, From The Task Class / By Task ID? Thanks In Advance!

Hi TeenyHamster79 ,

I think the API you're looking for is tasks.get_by_id and the fields you're looking for are:
data.tasks.0.execution.queue.name
data.tasks.0.execution.queue.id
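For illustration, a minimal sketch of calling that endpoint directly (the server URL, credentials and task ID below are placeholders):

import requests

resp = requests.post(
    "https://api.clear.ml/tasks.get_by_id",     # your API server URL here
    json={"task": "<task-id>"},
    auth=("<access_key>", "<secret_key>"),      # API credentials from the ClearML UI
)
resp.raise_for_status()
# look for execution.queue.name / execution.queue.id in the returned task data, per the fields above
print(resp.json()["data"])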

Tell me if it helps 🙂

3 years ago
0 Hi, I'm Trying To Install A Self-Hosted ClearML Server On An Ubuntu 22.04 Computer Using The Pre-Built Docker Image And Following

Hi @<1652120623545061376:profile|FrightenedSealion82> , do you see any errors in the apiserver or the webserver containers?

one year ago
0 Hi everyone, 👋 Is it possible to execute ML projects on one platform? Execute Ops part on the ClearML platform?

Hi @<1753589101044436992:profile|ThankfulSeaturtle1> , not sure I understand what you mean. Can you please elaborate?

11 months ago
0 Hey Everyone, I'M Setting Up Clearml Agents And Workers With The Open Source Version Within My Org. Was Wondering What Is The Best Way To Handle Different Python Version Requirements For Different Projects?

Hi @<1535069219354316800:profile|PerplexedRaccoon19> , the agent will try to use the relevant python version according to what the experiment ran on originally. In general, it's best to run inside dockers with a docker image specified per experiment 🙂
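As a hedged illustration of pinning an image (and therefore a Python version) per experiment, something along these lines should work (the project/task names and image are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="py310 experiment")
# tell the agent which docker image to run this task in when executing in docker mode
task.set_base_docker("python:3.10-slim")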

2 years ago
0 What Determines The Value Of User? (See Screenshot) Right Now All My Teammates' Experiments Show Up Under My Own User Name. Thanks

WittyOwl57 , when creating credentials, the credentials are associated with your user. So even if you give others those credentials, the experiments in the system will show up under the user whose credentials were used when running the experiment 🙂

Hope this helps

4 years ago
0 What Determines The Value Of User? (See Screenshot) Right Now All My Teammates' Experiments Show Up Under My Own User Name. Thanks

You can edit the MongoDB manually (strongly advised against) to change the users of experiments. Besides that, I'm afraid not. Each user would have to create separate credentials for themselves under their own user in the system.

A suggestion I might have is using the 'Description' field to write down the relevant user manually and adding that as a column in your view. The small cogwheel near the top right (next to the refresh button) will give you the option to add that column.

Hope this helps...
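A minimal sketch of filling that field from code, assuming the UI's Description maps to the task comment (the project, task and user names below are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="shared credentials run")
# the comment appears as the task's Description in the UI (assumption); record the real user there
task.set_comment("run by: alice")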

4 years ago
0 What Determines The Value Of User? (See Screenshot) Right Now All My Teammates' Experiments Show Up Under My Own User Name. Thanks

WittyOwl57 , It determines the user that created the object. What is the sign in method that you and your team are using?

4 years ago
0 Hi There, I Would Like To Know How We Can Pass An Environment Variable For The Running Script When The Script Is Run Remotely By An Agent?

Hi @<1529995795791613952:profile|NervousRabbit2> , if you're running in docker mode you can pass it via the docker_args parameter, for example setting environment variables with the -e docker argument
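A hedged sketch of what this could look like from the SDK, assuming a recent clearml version where set_base_docker accepts docker_arguments (the image, variable name and value are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="env var example")
# extra arguments are forwarded to `docker run` when the agent executes this task in docker mode
task.set_base_docker("python:3.10-slim", docker_arguments="-e MY_ENV_VAR=42")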

2 years ago
0 Hi Everyone! I Have A Question About

Hi @<1569496075083976704:profile|SweetShells3> , do you mean to run the CLI command via python code?

2 years ago
0 Hi Everyone! I Have A Question About

How are you currently trying to wrap it up in python?

2 years ago
0 Hello Everyone, I'm Currently Facing An Issue While Using Cloud ClearML With aws_autoscaler.py. Occasionally, Some Workers Become Unusable When An EC2 Instance Is Terminated, Either Manually Or By aws_autoscaler.py. These Workers Are Displayed In The UI W

Hi @<1571308079511769088:profile|GentleParrot65> , ideally you shouldn't be terminating instances manually. However, do you mean that the autoscaler spins down a machine, still recognizes it as running, and refuses to spin up a new machine?

2 years ago
0 Hi, Quick Y/N Question: Is It Possible To Build A Pipeline Of Pipelines? I'd Imagine This To Be Possible, Since Pipelines Are Also Treated As Tasks.

Hi @<1523703572984762368:profile|SlimyDove85> , conceptually I think it's possible. However, what would be the use case? In the end it would all be abstracted to a single pipeline
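If you do want to try it, a rough sketch following the idea that a pipeline is itself a task (all names below are placeholders):

from clearml import PipelineController

parent = PipelineController(name="parent-pipeline", project="pipelines", version="1.0.0")
# add an already-registered pipeline task as a regular step of the parent pipeline
parent.add_step(
    name="child_pipeline",
    base_task_project="pipelines",
    base_task_name="child-pipeline",
)
parent.start()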

2 years ago
0 Hi, I Have An Issue With The Mongo Container When Trying To Start The Service. Here's The Log From The Mongo Container.

Alright. What OS are you on? Also, what is the status of this deployment? Is it a clean install, a version upgrade, or did it just stop working after a restart? 🙂

3 years ago
0 Hello, I'm Using

Is there a vital reason why you want to keep the two accounts separate when they run on the same machine?
Also, what if you try aligning all the cache folders for both configuration files to use the same folders?

3 years ago
0 Hi All - I Am New To Clearml And Trying It Out Using The Free Plan, And I Am Generally Quite Impressed With The Amount Of Features Available For Free

Hi @<1655744373268156416:profile|StickyShrimp60> , happy to hear you're enjoying ClearML 🙂
To address your points:

Is there any way to lock the settings of scalar plots? Especially, I have scalars that are easiest to compare on a log scale, but that setting reverts to the default linear scale with any update of the comparison (e.g. adding/removing experiments in the comparison).

I would suggest opening a GitHub feature request for this

Are there plans of implementing a simple feature t...

one year ago
0 Can I Run A Random Task From A Queue? Like This

Can you paste the output up to the point where it gets stuck? Sounds very strange. Does it work when it's not enqueued? Also, what versions of clearml-agent & server are you on?

3 years ago
0 Can I Run A Random Task From A Queue? Like This

What OS are you on?
Regarding your question - I can't recall for sure. I think it still creates a virtualenv

3 years ago
0 Hi, In The "Choose Compared Experiments" View Of The Webui, Would It Be Possible To Add A Toggle To Include Archived Experiments In The Results Of The Search? Also Add The Task Type Field?

Interesting idea. From the looks of it, archived experiments aren't fetched even when searching for the task ID manually. Maybe open a GitHub issue for this, it's a really cool feature idea 🙂

2 years ago