AgitatedDove14
Moderator
48 Questions, 8049 Answers
  Active since 10 January 2023
  Last activity 5 months ago

0 Hi Team, Could We Just Share The Entire Project Instead Of Workspace? I Tried Sharing With Link Of Particular Task But I Want To Share Entire Project Instead Of Every Task

Hi @EnormousGoose35

Could we just share the entire project instead of Workspace?

You mean allow access to a project between workspaces?
If the answer is yes, then unfortunately the SaaS version (app.clear.ml) does not really support this level of RBAC; it is part of the enterprise version, which assumes a large organization with the need for that kind of access limit.
What is the use case? Why not just share the entire workspace?

one year ago
0 How Does Clearml Clone The Git Repo, Using Https Or Ssh?

Hi @DiminutiveToad80
This depends on how you configure the agents in your clearml.conf.

You can use HTTPS if a git user/password is configured, or you can force SSH, in which case the agent will auto-mount your host SSH folder into the container and use it.
https://github.com/allegroai/clearml-agent/blob/0254279ed5987fbc69cebae245efaea33aec1ff2/docs/cl...

one year ago
0 Hi, I Want To Pass Environment Variables From The Host To The Docker Containers Running My Task. I Managed To Use

Hi ClumsyElephant70

extra_docker_shell_script: ["export SECRET=SECRET", ]

I think ${SECRET} will not get resolved; you have to put the literal text value there.
That said, it is a good idea to resolve it if possible, wdyt?

3 years ago
0 Hi, When Using The Logger.Report_Table() Method (

ETA for the next release is end of the month / early March; it is planned to include many other improvements 🙂

3 years ago
0 Hi Everybody. When I Want To Force The Agent To Not Reproduce My Local Pip Environment, I Add

My question is what should be the path to the requirements.txt file?
Is it relative to the repo base?

This is actually resolved at runtime (i.e. when the code runs), so it is relative to the working directory. Makes sense? (You can specify an absolute path, but that is probably something I would avoid hard-coding in the code base...)
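
For reference, a minimal sketch of how such a requirements file is typically wired in via the SDK (assuming the truncated call in the question is Task.force_requirements_env_freeze; the project/task names below are placeholders):

```python
from clearml import Task

# Must be called before Task.init(). The requirements_file path is resolved at
# runtime, i.e. relative to the current working directory when the script runs
# (an absolute path also works, but hard-coding one is usually best avoided).
Task.force_requirements_env_freeze(requirements_file="requirements.txt")

task = Task.init(project_name="examples", task_name="explicit requirements")  # placeholder names
```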

2 years ago
0 Could You Please Explain A Bit More How Trains Adapt The Torch Version Depending On The Installed Cuda Version? Here Is My Setup:

What I mean is that I don't need to have cudatoolkit installed in the current conda env, right?

Wait, are you using conda as the package manager?
EDIT: meaning, is conda configured as the package manager in trains.conf?

3 years ago
0 What Is The Method For Packages Exploration When Using Conda? Agent Is Set To 'Conda' Mode. We Upload A Task From A Local Conda Env That (Obviously) Has Some Pip Packages As Well. When We Enqueue The Task To Run Remotely, Not All Conda Packages Are Instal

CrookedWalrus33, from the log it seems the code is trying to use "kwcoco", but it is not listed under "Installed packages", nor is there any attempt to install it. Can you confirm?

2 years ago
0 Hi all :wave:! I got a problem regarding Grafana/Prometheus. When I deploy a model with clearml-serving and I add metrics like this: `clearml-serving --id *** metrics add --endpoint slm_POC --variable-scalar beds=0,1,5,10,50 bath=0,1,5,10,50 y=0,100000,50

Hi @MiniatureRobin9

I can still see the metrics in Grafana.

It will not delete them from Grafana; it just means they are no longer being collected. Makes sense?

one year ago
0 Hi, I Am Giving Another Try To Clearml-Session And I Am Blocked At The Current Error Shown When The Cli Try To Establish The Tunneling:

Sorry, what I meant is that it is not documented anywhere that the agent should run in docker mode, hence my confusion

This is a good point! I'll make sure we stress it (BTW: it will work with elevated credentials, but probably not recommended)

2 years ago
0 Hello I'M New Here, I Found This Error When Running This Command "Docker-Compose --Env-File Example.Env -F Docker-Compose-Triton.Yml Up". Actually, When I Run This Command For The First Time, It Worked. And Then When I Try To Change To My Friend'S Workspa

MoodyCentipede68, could it be that the model is in one account (workspace) and your credentials (the ones provided to the docker compose) are from another workspace?
The error itself points to the Triton helper failing to get the model ID from the backend. Models are uploaded to a specific workspace, and this looks like a mismatch (i.e. the model ID is nowhere to be found). Wdyt?
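
If it helps, a quick way to check from Python which workspace actually sees that model (a sketch only; the model ID is a placeholder, and it assumes the SDK is configured with the same credentials you passed to the docker compose):

```python
from clearml import InputModel

# Look up the model with the SAME credentials the serving containers use.
# If the lookup fails, that model ID does not exist in this workspace,
# which would explain the Triton helper error above.
model = InputModel(model_id="<your-model-id>")  # placeholder ID
print(model.name, model.url)
```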

2 years ago
0 Greetings And Hello

When is clearml-deploy coming to the open source release?

Currently available under clearml-serving (more features are being worked on, e.g. additional stats and backends)
https://github.com/allegroai/clearml-serving

3 years ago
0 Hi

Yey! BTW: what is the setup you are running it with? Does it include "manual" tasks? Do you also report on completed experiments (not just failed ones)? Do you filter by iteration numbers?

4 years ago
0 I .

Correct

2 years ago
0 Hi All, Is It Possible To Control The Number Of Steps Of The Pipeline During Run Time. Eg. If User Wants #N Parallel Steps In The Pipeline

but when we try to do a "New Run" from the UI, it tries to follow the DAG of the previous run (the run with all child nodes skipped), and the new run fails too.

This is odd, is this reproducible ? what's the clearml python package version ?

one year ago
0 Hi, How Can I Change The Project.Default_Output_Destination? I Tried Setting It To None But It Is Not Updated

Because of that, I cannot create a task in this project programmatically locally because it tries to access the bucket and fails. And there is no easy way to change the default output location (not in the web UI, not in the sdk)

JitteryCoyote63 hmm that is a pickle ...
let me check the code ...

one year ago
0 What's Different Between --Cpu-Only And --Services-Mode?

but I can't tell if that is the only way to use the services queue, or can I experiment with that?

UnevenOstrich23, I'm not sure what exactly the question is, but if you are asking whether this is limited, the answer is no, it is not limited to that use case.
Specifically, you can run as many agents in "services-mode" as you need, pulling from any queue(s), and they can run any Task that is enqueued on those queues. There is no enforced limitation. Did that answer the question?

3 years ago
0 Hi, I Am Trying To Upload A Plot To An Existing Task Using The

I generate some more graphs with a file called graphs.py and want to attach/upload them to this training task

Makes total sense to use Task.get_task; I just want to make sure you are aware of all the options, so you pick the correct one for you :)
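
A minimal sketch of the Task.get_task route (the project/task names and the plotly figure are placeholders for illustration):

```python
import plotly.express as px  # placeholder figure just for the example
from clearml import Task

# Re-open the existing training task and attach an extra plot to it.
task = Task.get_task(project_name="examples", task_name="my training task")  # placeholder names

fig = px.line(x=[0, 1, 2, 3], y=[0, 1, 4, 9])
task.get_logger().report_plotly(
    title="graphs.py output",  # shows up in the task's Plots tab
    series="extra graph",
    iteration=0,
    figure=fig,
)
```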

3 years ago