AgitatedDove14
Moderator
49 Questions, 8124 Answers
0 It Seems Like Clearml Agent Does Not Support Argparse Subparsers, Right?

Just to make sure I understand: running locally creates the Args/command correctly, and execute_remotely also stores the correct Args/command, but when the agent actually executes the task on the remote machine it updates the Args/command back to a list. Is that a correct description?

4 years ago
0 If I Set

(it will just create a new venv and install everything you need in the venv)

4 years ago
0 Hi There! Is There An Easy Way To Retrieve The Site-Package Directory That Was Created By An Agent From Inside A Task? Eg.

in my repo I maintain a bash script to setup a separate python env.

Hmm, interesting. Now I have to wonder what the difference is, i.e. why doesn't the agent build a similar env based on the requirements?

2 years ago
0 Hi Fam! Sorry For The Potential Dumb Question, But I Couldn’T Find Anything On The Interwebs About It. I’M Hosting A Clearml Server On Aws, Using S3 As A Backend For Artifact Storage. I Find That Whenever I Delete Archived Artifacts In The Web App, I Get

...issues a delete command to the ClearML API server,...

Almost, it issues the boto S3 delete commands (directly to the S3 server, not through the clearml-server)

And that I need to enter an AWS key/secret in the profile page of the web app here?

correct

3 years ago
0 Hi All, Is There A Way To Schedule The Tasks From The Queue Onto The Gpu Instances Based On Factors Such As Gpu Utilisation, Number Of Cpu Cores Present, Free Memory Or Custom Parameters Such As Priority Of The Task, Estimated Time Etc?

I am trying to see if the user can submit a list of resource requirements (e.g. 4 GPUs, 12 cores, 100GB disk space)

This will be quite easy to implement using the clearml k8s glue: just use user-properties and change the template based on them. I can point you to where you need to modify the code.
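
For illustration, a minimal sketch of the user-properties side (the property names gpus/cpu_cores/disk_gb and the queue name are made up here, and the k8s glue template would still need the matching modification):

from clearml import Task

task = Task.init(project_name="examples", task_name="big-training-job")
# set_user_properties attaches arbitrary key/value pairs to the task;
# a modified k8s glue could read these and pick a pod template accordingly
task.set_user_properties(gpus="4", cpu_cores="12", disk_gb="100")
task.execute_remotely(queue_name="k8s_glue_queue")  # placeholder queue name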

4 years ago
0 Hello There, I Am Trying To Organize The Dl Code Into A Monorepo, The Repo Will Have A Section Of Shared Packages That Will Be Used By Other Packages That Are The Actual Training Projects. Let'S Say That I Install The Shared Libs With Pip In Editable Mod

But I am starting to wonder whether it would be easier just changing sys.path in the scripts that use the sibling libs.

That depends, how would the sibling packages get to a remote machine?
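
For what it's worth, the sys.path approach would look roughly like this (a sketch assuming a monorepo with a shared_libs/ directory at the repo root; it only helps remotely if the agent clones the whole repo):

import sys
from pathlib import Path

# assume this file lives at repo/projects/train_job/train.py
repo_root = Path(__file__).resolve().parents[2]
# make the sibling shared packages importable
sys.path.insert(0, str(repo_root / "shared_libs"))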

2 years ago
0 Hi Folks, I Have A Question Related To The Storage Of Artifacts, As It Is Not Entirely Clear To Me Where To Configure It. If I Read The Documentation

when I run it on my laptop...

Then yes, you need to set the default_output_uri in your laptop's clearml.conf (just like you set it on the k8s glue).
Makes sense?
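
As a sketch, the per-task equivalent of that clearml.conf setting (the key is sdk.development.default_output_uri) is the output_uri argument; the bucket below is a placeholder:

from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="laptop-run",
    output_uri="s3://my-bucket/clearml",  # placeholder bucket
)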

3 years ago
0 Hello, I Am Trying To Run Some Algorithm In My Docker Container With Clearml Task . But The Algorithm Uses Ros, So I Need Somehow To Setup Environment Before Run It And Launch

I'm trying to get a task to run using a specific docker image and to source a bash script before executing the python script.

Are you running an agent in docker mode? If so, you should be able to see the output of your bash script first thing in the log
(and it will appear in the docker CMD)
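
One way to wire this up from code, sketched under the assumption of an agent running in docker mode (the image name and ROS path are placeholders; whether environment changes from the sourced script persist into the python process depends on the agent version):

from clearml import Task

task = Task.init(project_name="examples", task_name="ros-job")
# the setup script runs inside the container before the python script
task.set_base_docker(
    docker_image="ros:noetic",  # placeholder image
    docker_setup_bash_script=["source /opt/ros/noetic/setup.bash"],
)
task.execute_remotely(queue_name="default")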

2 years ago
0 Hi All, Is There A Way To Schedule The Tasks From The Queue Onto The Gpu Instances Based On Factors Such As Gpu Utilisation, Number Of Cpu Cores Present, Free Memory Or Custom Parameters Such As Priority Of The Task, Estimated Time Etc?

The idea of queues is, on the one hand, not to give users too much freedom, and on the other, to allow maximum flexibility & control.
The granularity offered by K8s (and that you specified) is sometimes way too detailed for a user. For example, I know I want 4 GPUs, but 100GB disk-space? No idea, just give me 3 levels to choose from (if any; actually I would prefer a default that is large enough, since this is by definition for temp cache only). The same argument goes for the number of CPUs.
Ch...

4 years ago
0 Is There A Nicer Way To Program The Color For Report_Scalar? By Default It Use A Color Scheme That Is Very Hard To Compare When I Have Multiple Lines. I Can Change It Manually But I Do Not Want To Repeat It For Every Experiment.

Hi EnviousStarfish54
Color coding in the entire UI is stored per user (I think in your local cookies, but I might be wrong). Anyhow, any title/series combination will have the selected color regardless of the project.
This way you can configure once that loss is red and accuracy is green, etc.

4 years ago
0 Should Dataset Triggers Also Be Activated If There Is No Trigger Condition Except Dataset_Project And A New Task Starts In That Project? Is This Expected Behavior?

Nice, that seems to be the issue. Any chance you can open a GitHub issue, so we do not lose track of it?

3 years ago
0 Hey.

TenseOstrich47 as long as the machine running the agent has credentials to your ECR, when the agent runs any docker container it will be able to pull it. There is no need to manually change anything; notice the Task itself contains the name of the image it will use.

3 years ago
0 Hi, Expanding On

DeliciousBluewhale87 Yes, I think so; do notice that you might end up with a maximum of 12 pods.
You can also do the following with max 10 nodes (notice --queue can always take a list of queues; tasks are pulled based on the order of the queues):
python k8s_glue_example.py --queue high_priority_q low_priority_q --ports-mode --num-of-services 10

4 years ago
0 How To Use

MelancholyElk85 what do you have under "Installed Packages" for this specific Task?

3 years ago
0 Could You Please Explain A Bit More How Trains Adapt The Torch Version Depending On The Installed Cuda Version? Here Is My Setup:

No worries, cudatoolkit is not part of it. "trains-agent" will create a new clean venv for every experiment, and by default it will not inherit the system packages.
So basically I think you are "stuck" with the cuda drivers you have on the system.

4 years ago
0 Um, Is There A Way To Delete An Artifact From A Task That Is Running?

VexedCat68
Delete the uploaded file, or the artifact from the Task?

3 years ago
0 Hey There, Happy New Year To All Of You

Did you experience any drop in performance using forkserver?

No, seems to be working properly for me.

If yes, did you test the variant suggested in the pytorch issue? If yes, did it solve the speed issue?

I haven't tested it; that said, it seems like a generic optimization of the DataLoader.

4 years ago
0 Hello! Since Today I Get

'conda --version'

4 years ago
0 Could You Please Explain A Bit More How Trains Adapt The Torch Version Depending On The Installed Cuda Version? Here Is My Setup:

You can set torch to be installed last:
post_packages: ["horovod", "torch"]
This will make sure trains-agent installs the torch version (the one you specified in "Installed Packages") last.

4 years ago
0 Is There An Easy Way To Add A Link To One Of The Tasks Panels? (As An Artifact, Configuration, Info, Etc)? Edit: And Follow Up Regarding The Dataset. As Discussed Somewhere Previously, The Datasets Are Now Automatically Moved To A Hidden "Sub-Project" Pr

Hi UnevenDolphin73

Is there an easy way to add a link to one of the tasks panels? (as an artifact, configuration, info, etc)?

You can add a link as an artifact, that is probably the easiest:
task.upload_artifact(name="just link", artifact_object=" ")
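
A slightly fuller sketch of the same call (the URL is a placeholder, since the original link was stripped from this thread):

from clearml import Task

task = Task.init(project_name="examples", task_name="link-artifact-demo")
# upload_artifact with a string stores the string; per the suggestion above,
# the link then appears in the task's Artifacts panel
task.upload_artifact(name="just link", artifact_object="https://example.com/report")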

EDIT: And follow up regarding the dataset. As discussed somewhere previously, the datasets are now automatically moved to a hidden "sub-project" prefixed with .datasets. This creates several annoyances that I...

3 years ago
0 Hi There - I Am Attempting To Use The Hp Optimization Feature, But Keep Getting The Following Error:

Hi CharmingBeetle38
On the base task, do you see those arguments under the Configuration tab?
Also, if they are under the Args section, you should add the "Args/" prefix to the HP optimization parameters (this is how you differentiate between the sections).
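
To make the prefix concrete, a hedged sketch (the task ID, metric names, and range are placeholders, assuming the base task exposes an argparse "lr" argument and reports a "validation"/"loss" scalar):

from clearml.automation import HyperParameterOptimizer, UniformParameterRange

optimizer = HyperParameterOptimizer(
    base_task_id="<base-task-id>",  # placeholder
    hyper_parameters=[
        # "Args/" targets the Args section of the base task's hyperparameters
        UniformParameterRange("Args/lr", min_value=1e-4, max_value=1e-1),
    ],
    objective_metric_title="validation",  # placeholder metric
    objective_metric_series="loss",
    objective_metric_sign="min",
    execution_queue="default",
)
optimizer.start()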

4 years ago
0 Hello, I Would Like To Optimize Hparams Saved In Configuration Objects. I Used Hydra And Omegaconf For Hparams Definition (See Img). How Should I Define The Name Of Hparam In

CurvedHedgehog15 there is no need for:
task.connect_configuration(configuration=normalize_and_flat_config(hparams), name="Hyperparameters")
Hydra is automatically logged for you, no?!

3 years ago
0 Hey, I Have A Question Regarding “Logger.Report_Table”, It’S Seems Like After The Table Is Drawn In The Ui I Cannot Change The Column Size And More Annoying I Cannot Select Content From Table To Copy It. Anyone Know What Params I Need To Pass In Order To

DisgustedBear75 I think this was a UI bug; they are just releasing a new version that fixes it (i.e. server version). Are you running a self-hosted server?

2 years ago
0 Hello! Since Today I Get

Hi ReassuredTiger98, when you get to it...
please download the wheel, then install it with

pip3 install -U clearml_agent-0.17.3rc0-py3-none-any.whl

Then run the daemon with the additional --debug argument, basically:

clearml-agent --debug daemon --foreground ...

Once the agent is running please send the Task's log from your console 🙂

4 years ago
0 <image>

Let me know if it solved it, if it did I'll make sure we push the RC

4 years ago