CostlyOstrich36
Moderator
0 Questions, 4210 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
0 The Comparison Page Seems To Resize The Experiments So That All Tags Will Fit In The Screen, But Then The Experiments Are Pretty Much Impossible To Compare

Huh, what an interesting issue! I think you should open a GitHub issue for this so it can be followed up on.

If you remove the tags, does the page resize back?

3 years ago
0 Hello Channel, I Am Struggling A Lot On An Issue Linked To

@<1556812486840160256:profile|SuccessfulRaven86>, did you install poetry on the EC2 instance or inside the Docker container? Basically, where did you put the poetry installation bash script: in the autoscaler's 'init script' section, or in the task's 'setup shell script' in the execution tab (that one is basically the script that runs inside the Docker container)?

It sounds like you're installing poetry on the EC2 instance itself, but the experiment runs inside a Docker container.

2 years ago
0 From

I think you can report these statistics manually to the Dataset
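As a rough sketch of what that could look like (the dataset lookup, labels, and numbers below are purely illustrative), you can send reports to the dataset's own logger:

```python
import pandas as pd
from clearml import Dataset

# Illustrative lookup -- replace with your own dataset project/name or ID
dataset = Dataset.get(dataset_project="examples", dataset_name="my-dataset")

# Report custom statistics through the dataset's logger
logger = dataset.get_logger()
logger.report_table(
    title="Label distribution",
    series="train split",
    iteration=0,
    table_plot=pd.DataFrame({"label": ["cat", "dog"], "count": [120, 80]}),
)
logger.report_scalar(title="dataset size", series="num samples", value=200, iteration=0)
```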

2 years ago
0 Hi! How To Fix This Error With Response?

Hi AbruptHedgehog21, what are you trying to do when you get this message? Are you running a self-hosted server?

3 years ago
0 I Setup A Dedicated Mongo Instance With A

@<1576381444509405184:profile|ManiacalLizard2> , why not run it as docker compose?

2 years ago
0 Also (Unrelated), I Noticed That After The Upgrade To Clearml Server 1.2.0, The Aws (Minio) Credentials Are Not Saved/Used. It Keeps Asking For Them Whenever I Switch To Debug Samples.

UnevenDolphin73, I've encountered a similar issue with S3. I believe it's going to be fixed in the next release 🙂

3 years ago
0 I'Ve Been Seeing This Message And Similar Messages A Lot In Some Of My Tasks Lately... Any Ideas?

Can you check on the Docker containers and see if they're all up and running?

3 years ago
0 Has Anybody Encountered:

Happy to help 🙂

2 years ago
0 Hi, I'm Struggling To Solve This Issue Of "Python Virtual Environment Cache Is Disabled. To Accelerate Spin-Up Time Set `Agent.Venvs_Cache.Path=~/.Clearml/Venvs-Cache`". Anyone With A Similar Issue That Was Resolved?

I think this is referring to your configuration file ~/clearml.conf. Follow the instructions in the message to remove it, or you can just ignore it.
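For reference, a sketch of the relevant section of ~/clearml.conf if you do want to enable the cache instead (the values shown are typical defaults; adjust the path to your setup):

```
agent {
    venvs_cache: {
        # setting this path enables the virtual environment cache
        path: ~/.clearml/venvs-cache
        max_entries: 10
        # minimum free space (GB) to keep on the drive holding the cache
        free_space_threshold_gb: 2.0
    }
}
```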

3 years ago
0 Hi! I’Ve Got This Error

Hi ReassuredArcticwolf33 , what are you trying to do and how is it being done via code?

3 years ago
0 In

And when you run pip show clearml, the version is 1.6.4?
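If in doubt, the version that actually gets imported can also be checked from Python (trivial sketch; the expected value comes from the thread above):

```python
import clearml

# Should match what pip show clearml reports, e.g. 1.6.4
print(clearml.__version__)
```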

2 years ago
0 Hi There, Maybe This Was Already Asked But I Don'T Remember: Would It Be Possible To Have The Clearml-Agent Switch Between Docker Mode And Virtualenv Mode At Runtime, Depending On The Experiment

Hi JitteryCoyote63, I don't believe this is possible. You might want to open a GitHub feature request for this.

I'm curious, what is the use case? Why not set some default Python Docker image at the agent level, and then, when you need a specific image, set it in the experiment configuration?
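For illustration, a hedged sketch of setting a default image at the agent level in ~/clearml.conf (the image name is just an example); a specific experiment can still override it via its own container/base-docker setting:

```
agent {
    default_docker: {
        # used whenever the experiment does not specify its own image
        image: "python:3.9"
    }
}
```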

2 years ago
0 Hi, When Running A Training Script From Pycharm, It Seems That Clearml Logs Only Those Packages That Are Explicitly Imported By My .Py Files; It Seems To Not Take The Packages That Are In The Requirements.Txt. My Training Uses Keras

RoughTiger69, you can also use Task.add_requirements for a specific package through the script.

Examples:
Task.add_requirements('tensorflow', '2.4.0')
Task.add_requirements('tensorflow', '>=2.4')
Task.add_requirements('tensorflow')      -> use the installed tensorflow version
Task.add_requirements('tensorflow', '')  -> no version limit
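A minimal sketch of how this is typically wired into a training script (project and task names are illustrative; note that add_requirements must be called before Task.init):

```python
from clearml import Task

# Record the requirement before the task is created
Task.add_requirements("tensorflow", ">=2.4")

task = Task.init(project_name="examples", task_name="keras-training")
```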

4 years ago
0 Hey All. Wanting To Log

Hi @<1674226153906245632:profile|PreciousCoral74> , you certainly can, just use the Logger module 🙂
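A minimal sketch of reporting through the Logger (project/task names and values are illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="logging-demo")
logger = task.get_logger()

# Scalars show up under the task's Scalars tab
logger.report_scalar(title="loss", series="train", value=0.42, iteration=1)

# Free-form text goes to the task's console log
logger.report_text("custom log line")
```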

one year ago
0 Hello Everyone, I Am Hosting My Own Clearml Server And It Is Great. However, I Ran Into A Problem Where One Of My Projects Has Become "Hidden" And I Have No Idea How. This Has Created Some Issues Where I Can No Longer Pull From The Project. How Do I Unhid

Hi @<1547028031053238272:profile|MassiveGoldfish6> , do you have any idea what might have caused the project to become hidden?

You can "unhide" the project via API, there is a system tag "hidden" that you can remove to unhide

one year ago
0 Has Anybody Encountered:

I think this is due to Optuna itself. It will automatically kill (prune) experiments it doesn't think will yield good results.

2 years ago
0 Hey, I'M Using Clearml Gcp Autoscaler And It Seems That

Can you provide a full log of the VM when it's spun up manually vs. when it's spun up by the autoscaler? Also, I'd try spinning up a VM manually, running an agent on it manually, and seeing if the issue reproduces.

8 months ago
0 From What I Can See, There Is No Task Status

Hi RotundHedgehog76, from an API perspective I think you are correct.

2 years ago