AgitatedDove14 (Moderator)
48 Questions, 8051 Answers
Active since 10 January 2023, last activity 7 months ago

A suggestion. Sometimes newcomers that join an existing project that uses ClearML forget to configure their ClearML for the organization's server, resulting in them launching experiments to the public cloud, possibly with sensitive data - I think that if y...

Mostly out of curiosity: what is the motivation behind introducing this as an environment-variable knob rather than a flag with some default in Task.init?

DepressedChimpanzee34 we will deprecate the demo server (not exactly sure when), as we have the free community one that gives better service and stores the data. It was originally set up for easy on-boarding and testing, but I think that now the user experience might be better using the community free tier.
Make sense? BTW: what ...
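
For context, the trade-off raised here can be sketched as follows; the knob name and defaults below are hypothetical, purely to illustrate the env-var vs. parameter styles:

```python
import os

# Hypothetical knob name, for illustration only - not ClearML's actual API.
# An environment variable can be flipped by ops/CI per machine, without
# touching version-controlled code:
DISABLE_DEMO = os.getenv("HYPOTHETICAL_DISABLE_DEMO", "0") == "1"

def task_init(disable_demo=None):
    """Stand-in for an init call that accepts the same switch as a parameter."""
    # An explicit flag wins; otherwise fall back to the environment default.
    effective = DISABLE_DEMO if disable_demo is None else disable_demo
    print(f"demo server disabled: {effective}")

task_init()                   # controlled by the environment
task_init(disable_demo=True)  # controlled explicitly in code
```

An env-var knob keeps the switch out of the code base, which matters when the concern is per-machine or per-organization configuration rather than per-script behavior.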

3 years ago
Hello everyone! Is it possible to deactivate package analysis for remote execution? I run my code with clearml-agent in docker mode with an nvidia:pytorch container. When ClearML is running inside the docker, the installed packages of the WebUI get updated. H...

clearml will register conda packages that cannot be installed if clearml-agent is configured to use pip. So although it is nice that a complete package list is tracked, it makes it cumbersome to rerun the experiment.

Yes, mixing conda & pip is not supported by ClearML (or by conda or pip, for that matter).
Even package version numbers might not exist on both.
We could add a flag to not update the pip freeze back; it's an easy feature to add. I'm just wondering about the exact use case.
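
Such a flag did not exist at the time of this exchange, but ClearML already exposes ways to pin exactly what gets recorded, which sidesteps the conda/pip auto-detection mix; a minimal sketch (the file name and package pins are placeholders, check the SDK docs for exact semantics):

```python
from clearml import Task

# Record packages from an explicit requirements file instead of auto-detecting
# the (possibly conda-polluted) environment; must be called before Task.init().
Task.force_requirements_env_freeze(requirements_file="requirements.txt")

# Or pin individual packages explicitly:
Task.add_requirements("torch", "1.13.1")

task = Task.init(project_name="examples", task_name="pinned-requirements")
```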

3 years ago
A suggestion. Sometimes newcomers that join an existing project that uses ClearML forget to configure their ClearML for the organization's server, resulting in them launching experiments to the public cloud, possibly with sensitive data - I think that if y...

well.. having the demo server by default lowers the effort threshold for trying ClearML and getting convinced it can deliver what it promises, and maybe testing some simple custom use cases. I...

This was exactly what we thought when we set it up in the first place 🙂
(I can't imagine the cost is an issue, probably maintenance/upgrades ...)
There is still support for the demo server, you just need to set the env key:
CLEARML_NO_DEFAULT_SERVER=0 python ...
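
For completeness, the same switch can be flipped from Python; a small sketch, assuming the SDK reads the variable no later than Task.init (exporting it in the shell, as above, is the safer route):

```python
import os

# Opt back in to the demo-server fallback before the SDK initializes.
os.environ["CLEARML_NO_DEFAULT_SERVER"] = "0"

from clearml import Task

task = Task.init(project_name="demo-test", task_name="hello demo server")
```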

3 years ago
We are facing performance issues with our self-hosted ClearML server. Looking at the CPU utilization / memory / networking we couldn't identify a bottleneck. We are at the moment using ~100 workers for some HPO, and the main performance issues we observe are...

Hi DepressedChimpanzee34, took me a while but I think there is a solution:
In your docker-compose file, replace:
https://github.com/allegroai/clearml-server/blob/a64c4d264d00eadd2d11818b37151d3cc6266d99/docker/docker-compose.yml#L5
with
entrypoint: /bin/bash
command: -c "mkdir -p /var/log/clearml && cd /opt/clearml/ && python3 -m apiserver.apierrors_generator && gunicorn -w 4 -t 600 --bind=0.0.0.0:8008 apiserver.server:app"

3 years ago
Hey all, hope you're all doing well. I'm running a self-deployed server (0.17, I think; where can you find the version in use?). I'm having trouble with the automatic plot capture. If I run...

Sure thing. Hopefully I'll remember to ping tomorrow once GitHub is synced; I'd appreciate it if you could verify the fix works 🙂

3 years ago
I want to run my ClearML task on an agent in K8s together with a memory profiler (maybe...

FiercePenguin76 in the Task's Execution tab, under "script path", change it to "-m filprofiler run catboost_train.py".
It should work (assuming "catboost_train.py" is in the working directory).
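
The same edit can also be scripted instead of done in the UI; a hedged sketch using the SDK's clone/enqueue flow (project, task, and queue names are placeholders, and passing a module-style entry point to set_script like this is an assumption mirroring the UI edit described above):

```python
from clearml import Task

# Clone the training task and point its entry point at the profiler module,
# mirroring the manual "script path" edit described above.
base = Task.get_task(project_name="my-project", task_name="catboost_train")
profiled = Task.clone(source_task=base, name="catboost_train (filprofiler)")
profiled.set_script(entry_point="-m filprofiler run catboost_train.py")
Task.enqueue(profiled, queue_name="default")
```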

3 years ago
A suggestion. Sometimes newcomers that join an existing project that uses ClearML forget to configure their ClearML for the organization's server, resulting in them launching experiments to the public cloud, possibly with sensitive data - I think that if y...

maybe worth updating the main Readme.md in the GitHub repo.. if someone tries to follow the instructions there, it breaks

Hmm, I thought we already did. Yes, you are absolutely correct, I'll make sure we do.

3 years ago
Hey, I'm running a pipeline, and 1 stage passed - but the next one failed. I fixed the bug for the second one - is there any way to retry the pipeline from the failure?

The pipeline stores the state of its previous run, specifically the executed steps.
In our case the executed step was reset (I assume), so it cannot find the output model you are referring to, hence the crash.
CleanPigeon16, make sense?

3 years ago
Hey, I'm running a pipeline, and 1 stage passed - but the next one failed. I fixed the bug for the second one - is there any way to retry the pipeline from the failure?

CleanPigeon16 coming very soon; we are adding a few features for the pipeline, and this one will also be included :)

3 years ago
Hey all, hope you're all doing well. I'm running a self-deployed server (0.17, I think; where can you find the version in use?). I'm having trouble with the automatic plot capture. If I run...

Okay let's see if I can reproduce it:
new conda env with python==3.8
install clearml==0.17.5rc5 matplotlib==3.3.4 numpy==1.20.1 seaborn==0.11.1
clone the repo
run: python examples/frameworks/matplotlib/matplotlib_example.py
Right?
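
For reference, the matplotlib example referenced above boils down to something like this minimal sketch (the repo's actual script may differ):

```python
import matplotlib.pyplot as plt
import numpy as np
from clearml import Task

# ClearML hooks matplotlib, so the figure below should be captured
# automatically and appear under the task's plots in the WebUI.
task = Task.init(project_name="examples", task_name="matplotlib repro")

x = np.linspace(0, 10, 200)
plt.plot(x, np.sin(x), label="sin(x)")
plt.legend()
plt.title("auto-captured figure")
plt.show()
```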

3 years ago
When we train the models, we often choose a checkpoint based on the validation accuracy, but test set accuracy (or specific class validation accuracy) is not necessarily the best for this checkpoint. Right now there are options to add columns with max and l...

Hi DilapidatedDucks58

e.g., we want the max validation accuracy and all other metric values for the corresponding epoch

Is this the equivalent of a nested sort?
Wouldn't you get the requested behavior if you added all the metric columns but sorted based on the "accuracy" column?

3 years ago
When we train the models, we often choose a checkpoint based on the validation accuracy, but test set accuracy (or specific class validation accuracy) is not necessarily the best for this checkpoint. Right now there are options to add columns with max and l...

Okay, I think I lost you...
DilapidatedDucks58 you mean detect at which "iteration" the max value was reported, and then extract all the other metrics for that iteration?
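
Programmatically, this "find the best iteration, then read everything else at it" idea can be sketched with the SDK's reported-scalars API (project, task, and metric names below are placeholders):

```python
from clearml import Task

task = Task.get_task(project_name="my-project", task_name="my-training-task")
# Returns {title: {series: {"x": [iterations...], "y": [values...]}}}
scalars = task.get_reported_scalars()

acc = scalars["accuracy"]["validation"]
best = max(range(len(acc["y"])), key=lambda i: acc["y"][i])
best_iter = acc["x"][best]

# Pull every other metric reported at that same iteration
for title, series_dict in scalars.items():
    for series, data in series_dict.items():
        if best_iter in data["x"]:
            value = data["y"][data["x"].index(best_iter)]
            print(f"{title}/{series} @ iteration {best_iter}: {value}")
```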

3 years ago