DisgustedBear75
Moderator
3 Questions, 11 Answers
  Active since 14 March 2023
  Last activity one year ago

Reputation: 0
Badges: 1
8 × Eureka!
0 Votes · 4 Answers · 566 Views
Hi guys! I'm adding a requirements.txt to a task before initializing it. Is this the way it's supposed to look in the UI? Shouldn't I directly see the ...
one year ago
0 Votes · 9 Answers · 556 Views
one year ago
0 Votes · 3 Answers · 552 Views
Hi, is it a well-known issue that running a task in a virtual environment messes up the reproducibility feature?
one year ago
0 Hi, Is This A Well Known Issue, That Running A Task In A Virtual Environment, Messes Up The Reproducibility Feature ?

The experiment fails.
Yes, this is my scenario.

Basically I'm using this command "Task.force_requirements_env_freeze(requirements_file='requirements.txt')" on a requirements file that I know works locally (if I set up a venv with that requirements file, the script runs).

But when I clone and rerun the experiment, ClearML isn't able to install the requirements (I checked, and the same Python version is used).
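
For context, a minimal sketch of where that call could sit in the script; the project/task names are made-up placeholders, and the freeze call is placed before Task.init so it takes effect:

from clearml import Task

# Pin the environment to the local requirements file instead of the
# auto-detected packages; called before Task.init().
Task.force_requirements_env_freeze(requirements_file='requirements.txt')

task = Task.init(project_name='examples', task_name='repro-test')  # placeholder names
# ... rest of the training script ...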

one year ago
0 Hi Everyone ! :) I'M Trying To Test The "One Click Reproducibility" Feature But It Keeps Failing. My Question Is On A High-Level: Is It Normal That This Happen, If Yes, What Are The Common Reasons That Make An Experiment Not One-Click Reproducible ?

I have a script. Before running it, I set up a venv, install the libraries from requirements.txt, then launch the script.
I then try to relaunch the experiment from the UI, but it keeps failing.
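
For reference, the script only needs a Task.init call for ClearML to record the run so it can later be cloned and relaunched; a minimal sketch with made-up project/task names and payload:

from clearml import Task
import numpy as np

# Registers the run with ClearML so it can be cloned and re-executed later.
task = Task.init(project_name='examples', task_name='repro-test')  # placeholder names

data = np.random.rand(100)
print('mean:', data.mean())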

one year ago
0 Hi Everyone ! :) I'M Trying To Test The "One Click Reproducibility" Feature But It Keeps Failing. My Question Is On A High-Level: Is It Normal That This Happen, If Yes, What Are The Common Reasons That Make An Experiment Not One-Click Reproducible ?

logs:

Successfully built numpy
Installing collected packages: numpy
Successfully installed numpy-1.23.5
WARNING: The directory '/Users/michaelresplandy/Library/Caches/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting matplotlib==3.5.3
Downloading matplotlib-3.5.3.tar.gz (35.2 MB)

1678725812818 Ordinateur-portabl...

one year ago
0 Hey, I Have A Question Regarding “Logger.Report_Table”, It’S Seems Like After The Table Is Drawn In The Ui I Cannot Change The Column Size And More Annoying I Cannot Select Content From Table To Copy It. Anyone Know What Params I Need To Pass In Order To

On my page they don't appear next to each other. If the table name is the same between two experiments, only one of the two tables shows (even if the values are different).
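
A minimal sketch of how the table could be reported with a distinct title/series per run so the tables don't collapse into one; the names and data below are made-up placeholders:

import pandas as pd
from clearml import Task

task = Task.init(project_name='examples', task_name='table-demo')  # placeholder names
logger = task.get_logger()

df = pd.DataFrame({'metric': ['accuracy', 'loss'], 'value': [0.91, 0.23]})

# Each distinct title/series pair shows up as its own table in the UI.
logger.report_table(title='results/run-a', series='summary', iteration=0, table_plot=df)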

one year ago
0 Hi, Is This A Well Known Issue, That Running A Task In A Virtual Environment, Messes Up The Reproducibility Feature ?

This is why I'm wondering whether running the initial experiment locally in a venv is the reason ClearML is struggling to reproduce the experiment.

one year ago