SweetBadger76
Moderator
1 Question, 239 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges: 1 (4 × Eureka!)
0 Votes 8 Answers 567 Views
Hello TartSeagull57. This is a bug introduced with version 1.4.1, for which we are working on a patch. The fix is currently in testing and should be released ver...
one year ago
0 Hey Guys, Is There An E2E Working Example Of Writing A Pipeline With 2-3 Tasks? Just An Hello World. I Am The First One Who Tries To Make Clearml Pipeline To Work I Wasn't Able To Make It:

An agent is a process that pulls tasks from a queue and assigns resources (workers) to them. In a pipeline, when it is not run locally, the steps are enqueued as tasks.

one year ago
0 Hey Guys, Is There An E2E Working Example Of Writing A Pipeline With 2-3 Tasks? Just An Hello World. I Am The First One Who Tries To Make Clearml Pipeline To Work I Wasn't Able To Make It:

You are in a regular execution, i.e. not a local one, so the different pipeline tasks have been enqueued. You simply need to fire up an agent to pull the enqueued tasks. I would advise you to specify the queue in the steps (parameter execution_queue).
You then start your agent:
clearml-agent daemon --queue my_queue
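For illustration, a minimal sketch of a controller whose steps are enqueued on my_queue (project, task and queue names are placeholders):

from clearml import PipelineController

pipe = PipelineController(name="my_pipeline", project="my_project", version="1.0.0")
pipe.add_step(
    name="step_1",
    base_task_project="my_project",
    base_task_name="task_1",
    execution_queue="my_queue",  # the agent started with --queue my_queue will pull this step
)
pipe.add_step(
    name="step_2",
    parents=["step_1"],
    base_task_project="my_project",
    base_task_name="task_2",
    execution_queue="my_queue",
)
pipe.start()  # the controller itself runs in the default "services" queue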

one year ago
0 Hello! I'm Running A Task For Which I Want To Log Several Checkpoints Of A Model. I Have A Reason To Save The Checkpoints In Different Folders Locally But Them Having The Same File Name. I Use

Hi SillySealion58
You can discriminate between your output models when you instantiate them: parameters such as name, tags or comment all belong to the OutputModel constructor.
That way you can keep the same filename for all the checkpoints and still differentiate them in the task. Does that make sense?
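For example, a minimal sketch (folder and file names are placeholders) where two checkpoints share the same filename but get distinct names and tags in the task:

from clearml import Task, OutputModel

task = Task.init(project_name="my_project", task_name="checkpoints_demo")

# first checkpoint, saved locally under folder_a/
model_a = OutputModel(task=task, name="checkpoint_a", tags=["folder_a"], comment="first checkpoint")
model_a.update_weights(weights_filename="folder_a/model.pt")

# second checkpoint: same filename, different folder, name and tag
model_b = OutputModel(task=task, name="checkpoint_b", tags=["folder_b"], comment="second checkpoint")
model_b.update_weights(weights_filename="folder_b/model.pt")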

one year ago
0 Hi All. I Was Using Clearml Server Hosted On A Box That I Reach Behind Traefik Using Alias For Web, File And Api. After Migration It Works Perfect For New Experiments. I Changed The Name Of The Alias From

Hi MotionlessCoral18
You need to run some scripts when migrating, to update your old experiments. I am going to try to find you some examples.

one year ago
0 Hi, I Would Like To Log Locally Each Link To My Experiments, How Can I Get The Link To The Experiment (The One Created At The Beginning Of The Run And Printed To The Console), From The Task Object? Is It Always Going To Be:

Hi DizzyHippopotamus13
Yes, you can generate a link to the experiments using this format.
However, I would suggest using the SDK, which is safer:
task = Task.get_task(project_name="xxx", task_name="xxx")
url = task.get_output_log_web_page()

Or in one line:
url = Task.get_task(project_name="xxx", task_name="xxx").get_output_log_web_page()

one year ago
0 I'm Trying To Get The Meta-Information About The Code (Section Execution) To Be Auto-Filled, However When I Run The Script With The Pycharm Testrunner, It Is Missing. If I Use

It is a bit old - I recommend testing again with the latest version, 1.4.1.
Can you please give me some more details about what you intend to do? It would then be easier to reproduce the issue.

one year ago
0 Hello! Is There Any Way To Download A Part Of Dataset? For Instance, I Have A Large Dataset Which I Periodically Update By Adding A New Batch Of Data And Creating A New Dataset. Once, I Found Out Mistakes In Data, And I Want To Download An Exact Folder/Ba

Hi TeenyBeetle18
If the dataset can basically be built from a local machine, you could use sync_folder (SDK: https://clear.ml/docs/latest/docs/references/sdk/dataset#sync_folder or CLI: https://clear.ml/docs/latest/docs/clearml_data/data_management_examples/data_man_folder_sync#syncing-a-folder ). You would then be able to modify any part of the dataset and create a new version containing only the items that changed.

There is also an option to download only parts of the dataset, have a l...
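For the sync approach, a minimal sketch (project, dataset and folder names are placeholders) that creates a child version and syncs a local folder into it:

from clearml import Dataset

parent = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
new_version = Dataset.create(
    dataset_project="my_project",
    dataset_name="my_dataset",
    parent_datasets=[parent.id],
)
new_version.sync_folder(local_path="/path/to/local/folder")  # only adds/removes the items that changed
new_version.upload()
new_version.finalize()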

one year ago
0 Hello! Is There Any Way To Download A Part Of Dataset? For Instance, I Have A Large Dataset Which I Periodically Update By Adding A New Batch Of Data And Creating A New Dataset. Once, I Found Out Mistakes In Data, And I Want To Download An Exact Folder/Ba

If the data is updated in the same local/network folder structure, which serves as the dataset's single point of truth, you can schedule a script that uses the dataset sync functionality to update the dataset based on the modifications made to the folder.

You can then modify precisely what you need in that structure and get a new, updated dataset version.
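Something like this could then be scheduled (project, dataset and folder names are placeholders):

clearml-data sync --project my_project --name my_dataset --folder /path/to/local/folder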

one year ago
0 Hi, Having Issue With Clearml Agent Not Installing Package Installed Directly From Github Using “Git+

Hey Ofir,
Did you try to put the repo in the decorator where you need the import?
If you can send me some code to illustrate what you are doing, it would help me reproduce the issue.
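As an illustration of what I mean, a sketch assuming the step is defined with PipelineDecorator.component (the repository URL and module name below are placeholders):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    return_values=["result"],
    packages=["git+https://github.com/user/private_package.git"],  # placeholder repo, installed with pip
)
def my_step():
    import private_package  # hypothetical module provided by the repo above
    return private_package.do_something()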

one year ago
0 Hi There, Is There An Option To Show Plots In The Order They Are Inserted To Clearml Instead Of Alphabetic Order Of Titles?

Hey Atalya 🙂

Thanks for your feedback. This is indeed a good feature to think about.
So far there is no ordering other than alphabetical. Could you please create a feature request on GitHub?

Thanks

one year ago
0 Is There A Way To Retrieve The Debug Samples Logged By Clearml (Or At Least Retrieve Their S3 Links) Via The Python Api?

Hi RattyLouse61,
Here is a code example; I hope it will help you better understand the backend_api.

from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import events

task = Task.get_task(project_name='xxxxx', task_name='xxxx')
session = Session()
res = session.send(events.GetDebugImageSampleRequest(
    task=task.id,
    metric='my_title',    # title of the debug samples
    variant='my_series',  # series of the debug samples
))
print(res.response_data)

one year ago
0 Hi, I Am Trying To Use The ParameterSet For Hyper-Parameter Tuning With Dependencies, An Example Of How I Use It: ParameterSet([{"prm1": 1, "prm2": 1}, {"prm1": 2, "prm2": 2}]) But I Get A Warning:

Last (very) little thing: could you please open a GitHub issue for this irrelevant warning 🙏? It makes sense to register these bugs on GH, because our code and releases are hosted there.
Thank you!
http://github.com/allegroai/clearml/issues

one year ago
0 Hi Everyone! Does Anyone Know If It Possible To Change The

Hi NonsensicalWoodpecker96
You can use the SDK 🙂

task = Task.init(project_name=project_name, task_name=task_name)
task.set_comment('Hi there')

one year ago
0 Hello, I've Been Reading The Docs Of HyperParameterOptimizer, And Various Questions In The Channel, But Couldn't Find An Answer. I Have A Working HPO Run, But Many Times Experiments Fail, For Uncontrollable Reasons. Is There A Way To Tell The Optimizer To

Hi NervousFrog58
Can you share some more details with us, please?
Do you mean that when an experiment fails, you would like a snippet that resets and relaunches it, the way you do through the UI?
Your ClearML package versions and your logs would be very useful too 🙂

one year ago
0 Hi Everyone, Quick Question Regarding Minio And Logging:

Hey ReassuredTiger98
Is there any update from your side?
I confirm that you need to put your key and secret in the credentials section of the configuration file. Like Idan, I left my policy configuration untouched.
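For reference, a minimal sketch of what that section of clearml.conf typically looks like when pointing to MinIO (host, key and secret are placeholders):

sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "my-minio-host:9000"  # MinIO endpoint, placeholder
                    key: "my-access-key"
                    secret: "my-secret-key"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}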

one year ago
0 Hi I'm Looking Into How Clearml Supports Datasets And Dataset Versioning And I'm A Bit Confused. Is Dataset Versioning Not Supported At All In The Non-Enterprise Or Is Versioning Available By A Different Mechanism? I See That

Hi PanickyMoth78
There is indeed a versioning mechanism available for the open source version 🎉

The datasets keep track of their "genealogy" so you can easily access the version that you need through its ID

In order to create a child dataset, you simply have to use the "parent_datasets" parameter when you create your dataset: have a look at
https://clear.ml/docs/latest/docs/clearml_data/clearml_data_sdk#datasetcreate

You can also squash datasets together to create a c...
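A small sketch of both options (IDs and names are placeholders):

from clearml import Dataset

# child version that inherits the parent's content
child = Dataset.create(
    dataset_name="my_dataset",
    dataset_project="my_project",
    parent_datasets=["<parent_dataset_id>"],
)

# or squash several versions into a single, independent dataset
squashed = Dataset.squash(dataset_name="my_dataset_squashed", dataset_ids=["<id_1>", "<id_2>"])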

one year ago
0 Hi, That I'm Running The Line Dataset = Clearml.Dataset.Get (Dataset_Project = 'Datasets', Dataset_Tags = ....) I Get: File "/Root/.Clearml/Venvs-Builds/3.8/Lib/Python3.8/Site-Packages/Clearml/Datasets/Dataset.Py", Line 1534, In Get Dataset_Id = Cls

Hi SparklingElephant70
The function doesn't seem to find any dataset whose project_name matches your request.
Some more detailed code showing how you create your dataset and how you try to retrieve it would help me better understand the issue 🙂
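For example, the project name used at creation time has to match the one used for retrieval (names and paths are placeholders):

from clearml import Dataset

ds = Dataset.create(dataset_name="my_dataset", dataset_project="Datasets")
ds.add_files("/path/to/data")
ds.upload()
ds.finalize()

# must use the same dataset_project / dataset_name to retrieve it
same_ds = Dataset.get(dataset_project="Datasets", dataset_name="my_dataset")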

one year ago