ShinyLobster84 (Moderator)
5 Questions, 14 Answers
Active since 10 January 2023 · Last activity one year ago
Reputation: 0 · Badges: 1 (11 × Eureka!)
0 Votes · 4 Answers · 545 Views
Hi, When I read an argument from the configuration tab in the ClearML using the get_parameter() function, does it always read it as an str at any case? or fo...
2 years ago
0 Votes · 4 Answers · 616 Views
Hi, is there a way to get the plot that exists in the tab results->scalars into my notebook ?
2 years ago
0 Votes · 8 Answers · 600 Views
2 years ago
0 Votes · 4 Answers · 554 Views
Hi again:) Is there a similar way like get_parameter() to extract the COMPLETED AT parameter from INFO tab ?
2 years ago
0 Votes · 3 Answers · 499 Views
Hi 🙂 Is there a code line (like task.get_reported_scalars() ) that can download a table from ClearML (and it is located in results->plots) directly to the J...
2 years ago
0 Hi, When I Read An Argument From The Configuration Tab In The ClearML Using The

I would like to run a pipeline controller from a ClearML draft. One of the arguments I want to define is a flag that tells whether or not to include external data. I know I could use a str argument for it too, but I think a boolean argument would be more correct here.
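A minimal sketch of reading such a flag back as a boolean, assuming values edited in the configuration tab come back from get_parameter() as strings (the parameter and project names are placeholders, not from the original thread):

```python
def str2bool(value):
    """Interpret a string flag (as returned by Task.get_parameter) as a boolean."""
    return str(value).strip().lower() in ("1", "true", "yes", "y", "on")

# Usage against a live ClearML server (names are placeholders):
# from clearml import Task
# task = Task.init(project_name="demo", task_name="flag-demo")
# use_external_data = str2bool(
#     task.get_parameter("General/use_external_data", default="false")
# )
```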

2 years ago
0 Hi

Hi SuccessfulKoala55 CostlyOstrich36 ,
Thanks for your answers!
The code you wrote here was very helpful; I used it and succeeded in extracting the relevant table from its output directly into my notebook :). There are cases where I want to investigate the outputs I saved in ClearML. In my case it was a table that I wanted to filter and get specific values from, so this functionality helps me analyze the results (I know I can download the JSON to my computer and then upload it back to my no...
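For reference, a hedged sketch of pulling such a table apart once fetched; it assumes task.get_reported_plots() returns entries whose "plot" field is a Plotly-figure JSON string containing a table trace (header/cells), which may differ across ClearML versions:

```python
import json

def table_plot_to_rows(plot_entry):
    """Parse one reported-plot entry (a dict with a Plotly JSON string under
    'plot') into (header, rows) for a table trace. The payload structure is
    an assumption about the ClearML plot format."""
    fig = json.loads(plot_entry["plot"])
    for trace in fig.get("data", []):
        if trace.get("type") == "table":
            header = trace["header"]["values"]
            columns = trace["cells"]["values"]  # Plotly stores table cells column-wise
            rows = list(zip(*columns))          # transpose columns into rows
            return header, rows
    return None, []

# Usage against a live ClearML server (task id is a placeholder):
# from clearml import Task
# task = Task.get_task(task_id="<task_id>")
# for entry in task.get_reported_plots():
#     print(table_plot_to_rows(entry))
```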

2 years ago
0 Hi, Is There A Way To Get The Plot That Exists In The Tab Results->Scalars Into My Notebook ?

Yes, I meant to a Jupyter notebook. For example, I can download an artifact from ClearML as a local copy using:
```
preprocess_task = Task.get_task(task_id='preprocessing_task_id')
local_csv = preprocess_task.artifacts['data'].get_local_copy()
```
and then I can load it again in my notebook.
Is there a similar code line for downloading the plots and scalars information that exists in the results?
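A hedged sketch of pulling scalar curves into plain Python; it assumes task.get_reported_scalars() (the call mentioned earlier in this thread) returns a nested {title: {series: {"x": [...], "y": [...]}}} dict:

```python
def scalars_to_rows(scalars):
    """Flatten {title: {series: {'x': [...], 'y': [...]}}} into
    (title, series, x, y) tuples, ready for a DataFrame or a plot."""
    rows = []
    for title, series_dict in scalars.items():
        for series, points in series_dict.items():
            for x, y in zip(points.get("x", []), points.get("y", [])):
                rows.append((title, series, x, y))
    return rows

# Usage against a live ClearML server (task id is a placeholder):
# from clearml import Task
# task = Task.get_task(task_id="<task_id>")
# print(scalars_to_rows(task.get_reported_scalars()))
```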

2 years ago
0 Hi, I'm Trying To Run A Pipeline Object Using ClearML Agent On Mac OS And It Fails. I See In The Error I Get That It Doesn't Succeed To Install The Relevant Packages. When I Run Each Script Separately, It Works Well; And When I Run The Pipeline In Linux,

I just replaced the project name with XXXXX when I copied and pasted the error here; the original project name is the same as the name below [tool.poetry] in pyproject.toml.

2 years ago
0 Hi, I'm Trying To Run A Pipeline Object Using ClearML Agent On Mac OS And It Fails. I See In The Error I Get That It Doesn't Succeed To Install The Relevant Packages. When I Run Each Script Separately, It Works Well; And When I Run The Pipeline In Linux,

Hi, we succeeded in solving this issue. The problem was that the src main folder of the project was defined in the poetry requirements file, and ClearML was looking for this "package".

2 years ago
0 Hi, I'm Trying To Run A Pipeline Object Using ClearML Agent On Mac OS And It Fails. I See In The Error I Get That It Doesn't Succeed To Install The Relevant Packages. When I Run Each Script Separately, It Works Well; And When I Run The Pipeline In Linux,

```
Collecting jupyter-core==4.7.1
  Using cached jupyter_core-4.7.1-py3-none-any.whl (82 kB)
Collecting jupyter-highlight-selected-word==0.2.0
  Using cached jupyter_highlight_selected_word-0.2.0-py2.py3-none-any.whl (11 kB)
Collecting jupyter-latex-envs==1.4.6
  Using cached jupyter_latex_envs-1.4.6.tar.gz (861 kB)
Collecting jupyter-nbextensions-configurator==0.4.1
  Using cached jupyter_nbextensions_configurator-0.4.1.tar.gz (479 kB)
Collecting jupyterlab-pygments==0.1.2
  Using cached jupy...
```

2 years ago
0 Hi Again:) Is There A Similar Way Like

I'm not sure how to extract this information.
I'm trying to get the "completed at" information (in my case Mar 3 2022 12:14, as shown in the pic I attached) via the API into my Python script.
The closest thing I found is the following:
```
task = clearml.Task.get_task(task_id=<task_id>)
task.comment
```
which gives me the result:
'Auto-generated at 2022-03-01 06:27:52 UTC by havi@dgroup'
This solution can be good too, but if there is a way to extract the specific information I...
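For context, a hedged sketch of reading the completion time through the task's backend data object; the task.data.completed field is an assumption about the SDK's internal task model and may differ across versions:

```python
def get_completed_at(task):
    """Return the task's completion timestamp from the backend task object.
    Works on any object exposing .data.completed (e.g. a clearml Task);
    the attribute path is an assumption, not confirmed by this thread."""
    return task.data.completed

# Usage against a live ClearML server (task id is a placeholder):
# from clearml import Task
# task = Task.get_task(task_id="<task_id>")
# print(get_completed_at(task))
```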

2 years ago
0 How Come

Hi Martin,
I upgraded the ClearML version to 1.1.1 and updated the pipeline code to v2 as you wrote here, and I got a new error which I hadn't gotten before.
Just noting that I did a git push beforehand.
Do you know what can cause this error?
Thanks!
```
version_num = 1c4beae41a70c526d0efd064e65afabbc689c429
tag =
docker_cmd = ubuntu:18.04
entry_point = tasks/pipelines/monthly_predictions.py
working_dir = .
Warning: could not locate requested Python version 3.8, reverting to version 3.6
c...
```

2 years ago
0 How Come

And I'm getting this in the command line when the process fails:
```
/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 73 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```

2 years ago
0 How Come

Yes, it is a common case, but I think pipe.start_locally(run_pipeline_steps_locally=False) solved it. I started running it again and it seems to have passed the phase where it failed last time.

2 years ago