SweetBadger76 · Moderator
1 Question, 239 Answers
Active since 10 January 2023 · Last activity one year ago
Reputation: 0 · Badges: 4 × Eureka!

0 Votes · 8 Answers · 588 Views
Hello TartSeagull57. This is a bug introduced with version 1.4.1, for which we are working on a patch. The fix is currently in test and should be released ver...
one year ago
0 Hi guys, in the Web UI I see a Metadata tab for models. I've checked all the documentation and didn't find how I can update my model metadata from code. Any suggestions? Manual work in the Web UI is not interesting for me

Hi HandsomeGiraffe70
There is a way: the API. You can use it like this:
1. retrieve the task the model belongs to
2. retrieve the model you want (from a list of input and output models)
3. create the metadata
4. inject it into the model
Here is an example:

from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import models
from clearml.backend_api.services.v2_13.models import MetadataItem

task = Task.get_task(project_name=project_name, task_name=...

one year ago
0 Hi, is there any approach to record some experiment metric (e.g., accuracy) and display it in the experiment table so I can compare the metric among different experiments? The approach I found is

report_scalar permits manually reporting a scalar series; this is the dedicated function. There are other ways to report a scalar, for example through TensorBoard: in this case you report to TensorBoard, and ClearML will automatically pick up the values.

one year ago
0 Is there a way to automatically upload images that were uploaded with

Hi Alek
It should be auto-logged. Could you please give me some details about your environment?

one year ago
0 Hi, I would like to log locally each link to my experiments. How can I get the link to the experiment (the one created at the beginning of the run and printed to the console) from the Task object? Is it always going to be:

Hi DizzyHippopotamus13
Yes, you can generate a link to the experiments using this format.
However, I would suggest using the SDK, which is safer:
task = Task.get_task(project_name=xxx, task_name=xxx)
url = task.get_output_log_web_page()

Or in one line
url = Task.get_task(project_name=xxx, task_name=xxx).get_output_log_web_page()

one year ago
0 Hi all. I've been mistakenly using

Hi WittyOwl57,
The function is:
task.get_configuration_object_as_dict(name="name")
with task being your Task object.

You can find a number of similar functions in the docs. Have a look here: https://clear.ml/docs/latest/docs/references/sdk/task#get_configuration_object_as_dict

one year ago
0 Hi, I am getting this error when using the AWS auto_scaler service (with the Pro version):

Hi,
We are going to try to reproduce this issue and will update you asap

one year ago
0 Need

Can you also check that you can access the servers?
Try curl http://<my server>:port for your different servers, and share the results 🙂

one year ago
0 Hi folks, I have a question on

Hi ObedientToad56
The API will return raw objects, not dictionaries.
You can use the SDK instead. For example, if task_id is your pipeline's main task id, you can retrieve the configuration objects this way:

task = Task.get_task(task_id=task_id)
config = task.get_configuration_object_as_dict('Pipeline')
for k in list(config.keys()):
    print(f'Step {k} has job id {config[k]["job_id"]}')
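To sanity-check the parsing step without a running server, the same loop can be exercised on a plain dict standing in for the returned configuration (the step names and job ids below are hypothetical):

```python
# Hypothetical stand-in for task.get_configuration_object_as_dict('Pipeline'):
# the SDK returns a plain dict mapping step names to their properties.
config = {
    "train_step": {"job_id": "abc123"},
    "eval_step": {"job_id": "def456"},
}

# Same iteration pattern as in the snippet above
lines = [f'Step {k} has job id {v["job_id"]}' for k, v in config.items()]
print("\n".join(lines))
```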

one year ago
0 Can we use S3 buckets to cache environments?

Hey UnevenDolphin73
You can mount your S3 bucket into a local folder and point your clearml.conf file at that folder.
I used s3fs to mount my S3 bucket as a folder, then modified agent.venvs_dir and agent.venvs_cache
(as mentioned here https://clear.ml/docs/latest/docs/clearml_agent#environment-caching )
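A minimal sketch of that setup, assuming a bucket named my-clearml-bucket mounted at /mnt/clearml-cache (both names are hypothetical placeholders):

```
# mount the bucket with s3fs (credentials must already be configured)
s3fs my-clearml-bucket /mnt/clearml-cache

# then in clearml.conf, point the agent at the mounted folder:
agent {
    venvs_dir: /mnt/clearml-cache/venvs-builds
    venvs_cache: {
        path: /mnt/clearml-cache/venvs-cache
    }
}
```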

one year ago
0 Hi everybody, I'm getting errors with automatic model logging on PyTorch (running on a Dockered agent).

It works locally and not on a remote execution: can you check that the machine the agent is executed from is correctly configured? The agent there needs to be provided with the correct credentials. The autolog uses the file extension to determine what it is reporting: can you try to use the regular .pt extension?

one year ago
0 Hello, I've been reading the docs of HyperParameterOptimizer, and various questions in the channel, but couldn't find an answer. I have a working HPO run, but many times experiments fail, for uncontrollable reasons. Is there a way to tell the optimizer to

Hi NervousFrog58
Can you share some more details with us, please?
Do you mean that when you have an experiment failing, you would like a snippet that resets and relaunches it, the way you do through the UI?
Your ClearML package versions and your logs would be very useful too 🙂

one year ago
0 Hi everyone! Does anyone know if it is possible to change the

Hi NonsensicalWoodpecker96
You can use the SDK 🙂

task = Task.init(project_name=project_name, task_name=task_name)
task.set_comment('Hi there')

one year ago
0 Need

Can you show the logs?

one year ago
0 Hey,

Regarding the file extension, it should not be a problem.

one year ago
0 Hello! Is there any way to download a part of a dataset? For instance, I have a large dataset which I periodically update by adding a new batch of data and creating a new dataset. Once, I found out mistakes in data, and I want to download an exact folder/ba

Hi TeenyBeetle18
If the dataset can basically be built from a local machine, you could use sync_folder (SDK https://clear.ml/docs/latest/docs/references/sdk/dataset#sync_folder or CLI https://clear.ml/docs/latest/docs/clearml_data/data_management_examples/data_man_folder_sync#syncing-a-folder ). Then you would be able to modify any part of the dataset and create a new version, with only the items that changed.

There is also an option to download only parts of the dataset, have a l...

one year ago
0 Hello! Is there any way to download a part of a dataset? For instance, I have a large dataset which I periodically update by adding a new batch of data and creating a new dataset. Once, I found out mistakes in data, and I want to download an exact folder/ba

If the data is updated in the same local/network folder structure, which serves as the dataset's single point of truth, you can schedule a script that uses the dataset sync functionality to update the dataset based on the modifications made to the folder.

You can then modify precisely what you need in that structure and get a new, updated dataset version.

one year ago
0 Hi. When using the Logger's

In the meantime, it is also possible to create a figure that contains two or more histograms, and then report it to the logger using report_plotly.
You can have a look there :
https://plotly.com/python/histograms/#overlaid-histogram
https://plotly.com/python/histograms/#stacked-histograms

import numpy as np
import plotly.graph_objects as go

log = task.get_logger()

x0 = np.random.randn(1500)
x1 = np.random.randn(1500) - 1

fig = go.Figure()

fig.add_trace(go.Histogram(y=x0))
fig.add_trace(go.Histogram(y=x1))

fig.update_layout(barmode='overlay') ...

one year ago
0 Hi, I have a local package that I use to train my models. To start training, I have a script that calls

You can force the agent to install only the packages that you need using a requirements.txt file. Type in what you want the agent to install (PyTorch and eventually clearml). Then call this function before Task.init:
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')
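For reference, a minimal requirements.txt for that setup could look like this (the version is just a placeholder; pin it to match your local environment):

```
torch==1.10.0
clearml
```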

one year ago
0 Hi, I have a local package that I use to train my models. To start training, I have a script that calls

You can freeze your local env and thus get all the installed packages. With pip (on Linux) it would be something like:
pip freeze > requirements.txt
(doc here https://pip.pypa.io/en/stable/cli/pip_freeze/ )

one year ago
0 Hi, I have a local package that I use to train my models. To start training, I have a script that calls

Hey H4dr1en
You just specify the packages that you want installed (no need to specify the dependencies), and the version if needed.
Something like:

torch==1.10.0

one year ago
0 Hi, I have a local package that I use to train my models. To start training, I have a script that calls

Hi
Could you please share the logs for that issue (without the credentials 🙂)?

one year ago