SweetBadger76
Moderator
1 Question, 239 Answers
  Active since 10 January 2023
  Last activity 2 years ago

Reputation: 0
Badges: 1 (4 × Eureka!)
0 Votes 8 Answers 1K Views
Hello TartSeagull57. This is a bug introduced with version 1.4.1, for which we are working on a patch. The fix is actually in test and should be released ver...
2 years ago
0 Hey, so I'm trying to upload an artefact to ClearML's fileserver (I have a self-hosted ClearML server running). I've uploaded the file using StorageManager.upload_file(path, url), giving the url as “

Hi WickedElephant66
When you are in the Projects section of the WebApp (second icon on the left), enter either "All Experiments" or any project you want to access. At the top center is the Models section. You can find the URL the model can be downloaded from in the details section.
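If you prefer doing it from code, here is a minimal sketch using the SDK (the project and model names are placeholders):

`
from clearml import Model

# query_models returns the registered models matching the filters
models = Model.query_models(project_name='my_project', model_name='my_model')
for m in models:
    print(m.id, m.url)  # m.url is the address the model can be downloaded from
`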

2 years ago
0 Is There A Way To Automatically Upload Images That Were Uploaded With

Hi Alek
It should be auto-logged. Could you please give me some details about your environment?

2 years ago
0 Hi, I am trying to use the ParameterSet for hyper-parameter tuning with dependencies. An example of how I use it: ParameterSet([{"prm1": 1, "prm2": 1}, {"prm1": 2, "prm2": 2}]) but I get a warning:

Hi MoodySheep3
I think you are using ParameterSet the way it is supposed to be used 🙂
When I run my examples, I also get this warning, which is weird, because it is just a warning: the script continues anyway (and reaches the end without issue), those hyperparameters exist, and all the sub-tasks corresponding to a given parameter set find them!
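For reference, a minimal sketch of how ParameterSet is typically passed to HyperParameterOptimizer (the task ID, parameter names, metric names and queue below are placeholders, not taken from your setup):

`
from clearml.automation import GridSearch, HyperParameterOptimizer, ParameterSet

optimizer = HyperParameterOptimizer(
    base_task_id='<base_task_id>',  # placeholder: the template task to clone
    hyper_parameters=[ParameterSet([
        # each dict is one complete combination, tried as a unit
        {'General/prm1': 1, 'General/prm2': 1},
        {'General/prm1': 2, 'General/prm2': 2},
    ])],
    objective_metric_title='validation',  # placeholder metric
    objective_metric_series='accuracy',
    objective_metric_sign='max',
    optimizer_class=GridSearch,
    execution_queue='default',
)
optimizer.start_locally()
optimizer.wait()
optimizer.stop()
`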

2 years ago
0 Hey, so I'm trying to upload an artefact to ClearML's fileserver (I have a self-hosted ClearML server running). I've uploaded the file using StorageManager.upload_file(path, url), giving the url as “

Yes, but it is supposed to be logged in the task corresponding to the step the model is being saved from; monitor_model logs to the main pipeline task.

2 years ago
0 Hi Folks, Is There A Way To Force Clear-Ml Agent With --Docker To

Hey RoughTiger69
Can you describe how you are setting up the environment variable, please?

Setting that flag will skip the virtual env installation: the agent will use your environment and the packages already installed in it.

Using Task.add_requirements('requirements.txt') lets you add specific packages at will. Note that this function will be executed even with the CLEARML_AGENT_SKIP_PIP_VENV_INSTALL flag set.
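To illustrate how the two pieces combine, a minimal sketch (project and file names are placeholders; the flag's value is typically the Python interpreter the agent should use):

`
# On the agent machine, set the flag before starting the daemon, e.g.:
#   CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/usr/bin/python3 clearml-agent daemon --queue my_queue

from clearml import Task

# Must be called *before* Task.init(); accepts a package name or a
# path to a requirements file
Task.add_requirements('requirements.txt')
task = Task.init(project_name='my_project', task_name='my_task')
`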

2 years ago
0 Hey,

However, the model is saved in the step task; this is what I am trying to figure out.

2 years ago
0 Hi There! We Work On The Project Together With My Partner. He Shared His Workspace With Me And I Have An Access To His Projects And Tasks. I Am Trying To Launch Project From His Working Space On My Remote Server. I Ran Clearml Daemon But It Does Not See T

You can run a ClearML agent on your machine, dedicated to a certain queue. You can then clone the experiment you are interested in (either belonging to your workspace or to your partner's), and enqueue it into the queue you assigned your worker to.
clearml-agent daemon --queue 'my_queue'
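The same clone-and-enqueue flow can also be scripted with the SDK; a minimal sketch (the experiment ID and queue name are placeholders):

`
from clearml import Task

original = Task.get_task(task_id='<experiment_id>')  # placeholder ID
cloned = Task.clone(source_task=original, name='cloned experiment')
Task.enqueue(cloned, queue_name='my_queue')  # the queue your agent listens to
`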

2 years ago
0 Hi I'M Looking Into How Clearml Supports Datasets And Dataset Versioning And I'M A Bit Confused. Is Dataset Versioning Not Supported At All In The Non-Enterprise Or Is Versioning Available By A Different Mechanism? I See That

Hi PanickyMoth78
There is indeed a versioning mechanism available for the open source version 🎉

The datasets keep track of their "genealogy" so you can easily access the version that you need through its ID

In order to create a child dataset, you simply have to use the parameter "parent_datasets" when you create your dataset: have a look at
https://clear.ml/docs/latest/docs/clearml_data/clearml_data_sdk#datasetcreate

You can alternatively squash datasets together to create a c...
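A minimal sketch of creating a new version as a child dataset (the project, names and paths are placeholders):

`
from clearml import Dataset

parent = Dataset.get(dataset_project='my_project', dataset_name='my_dataset')

child = Dataset.create(
    dataset_name='my_dataset',
    dataset_project='my_project',
    parent_datasets=[parent.id],  # records the genealogy
)
child.add_files('new_data/')  # placeholder path
child.upload()
child.finalize()
`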

2 years ago
0 Hi All. I Was Using Clearml Server Hosted On A Box That I Reach Behind Traefik Using Alias For Web, File And Api. After Migration It Works Perfect For New Experiments. I Changed The Name Of The Alias From

Hi MotionlessCoral18
You need to run some scripts when migrating, to update your old experiments. I am going to try to find you some examples.

2 years ago
0 Hi All! Any Example Or Doc To Use Clearml With Slurm As A Workload Manager ?

Hi MoodySparrow34
We have a user who wrote this example: https://github.com/marekcygan/clearml-slurm-workers
It is simple glue code to spin up SLURM workers when tasks are enqueued. Hope it will help!

2 years ago
0 Can I Upload Debug Samples To Gcp? I Only See Aws And Azure In The

Hi CourageousKoala93
Yes, you can use Google Cloud Storage. You can have a look at the docs: https://clear.ml/docs/latest/docs/integrations/storage/#configuring-google-storage
Basically, this part of the doc shows you how to set the credentials in the configuration file.

You will also have to specify the destination URI by adding output_uri="path to my bucket" to Task.init().
Do not hesitate to ask if you need more details.
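For illustration, a minimal sketch assuming a bucket named my-bucket and credentials already set in clearml.conf as per the docs above:

`
from clearml import Task

task = Task.init(
    project_name='my_project',
    task_name='my_task',
    output_uri='gs://my-bucket/clearml',  # models & artifacts are uploaded here
)

# Debug samples go through the logger's upload destination:
task.get_logger().set_default_upload_destination('gs://my-bucket/clearml')
`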

2 years ago
0 Hi Folks, I Have A Question On

Hi ObedientToad56
The API will return raw objects, not dictionaries.
You can use the SDK instead. For example, if task_id is your pipeline's main task id, you can retrieve the configuration objects this way:

` task = Task.get_task(task_id=task_id)
config = task.get_configuration_object_as_dict('Pipeline')
for k in list(config.keys()):
    print(f'Step {k} has job id {config[k]["job_id"]}') `

2 years ago
0 Back To This

Good to know. We will look into that. Thanks!

2 years ago
0 Hi Everyone

Hey LuckyKangaroo60
So far there isn't a CLI command to check the conf file format: if there is an error, it is detected at the beginning of the execution and the program fails. Here is what I use as a conf for accessing my local Docker-based MinIO:

`
s3 {
    # S3 credentials, used for read/write access by various SDK elements

    # Default, used for any bucket not specified below
    region: ""
    # Specify explicit keys
    key: "david"
...
2 years ago
0 Hi There, Is There An Option To Show Plots In The Order They Are Inserted To Clearml Instead Of Alphabetic Order Of Titles?

Hey Atalya 🙂

Thanks for your feedback. This is indeed a good feature to think about.
So far there is no ordering other than alphabetical. Could you please create a feature request on GitHub?

Thanks

2 years ago
0 Hi. When using the logger's

In the meantime, it is also possible to create a figure containing two or more histograms and then report it to the logger using report_plotly.
You can have a look there :
https://plotly.com/python/histograms/#overlaid-histogram
https://plotly.com/python/histograms/#stacked-histograms

` import numpy as np
import plotly.graph_objects as go

# assuming task = Task.init(...) was called earlier
log = task.get_logger()

x0 = np.random.randn(1500)
x1 = np.random.randn(1500) - 1

fig = go.Figure()

fig.add_trace(go.Histogram(y=x0))
fig.add_trace(go.Histogram(y=x1))

fig.update_layout(barmode='overlay') ...

2 years ago
0 If I Create A Dataset With

hey
"when cloning an experiment via the WebUI, shouldn't the cloned experiment have the original experiment as a parent? It seems to be empty"

You are right, I think there is a bug here. We will release a fix asap 🙂

2 years ago
0 Hi, Is There Any Approach To Record Some Experiment Metric (E.G., Accuracy) And Display In The Experiment Table So I Can Compare The Metric Among Different Experiments? The Approach I Found Is

report_scalar permits manually reporting a scalar series; this is the dedicated function for it. There are other ways to report a scalar, for example through TensorBoard: in that case you report to TensorBoard, and ClearML will automatically capture the values.
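A minimal sketch of the manual reporting (the project, titles and values are illustrative):

`
from clearml import Task

task = Task.init(project_name='my_project', task_name='my_task')
logger = task.get_logger()

for epoch in range(10):
    val_accuracy = 0.8 + 0.02 * epoch  # stand-in for your real metric
    logger.report_scalar(title='accuracy', series='validation',
                         value=val_accuracy, iteration=epoch)
`

You can then add the last reported value of the series as a custom column in the experiments table to compare runs.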

2 years ago
0 Hi guys, in the Web UI I see a Metadata tab for models. I've checked all the documentation and didn't find how I can update my model metadata from code level. Any suggestions? Manual work in the Web UI is not interesting for me

Hi HandsomeGiraffe70
There is a way: the API. You can use it this way:
- retrieve the task the model belongs to
- retrieve the model you want (from a list of input and output models)
- create the metadata
- inject it into the model
Here is an example:

` from clearml import Task
from clearml.backend_api import Session
from clearml.backend_api.services import models
from clearml.backend_api.services.v2_13.models import MetadataItem

task = Task.get_task(project_name=project_name, task_name=...

2 years ago
0 Hey,

No, it is supposed to have its status updated automatically, so we may have a bug. Can you share some example code with me, so that I can try to figure out what is happening here?

2 years ago
0 Hi, I am trying the TriggerScheduler to catch when a user adds a specific tag to a task. I used the below code but the schedule_function is not called when adding tags to a task (it seems the task.last_update is not modified after adding a tag)

Hey ApprehensiveSeahorse83
Can you please check that the trigger is correctly added? Simply retrieve the return value of add_task_trigger:

` res = trigger.add_task_trigger( .....
print(f'Trigger correctly added ? {res}') `

2 years ago