StaleButterfly40
Moderator
6 Questions, 19 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges (1): 10 × Eureka!
0 Votes · 6 Answers · 1K Views
After saving models during training I can't reuse the same task to continue training. Is there any way to continue a task even though there are models saved? Alt...
3 years ago
0 Votes · 6 Answers · 1K Views
I have some old training jobs that I logged with tensorboard, is it possible to add them to clearml?
2 years ago
0 Votes · 3 Answers · 932 Views
When viewing scalars is it possible to: Have a grid view (e.g. 3 plots per line instead of just one); Group the metrics differently (seems like if I log X/a a...
3 years ago
0 Votes · 10 Answers · 1K Views
Is it possible to set values in clearml config file programmatically? specifically aws.s3.use_credentials_chain
2 years ago
0 Votes · 4 Answers · 1K Views
Hi, in tensorboard there is an option to "ignore outliers in chart scaling". Is there such an option when viewing scalars in clearml?
2 years ago
0 Votes · 2 Answers · 1K Views
2 years ago
Hi, Is It Possible To Sync Experiments Using S3 Or GS? I'd Love To Have A Look At Some Documentation. We Want To Sync The Training While They Are Running [Not Just When They Are Finished]. Thanks,

Just the import part should support it: in the offline cache dir it can be 2 separate tasks (or even tasks from 2 different training machines).
e.g. training ran on 1 machine in offline mode, the machine crashed in the middle but a checkpoint was saved, so a new training job is started from that checkpoint (also in offline mode).
I would then like to create 1 real task that combines both of these runs.
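
For reference, a minimal sketch of the offline round trip being discussed, assuming the standard Task.set_offline / Task.import_offline_session API; the project name and paths are placeholders, and whether two separate offline sessions can be merged into one server-side task is exactly the open question here.

    from clearml import Task

    # --- on the training machine, with no connectivity ---
    Task.set_offline(offline_mode=True)   # record everything into a local session folder
    task = Task.init(project_name="my-project", task_name="offline-training")
    # ... training loop runs, checkpoints are saved as usual ...
    task.close()                          # the offline session (zip) is written to the local cache dir

    # --- later, on a machine that can reach the ClearML server ---
    # imports one offline session folder/zip as a task on the server
    Task.import_offline_session("/path/to/offline/session.zip")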

2 years ago
Is It Possible To Set Values In Clearml Config File Programmatically? Specifically aws.s3.use_credentials_chain

I want to set use_credentials_chain to true, but I do not want to change the config file, because I am running in the cloud and do not want to have to download it each time I run.
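
Not an official answer, just a sketch of one way to avoid touching clearml.conf, assuming the environment-variable override mechanism (CLEARML__<section>__<key>) that recent clearml versions provide; treat the exact variable name as an assumption and check the configuration docs for your version.

    import os

    # Assumption: recent clearml releases allow overriding any clearml.conf entry via
    # environment variables of the form CLEARML__<section>__<key> (double underscores
    # as separators); check your clearml version's configuration docs.
    os.environ["CLEARML__sdk__aws__s3__use_credentials_chain"] = "true"

    # import after setting the override so the configuration is read with it applied
    from clearml import Task

    task = Task.init(project_name="my-project", task_name="credentials-chain-demo")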

2 years ago
Hi, Is It Possible To Sync Experiments Using S3 Or GS? I'd Love To Have A Look At Some Documentation. We Want To Sync The Training While They Are Running [Not Just When They Are Finished]. Thanks,

Hi AgitatedDove14,
I played around with offline mode for a bit and I see 2 issues:
1. We would like to sync periodically so that we can see the progress of the training, but if I sync more than once I get a duplication of each line in the log (e.g. if I call import_offline_session 3 times with the same session_folder, I will get each line in the log 3 times).
2. Sometimes we resume training - using import_offline_session this is not possible (although it is possible using ` TaskHandl...
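
A minimal repro sketch of the duplication issue described in point 1, assuming the basic Task.import_offline_session call; the session folder path is a placeholder.

    from clearml import Task

    session_folder = "/path/to/offline/session_folder"  # placeholder

    # Importing the same offline session more than once: per the report above,
    # every call appends the session's console log again, so each line ends up
    # duplicated (three imports -> three copies of every log line).
    for _ in range(3):
        Task.import_offline_session(session_folder)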

2 years ago
I Have Some Old Training Jobs That I Logged With Tensorboard, Is It Possible To Add Them To Clearml?

I can read them programmatically using tensorboard and then log them using the clearml logger, but I was hoping to avoid that.
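
For completeness, the "read them programmatically and re-log" route being avoided looks roughly like this; a sketch assuming TensorBoard's EventAccumulator and ClearML's report_scalar, with placeholder paths and names.

    from clearml import Task
    from tensorboard.backend.event_processing import event_accumulator

    task = Task.init(project_name="my-project", task_name="imported-tensorboard-run")
    logger = task.get_logger()

    # Point at the directory holding one run's events.out.tfevents.* file
    ea = event_accumulator.EventAccumulator("/path/to/old/tensorboard/run")
    ea.Reload()

    for tag in ea.Tags().get("scalars", []):
        for event in ea.Scalars(tag):
            # re-log every scalar point into ClearML at its original step
            logger.report_scalar(title=tag, series=tag, value=event.value, iteration=event.step)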

2 years ago
After Saving Models During Training I Can't Reuse The Same Task To Continue Training. Is There Any Way To Continue A Task Even Though There Are Models Saved? Alternatively, Is There Any Way To Make Torch.Save Not Automatically Save The Model?

If it interests you, this seems to work:

    from clearml import Task

    last_task = Task.get_task(project_name="playground-sandbox", task_name='foo2')
    task = Task.init(project_name="playground-sandbox", task_name='foo2',
                     continue_last_task=last_task.id if last_task else None)

3 years ago
After Saving Models During Training I Can't Reuse The Same Task To Continue Training. Is There Any Way To Continue A Task Even Though There Are Models Saved? Alternatively, Is There Any Way To Make Torch.Save Not Automatically Save The Model?

CostlyOstrich36 Thanks!
But it seems like this only works if I am running both times from the same machine, because clearml is not checking if the task exists on the server - it is checking if it is in the cache_dir.

3 years ago
Hi, Is It Possible To Sync Experiments Using S3 Or GS? I'd Love To Have A Look At Some Documentation. We Want To Sync The Training While They Are Running [Not Just When They Are Finished]. Thanks,

AgitatedDove14,
I was thinking of something like reuse_task_name:
if set to True, the import function will not create a new task but rather use the task with the name of the offline task (if available),
and in metric + log reporting it would check when the last "event" was and filter out everything before it.
How does that sound to you?

2 years ago
Hi, In Tensorboard There Is An Option To "Ignore Outliers In Chart Scaling". Is There Such An Option When Viewing Scalars In Clearml?

The autoscaling ignores the outliers.
e.g. at the start of training the loss is high (10) but quickly drops (<1); if I plot the scalar I will not be able to see the changes in the loss too well because the graph covers a large range (0-10).
If I ignore the outliers I will get a scale of 0-1.

2 years ago
Hi All, I Hope I'm In The Right Channel, We(

no plots, only a couple of scalar metrics
there are a large number of artifacts

2 years ago
Hi All, I Hope I'm In The Right Channel, We(

It gets stuck when comparing 2 experiments, even if one of them does not have the artifacts.
I deleted the artifacts and it seems to work now.

2 years ago
Hi All, I Hope I'm In The Right Channel, We(

about 25 input models and 125 output models

2 years ago
Hi All, I Hope I'm In The Right Channel, We(

most of the outputs had previews

2 years ago
Hi All, I Hope I'm In The Right Channel, We(

I deleted all the artifacts - so I currently don't have an example.
I think the previews should be loaded lazily so that something like this does not happen.

2 years ago
Hi, When Continuing A Task, Debug Images Are Overwritten. E.g. If I Run A Task And Save Images In Iterations 1, 2, And 3 And Then I Continue The Task And Save In Iteration 4 - I Will Lose The Debug Image In Iteration 1. I Think This Happens Because The Imag

This only happens when I "continue" a task - the counter gets reset.
My current workaround is overriding UploadEvent._get_metric_count by adding an offset to _counter.
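
Roughly what that monkey-patch could look like; the import path and the classmethod signature of UploadEvent._get_metric_count are assumptions based on the internal name mentioned above, and since this is private clearml internals it may change between versions.

    # Assumption: UploadEvent lives in clearml.backend_interface.metrics.events and
    # _get_metric_count is a classmethod returning the per-metric image counter;
    # this is private clearml internals and may differ between versions.
    from clearml.backend_interface.metrics.events import UploadEvent

    ITERATION_OFFSET = 1000  # hypothetical offset, e.g. past the last counter value of the previous run

    _original_get_metric_count = UploadEvent._get_metric_count

    def _get_metric_count_with_offset(cls, *args, **kwargs):
        # shift the counter so continued runs don't overwrite earlier debug images
        return _original_get_metric_count(*args, **kwargs) + ITERATION_OFFSET

    UploadEvent._get_metric_count = classmethod(_get_metric_count_with_offset)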

2 years ago