TeenyBeetle18
Moderator
6 Questions, 14 Answers
  Active since 10 January 2023
  Last activity one month ago

Reputation: 0
Badges (1): 13 × Eureka!
0 Votes · 6 Answers · 1K Views
Hello! Currently ClearML open-source supports only the AWS auto-scaler. Has anyone tried to implement an auto-scaler (e.g. spin up/down compute instances) in googl...
2 years ago
0 Votes · 5 Answers · 1K Views
2 years ago
0 Votes · 3 Answers · 760 Views
Hello! I’m trying to reuse a model which is already in the model registry, and I came up with two ideas: query the model and connect it to the task, task.connect(model...
one year ago
0 Votes · 5 Answers · 847 Views
Hello! Has anyone tried to convert a large dataset to ClearML format? I am trying to convert a 350 GB dataset with ~42 million files and getting Process fail...
one year ago
0 Votes · 2 Answers · 140 Views
Hello! Any plans on supporting dev containers in clearml-sessions?
one month ago
0 Votes · 4 Answers · 136 Views
Hello! Is there any way to use original files in ClearML datasets? I have a batch of tar archives and want to create a dataset from them; however, clearml compres...
one month ago
0 Hello! I’m trying to reuse a model, which is already in the model registry, and I came up with two ideas: query the model and connect it to the task

I resolved the issue. Works like a charm. I disabled framework auto-logging, and ClearML does not try to store the local model again.

one year ago
0 Hello! I’m trying to reuse a model, which is already in the model registry, and I came up with two ideas: query the model and connect it to the task

Do you mean that in the Model tab, when you look into the model details, the URL points to a local location (e.g. file:///mnt/something/model)?

Exactly.

And your goal is to get a copy of that model (file) from your code, is that correct ?

See, it happens when I try to connect an existing model (in the model registry, the model is already uploaded to remote storage). I query this model and connect it to the task:

model = InputModel.query_models(model_name=name)
task.connect(model[0])
one year ago
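Putting the two answers in this thread together, a minimal end-to-end sketch might look like the following (the helper name and its parameters are illustrative, not from the thread; only the clearml calls themselves are from the SDK):

```python
def reuse_registered_model(model_name, project_name, task_name):
    """Attach an existing registry model as the task's input model
    without re-uploading it (sketch; helper name is illustrative)."""
    from clearml import Task, InputModel  # imported lazily for this sketch

    # Disable framework auto-logging so the locally loaded weights are
    # not stored again as a new output model (the fix described above).
    task = Task.init(
        project_name=project_name,
        task_name=task_name,
        auto_connect_frameworks=False,
    )
    models = InputModel.query_models(model_name=model_name)
    if not models:
        raise ValueError(f"no model named {model_name!r} in the registry")
    task.connect(models[0])
    # Download from the model's remote URL (not a local file:// path).
    return models[0].get_local_copy()
```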
0 Hello! Currently ClearML open-source supports only the AWS auto-scaler. Has anyone tried to implement an auto-scaler (e.g. spin up/down compute instances) in Google Cloud? Have you faced any significant issues?

CostlyOstrich36 hello, thank you! But what if I wanna have it in the open-source version? It’s the only feature I want, and I can’t convince my CTO to buy the PRO tier just because of it 🙂

2 years ago
0 Hello! Currently ClearML open-source supports only the AWS auto-scaler. Has anyone tried to implement an auto-scaler (e.g. spin up/down compute instances) in Google Cloud? Have you faced any significant issues?

It’s sad, but due to security measures we have to use the self-hosted version, and it seems like the PRO plan does not provide such an option

2 years ago
0 Hello! Is there any way to download a part of a dataset? For instance, I have a large dataset which I periodically update by adding a new batch of data and creating a new dataset. Once, I found mistakes in the data, and I want to download an exact folder/ba

Let’s say I have a dataset from source A; the dataset is finalised, uploaded, and looks like this:

train_data/data_from_source_A

Each month I receive a new batch of data, create a new dataset, and upload it. After a few months my dataset looks like this:

train_data/data_from_source_A
train_data/data_from_source_B
train_data/data_from_source_C
train_data/data_from_source_D
train_data/data_from_source_E

Each batch of data was added by creating a new dataset and adding files. Now, I have a large da...
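If shard-level granularity is acceptable (rather than an exact folder), the clearml SDK exposes part / num_parts on Dataset.get_local_copy. A hedged sketch, with the helper name being illustrative:

```python
def fetch_dataset_shard(dataset_id, part, num_parts):
    """Download roughly 1/num_parts of a dataset's files (shard `part`)
    instead of the whole thing (sketch; helper name is illustrative)."""
    from clearml import Dataset  # imported lazily for this sketch

    ds = Dataset.get(dataset_id=dataset_id)
    # Note: this splits by file shards, not by folder, so it does not
    # map one-to-one onto a train_data/data_from_source_X layout.
    return ds.get_local_copy(part=part, num_parts=num_parts)
```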

2 years ago
0 Hello! Has anyone tried to convert a large dataset to ClearML format? I am trying to convert a 350 GB dataset with ~42 million files and getting

Nothing special

    dataset = Dataset.create(dataset_name='my_dataset', parent_datasets=None, use_current_task=False)
    dataset.add_files(dataset_dir, verbose=False)
    dataset.upload(output_url='...')  # output URL elided in the original post
    dataset.finalize(verbose=True)
one year ago
0 Hello! Has anyone tried to convert a large dataset to ClearML format? I am trying to convert a 350 GB dataset with ~42 million files and getting

Have you ever benchmarked ClearML datasets on large datasets? How well does it handle them?

one year ago
0 Hello! Is there any way to use original files in ClearML datasets? I have a batch of tar archives and want to create a dataset from them; however, ClearML compresses them. I tried to use

Why does it matter how clearml stores datasets? If you get the dataset locally, all files will be unzipped.

  • It takes time to compress. 8 archives, 5 GB each, take half an hour.
  • I can stream archives from the bucket directly to the network for training without fetching them locally, which saves storage space
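The streaming idea in the second bullet can be sketched with the standard library alone: tarfile's pipe mode ("r|") reads an archive sequentially from any file-like object, so a bucket download stream can be consumed member by member without landing the archive on disk. A self-contained sketch, using an in-memory stream in place of the bucket:

```python
import io
import tarfile

def stream_tar_members(fileobj):
    """Yield (name, bytes) for each regular file in a tar stream.
    mode="r|" reads sequentially, so it also works on non-seekable
    network streams (e.g. a bucket object opened for streaming)."""
    with tarfile.open(fileobj=fileobj, mode="r|") as tar:
        for member in tar:
            if member.isfile():
                yield member.name, tar.extractfile(member).read()

# Demo: build a tiny in-memory archive to stand in for the bucket stream.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    payload = b"hello"
    info = tarfile.TarInfo(name="sample.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

members = dict(stream_tar_members(buf))
```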
one month ago
0 Hello! Any plans on supporting dev containers in clearml-sessions?

It's a JSON manifest + Dockerfile in the repository.
It'd be great if clearml-session could clone the repo, set up the docker container, and open the repo.
However, I see that clearml-session can't automatically clone the repo and open VS Code in it.

one month ago
0 Hello! Is there any way to use original files in ClearML datasets? I have a batch of tar archives and want to create a dataset from them; however, ClearML compresses them. I tried to use

Seems like it does not let me use ClearML's ability to track and version datasets. I mean, I can't create the next version of a dataset from a dataset with external files
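For reference, the by-link route discussed here is Dataset.add_external_files, which registers URLs instead of uploading, so nothing gets compressed; whether chained versions work on top of such a dataset is exactly the open question above. A hedged sketch, helper name illustrative:

```python
def dataset_from_links(name, project, source_url):
    """Register bucket objects by URL so the original tar archives are
    tracked without being copied or compressed (sketch; helper name
    is illustrative)."""
    from clearml import Dataset  # imported lazily for this sketch

    ds = Dataset.create(dataset_name=name, dataset_project=project)
    # Records links (e.g. s3://bucket/archives/) instead of uploading;
    # the files themselves stay untouched in the bucket.
    ds.add_external_files(source_url=source_url)
    ds.upload()      # uploads only the dataset state, not the files
    ds.finalize()
    return ds.id
```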

one month ago