HarebrainedToad56
Moderator
1 Question, 19 Answers
  Active since 13 July 2023
  Last activity 5 months ago

Reputation: 0
Badges: 1

3 × Eureka!
0 Votes · 6 Answers · 341 Views
Hey 👋 Tensorboard Logs Overwhelming Elasticsearch I am running a clear ml server, however when running experiments with tensorboard logging I am seeing the ...
5 months ago
0 Hello Everyone, I'D Like To Stop Clearml From Automatically Logging Models From Yolov8. Is There Any Way To Do It Without Disconnecting From Framework (Pytorch)?

If you can identify a pattern in the YOLOv8 output files you can probably also filter them out 🙂

9 months ago
0 Hello Everyone, I'D Like To Stop Clearml From Automatically Logging Models From Yolov8. Is There Any Way To Do It Without Disconnecting From Framework (Pytorch)?

If you added a print there like:

def filter_out_pt_files(operation_type, model_info):
    # Print everything ClearML knows about the model file being saved
    print(model_info.__dict__)

    return model_info

You can see what is being picked up. If there is a common path, etc., you can filter that out
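
For the print above to actually run, the callback also has to be registered with ClearML's framework binding. A minimal sketch (assuming the registration happens after Task.init()):

from clearml.binding.frameworks import WeightsFileHandler

# Register the callback so it fires whenever a framework tries to log a weights file
WeightsFileHandler.add_pre_callback(filter_out_pt_files)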

9 months ago
0 Hello Everyone, I'D Like To Stop Clearml From Automatically Logging Models From Yolov8. Is There Any Way To Do It Without Disconnecting From Framework (Pytorch)?

Hey 🙂 I had a similar issue today and found this solution:

In my case the codebase was using a .pt file type which was being picked up and logged as a model even though it was not one.

import os
from clearml import Task
from clearml.binding.frameworks import WeightsFileHandler

task = Task.init(
    project_name="task_project",
    task_name="task_name",
    task_type=Task.TaskTypes.training,
)


def filter_out_pt_files(operation_type, model_info):
    is_pt_file = os.path.splitext...
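
The snippet is cut off by the preview above. A complete version along these lines (a sketch only: it assumes model_info.local_model_path holds the file path ClearML picked up, and that returning None from the callback skips model registration):

def filter_out_pt_files(operation_type, model_info):
    # Skip registering plain .pt checkpoint files as models
    is_pt_file = os.path.splitext(model_info.local_model_path)[1] == ".pt"
    if is_pt_file:
        return None
    return model_info

WeightsFileHandler.add_pre_callback(filter_out_pt_files)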
9 months ago
0 Hi Guys, From The

This is a great place to start on PyTorch Lightning

[link]

9 months ago
0 Hey 👋 *Tensorboard Logs Overwhelming Elasticsearch* I am running a clear ml server, however when running experiments with tensorboard logging I am seeing the elastic indexing time increase drastically and in some cases I have also seen timeout erro...

For an update 🙂
I think we identified that when moving from a training dataset to a fine-tuning dataset (which was 1/1000th the size), our training script was still set to upload every epoch. It seems this resulted in a torrent of metrics being uploaded.

Since modifying this to be less frequent, we have seen the index latency drop dramatically
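
For anyone hitting the same thing, the change was essentially to throttle how often scalars get reported. A rough sketch of the idea (the project/task names and the interval are illustrative, and the loop stands in for a real training loop):

from clearml import Task, Logger

task = Task.init(project_name="examples", task_name="throttled_reporting")  # illustrative names

REPORT_EVERY = 50  # assumed interval; tune to your run length

for iteration in range(1000):
    loss = 1.0 / (iteration + 1)  # stand-in for a real training loss
    if iteration % REPORT_EVERY == 0:
        Logger.current_logger().report_scalar(
            title="train", series="loss", value=loss, iteration=iteration
        )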

5 months ago
0 Hi Guys, From The

As PyTorch Lightning is a framework on top of PyTorch, it will work the same, if not better, with ClearML
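
In practice the integration point is the same as with plain PyTorch: call Task.init() before training and ClearML picks up the Lightning/TensorBoard logs automatically. A minimal sketch (the project/task names and the toy model are made up for illustration):

import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl
from clearml import Task

task = Task.init(project_name="examples", task_name="lightning_minimal")  # illustrative names

class ToyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        self.log("train_loss", loss)  # surfaced in ClearML via the Lightning logger
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)

data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)
pl.Trainer(max_epochs=2).fit(ToyModel(), data)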

9 months ago
0 Hello All, I Am Trying To Report A Confusion Matrix For My Output Model As Follows:

What does it look like when you instantiate the output_model object?
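
For reference, one common pattern looks roughly like this (a sketch only: the names, the dummy matrix, and the choice to report via the task logger rather than the model are all illustrative assumptions):

import numpy as np
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="confusion_matrix_demo")  # illustrative names
output_model = OutputModel(task=task, framework="PyTorch")

confusion = np.array([[50, 2], [3, 45]])  # dummy 2x2 matrix
task.get_logger().report_confusion_matrix(
    title="Confusion matrix",
    series="validation",
    iteration=0,
    matrix=confusion,
    xlabels=["pred 0", "pred 1"],
    ylabels=["true 0", "true 1"],
)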

9 months ago
0 Hi All, I Dont Know What Happened But I Am Unable To Download A Dataset I Used To Download To Cached Folder. Now, When I Try To Download, The Dataset Show The Following Error. Just Few Day Ago, I Still Can Download And Run With The Dataset Sucessfully. I

Looks like it's a /mnt path, which might mean it's a drive or something similar that was connected and may not be anymore?

For something quick, if you create a new folder to put your dataset in:
mkdir ./test_dataset_location
Then you can run your command with
CLEARML_CACHE_DIR='./test_dataset_location' clearml-data ... <your command here>

It will try to download into that folder
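
The same idea from Python, if that is more convenient (a sketch; the dataset ID is a placeholder and the env var has to be set before the cache is first used):

import os
os.environ["CLEARML_CACHE_DIR"] = "./test_dataset_location"  # redirect the ClearML cache

from clearml import Dataset

# Fetch a local copy of the dataset into the new cache location
local_path = Dataset.get(dataset_id="<your_dataset_id>").get_local_copy()
print(local_path)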

9 months ago