Answered
Hi! Which Method Is Used To Delete OutputModel From The Storage?

Hi! Which method is used to delete OutputModel from the storage?
https://clear.ml/docs/latest/docs/references/sdk/model_outputmodel/

  
  
Posted 2 years ago

Answers 19


e.g. if I want to store only top-3 running best checkpoints

  
  
Posted 2 years ago

if the loss is lower than the best stored loss so far, add the new checkpoint and remove the one that is now fourth-best
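Something like this minimal sketch (file names and layout are made up, and it compares a new loss against the worst kept checkpoint rather than the best):

` import os

import torch

best = []  # (loss, path) pairs for the checkpoints kept so far, at most 3 entries

def maybe_save_checkpoint(model, loss, epoch, keep=3, ckpt_dir="checkpoints"):
    # save the checkpoint if it enters the top-k, then drop the one that fell out
    os.makedirs(ckpt_dir, exist_ok=True)
    if len(best) < keep or loss < max(l for l, _ in best):
        path = os.path.join(ckpt_dir, f"epoch_{epoch}.pt")
        torch.save(model.state_dict(), path)
        best.append((loss, path))
        best.sort(key=lambda x: x[0])   # lowest loss first
        if len(best) > keep:
            _, worst_path = best.pop()  # the checkpoint that fell out of the top 3
            os.remove(worst_path) `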

  
  
Posted 2 years ago

This is how I implemented it myself. It looks like the ClearML functionality is quite opinionated and requires some tweaks every time I try to replace my own code with it

  
  
Posted 2 years ago

You mean you would like to delete an output model of a task if other models in the task surpass it?

  
  
Posted 2 years ago

During a run, correct?

  
  
Posted 2 years ago

Strictly speaking, there is only one training task, but I want to keep top-3 best checkpoints for it all the time

  
  
Posted 2 years ago

yeah, during a run

  
  
Posted 2 years ago

If I keep track of 3 OutputModels simultaneously, the weights would need to shift between them every epoch (like, updated weights for top-1, then top-1 becomes top-2, top-2 becomes top-3 etc)

  
  
Posted 2 years ago

if I just use plain boto3 to sync weights to/from S3, I just check how many files are stored in the location, and clear up the old ones
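Roughly, the plain boto3 version of that looks like this (bucket, prefix and keep count are placeholders):

` import boto3

s3 = boto3.client("s3")
BUCKET, PREFIX, KEEP = "my-bucket", "experiments/run/checkpoints/", 3  # placeholders

def prune_old_checkpoints():
    # list everything under the prefix, newest first, and delete all but the KEEP newest
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    objects = sorted(resp.get("Contents", []), key=lambda o: o["LastModified"], reverse=True)
    for obj in objects[KEEP:]:
        s3.delete_object(Bucket=BUCKET, Key=obj["Key"]) `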

  
  
Posted 2 years ago

Is there a way to simplify it with ClearML, not make it more complicated?

  
  
Posted 2 years ago

How are you saving your models? torch.save("<MODEL_NAME>")?

  
  
Posted 2 years ago

yes

  
  
Posted 2 years ago

If I'm not mistaken, models reflect the file names. So if you recycle the file names, you recycle the models. For example, if you save torch.save("top1.pt"), then later torch.save("top2.pt"), and even later do torch.save("top1.pt") again, you will only have two OutputModels, not three. This way you can keep recycling the best models 🙂
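If that holds, the idea in a short sketch (project and task names are placeholders, and it assumes the task was initialized with automatic framework logging so the torch.save calls are picked up):

` import torch
from clearml import Task

task = Task.init(project_name="demo", task_name="recycled-checkpoints")  # placeholder names
model = torch.nn.Linear(4, 2)  # stand-in for the real model

# Each distinct file name becomes one OutputModel on the task; saving to the
# same file name again updates that model instead of creating a new one.
torch.save(model.state_dict(), "top1.pt")  # creates OutputModel "top1"
torch.save(model.state_dict(), "top2.pt")  # creates OutputModel "top2"
torch.save(model.state_dict(), "top1.pt")  # updates "top1", still only two models `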

  
  
Posted 2 years ago

hm, not quite clear how it is implemented. For example, this is how I do it now (explicitly)

  
  
Posted 2 years ago

` import os

from clearml import OutputModel

clearml_name = os.path.basename(save_path)

# register this checkpoint as a named output model on the current task
output_model_best = OutputModel(
    task=task,
    name=clearml_name,
    tags=['running-best'],
)

# upload the local weights file to the configured destination
output_model_best.update_weights(
    save_path,
    upload_uri=params.clearml_aws_checkpoints,
    target_filename=clearml_name,
) `

  
  
Posted 2 years ago

This way I would need to keep track of 3 OutputModels and call update_weights 3 times on every update, probably doing 2 redundant uploads

  
  
Posted 2 years ago

Well, you can simply do the following:
1. Start with top 3 models named top1, top2, top3
2. Keep all 3 in a disk cache during the run
3. Build logic to rate a new model during the run, depending on its standing compared to the top 3
4. Decide on the new standing of the top 3
5. Perform update_weights_package on the relevant "new" top 3 models, once per model

This is only off the top of my head. I'm sure you could create something better without even the need to cache 3 models during the run
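A rough sketch of that flow (names, paths and the scoring rule are placeholders, and it assumes update_weights_package accepts a list of local weight files):

` import torch
from clearml import Task, OutputModel

task = Task.init(project_name="demo", task_name="top3-recycling")  # placeholder names
SLOTS = ["top1", "top2", "top3"]
slot_models = {name: OutputModel(task=task, name=name) for name in SLOTS}
top3 = []  # (loss, local_path), kept sorted best-first, at most 3 entries

def rate_and_update(model, loss, step):
    # rate the new checkpoint against the current top 3 and re-upload the affected slots
    global top3
    if len(top3) == 3 and loss >= top3[-1][0]:
        return                                # not good enough to enter the top 3
    path = f"candidate_step{step}.pt"         # disk cache of the new checkpoint
    torch.save(model.state_dict(), path)
    top3 = sorted(top3 + [(loss, path)])[:3]  # decide the new standing
    for name, (_, ckpt_path) in zip(SLOTS, top3):
        slot_models[name].update_weights_package(weights_filenames=[ckpt_path]) `

Note this still pushes slots whose file did not change; tracking which slots actually shifted would avoid the redundant uploads mentioned earlier.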

  
  
Posted 2 years ago

CostlyOstrich36 thank you for the answer! Maybe I can just delete old models along with their corresponding tasks; that seems easier
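If going that route, the sketch is roughly this (the task id is a placeholder, and it assumes Task.delete() in your SDK version also removes the task's output models and artifacts):

` from clearml import Task

old_task = Task.get_task(task_id="<OLD_TASK_ID>")  # placeholder id of a stale run

# delete the task; recent SDK versions can also remove its artifacts and
# output models (see the delete_artifacts_and_models argument)
old_task.delete() `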

  
  
Posted 2 years ago

Is there some sort of OutputModel.remove method? The docs say there isn't

  
  
Posted 2 years ago