
Is there a way to save the models completely on the ClearML server?

It seems that ClearML Server does not store the models or artifacts itself, but they are stored somewhere else (e.g., AWS S3-bucket) or on my local machine and ClearML Server is only storing configuration parameters and previews (e.g., when the artifact is a pandas dataframe). Is that right?

  
  
Posted one year ago

Answers 45


FWIW It’s also listed in other places @<1523704157695905792:profile|VivaciousBadger56>, e.g. None says:

In order to make sure we also automatically upload the model snapshot (instead of saving its local path), we need to pass a storage location for the model files to be uploaded to.
For example, upload all snapshots to an S3 bucket…
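For reference, this is roughly what that doc passage amounts to in code. A minimal sketch only; the project, task, and bucket names are placeholders (not from this thread), and output_uri=True would upload to the ClearML file server instead of S3:

from clearml import Task

# Sketch: pass a storage destination so model snapshots are uploaded,
# not just registered by their local path.
task = Task.init(
    project_name="examples",             # placeholder
    task_name="model upload demo",       # placeholder
    output_uri="s3://my-bucket/models",  # placeholder bucket; or True for the ClearML file server
)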

  
  
Posted one year ago

Do you mean "exactly" as in "you finally got it" or in the sense of "yes, that was easy to miss"?

  
  
Posted one year ago

By the way, output_uri is also documented as part of the Task.init() docstring ( None )

  
  
Posted one year ago

The documentation is messy, I’ve complained about it in the past too 🙈

  
  
Posted one year ago

@<1523701087100473344:profile|SuccessfulKoala55> I think I might have made a mistake earlier - but not in the code I posted before. Now, I have the following situation:

  1. In my training Python process on my notebook, I train the custom-made model and save it to my hard drive as a zip file. Then I run the code
output_model = OutputModel(task=task, config_dict={...}, name=f"...")
output_model.update_weights(weights_filename=r"C:\path\to\mymodel.zip", is_package=True)
  2. I delete "C:\path\to\mymodel.zip", because it would not be available on my colleagues' computers.

  3. In a second process, the model-inference process, I run

mymodel = task.models['output'][-1]
mymodel = mymodel.get_local_copy(extract_archive=True, raise_on_error=True)

and get the error

ValueError: Could not retrieve a local copy of model weights 8ad4db1561474c43b0747f7e69d241a6, failed downloading

I do not have an AWS S3 bucket or anything like that. This is why I would like to store my mymodel.zip file directly on the ClearML Hosted Service. The model is only around 2 MB in size.

How should I proceed?
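(A quick way to see what was actually stored is to inspect the registered model's URL from another process; if it points to a local path, only the reference was saved, not the file. A minimal sketch, assuming the clearml SDK's Model.url property and a placeholder task ID:)

from clearml import Task

# Sketch: inspect what the registered output model points to.
task = Task.get_task(task_id="<training task id>")  # placeholder
mymodel = task.models["output"][-1]
print(mymodel.url)  # a file:// or C:\ path here means only the reference was stored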

  
  
Posted one year ago

@<1523701087100473344:profile|SuccessfulKoala55> Also, I think that - in this case, but also in other cases - the issue is not just the documentation, but also the design of the SDK.

  
  
Posted one year ago

We'll try to add references to that in other places as well 👍

  
  
Posted one year ago

@<1523704157695905792:profile|VivaciousBadger56> It seems like whatever you pickled in the zip file relies on some additional files that are not pickled.

  
  
Posted one year ago

@<1523701083040387072:profile|UnevenDolphin73> : From which URL is your most recent screenshot?

  
  
Posted one year ago

From the one you sent - None

  
  
Posted one year ago

"Messy" is putting it nicely.

  
  
Posted one year ago

@<1523701087100473344:profile|SuccessfulKoala55> : That is the link I posted as well. But this should also be mentioned wherever external or non-external storage is discussed, and everywhere we talk about models, artifacts, etc. Not necessarily in detail, but at least with a sentence and a link.

  
  
Posted one year ago

@<1523704157695905792:profile|VivaciousBadger56> regarding: None
Is this a discussion or PR ?
(general ranting is saved for our slack channel 🙂 )

  
  
Posted one year ago

We're certainly working hard on improving the documentation (and I do apologize for the frustrating experience)

  
  
Posted one year ago

@<1523701083040387072:profile|UnevenDolphin73> : I do not get this impression, because during update_weights I get the message

2023-02-21 13:54:49,185 - clearml.model - INFO - No output storage destination defined, registering local model C:\Users..._Demodaten_FF_2023-02-21_13-53-51.624362.model

  
  
Posted one year ago

We have the following, which works fine (we also use internal zip packaging for our models):

model = OutputModel(task=self.task, name=self.job_name, tags=kwargs.get('tags', self.task.get_tags()), framework=framework)
model.connect(task=self.task, name=self.job_name)
model.update_weights(weights_filename=cc_model.save())
  
  
Posted one year ago

@<1523701083040387072:profile|UnevenDolphin73> : I see. I did not make the connection that output_uri=True is what I was missing. I thought this was the default, but the default is actually None, which is different from True.
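For clarity, the difference looks roughly like this (project and task names are placeholders):

from clearml import Task

# Default: output_uri=None -- models are only registered by their local path.
# task = Task.init(project_name="demo", task_name="train")

# Explicit: output_uri=True -- model files are uploaded to the ClearML file server.
task = Task.init(project_name="demo", task_name="train", output_uri=True)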

  
  
Posted one year ago

Yes, you're correct, I misread the exception.
Maybe it hasn't completed uploading? At least for Datasets one needs to explicitly wait IIRC

  
  
Posted one year ago

@<1523701083040387072:profile|UnevenDolphin73> : How do you figure? In the past, my colleagues and I just shared the .zip file via email / MS Teams and it worked. So I don't think so.

  
  
Posted one year ago

But, I guess @<1523701070390366208:profile|CostlyOstrich36> wrote that in a different chat, right?

  
  
Posted one year ago

@<1523701083040387072:profile|UnevenDolphin73> : I do not see any way to download the model manually from the web app either. All I see is the link to the file on my hard drive (see screenshot).

The second process says there is no file at all. I think all that happened is that update_weights only registered the location of the .zip file (which we denote as a .model file) on my hard drive, but did not upload the file itself.
image

  
  
Posted one year ago

Heh, well, John wrote that in the first reply in this thread 🙂
And on the Task.init main documentation page (nowhere near the code), it says the following:
image

  
  
Posted one year ago

I wouldn't put it past ClearML automation (a lot of stuff depends on certain suffixes), but I don't think that's the case here, hmm.

  
  
Posted one year ago

But we do use S3

  
  
Posted one year ago

@<1523704157695905792:profile|VivaciousBadger56> I'm not sure I'm following you - is the issue not being able to upload to the ClearML server or to load the downloaded file?

  
  
Posted one year ago

Well, you could start by setting output_uri to True in Task.init.
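A minimal sketch of that suggestion, combined with the OutputModel calls already shown in this thread (project, task, and file names are placeholders):

from clearml import OutputModel, Task

# Training process: with output_uri=True the weights file is uploaded to the
# ClearML file server instead of only being registered by its local path.
task = Task.init(project_name="demo", task_name="train", output_uri=True)
output_model = OutputModel(task=task, config_dict={}, name="my-model")
output_model.update_weights(weights_filename=r"C:\path\to\mymodel.zip", is_package=True)

# Inference process (possibly on another machine): fetch the task and download a copy.
task = Task.get_task(project_name="demo", task_name="train")
mymodel = task.models["output"][-1]
local_path = mymodel.get_local_copy(extract_archive=True, raise_on_error=True)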

  
  
Posted one year ago

@<1523701083040387072:profile|UnevenDolphin73>

  
  
Posted one year ago

@<1523701087100473344:profile|SuccessfulKoala55> : I referenced this conversation in the issue None

  
  
Posted one year ago

Unbelievable! That worked.

  
  
Posted one year ago

@<1523701070390366208:profile|CostlyOstrich36>

My training outputs a model as a zip file. The way I save and load the zip file that makes up my model is custom-made (no library is used directly), because we invented the entire modelling approach ourselves. What I did so far:

output_model = OutputModel(task=..., config_dict={...}, name=f"...")
output_model.update_weights("C:\io__path\...", is_package=True)

and I am trying to load the model in a different Python process with

mymodel = task.models['output'][0]
mymodel = mymodel.get_local_copy(extract_archive=True, raise_on_error=True)

and in the ClearML cache I get a .training.pt file, which seems to be some kind of archive. Inside there are two files named data.pkl and version, and a folder with two files named 86922176 and 86934640.

I am not sure how to proceed after trying pickle, zip, and joblib; I am kind of at a loss. I suspect my original zip file might somehow be inside, but I am not sure.

Sure, we could simply use the generic artifacts SDK, but I would like to use the dedicated model methods and functions.

How should I proceed?
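One possible way to retrieve and unpack the zip once it has actually been uploaded (a sketch, assuming output_uri was set on the task and the package was registered with update_weights(..., is_package=True); project and folder names are placeholders):

import zipfile
from clearml import Task

task = Task.get_task(project_name="demo", task_name="train")  # placeholders
mymodel = task.models["output"][-1]

# Download without extraction and unpack the custom zip manually;
# extract_archive=True should also work for a registered package.
local_path = mymodel.get_local_copy(extract_archive=False, raise_on_error=True)
with zipfile.ZipFile(local_path) as zf:
    zf.extractall("my_model_contents")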

  
  
Posted one year ago