Hi, I think this came up when we discussed the joblib integration, right? We have a model registry, ranging from automatic logging to manual reporting. E.g. https://allegro.ai/clearml/docs/docs/examples/frameworks/pytorch/manual_model_upload.html
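The manual end of that registry looks roughly like the linked example; a minimal sketch (project/task names and the weights filename here are placeholders):

```python
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="manual model upload")

# Register a weights file we produced ourselves as an output model of this task
output_model = OutputModel(task=task, framework="PyTorch")
output_model.update_weights(weights_filename="my_model.pt")  # uploads and links the file
```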
LudicrousParrot69
I "think" I have a better handle on what you wish to do.
Is it a kind of generic "serving" solution?
FYI:
A model artifact is, usually, a weights/model file. The idea is that later you will be able to access it and serve it. Now the problem is (and I think this is what you are referring to) that there is usually a specific piece of code tied to that model that knows how to use it (a.k.a. pyfunc).
A few ideas:
1. These days everyone is trying to build their models with a generic interface, so that e.g. any scikit-learn model can be served the same way regardless of what it stores (TF Serving and PyTorch TorchScript are of a similar nature). If this is the case, the Model's framework field could be used to detect which of these frameworks should be used (this could actually be done at runtime).
2. You could pickle the function itself and store it as a second artifact (basically `upload_artifact` could auto-pickle it for you). That said, pickling is quite fragile and you have to have all the function's dependencies available in order to unpickle it.
WDYT?
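A minimal sketch of idea 2, assuming `add_n` stands in for the custom function and that its module is importable wherever the artifact is later unpickled:

```python
import numpy as np
from clearml import Task

def add_n(values, n=5):
    # placeholder "custom model": a trivial math function over the input
    return values + n

task = Task.init(project_name="examples", task_name="pickled function artifact")
# upload_artifact pickles arbitrary Python objects for us
task.upload_artifact(name="predict_fn", artifact_object=add_n, wait_on_upload=True)

# --- later, e.g. from a serving process that can import this module ---
source = Task.get_task(task_id=task.id)
predict_fn = source.artifacts["predict_fn"].get()  # unpickles the callable
print(predict_fn(np.arange(3)))                    # [5 6 7]
```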
BTW, on a different note: the auto-archive of the HPO will probably be in the next version, due in a few days 😉
Yeah, it's trying to plan down the line into model deployment. Whilst it's easy to save out a Keras SavedModel or similar and have that artifact uploaded into the store, I just wanted to check if there was a more generic solution. I could just create a Python class and serialise that out so that it has a standard interface, but good to check. So for example, some artifact representing an arbitrary math function. For better context, the idea is to make deploying any artifact we upload using ClearML as easy as possible. Back in my last project, which was Airflow+MLflow, all models were executable using a standard interface (pyfunc), and making a custom model which was interfaced with the same way as sklearn/keras, and thus deployed/served the same way, was done by extending PythonModel ( https://www.mlflow.org/docs/latest/models.html#example-creating-a-custom-add-n-model ). I'm trying to get the hang of how to do similar things with ClearML, and have been over the docs in clearml.model.Model, but this doesn't seem to be what I want - which is to be able to get a Task's model and run using it, with bonus points if I don't have to care about what the model itself is.
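For reference, the MLflow pattern being described is roughly the add-n example from the linked docs (paths and the `n` value here are just placeholders):

```python
import mlflow.pyfunc

class AddN(mlflow.pyfunc.PythonModel):
    # custom "model" exposing the standard pyfunc predict interface
    def __init__(self, n):
        self.n = n

    def predict(self, context, model_input):
        return model_input.apply(lambda col: col + self.n)

# saved this way, it loads/serves exactly like any other mlflow flavor
mlflow.pyfunc.save_model(path="add_n_model", python_model=AddN(n=5))
loaded = mlflow.pyfunc.load_model("add_n_model")
```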
Hi LudicrousParrot69
Not sure I follow, is this pyfunc running remotely?
Or are you looking for interfacing with previously executed Tasks?
Ah fantastic, thanks! Another one for me - is there support for custom Python models at all? For example, dummy models that simply return the output of an equation run over the dataframe after transforming some of the input columns. Something similar to MLflow's custom pyfunc, which allows a standard way of interfacing with custom models the same way you do with keras/sklearn/pytorch models.
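One way such a custom model could be represented today (not an official ClearML interface, just a sketch): wrap the equation in a class with a `predict(df)` method, pickle it, and register the file through `OutputModel` so it lives in the model registry like any other model. `EquationModel`, the filename, and the equation itself are all hypothetical:

```python
import pickle
import pandas as pd
from clearml import Task, OutputModel, InputModel

class EquationModel:
    # hypothetical custom model: an equation over (transformed) input columns
    def predict(self, df: pd.DataFrame) -> pd.Series:
        return 2 * df["x"].abs() + 1

task = Task.init(project_name="examples", task_name="custom python model")
with open("equation_model.pkl", "wb") as f:
    pickle.dump(EquationModel(), f)

# register the pickled model so it appears in the model registry
output_model = OutputModel(task=task, framework="Custom")
output_model.update_weights(weights_filename="equation_model.pkl")

# --- later (e.g. another process that can import EquationModel): fetch by id, call predict() ---
local_path = InputModel(model_id=output_model.id).get_local_copy()
with open(local_path, "rb") as f:
    model = pickle.load(f)
print(model.predict(pd.DataFrame({"x": [-1.0, 2.0]})))
```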