
Never mind — the docs already say to point it to serving-inference.
Alright, thanks for the second pair of eyes.
Ahhh, I see. Now I know what I was missing: I thought I could skip the preprocessing part. Does this mean that for other engines/frameworks, especially TF/Keras, the serving setup also defines the input/output based on the preprocessing?
Alright. Can you at least point me to an example of setting the input size and output size via the clearml-serving CLI? I can't find one in the main docs.
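For reference, a hedged sketch of how input/output shapes are typically passed to the `clearml-serving model add` command (Triton engine). The service ID, project/model names, endpoint name, and the actual tensor shapes and names here are placeholders — they depend on your own serving service and model:

```shell
# Attach a model to an existing serving service, declaring I/O shapes.
# <service-id>, endpoint/model/project names, and shapes are examples only.
clearml-serving --id <service-id> model add \
    --engine triton \
    --endpoint "my_model" \
    --preprocess "preprocess.py" \
    --name "my model" \
    --project "my project" \
    --input-size 1 28 28 \
    --input-name "INPUT__0" \
    --input-type float32 \
    --output-size -1 10 \
    --output-name "OUTPUT__0" \
    --output-type float32
```

The `--input-size`/`--output-size` flags take the tensor dimensions (with `-1` for a variable batch dimension), and the `--input-name`/`--output-name` values must match the layer names the engine expects.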
As I understand it, if I'm running an sklearn experiment locally, I can also save the model artifact using joblib.dump. How do I set the artifact's metadata from within the experiment's source code, or am I meant to add the metadata separately?
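A minimal sketch of the local side of this: dumping an sklearn model with joblib, then attaching metadata when uploading it as a ClearML artifact via `Task.upload_artifact` (which accepts a `metadata` dict). The project/task names and metadata keys are illustrative, and the ClearML calls are commented out since they assume a configured clearml.conf:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a small model locally and save it with joblib.
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)
joblib.dump(model, "model.joblib")

# With a configured ClearML setup, metadata can be attached at upload time
# (project/task names and metadata keys below are just examples):
# from clearml import Task
# task = Task.init(project_name="demo", task_name="sklearn-example")
# task.upload_artifact(
#     name="model",
#     artifact_object="model.joblib",
#     metadata={"framework": "sklearn", "train_accuracy": model.score(X, y)},
# )
```

So the metadata can be set in the experiment's source code itself, passed alongside the artifact rather than added in a separate step.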
Yeah, because I thought it would be able to figure that out from the model file, and I was just missing some code/configuration.
I think as long as you install clearml in that venv, it will only be executed within it.
Ah, alright. I can look in that direction, thanks.
Um, that is not a valid command. And what I want to do is remove the serving instance, not an endpoint.
Yeah, it was previously restarted.
On Windows, clearml.conf is located in the user folder. Is there any way to configure it to be moved inside the project's folder, or maybe to configure it via the CLI?
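One option worth checking: ClearML supports pointing at an alternative configuration file through the `CLEARML_CONFIG_FILE` environment variable, which would let a project-local config override the default one in the user folder. The path below is a placeholder — a sketch, not a confirmed recommendation for your setup:

```shell
REM Windows (cmd): use a clearml.conf inside the project folder
REM instead of the default %USERPROFILE%\clearml.conf
set CLEARML_CONFIG_FILE=C:\path\to\project\clearml.conf
```

The initial config itself can also be generated from the CLI with `clearml-init`, which writes the credentials and server URLs into the file interactively.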