
I'm facing the issue during the initial setup of clearml-serving, i.e. the step where you use docker-compose to launch the serving containers.
Hey, thanks for the reply
I have another question:
Are kwargs supported in functions decorated as pipeline components?
How do I provide an output storage destination for that stage of the pipeline?
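On the kwargs question, something like this is roughly what I mean - just a sketch, the function name, arguments and defaults are made up:

```python
from clearml import PipelineDecorator

# Hypothetical example: a component with explicit keyword arguments plus **kwargs.
# Whether the extra **kwargs survive when the step is launched remotely is
# exactly what I'm asking about.
@PipelineDecorator.component(return_values=["score"], cache=False)
def evaluate_step(threshold=0.5, **kwargs):
    print("threshold:", threshold, "extra kwargs:", kwargs)
    return threshold
```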
So the issue is that the model URL points to the file location on my machine.
Is there a way for me to point the model URL somewhere else?
More context:
I have agents running the stages, and the pipeline is being executed locally here.
Yep, no clue why I had two of them either;
It started my pipeline, and a few seconds in another pipeline showed up.
http://localhost:9000/<bucket>
My MinIO instance is hosted locally on port 9000.
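For context, this is roughly how I'm trying to point the step's output at MinIO - just a sketch: the bucket name is a placeholder, and it assumes the MinIO credentials (host "localhost:9000") are already listed in clearml.conf under sdk.aws.s3.credentials on the agents.

```python
from clearml import Task

# Sketch only: redirect the current step's models/artifacts to the local MinIO
# bucket. <bucket> is a placeholder; credentials come from clearml.conf.
task = Task.current_task()
task.output_uri = "s3://localhost:9000/<bucket>"
```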
Thanks for actively replying, David
Any update on the example for saving a model from within a pipeline (specifically in .pth or .h5 formats)?
How do I provide a specific output path to store the model? (Say I want the server to store it in ~/models.)
I'm training my model via a remote agent.
Thanks to your suggestion I could log the model as an artefact (using PipelineDecorator.upload_model()) - but only the path is reflected; I can't seem to download the model from the server.
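In case it helps, this is roughly what I'd expect the upload step to look like - a sketch, not a confirmed fix: the model and the destination URI are placeholders.

```python
import torch
from clearml import Task, OutputModel

# Sketch: save the checkpoint, then register it via OutputModel.update_weights()
# with an explicit upload_uri, so the weights file itself gets uploaded (and is
# downloadable from the server) instead of only the local path being recorded.
model = torch.nn.Linear(4, 2)  # stand-in for the real network
torch.save(model.state_dict(), "model.pth")

output_model = OutputModel(task=Task.current_task(), framework="PyTorch")
output_model.update_weights(
    weights_filename="model.pth",
    upload_uri="s3://localhost:9000/<bucket>",  # placeholder destination
)
```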
Hey, so I was able to get the local .py files imported by adding the folder to sys.path.
stuff is a package that has my local modules - I've added it to my path via sys.path.insert(), though here it still isn't able to unpickle.
Umm, I suppose that won't work - this package consists of .py scripts that I use for a set of configs and utils for my model.
Here's the code. We're trying to make a pipeline using PyTorch, so the first step has the dataset that's created using 'stuff' - a local folder that serves as a package for my code. The issue seems to be in the unpickling stage in the train function.
Hey, we figured out a temporary solution - importing the modules and reloading the contents of the artefact with pickle. It still gives us a warning, though training works now. Do send an update if you find a better solution.
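Roughly what the workaround looks like - the folder path and artefact filename are placeholders:

```python
import pickle
import sys

# Sketch of the temporary workaround: make the local package importable before
# unpickling, so pickle can resolve classes defined inside `stuff`.
sys.path.insert(0, "/path/to/project")  # folder that contains the `stuff` package
import stuff  # noqa: F401  (registers the module so pickle can find it)

with open("dataset_artifact.pkl", "rb") as f:
    dataset = pickle.load(f)
```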