Thank you! It solved my problem but I'm now seeing something else.
I have a data_prepping step which contains a LightningDataModule. In it, I load and prep the data, and the function then returns an initialized datamodule, which I pass to the training function. The step is decorated with PipelineDecorator.component(task_type=TaskTypes.data_processing, cache=False). When training is done, the pipeline saves my entire dataset (64 GB) as an artifact, and I'm not sure why. Would you happen to know what I am doing wrong? Do you have an example of how the pipeline decorator is used with a PyTorch Lightning ML pipeline?
Hi @<1547028031053238272:profile|MassiveGoldfish6> , you can use the auto_connect_frameworks parameter of the component to disable auto-logging of checkpoints.
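A minimal sketch of where that argument goes, assuming a recent clearml version in which PipelineDecorator.component accepts auto_connect_frameworks; the datamodule class, module paths, and step bodies are placeholders, not your actual code:

```python
from clearml import TaskTypes
from clearml.automation.controller import PipelineDecorator


@PipelineDecorator.component(
    task_type=TaskTypes.data_processing,
    cache=False,
    # Disable ClearML's automatic PyTorch framework capture for this step,
    # so checkpoints/objects produced here are not auto-uploaded as artifacts.
    auto_connect_frameworks={"pytorch": False},
)
def data_prepping():
    # Hypothetical LightningDataModule, stands in for your own class
    from my_project.datamodule import MyDataModule
    dm = MyDataModule(data_dir="./data")
    dm.prepare_data()
    dm.setup("fit")
    return dm


@PipelineDecorator.component(task_type=TaskTypes.training)
def train(dm):
    import pytorch_lightning as pl
    from my_project.model import MyLightningModel  # placeholder model
    trainer = pl.Trainer(max_epochs=1)
    trainer.fit(MyLightningModel(), datamodule=dm)


@PipelineDecorator.pipeline(name="lightning-pipeline", project="examples", version="0.1")
def run_pipeline():
    dm = data_prepping()
    train(dm)


if __name__ == "__main__":
    PipelineDecorator.run_locally()  # for local debugging; drop for remote execution
    run_pipeline()
```

Note that auto_connect_frameworks only controls the automatic framework logging for that component's task; objects returned from a component are still passed to the next step by the pipeline itself.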