Hey, we figured out a temporary solution - importing the modules and reloading the artifact's contents with pickle. It still gives us a warning, though training works now. Do send an update if you find a better solution
Umm, I suppose that won't work - this package consists of .py scripts that I use for a set of configs and utils for my model.
How do we close PipelineDecorator?
It is still showing as running even after the pipeline has completed
You can also specify a package, with or without specifying its version:
https://clear.ml/docs/latest/docs/references/sdk/task#taskadd_requirements
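For reference, a minimal sketch of those calls (assuming the clearml package is installed and configured; the project/task names and the version pin here are made up):

```python
from clearml import Task

# Register packages before Task.init so they land in the task's requirements
Task.add_requirements("torch")                  # package only, no version
Task.add_requirements("torchvision", "0.15.2")  # package with an explicit version

task = Task.init(project_name="demo", task_name="train")
```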
btw here is the content of the imported file:
import torch
from torchvision import datasets, transforms
import os

MY_GLOBAL_VAR = 32

def my_dataloder():
    return torch.utils.data.DataLoader(
        datasets.MNIST(os.path.join('./', 'data'), train=True, download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor()
                       ])),
        batch_size=32, shuffle=True)
Have you tried adding the requirements using Task.add_requirements(local_packages) in your main file?
Though as per your docs, add_requirements is for a requirements.txt file
stuff is a package that holds my local modules - I've added it to my path with sys.path.insert, but it still isn't able to unpickle here
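A self-contained sketch of what is likely going on (it builds a throwaway package named stuff_demo to stand in for stuff): pickle stores only a reference like "module.ClassName", so the process that unpickles must be able to import the very same module - a sys.path.insert that runs before the load fixes it, one that runs after (or in a different process) does not.

```python
import os, pickle, sys, tempfile, textwrap

# Create a throwaway local package on disk, standing in for 'stuff'
workdir = tempfile.mkdtemp()
pkg_dir = os.path.join(workdir, "stuff_demo")
os.makedirs(pkg_dir)
open(os.path.join(pkg_dir, "__init__.py"), "w").close()
with open(os.path.join(pkg_dir, "helpers.py"), "w") as f:
    f.write(textwrap.dedent("""\
        class Loader:
            def __init__(self, batch_size):
                self.batch_size = batch_size
        """))

sys.path.insert(0, workdir)            # make the local package importable
from stuff_demo.helpers import Loader

blob = pickle.dumps(Loader(batch_size=32))

# Simulate the next pipeline step: package gone from path and module cache
sys.path.remove(workdir)
for name in [m for m in sys.modules if m.startswith("stuff_demo")]:
    del sys.modules[name]

unpickle_failed = False
try:
    pickle.loads(blob)
except ModuleNotFoundError:
    unpickle_failed = True             # the same error the pipeline step hits

sys.path.insert(0, workdir)            # restore the path *before* loading
obj = pickle.loads(blob)               # now it unpickles fine
```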
hey WickedElephant66 TenderCoyote78
I'm working on a solution, just hold on - I'll update you asap
Here's the code. We're trying to build a pipeline using PyTorch, so the first step has the dataset that's created using 'stuff' - a local folder that serves as a package for my code. The issue seems to be in the unpickling stage in the train function.
No, it is supposed to have its status updated automatically. We may have a bug. Can you share some example code with me, so that I can try to figure out what is happening here?
However, I use this to create an instance of a (torch) DataLoader that is fed into the next stage of the pipeline. Even though I import the local modules and add the folders to the path, it is unable to unpickle the artifact
Is there a way to store the return values after each pipeline stage in a format other than pickle?
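One workaround (a sketch of the general idea, not a specific ClearML API - the helper names here are made up): pass only plain, JSON-serializable data between steps and rebuild the DataLoader inside the step that consumes it, so nothing torch-specific or locally-defined ever needs to be pickled.

```python
import json

def make_loader_config(batch_size=32, shuffle=True, data_root="./data"):
    # Plain data only - any storage backend can serialize this without pickle
    return {"batch_size": batch_size, "shuffle": shuffle, "data_root": data_root}

def build_loader(config):
    # In the real consuming step this would do something like:
    #   DataLoader(datasets.MNIST(config["data_root"], train=True, download=True,
    #                             transform=transforms.ToTensor()),
    #              batch_size=config["batch_size"], shuffle=config["shuffle"])
    return config  # placeholder so this sketch stays torch-free

blob = json.dumps(make_loader_config())   # what crosses the step boundary
loader = build_loader(json.loads(blob))   # rebuilt inside the next step
```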
Can you share an example or part of your code with me? I might be missing something in what you intend to achieve
I'm facing the same issue - is there any solution to this?
How would you structure PyTorch pipelines in ClearML? Especially when dealing with image data
I tried it - it works for a library that you can install, but not for something local, I suppose
TenderCoyote78
The status should normally be updated automatically. Do all the steps finish successfully? And the pipeline as well?
Hey, so I was able to get the local .py files imported by adding the folder to my path via sys.path
Yep, the pipeline finishes but the status still shows running. Do we need to close a logger that we use for scalars, or anything like that?