Hello @<1523710243865890816:profile|QuaintPelican38>, could you try Dataset.get-ing an existing dataset and tell us whether there are any errors?
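Something along these lines (a minimal sketch; the dataset name and project are placeholders):
```
from clearml import Dataset

# Fetch an existing dataset (placeholder name/project) and check whether
# retrieving a local copy raises any errors.
ds = Dataset.get(dataset_name="my_dataset", dataset_project="my_project")
print(ds.id)
print(ds.get_local_copy())
```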
What happens if you comment out or remove the pipe.set_default_execution_queue('default') call and use run_locally instead of start_locally?
Because in the current setup, you are basically asking to run the pipeline controller task locally, while the rest of the steps need to run on an agent machine. If you make the changes suggested above, you will be able to run everything on your local machine.
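For reference, a rough sketch of running everything on the local machine with the controller API (pipeline, project and step names are placeholders; with the decorator API the equivalent is PipelineDecorator.run_locally()):
```
from clearml.automation import PipelineController

# Illustrative controller -- pipeline, project and step names are placeholders.
pipe = PipelineController(name="example-pipeline", project="examples", version="1.0.0")
# pipe.set_default_execution_queue("default")  # commented out, as suggested above
pipe.add_step(
    name="step_one",
    base_task_project="examples",
    base_task_name="step one task",
)

# Run the controller *and* its steps on this machine instead of an agent.
pipe.start_locally(run_pipeline_steps_locally=True)
```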
Hey @<1671689458606411776:profile|StormySeaturtle98>, we do support something called "Model Design" previews, basically an architecture description of the model, a la Caffe protobufs. For example, we store this info automatically with Keras.
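If you want to attach a design manually, it would look roughly like this (a sketch; the design text is a placeholder, and the automatic Keras capture doesn't need any of this):
```
from clearml import OutputModel, Task

task = Task.init(project_name="examples", task_name="model design preview")

# Attach a free-text architecture description ("model design") to a model.
# The design text here is just a placeholder.
model = OutputModel(task=task)
model.update_design(config_text="layer1: Conv2D(32, 3x3)\nlayer2: Dense(10)")
```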
Hey @<1564422650187485184:profile|ScaryDeer25>, we just released clearml==1.11.1rc2, which should solve the compatibility issues with lightning >= 2.0. Can you install it and check whether it solves your problem?
Hey @<1574207113163444224:profile|ShallowCoyote86>, what exactly do you mean by "depends on private_repo_b"? Another question: after you push the changes, do you re-run script_a.py?
Hey @<1547390444877385728:profile|ThickSnake12>, how exactly do you access the artifact next time? Can you provide a code sample?
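For reference, this is roughly how we'd expect it to be accessed (task ID and artifact name are placeholders):
```
from clearml import Task

# Task ID and artifact name are placeholders.
prev_task = Task.get_task(task_id="0123456789abcdef")
artifact = prev_task.artifacts["my_artifact"]
obj = artifact.get()                    # deserialize back into a Python object
local_file = artifact.get_local_copy()  # or just download the underlying file
print(obj, local_file)
```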
Can you please attach the code for the pipeline?
The issue may be related to the fact that right now we have some edge cases when working with lightning >= 2.0; we should have better support in the upcoming release.
Can you paste here the code of the pipeline that you're trying to run?
Ah, I think I understand. To execute a pipeline remotely you need to use pipe.start(), not task.execute_remotely. Do note that you can run tasks remotely without exiting the current process/closing the notebook (see the exit_process argument), but you won't be able to return any values from this task...
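Roughly like this (a sketch only; queue, project and task names are placeholders):
```
from clearml import Task
from clearml.automation import PipelineController

# Pipeline: the controller itself is sent to a queue (names are placeholders).
pipe = PipelineController(name="example-pipeline", project="examples", version="1.0.0")
# ... add steps here ...
pipe.start(queue="services")

# Single task: with exit_process=False the notebook keeps running, but the
# task cannot return values back to the calling process.
task = Task.init(project_name="examples", task_name="remote task")
task.execute_remotely(queue_name="default", exit_process=False)
```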
To my knowledge, no. You'd have to create your own front-end and use the model served with clearml-serving via an API
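For example, the front-end could POST to the serving endpoint directly (the URL pattern, endpoint name and payload below are assumptions based on a default clearml-serving deployment; adjust them to your setup and preprocessing code):
```
import requests

# Assumed default clearml-serving URL pattern and a placeholder endpoint name.
SERVING_URL = "http://localhost:8080/serve/my_model_endpoint"

payload = {"x": [[1.0, 2.0, 3.0]]}  # expected keys/shape depend on your preprocess code
response = requests.post(SERVING_URL, json=payload)
print(response.json())
```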
The second-to-last line in your code snippet above: pipe.start_locally.
Hey @<1523701066867150848:profile|JitteryCoyote63>, could you please open a GH issue on our repo too, so that we can track this issue more effectively? We are working on it now, btw.
Are you referring to the clearml-serving project?
To copy the artifacts, please refer to the docs here: None
That seems strange. Could you provide a short code snippet that reproduces your issue?
Hey @<1569858449813016576:profile|JumpyRaven4> , about your first point, what exactly is the question?
About your second point: you can try to manually save the final model and give it a proper file name; that way we will show it in the UI with the name you provided. Make sure to use xgboost.save_model and not raw pickle.
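For example (a minimal sketch with placeholder data and file name):
```
import numpy as np
import xgboost as xgb
from clearml import Task

task = Task.init(project_name="examples", task_name="xgboost final model")

# Tiny placeholder training set, just to have a booster to save.
dtrain = xgb.DMatrix(np.random.rand(20, 4), label=np.random.randint(0, 2, 20))
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=5)

# Saving through xgboost (not raw pickle) gives the model a proper file name,
# which is what will show up in the UI.
booster.save_model("final_model.json")
```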
For your final question, given that your models have customised code, I can suggest trying to use clearml.OutputModel, which will register the file you provide...
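A rough sketch of what that could look like (the model path and names are placeholders):
```
from clearml import OutputModel, Task

task = Task.init(project_name="examples", task_name="register custom model")

# Register an existing model file (placeholder path) as this task's output model.
output_model = OutputModel(task=task)
output_model.update_weights(weights_filename="path/to/my_model.bin")
```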
Hey @<1523701083040387072:profile|UnevenDolphin73>, what you're building here sounds like a useful tool. Let me make sure I understand what you're trying to achieve; please correct me if I'm wrong:
- You want to create a set of Step classes with which you can define pipelines that will be executed either locally or remotely.
- The pipeline execution is triggered from a notebook.
- The steps are predefined transformations; the user normally won't have to create their own steps.
 Did I get all...
Hey, yes, the reason for this issue seems to be our currently limited support for lightning 2.0. We will improve the support in upcoming releases. Right now, one workaround I can recommend is to use torch.save where possible, because we fully support automatic model capture on torch.save calls.
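For example (a minimal sketch; the model and file name are placeholders):
```
import torch
import torch.nn as nn
from clearml import Task

task = Task.init(project_name="examples", task_name="lightning checkpoint workaround")

model = nn.Linear(10, 2)  # stand-in for your LightningModule / inner model

# torch.save calls are captured automatically and registered as output models.
torch.save(model.state_dict(), "model_checkpoint.pt")
```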
Can you please check with the latest 1.10.2 SDK version whether the checkpointing issue still happens? As for the example code that couldn't be reproduced, we're already working on it and should have a fix in the next minor SDK version.
Ok, then launch an agent using clearml-agent daemon --queue default; that way your steps will be sent to the agent for execution. Note that in this case, you shouldn't change your code snippet in any way.
Hey @<1523705721235968000:profile|GrittyStarfish67> , we have just released 1.12.1 with a fix for this issue
Can you please attach the full traceback here?
Which gives me an idea. Could you please remove the entrypoint from the docker image altogether and try again?
Overriding the entrypoint in the image can lead to docker run/docker exec failing to work properly, because instead of a shell it will use your entrypoint to run everything.
Wait, my config looks a bit different; what clearml package version are you using?
Hey @<1603198163143888896:profile|LonelyKangaroo55>, if you only use the summary writer, does it report properly to both TB and ClearML?
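For instance, does a minimal script like this (placeholder names) show the scalar in both TensorBoard and the ClearML UI?
```
from torch.utils.tensorboard import SummaryWriter
from clearml import Task

task = Task.init(project_name="examples", task_name="summary writer check")

writer = SummaryWriter(log_dir="./tb_logs")
for step in range(10):
    writer.add_scalar("debug/constant", 1.0, step)
writer.close()
```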
Can you update the clearml version to latest (1.11.1) and see whether the issue is fixed?
