Your object is likely holding a file descriptor or something similar. The pipeline steps all run in separate processes (they can even run on different machines when running remotely), so you need to make sure that the objects you return are picklable and can be passed between these processes. You can check that the logger you are passing around is indeed picklable by calling pickle.dumps on it and then loading it in another run.
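A quick sanity check could look like this (a sketch; the helper name is made up, and you would pass whatever object your step returns):
import pickle

def check_picklable(obj):
    # raises if obj cannot be serialized and restored,
    # i.e. if it cannot be passed between pipeline steps
    data = pickle.dumps(obj)
    pickle.loads(data)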
The best practice would ...
SmallGiraffe94 You should use dataset_version=2022-09-07 (not version=...). This should work for your use-case. Dataset.get shouldn't actually accept a version kwarg, but it does because it accepts some **kwargs used internally. We will make sure to warn users from now on if they pass values through **kwargs.
Anyway, this issue still exists, but in another form: Dataset.get can't get datasets with a non-semantic version, unless the version is sp...
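For reference, a minimal call using that argument could look like this (the project and dataset names below are placeholders):
from clearml import Dataset

# fetch a dataset by an explicit version string
ds = Dataset.get(
    dataset_project="my_project",   # placeholder
    dataset_name="my_dataset",      # placeholder
    dataset_version="2022-09-07",
)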
Hi @<1817731756720132096:profile|WickedWhale51> ! ClearML is tolerant of network failures. Anyway, if you wish to upload the offline data periodically, you could zip the offline mode folder and import it:
from zipfile import ZIP_DEFLATED, ZipFile
from clearml import Task

# make sure the state of the offline data is saved
Task.current_task()._edit()
# create zip file
offline_folder = Task.current_task().get_offline_mode_folder()
zip_file = offline_folder.as_posix() + ".zip"
with ZipFile(zip_file, "w", allowZip64=True, compression=ZIP_DEFLATED) as zf:
...
Hi @<1643060801088524288:profile|HarebrainedOstrich43> ! Could you please share some code that could help us reproduce the issue? I tried cloning, changing parameters and running a decorated pipeline, but the whole process worked as expected for me.
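To later import the zipped session into the server, something along these lines should work (a sketch; it assumes the zip_file path produced above and that offline mode is disabled in the importing process):
from clearml import Task

# register the zipped offline session as a task on the ClearML server
Task.import_offline_session(zip_file)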
Hi DangerousDragonfly8 ! At the moment this is not possible, but we do have it planned (we have had some prior requests for this feature).
@<1526734383564722176:profile|BoredBat47> Yeah. This is an example:
s3 {
    key: "mykey"
    secret: "mysecret"
    region: "us-east-1"
    credentials: [
        {
            bucket: ""
            key: "mykey"
            secret: "mysecret"
            region: "us-east-1"
        },
    ]
}
# some other config
default_output_uri: ""
Hi @<1628202899001577472:profile|SkinnyKitten28> ! What code do you see that is being captured?
Hi @<1570583237065969664:profile|AdorableCrocodile14> ! get_local_copy will always copy/download external files to a folder. To get the external files, there is a property on the dataset called link_entries, which returns a list of LinkEntry objects. Each of those has a link attribute, and each such link should point to an external file (in this case, your local paths prefixed with file://).
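A small sketch of reading those links (the dataset id below is a placeholder):
from clearml import Dataset

ds = Dataset.get(dataset_id="your-dataset-id")  # placeholder id
for entry in ds.link_entries:
    # entry.link points to the external file, e.g. file:///path/to/file
    print(entry.link)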
Hi @<1675675705284759552:profile|NonsensicalAnt77> ! How are you uploading the model weights without using the SDK? Can you please share a code snippet? It might be useful in finding out why your config doesn't work. Also, what is your clearml version?
Hi @<1715175986749771776:profile|FuzzySeaanemone21> ! Are you running this remotely? If so, you should work inside a repository such that the agent can clone the repository which should include the config as well. Otherwise, the script will run as a "standalone"
Hi  @<1523701504827985920:profile|SubstantialElk6> !
Regarding 1: pth files get pickled.
The flow is like this:
- The step is created by the controller by writing some code to a file and running that file in python
- The following line is run in the step when returning values: None
- This is eventually run: [None](https://github.com/allegroai/clearml/blob/cbd...
Hi @<1643060801088524288:profile|HarebrainedOstrich43> ! Thank you for reporting. We will get back to you as soon as we have something
Yeah, that's always the case with complex systems 😕
Please add it to GitHub! No other info is needed; we know what the issue is.
Hello  MotionlessCoral18 . I have a few questions that might help us find out why you experience this problem:
- Is there any chance you are running the program in offline mode?
- Is there any other message being logged that might help? The error messages might include "Action failed", "Failed sending", "Retrying, previous request failed", "contains illegal schema".
- Are you able to connect to the backend at all from the program you are trying to get the dataset from?
Thank you!
Regarding pending pipelines: please make sure a free agent is bound to the queue you wish to run the pipeline in. You can check queue information by accessing the INFO section of the controller (as in the first screenshot).
Then, by clicking on the queue, you should see the worker status. There should be at least one worker with a blank "CURRENTLY EXECUTING" entry.

Or if you ran it via an IDE, what is the interpreter path?
Hi  @<1545216070686609408:profile|EnthusiasticCow4> ! I have an idea.
The flow would be like this: you create a dataset, the parent of that dataset would be the previously created dataset. The version will auto-bump. Then, you sync this dataset with the folder. Note that sync will return the number of added/modified/removed files. If all of these are 0, then you use  Dataset.delete  on this dataset and break/continue, else you upload and finalize the dataset.
Something like:
parent =...
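A rough sketch of that loop (hedged: the exact return value of sync_folder may differ slightly between clearml versions, and the project, dataset name and folder path are placeholders):
from clearml import Dataset

parent = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")  # latest version
# create a child dataset; the version auto-bumps
dataset = Dataset.create(
    dataset_project="my_project",
    dataset_name="my_dataset",
    parent_datasets=[parent.id],
)
# sync_folder reports how many files were added/modified/removed
counts = dataset.sync_folder(local_path="data_folder")
if not any(counts):
    # nothing changed: drop the empty child version
    Dataset.delete(dataset_id=dataset.id)
else:
    dataset.upload()
    dataset.finalize()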
Hi @<1694157594333024256:profile|DisturbedParrot38> ! We weren't able to reproduce this, but you could find the source of the warning by adding the following code at the top of your script:
import traceback
import warnings
import sys
def warn_with_traceback(message, category, filename, lineno, file=None, line=None):
    log = file if hasattr(file,'write') else sys.stderr
    traceback.print_stack(file=log)
    log.write(warnings.formatwarning(message, category, filename, lineno, line))
...
Hi @<1674226153906245632:profile|PreciousCoral74> !
Sadly, Logger.report_matplotlib_figure(…) doesn't seem to log plots. Only the automatic integration appears to behave.
What do you mean by that?  report_matplotlib_figure  should work. See this example on how to use it:  None .
If it still doesn't work for you, could you please share a code snippet that could help us track down...
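If it helps, a minimal report_matplotlib_figure call might look like this (a sketch; the project, task, title and series names are placeholders):
import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="matplotlib report")  # placeholders
fig = plt.figure()
plt.plot([1, 2, 3], [4, 5, 6])
# explicitly report the figure to the ClearML server
task.get_logger().report_matplotlib_figure(
    title="My Plot", series="series A", figure=fig, iteration=0
)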
Yes, passing custom objects between steps should be possible. The only condition is for the objects to be picklable. What exactly are you returning from init_experiment?
Hi  @<1590514584836378624:profile|AmiableSeaturtle81> ! To help us debug this: are you able to simply use the  boto3  python package to interact with your cluster?
If so, what does that code look like? It would give us some insight into how the config should actually look or what changes need to be made.
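Something along these lines, using boto3 directly, would tell us whether the endpoint and credentials work outside of ClearML (the endpoint, credentials and bucket below are placeholders):
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://my-s3-endpoint:9000",  # placeholder
    aws_access_key_id="mykey",                   # placeholder
    aws_secret_access_key="mysecret",            # placeholder
    region_name="us-east-1",
)
print(s3.list_buckets())
print(s3.list_objects_v2(Bucket="my-bucket"))    # placeholder bucket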
hi OutrageousSheep60 ! We haven't released an RC yet; we will a bit later today though. We will ping you when it's ready, sorry for the delay.
or rather than  str(self) , something like:
    def __repr__(self):
        return self.__class__.__name__ + "." + self.name
should work better
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , I think you are right. We will try to look into this asap
Regarding 1., are you trying to delete the project from the UI? (I can't see an attached image in your message)
