SmugDolphin23
Moderator
0 Questions, 355 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
0 I’m trying to understand the execution flow of pipelines when translating from local to remote execution. I’ve defined a pipeline using the

Yes, you need to call the function every time. The remote run might have some parameters populated which you can use, but the pipeline function needs to be called if you actually want to run the pipeline.

one month ago
0 I’m trying to understand the execution flow of pipelines when translating from local to remote execution. I’ve defined a pipeline using the

If the task is running remotely and the parameters are already populated, the local run parameters will not be used; instead, the parameters that are already on the task will be used. This is because we want to allow users to change these parameters in the UI if they want to, so the parameters that are in the code are ignored in favor of the ones in the UI.
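
As a rough, illustrative sketch of that behaviour (the project and parameter names here are made up, not from the original question):

from clearml import Task

task = Task.init(project_name="demo", task_name="params-example")

# Defaults defined in code.
params = {"learning_rate": 0.01, "batch_size": 32}

# On a remote run the task already holds parameter values (e.g. edited in the UI),
# and connect() hands those back, overriding the in-code defaults above.
params = task.connect(params)
print(params["learning_rate"])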

one month ago
0 Hi, working with ClearML 1.6.4, what is the correct way to list all the

Hi OutrageousSheep60. The list_datasets function is currently broken and will be fixed in the next release.

one year ago
0 Hi, I am trying to use

Hi DrabOwl94. Looks like this is a bug; strange that no one found it until now. Anyway, you can just add --params-override at the end of the command line and it should work (plus --max-iteration-per-job <YOUR_INT> and --total-max-job <YOUR_INT>, as Optuna requires these). We will fix this in the next patch.
Also, could you please open a GitHub issue? It should contain your command line and this error.
Thank you

one year ago
0 I’m trying to understand the execution flow of pipelines when translating from local to remote execution. I’ve defined a pipeline using the

Hi @<1533620191232004096:profile|NuttyLobster9>! PipelineDecorator.get_current_pipeline will return a PipelineDecorator instance (which inherits from PipelineController) once the pipeline function has been called. So

pipeline = PipelineDecorator.get_current_pipeline()
pipeline(*args)

doesn't really make sense. You should likely call pipeline = build_pipeline(*args) instead
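
A minimal sketch of the intended pattern, assuming a decorator-based pipeline (build_pipeline and its step here are placeholders, not code from the original thread):

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["doubled"])
def double(x):
    return x * 2

@PipelineDecorator.pipeline(name="demo-pipeline", project="demo", version="1.0.0")
def build_pipeline(x):
    # Calling the decorated pipeline function is what actually runs the pipeline.
    return double(x)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # optional: execute the steps in the local process
    result = build_pipeline(3)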

one month ago
0 Hey, is there a way to set pipeline component return artifact compression at a pipeline level? It would allow big dataframes to flow across components without having to resort to defining temporary datasets; currently it's generating only raw pickles.

Hi @<1523702000586330112:profile|FierceHamster54>! This is currently not possible, but I have a workaround in mind. You could use the artifact_serialization_function parameter in your pipeline. The function should return a bytes stream of the zipped content of your data with whichever compression level you have in mind.
If I'm not mistaken, you wouldn't even need to write a deserialization function in your case, because we should be able to unzip your data just fine.
Wdyt?
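
A rough sketch of that workaround for a decorator-based pipeline (gzip and the compression level are just an example; everything besides the artifact_serialization_function / artifact_deserialization_function parameters is illustrative):

import gzip
import pickle
from clearml import PipelineDecorator

def compress_artifact(obj):
    # Serialize the object and gzip it; the returned bytes are what gets stored.
    return gzip.compress(pickle.dumps(obj), compresslevel=6)

def decompress_artifact(blob):
    # Per the note above this may not even be needed, since the zipped data can be read back as-is.
    return pickle.loads(gzip.decompress(blob))

@PipelineDecorator.pipeline(
    name="demo-pipeline",
    project="demo",
    version="1.0.0",
    artifact_serialization_function=compress_artifact,
    artifact_deserialization_function=decompress_artifact,
)
def my_pipeline():
    ...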

3 months ago
0 Hi

Hi @<1546303293918023680:profile|MiniatureRobin9>! The PipelineController has a property called id, so just doing something like pipeline.id should be enough.
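
For instance (the pipeline name and project are placeholders):

from clearml import PipelineController

pipeline = PipelineController(name="demo-pipeline", project="demo", version="1.0.0")
print(pipeline.id)  # the ID of the controller task behind this pipeline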

one month ago
0 Hi Everyone

Hi @<1546303293918023680:profile|MiniatureRobin9>! When it comes to pipelines from functions/other tasks, this is not really supported. You could, however, cut the execution short when your step is being run by evaluating the return values from other steps.

Note that you should be able to skip steps if you are using pipelines from decorators.
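
An illustrative sketch of that kind of short-circuit with a decorator-based pipeline (the step names are made up):

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["ok"])
def validate(data_path):
    return bool(data_path)

@PipelineDecorator.component(return_values=["model_path"])
def train(data_path):
    return "model.pt"

@PipelineDecorator.pipeline(name="demo-pipeline", project="demo", version="1.0.0")
def my_pipeline(data_path=""):
    ok = validate(data_path)
    if not ok:       # evaluating a step's return value inside the pipeline logic
        return None  # cut execution short: train() is never launched
    return train(data_path)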

10 months ago
0 Dear community, I have tried to use

Hi @<1668427963986612224:profile|GracefulCoral77>! The error is a bit misleading. What it actually means is that you shouldn't attempt to modify a finalized ClearML dataset (I suppose that is what you are trying to achieve). Instead, you should create a new dataset that inherits from the finalized one and sync that dataset, or leave the dataset in an unfinalized state.
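
A minimal sketch of the suggested approach (the project/dataset names and local folder are placeholders):

from clearml import Dataset

# Create a child dataset that inherits the finalized dataset's content.
parent = Dataset.get(dataset_project="demo", dataset_name="my-dataset")
child = Dataset.create(
    dataset_project="demo",
    dataset_name="my-dataset",
    parent_datasets=[parent.id],
)

# Sync local changes into the child, then upload and finalize it.
child.sync_folder(local_path="./data")
child.upload()
child.finalize()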

2 months ago
0 Hi, I’m trying to integrate Logger in my PipelineDecorator but I’m getting this error -

Yes, passing custom objects between steps should be possible. The only condition is that the objects are picklable. What exactly are you returning from init_experiment?

one month ago
0 Hi, I’m trying to integrate Logger in my PipelineDecorator but I’m getting this error -

Your object is likely holding a file descriptor or something like that. The pipeline steps all run in separate processes (they can even run on different machines when running remotely). You therefore need to make sure that the objects you are returning are picklable and can be passed between these processes. You can check that the logger you are passing around is indeed picklable by calling pickle.dumps on it and then loading it in another run.
The best practice would ...
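
A small helper along those lines (purely illustrative):

import pickle

def is_picklable(obj):
    """Return True if obj survives a pickle round-trip, as the check above suggests."""
    try:
        pickle.loads(pickle.dumps(obj))
        return True
    except Exception:
        return False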

one month ago
0 For some reason I can't delete a pipeline project, the deletion is running indefinitely. Is there a way to force the deletion of a project via the APIClient?

Actually, I think you want blop now that you renamed the project (instead of custom pipeline logic).

one year ago
0 For some reason I can't delete a pipeline project, the deletion is running indefinitely. Is there a way to force the deletion of a project via the APIClient?

Try examples/.pipelines/custom pipeline logic instead of pipeline_project/.pipelines/custom pipeline logic
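
If you do want to go through the APIClient, a rough sketch (the project path is the one mentioned above; note that force-deleting also removes the tasks the project contains, so use with care):

from clearml.backend_api.session.client import APIClient

client = APIClient()
projects = client.projects.get_all(name="examples/.pipelines/custom pipeline logic")
if projects:
    client.projects.delete(project=projects[0].id, force=True)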

one year ago
0 Hey, we are using ClearML 1.9.0 with Transformers 4.25.1… and we started getting errors that do not reproduce in earlier versions (only works in 1.7.2, all 1.8.x don't work):

Hi @<1523701949617147904:profile|PricklyRaven28>! We released ClearML SDK 1.9.1 yesterday. Can you please try it?

one year ago
0 Hello, is there a way to disable dataset caching so that when

Hi FreshParrot56! This is currently not supported 🙁

one year ago
0 Hi All

That's unfortunate. Looks like this is indeed a problem 😕 We will look into it and get back to you.

one year ago
0 Hi! Is there any way to add a git-like ignore file for versioning ClearML data? I saw in the docs a wildcard argument when files are added to a dataset. How can I specify ignoring of some file types? For example, I want to ignore ipynb checkpoints. How can I do

Hi @<1676038099831885824:profile|BlushingCrocodile88>! We will soon try to merge a PR submitted via GitHub that will allow you to specify a list of files to be added to the dataset. So you will then be able to do something like add_files(list(set(glob.glob("*")) - set(glob.glob("*.ipynb"))))

one month ago
0 Hi all, after upgrading to SDK 1.8.0 we are having an issue adding external files to a dataset from GCS. This is the code we use:

This only affects single files; adding directories (with wildcards as well) should still work.

one year ago
0 Hi all, after upgrading to SDK 1.8.0 we are having an issue adding external files to a dataset from GCS. This is the code we use:

You could try this in the meantime if you don't mind temporary workarounds:
dataset.add_external_files(source_url=" ", wildcard=["file1.csv"], recursive=False)

one year ago
0 Hi, I’m trying to upload output model files (like .pth) to the ClearML server. Assume my

@<1523721697604145152:profile|YummyWhale40> are you able to manually save models from SageMaker using OutputModel?
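
For reference, a minimal sketch of saving a model manually with OutputModel (the file name and project are placeholders):

from clearml import Task, OutputModel

task = Task.init(project_name="demo", task_name="manual-model-upload")

# Register and upload the .pth file as an output model of the current task.
output_model = OutputModel(task=task, framework="PyTorch")
output_model.update_weights(weights_filename="model.pth")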

2 months ago
0 [Issue with MinIO] Hi, I am using clearml=1.8.3, but it still seems to have trouble with the MinIO connection.

QuaintJellyfish58, we will release an RC later today that adds the region to boto_kwargs. We will ping you when it's ready to try out.

one year ago