ReassuredOwl55
Moderator · 12 Questions, 42 Answers
Active since 10 January 2023 · Last activity one year ago
Reputation: 0
Badges: 1 (42 × Eureka!)
0 Votes · 7 Answers · 1K Views
Hey all, we are trying to clone a Task that uses custom pip installed packages and run it via an Agent. When running locally, we simply “pip install ./path/...
one year ago
0 Votes · 4 Answers · 1K Views
[Datasets] is it possible to get an individual file from a dataset? Example would be accessing only a single feature from a feature store Dataset when it cou...
one year ago
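A rough sketch of one way to approach the single-file question above: as far as I know the ClearML Dataset API exposes `list_files()` and `get_local_copy()` but no documented per-file download, so this helper copies one file out of a full (cached) local copy. The function names `match_single_file`/`fetch_one_file` and the file-matching rule are my own assumptions, not ClearML API.

```python
import os
import shutil
import tempfile


def match_single_file(file_list, name):
    """Pure helper: pick exactly one relative path from a dataset file listing."""
    hits = [p for p in file_list if p.endswith(name)]
    if len(hits) != 1:
        raise ValueError(f"expected exactly one match for {name!r}, got {hits}")
    return hits[0]


def fetch_one_file(dataset_id, name):
    # Requires the clearml package and a reachable server. Note the whole
    # dataset is still downloaded here (once, then cached) -- this only gives
    # single-file *access*, not a single-file transfer.
    from clearml import Dataset

    ds = Dataset.get(dataset_id=dataset_id)
    rel = match_single_file(ds.list_files(), name)
    local_root = ds.get_local_copy()  # cached, read-only local copy
    target = os.path.join(tempfile.mkdtemp(), os.path.basename(rel))
    shutil.copy(os.path.join(local_root, rel), target)
    return target
```

If the dataset is large, the caching of `get_local_copy()` means only the first access pays the download cost.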
0 Votes · 23 Answers · 1K Views
How can I run a new version of a Pipeline, wait for it to finish and then check its completion/failure status? I want to kick off the pipeline and then check...
one year ago
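For the run-and-check-status question above, a minimal sketch using `Task.get_task` and `Task.wait_for_status` (both real ClearML API; the exact keyword arguments should be checked against your installed version's signature). `pipeline_outcome` and the status names it tests are assumptions based on ClearML's usual task states.

```python
def pipeline_outcome(status: str) -> bool:
    """Pure helper: map a final task status string to pass/fail.

    Status names are assumptions -- verify against your server's task states.
    """
    return status in {"completed", "published"}


def run_pipeline_and_check(pipeline_task_id: str) -> bool:
    # Requires the clearml package and a reachable server.
    from clearml import Task

    task = Task.get_task(task_id=pipeline_task_id)
    task.wait_for_status(
        status=(Task.TaskStatusEnum.completed, Task.TaskStatusEnum.failed),
        raise_on_status=(),  # inspect the status ourselves instead of raising
        check_interval_sec=30,
    )
    task.reload()
    return pipeline_outcome(str(task.get_status()))
```

In a CI job this boolean can be turned into the process exit code, so the pipeline's failure fails the build.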
0 Votes · 3 Answers · 1K Views
one year ago
0 Votes · 7 Answers · 902 Views
Outputs of final few colab notebook cells not logged: Hey, we are creating a ClearML Task within a Google Colab notebook and using the Task, among other thing...
one year ago
0 Votes · 7 Answers · 1K Views
[Pipeline] Hey, is it possible to specify the output uri for Pipelines and their Components using Pipeline decorators? I would like to store Pipeline artifac...
one year ago
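On the output-URI question above, a configuration sketch, with a caveat: whether `output_uri` is accepted by the pipeline and component decorators depends on your clearml version (it was not in early releases), so treat the kwarg and the bucket URI below as assumptions to verify against your installed `PipelineDecorator` signatures.

```python
def build_pipeline():
    # Deferred import: needs the clearml package installed and a configured server.
    from clearml.automation.controller import PipelineDecorator

    # Assumption: your clearml release accepts `output_uri` on both decorators
    # as the destination for step artifacts/models; the bucket is illustrative.
    @PipelineDecorator.component(
        return_values=["data"],
        output_uri="s3://my-bucket/artifacts",
    )
    def load_data():
        return [1, 2, 3]

    @PipelineDecorator.pipeline(
        name="demo-pipeline",
        project="demo",
        output_uri="s3://my-bucket/artifacts",
    )
    def pipeline_logic():
        return load_data()

    return pipeline_logic
```

If the kwarg is not available in your version, setting `Task.init(..., output_uri=...)` behaviour via the agent's `default_output_uri` configuration is the usual fallback.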
0 Votes · 0 Answers · 1K Views
Hey, I’m thinking of using a ClearML Pipeline to compile a dataset more efficiently. My hope is that I won’t have to run every step for every data point ever...
one year ago
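For the incremental-dataset idea above, a sketch of how step caching could avoid re-running every step for every data point: `PipelineController.add_function_step` and its `cache_executed_step` parameter are real ClearML API, while the pipeline/step names and the `pending_items` helper are hypothetical.

```python
def pending_items(all_ids, processed_ids):
    """Pure helper: which data points still need the expensive step."""
    done = set(processed_ids)
    return [i for i in all_ids if i not in done]


def build_controller(item_ids):
    # Requires clearml + a reachable server; names here are illustrative.
    from clearml import PipelineController

    pipe = PipelineController(name="compile-dataset", project="demo", version="1.0")

    def process_fn(item_ids):
        # ...expensive per-data-point work would go here...
        return item_ids

    # With cache_executed_step=True, ClearML reuses a previous run of this
    # step when its code and inputs are unchanged, so repeated pipeline runs
    # skip work whose inputs did not change.
    pipe.add_function_step(
        name="process",
        function=process_fn,
        function_kwargs=dict(item_ids=item_ids),
        cache_executed_step=True,
    )
    return pipe
```

The caching is per-step (code + inputs), so splitting the compilation into steps with stable inputs is what makes the reuse effective.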
0 Votes · 1 Answer · 1K Views
one year ago
0 Votes · 4 Answers · 1K Views
Trying to follow https://github.com/abiller/events/blob/webinars/videos/the_clear_show/S02/E05/dataset_edit_00.ipynb by GrittyStarfish67 and getting the foll...
one year ago
0 Votes · 3 Answers · 1K Views
Do pandas pd.DataFrame objects work with caching for Pipeline Components? I seem to be coming up against a weird issue where List[pd.DataFrame] is cached properly ...
one year ago
0 Votes · 13 Answers · 1K Views
one year ago
0 Votes · 5 Answers · 1K Views
one year ago
0 · Do Pandas

Is there a rule whereby only Python-native datatypes can be used as the “outer” variable?

I have a dict of numpy np.array objects elsewhere in my code and that works fine with caching.

one year ago
0 · Hey all, we are trying to clone a Task that uses custom pip installed packages and run it via an Agent. When running locally, we simply “

My colleague, @<1534706830800850944:profile|ZealousCoyote89>, has been looking at this – I think he has used the relevant kwarg in the component decorator to specify the packages, and I think it worked, but I’m not 100% sure. Connah?

one year ago
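The “relevant kwarg” mentioned above is presumably `packages` on the component decorator, which pins what the agent pip-installs for that step. A sketch, with the caveat that whether a local-path spec resolves on the agent depends on the agent’s working directory; the package names, the path, and the `merge_packages` helper are all illustrative assumptions.

```python
def merge_packages(base, extra):
    """Pure helper: later pins override earlier ones for the same package name."""
    pins = {}
    for spec in list(base) + list(extra):
        pins[spec.split("==")[0]] = spec
    return sorted(pins.values())


def make_component():
    # Deferred import: needs clearml installed and a configured server.
    from clearml.automation.controller import PipelineDecorator

    # `packages` is a real parameter of PipelineDecorator.component; the
    # local-path entry below is an assumption to check against your agent setup.
    @PipelineDecorator.component(
        return_values=["result"],
        packages=merge_packages(["pandas==2.1.4"], ["./path/to/local_pkg"]),
    )
    def step():
        import pandas as pd
        return pd.DataFrame()

    return step
```

Pinning via `packages` sidesteps the agent trying to reconstruct the local environment’s requirements, which is where custom pip-installed packages usually break.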
0 · [Pipeline] Am I right in saying a Pipeline Controller can’t include a data-dependent for-loop? The issue is not spinning up the Tasks, it’s collecting the results at the end. I was trying to append the outputs of each iteration of the for-loop and pass th

The Dataset object itself is not being passed around. The point of showing you that was to say that the Dataset may change, and therefore the number of objects loaded from the Dataset (e.g. a number of pandas DataFrames that were CSVs in the dataset) could change.

one year ago
0 · How can I run a new version of a Pipeline, wait for it to finish and then check its completion/failure status? I want to kick off the pipeline and then check completion

Yep, that’s it. Obviously would be nice to not have to go via the shell but that’s by the by (edit: I don’t know of a way to build or run a new version of a pipeline without going via the shell, so this isn’t a big deal).

one year ago
0 · How can I run a new version of a Pipeline, wait for it to finish and then check its completion/failure status? I want to kick off the pipeline and then check completion

Basically, for a bit more context, this is part of an effort to incorporate ClearML Pipelines in a CI/CD framework. Changes to the pipeline script create_pipeline_a.py that are pushed to a GitHub master branch would trigger the build and testing of the pipeline.

And I’d rather the testing/validation, etc., lived outside of the ClearML Pipeline itself, as stated earlier – and that’s what your pseudo code allows, so if it’s possible that would be great. 🙂

one year ago
0 · Trying to follow

ClearML shows the new (child) dataset as “Uploading”, if that helps…

one year ago
0 · Trying to follow

from tempfile import mkdtemp
new_folder = with_feature.get_mutable_local_copy(mkdtemp())

It’s this line that causes the issue.

one year ago
0 · Trying to follow

Ah ok. I’m guessing the state file is auto-uploaded in the background? I haven’t kicked that off “intentionally”.

one year ago