ReassuredOwl55
Moderator
12 Questions, 42 Answers
Active since 10 January 2023
Last activity: one year ago

Reputation: 0
Badges: 1 (42 × Eureka!)
0 Votes · 7 Answers · 1K Views
Hey all, We are trying to clone a Task that uses custom pip installed packages and run it via an Agent. When running locally, we simply “pip install ./path/...
one year ago
0 Votes · 4 Answers · 1K Views
[Datasets] is it possible to get an individual file from a dataset? Example would be accessing only a single feature from a feature store Dataset when it cou...
one year ago
0 Votes · 23 Answers · 1K Views
How can I run a new version of a Pipeline, wait for it to finish and then check its completion/failure status? I want to kick off the pipeline and then check...
one year ago
0 Votes · 13 Answers · 1K Views
one year ago
0 Votes · 3 Answers · 1K Views
one year ago
0 Votes · 7 Answers · 1K Views
[Pipeline] Hey, is it possible to specify the output uri for Pipelines and their Components using Pipeline decorators? I would like to store Pipeline artifac...
one year ago
0 Votes · 7 Answers · 902 Views
Outputs of final few colab notebook cells not logged Hey, We are creating a ClearML Task within a Google Colab notebook and using the Task, among other thing...
one year ago
0 Votes · 5 Answers · 1K Views
one year ago
0 Votes · 3 Answers · 1K Views
Do pandas pd.DataFrame objects work with caching for Pipeline Components? I seem to be coming up against a weird issue where List[pd.DataFrame] is cached properly ...
one year ago
0 Votes · 1 Answer · 1K Views
one year ago
0 Votes · 0 Answers · 1K Views
Hey, I’m thinking of using a ClearML Pipeline to compile a dataset more efficiently. My hope is that I won’t have to run every step for every data point ever...
one year ago
0 Votes · 4 Answers · 1K Views
Trying to follow https://github.com/abiller/events/blob/webinars/videos/the_clear_show/S02/E05/dataset_edit_00.ipynb by GrittyStarfish67 and getting the foll...
one year ago
0 Votes · Trying To Follow…

```
from tempfile import mkdtemp

new_folder = with_feature.get_mutable_local_copy(mkdtemp())
```

It’s this line that causes the issue.
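For context, a minimal sketch of the surrounding code, assuming `with_feature` is a `clearml.Dataset` fetched by ID (the ID below is a placeholder):

```
from tempfile import mkdtemp
from clearml import Dataset

# Fetch the dataset object (ID is hypothetical, for illustration only)
with_feature = Dataset.get(dataset_id="<dataset-id>")

# Download an editable local copy into a fresh temp dir; this is the
# call the thread reports as problematic while the dataset shows "Uploading"
new_folder = with_feature.get_mutable_local_copy(mkdtemp())
```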

one year ago
0 Votes · Trying To Follow…

Ah ok. I’m guessing the state file is auto uploaded in the background? I haven’t kicked that off “intentionally”

one year ago
0 Votes · Trying To Follow…

ClearML shows the new (child) dataset as “Uploading”, if that helps…

one year ago
0 Votes · How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion

Do notice this will only work for pipelines from Tasks, is this a good fit for you?

The issue with this is that we would then presumably have to run/“build” each of the Tasks (pipeline steps) separately to put them on the ClearML server and then get their Task IDs in order to even write the code for the Pipeline, which increases the complexity of any automated CI/CD flow. Correct me if I’m wrong.

Essentially, I think the key thing here is we want to be able to build the entire Pipe...
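For reference, the “pipelines from Tasks” flow being discussed looks roughly like the sketch below; it assumes the step Tasks already exist on the ClearML server, and all project/task names are hypothetical:

```
from clearml import PipelineController

# Controller for a pipeline built from pre-existing Tasks
pipe = PipelineController(
    name="my-pipeline",   # hypothetical
    project="examples",   # hypothetical
    version="1.0.0",
)

# Each step references a Task that must already be on the server,
# which is the extra "build each Task first" work mentioned above
pipe.add_step(
    name="step_one",
    base_task_project="examples",        # hypothetical
    base_task_name="step one template",  # hypothetical template Task
)

pipe.start()  # enqueue the controller for an agent to pick up
```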

one year ago
0 Votes · How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion

Thanks, I’ll check out those GitHub Actions examples but as you say, it’s the “template” step that is the key bit for this particular application.

the pipeline from tasks serializes itself to a configuration object that you can edit/create externally

I think if it has to come down to fiddling with lower-level objects, I’ll hold off for now and wait until something a bit more user-friendly comes along. Depends on how easy this is to work with.

This is something that we do need if we a...

one year ago
0 Votes · How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion

The Pipeline is defined using PipelineDecorators, so currently to “build and run” it would just involve running the script it is defined in (which enqueues it and runs it etc).

This is not ideal, as I need to access the Task ID, and the only methods I can see are for use within the Task/Pipeline (Task.current_task and PipelineDecorator.get_current_pipeline)

The reason I want to check completion etc outside the Pipeline Task is that I want to run data validation etc once when the pipe...
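One possible workaround sketch for recovering the controller’s Task ID from outside the run: look the Task up by project/name after launching it. The hidden “.pipelines” project path is an assumption about how recent clearml versions store controllers, and all names are hypothetical:

```
from clearml import Task

# Find controller Tasks by project/name; the ".pipelines" sub-project
# is an assumed convention for where pipeline controllers are stored
tasks = Task.get_tasks(
    project_name="examples/.pipelines/my-pipeline",  # hypothetical
    task_name="my-pipeline",                         # hypothetical
)

# Pick the most recently created run and read its ID/status
latest = sorted(tasks, key=lambda t: t.data.created)[-1]
print(latest.id, latest.get_status())
```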

one year ago
0 Votes · How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion

The pseudo-code you wrote previously is what would be required, I believe

be able to get the pipeline’s Task ID back at the end

This is the missing piece. We can’t perform validation without this, afaik

one year ago
0 Votes · How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion

Sorry, I think something’s got lost in translation here, but thanks for the explanation.

Hopefully this is clearer:

  • Say we have a new ClearML pipeline as code on a new commit in our repo.
  • We want to build and run this new pipeline and have it available on the ClearML Server.
  • We want to run a suite of tests that validate/verify/etc the performance of this entire ClearML Pipeline, e.g. by having it run on a set of predefined inputs and checking the various artifacts that were creat...
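A sketch of what such a CI validation step could look like, assuming the controller’s Task ID has already been recovered (e.g. printed by the pipeline script, or looked up as sketched earlier); the ID and artifact name below are hypothetical:

```
from clearml import Task

# Attach to the pipeline controller Task by ID (placeholder ID)
pipeline_task = Task.get_task(task_id="<controller-task-id>")

# Block until the run finishes; by default this raises if the
# Task ends up in a failed state
pipeline_task.wait_for_status(
    status=[Task.TaskStatusEnum.completed],
    check_interval_sec=30,
)

# Validate the run, e.g. by checking the artifacts it produced
artifacts = pipeline_task.artifacts
assert "evaluation_report" in artifacts  # hypothetical artifact name
```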
one year ago
0 Votes · How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion

The issue here is I don’t have the pipeline ID as it is a new version of the pipeline - i.e. the code has been updated, I want to run the updated pipeline (for the first time), get its ID, and then analyse the run/perform updates to tags (for example)

one year ago
0 Votes · How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion

Sorry, I don’t understand how this helps with validating the pipeline run.
Where would the validation code sit?
And the ClearML Pipeline run needs to be available on the ClearML Server (at least as a draft) so that it can be marked as in-production and cloned in the future

one year ago
0 Votes · How Can I Run A New Version Of A Pipeline, Wait For It To Finish And Then Check Its Completion/Failure Status? I Want To Kick Off The Pipeline And Then Check Completion

Yep, I’d be happy to run locally, but I want to automate this; does running locally help with getting the pipeline run ID programmatically?

one year ago
0 Votes · [Pipeline] Am I Right In Saying A Pipeline Controller Can’t Include A Data-Dependent For-Loop? The Issue Is Not Spinning Up The Tasks, It’s Collecting The Results At The End. I Was Trying To Append The Outputs Of Each Iteration Of The For-Loop And Pass Th…

e.g. pseudo-code, for illustration only:

```
def get_list(dataset_id):
    from clearml import Dataset
    ds = Dataset.get(dataset_id=dataset_id)
    ds_dir = ds.get_local_copy()
    # etc...
    return list_of_objs  # one for each file, for example

def pipeline(dataset_id):
    list_of_objs = get_list(dataset_id)
    list_of_results = []
    for obj in list_of_objs:
        list_of_results.append(step(obj))
    combine(list_of_results)
```

One benefit is being able to make use of the Pipeline caching so if ne...
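For comparison, a minimal runnable sketch of that same pattern with actual pipeline decorators; the step/combine components and all names here are hypothetical, and (per the later comments in this thread) the data-dependent loop does work once component caching is set up correctly:

```
import os
from clearml import Dataset, PipelineDecorator

@PipelineDecorator.component(cache=True)
def step(obj):
    # hypothetical per-item processing
    return obj

@PipelineDecorator.component()
def combine(results):
    # hypothetical aggregation of all step outputs
    return len(results)

@PipelineDecorator.pipeline(name="loop-demo", project="examples", version="0.0.1")
def main(dataset_id: str):
    # Data-dependent loop: one step() call per file in the dataset
    ds_dir = Dataset.get(dataset_id=dataset_id).get_local_copy()
    results = [step(f) for f in sorted(os.listdir(ds_dir))]
    combine(results)

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # run controller and steps locally
    main(dataset_id="<dataset-id>")  # hypothetical dataset ID
```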

one year ago
0 Votes · Outputs Of Final Few Colab Notebook Cells Not Logged

Hi John, we are using a self-hosted server with:

WebApp: 1.9.2-317
Server: 1.9.2-317
API: 2.23

edit: clearml==1.11.0

one year ago
0 Votes · Outputs Of Final Few Colab Notebook Cells Not Logged

Yes, sorry, the final cell has the flush followed by the close

one year ago
0 Votes · [Pipeline] Am I Right In Saying A Pipeline Controller Can’t Include A Data-Dependent For-Loop? The Issue Is Not Spinning Up The Tasks, It’s Collecting The Results At The End. I Was Trying To Append The Outputs Of Each Iteration Of The For-Loop And Pass Th…

I have already tested that the for loop does work, including caching, when spinning out multiple Tasks.

As I say, the issue is grouping the results of the tasks into a list and passing them into another step

one year ago
0 Votes · [Pipeline] Am I Right In Saying A Pipeline Controller Can’t Include A Data-Dependent For-Loop? The Issue Is Not Spinning Up The Tasks, It’s Collecting The Results At The End. I Was Trying To Append The Outputs Of Each Iteration Of The For-Loop And Pass Th…

Ahh okay.

I’m an absolute numpty.

I had enabled caching on the Pipeline Task that was grabbing a load of ClearML IDs and so it was trying to “get” datasets that had since been deleted.

Thanks for the nudge to try a minimal test – silly I didn’t do it before asking!

Appreciate your help.

one year ago
0 Votes · Outputs Of Final Few Colab Notebook Cells Not Logged

I used task.flush(wait_for_uploads=True) in the final cell of the notebook
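A sketch of the final-cell pattern being described, assuming the Task was created earlier in the notebook (project/task names are hypothetical):

```
from clearml import Task

task = Task.init(project_name="examples", task_name="colab-demo")  # hypothetical

# ... the rest of the notebook cells run here ...

# Final cell: flush pending console output/uploads, then close the Task
task.flush(wait_for_uploads=True)
task.close()
```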

one year ago
0 Votes · [Pipeline] Hey, Is It Possible To Specify The Output URI For Pipelines And Their Components Using Pipeline Decorators? I Would Like To Store Pipeline Artifacts And Component Artifacts On S3.

I have added a lot of detail to this, sorry.

The inline comments in the code talk about that specific script/implementation.

I have added a lot of context in the doc string at the top.

one year ago
0 Votes · Do Pandas…

Is there a rule whereby only python native datatypes can be used as the “outer” variable?

I have a dict of NumPy arrays (np.array) elsewhere in my code and that works fine with caching.

one year ago
0 Votes · Hey All, We Are Trying To Clone A Task That Uses Custom Pip Installed Packages And Run It Via An Agent. When Running Locally, We Simply “…

My colleague @ZealousCoyote89 has been looking at this – I think he has used the relevant kwarg in the component decorator to specify the packages, and I think it worked, but I’m not 100% sure. Connah?
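For reference, the kwarg presumably being referred to is packages on the component decorator; a minimal sketch, where the local package path and all names are hypothetical:

```
from clearml import PipelineDecorator

@PipelineDecorator.component(
    # Explicit requirements for the step's environment; entries are
    # standard pip requirement lines (the local path is hypothetical)
    packages=["./path/to/custom_package", "pandas>=1.5"],
)
def preprocess(raw):
    import custom_package  # hypothetical custom pip-installed package
    return custom_package.clean(raw)
```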

one year ago
0 Votes · [Pipeline] Hey, Is It Possible To Specify The Output URI For Pipelines And Their Components Using Pipeline Decorators? I Would Like To Store Pipeline Artifacts And Component Artifacts On S3.

The return objects were stored to S3 but PipelineDecorator.upload_artifact still uploaded to the file server. Not sure what was up with that but as explained in my next comment it did work when I tried again.

It also seems that PipelineDecorator.upload_artifact is not compatible with caching, sadly, but that is another issue for another thread that I will be starting on Monday.

Have a good weekend
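For later readers of this thread: recent clearml releases accept an output_uri argument on the pipeline decorators, which is one way to direct artifacts to S3. Whether your installed version supports it is an assumption to verify, and the bucket path is hypothetical:

```
from clearml import PipelineDecorator

# output_uri redirects where step return values/artifacts are stored;
# availability of this kwarg depends on the clearml version installed
@PipelineDecorator.component(output_uri="s3://my-bucket/pipeline-artifacts")
def make_features(data):
    return data  # return objects are stored under the output_uri

@PipelineDecorator.pipeline(
    name="s3-demo", project="examples", version="0.0.1",
    output_uri="s3://my-bucket/pipeline-artifacts",  # hypothetical bucket
)
def main():
    make_features([1, 2, 3])
```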

one year ago