GiganticTurtle0
Moderator
46 Questions, 183 Answers
  Active since 10 January 2023
  Last activity one year ago

Reputation: 0
Badges (1): 183 × Eureka!
0 Votes 2 Answers 527 Views
Since PipelineDecorator automatically starts the task for you, is there any way to specify arguments to Task.init in the task created for a function decorate...
2 years ago
0 Votes 6 Answers 538 Views
Hi, I have a question regarding the new PipelineDecorator feature and it's about how to access the task created by PipelineDecorator.pipeline through its ID ...
2 years ago
0 Votes 4 Answers 577 Views
I'm trying to implement a cleanup service by following this example https://github.com/allegroai/clearml/blob/master/examples/services/cleanup/cleanup_servic...
2 years ago
0 Votes 8 Answers 533 Views
2 years ago
0 Votes 1 Answer 660 Views
2 years ago
0 Votes 13 Answers 545 Views
2 years ago
0 Votes 10 Answers 537 Views
Is there any example showing how to work with nested pipelines? In my case I have several functions decorated with PipelineDecorator . In a pipeline I call s...
2 years ago
0 Votes 1 Answer 516 Views
Is it possible to attach to an OutputModel an object closely related to it (as some product of data preprocessing that has been done specifically for that mo...
2 years ago
0 Votes 13 Answers 629 Views
2 years ago
0 Votes 1 Answer 563 Views
Is there any way to create a queue from code?
2 years ago
0 Votes 2 Answers 607 Views
Hello, I was wondering if ClearML offers the option to automatically spin up the clearml-agent again every time the machine where it was being executed as a ...
2 years ago
0 Votes 3 Answers 546 Views
I have another question regarding creating a Task with PipelineDecorator.component . Where can I specify the reuse_last_task_id parameter? I need to set it t...
2 years ago
0 Votes 9 Answers 676 Views
Hi, I just updated clearml to version v1.1.3. Right after launching a training pipeline, the system crashed due to the following error: Traceback (most recen...
2 years ago
0 Votes 5 Answers 578 Views
Hi! From a task created using PipelineDecorator.pipeline , is there any way to get a task ID from the name of the step listed in the table below? My plan is ...
2 years ago
0 Votes 11 Answers 591 Views
Let's say that I specify the output_uri parameter in Task.init like this: task = Task.init( project_name="example_project", task_name="example_task", output_...
2 years ago
0 Votes 10 Answers 605 Views
Hi, Is there a simple way to make Task.init compatible with Dask.distributed client? When I try to run a script where I want to read concurrently a dataset i...
2 years ago
0 Hi! If There Are Several Tasks Running Concurrently, Which Task Should

I have tried it, and it depends on the context. When I call the method inside a function decorated with PipelineDecorator.component, I get the component task, while if I call it inside PipelineDecorator.pipeline, I get the task corresponding to the pipeline. However, as you said, that is not the expected behavior, although I think it makes sense.
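
For context, a minimal sketch of the two call sites, assuming the method being discussed is Task.current_task() (the thread above doesn't name it explicitly):

```python
from clearml import Task
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["result"])
def step():
    # Called inside a component: returned the component's own task
    print(Task.current_task().id)
    return 1

@PipelineDecorator.pipeline(name="demo", project="demo", version="0.1")
def demo_pipeline():
    # Called inside the pipeline body: returned the controller's task
    print(Task.current_task().id)
    step()
```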

2 years ago
0 Hi! If There Are Several Tasks Running Concurrently, Which Task Should

Great, thank you very much for the info! I just spotted the get_logger classmethod. As for the initial question, that's just the behavior I expected!

2 years ago
0 I Have Another Question Regarding Creating A Task With

OK, so it doesn't follow the exact same rules as Task.init? I was afraid all the logs and outputs of a hyperparameter optimization task would be deleted just because no artifacts were created.

2 years ago
0 Hi! I Noticed A Bug Related To Reusing The Same Component In A Pipeline. I Have Prepared A Mock Example So That You Can Reproduce It:

Nested pipelines do not depend on each other. You can think of it as several models being trained or doing inference at the same time, but each one delivering results for a different client. So you don't use the output from one nested pipeline to feed another one running concurrently, if that's what you mean.

2 years ago
0 Hi! I Noticed A Bug Related To Reusing The Same Component In A Pipeline. I Have Prepared A Mock Example So That You Can Reproduce It:

They share the same code (i.e. the same decorated functions), but each one uses a different configuration.

2 years ago
0 Hi! I Noticed A Bug Related To Reusing The Same Component In A Pipeline. I Have Prepared A Mock Example So That You Can Reproduce It:

The thing is, I don't know in advance how many models there will be in the inference stage. My approach is to read the configurations of the operational models from a database through a for loop, and in that loop all the inference tasks would be enqueued (one task for each deployed model). For this I need the system to be able to run several pipelines at the same time. Since, as you told me, this is not possible for now because pipelines are based on singletons, my alternative is to use components.
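
A rough sketch of that loop; fetch_model_configs_from_db, the project/task names, the queue name, and the cfg fields are placeholders, not real APIs:

```python
from clearml import Task

configs = fetch_model_configs_from_db()  # placeholder for the database read

# Clone a pre-existing template task once per deployed model and enqueue it
template = Task.get_task(project_name="inference", task_name="inference-template")
for cfg in configs:
    task = Task.clone(source_task=template, name="inference-" + cfg["model_id"])
    task.set_parameters(cfg)  # per-model configuration overrides
    Task.enqueue(task, queue_name="inference")
```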

2 years ago
0 When Clearml Converts A

Sure, I will post a mock example in a while

2 years ago
0 When Clearml Converts A

I have also tried with type hints and it is still cast to string. Very weird...

2 years ago
0 When Clearml Converts A

Exactly: when 'extra' has a default value (in this case, 43), the argument preserves its original type. However, when 'extra' is a positional argument, it is transformed to 'str'.
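
A minimal sketch of the two cases being described, with the component bodies trimmed to the type check:

```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["out"])
def step_with_default(extra=43):
    # With a default value, 'extra' keeps its original int type
    print(type(extra))
    return extra

@PipelineDecorator.component(return_values=["out"])
def step_positional(extra):
    # Without a default, the reported behavior is that 'extra' arrives as str
    print(type(extra))
    return extra
```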

2 years ago
0 When Clearml Converts A

Nice. In the meantime, as a workaround, I will add some temporary parsing code at the beginning of the step functions.
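
Something like this, as a stopgap (assuming the argument is known to be an int):

```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["out"])
def step(extra):
    # Temporary workaround: parse the stringified argument back to its real type
    extra = int(extra)
    return extra
```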

2 years ago
0 When Clearml Converts A

Glad to hear that! Thanks!

2 years ago
0 It Is A Good Practice To Call A Function Decorated By

Hi AgitatedDove14,
I have already developed a mock test that is somewhat similar to the pipeline we are developing, and the same problem arises: the task is only created for the first set of parameters in the for loop. Here, only the configuration text file is created for user 1. Can you reproduce it?
```python
from clearml import Task
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    return_values=["admin_config_path"], cache=False, task_type=Task.Task...
```

2 years ago
0 It Is A Good Practice To Call A Function Decorated By

Great! That feature would make the work much easier than having to clone the task and launch it with different parameters. It could even be considered more Pythonic. Do you have an immediate solution in mind to keep moving forward before the new release is ready? :)

2 years ago
0 Hi All, I Am Testing The New

How can I tell clearml I will use the same virtual environment in all steps and there is no need to waste time re-installing all packages for each step?

2 years ago
0 Hi All, I Am Testing The New

I am aware of the option to enable virtual environment caching, but that is still very time consuming.
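
For local runs, one way to avoid per-step environment creation altogether is to execute the steps locally. A sketch; my_pipeline stands in for the decorated pipeline function:

```python
from clearml.automation.controller import PipelineDecorator

# Execute all pipeline steps locally instead of enqueuing them to agents,
# so no new virtual environment is built per step
PipelineDecorator.run_locally()
# PipelineDecorator.debug_pipeline()  # alternative: run everything in one process

if __name__ == "__main__":
    my_pipeline()  # placeholder for the decorated pipeline function
```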

2 years ago
0 Hi All, I Am Testing The New

Okay, so the idea behind the new decorator is not to group all the defined steps under the same script so that they share the same environment, but rather to simplify the process of creating scripts for each step and avoid manually calling Task.init on those scripts.

Regarding virtual environment creation from the cache, I will keep running benchmarks (from what you say, it might be due to the high workload on the servers we use).

So far I've been unlucky in my attempts to get clearml to recog...

2 years ago
0 Hi All, I Am Testing The New

By the way, where can I change the default artifacts location (output_uri) if I have a script similar to this example (I mean, from the code, not the agent's config):
https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py
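
One possibility from code (untested, and whether component tasks inherit it is an assumption) is to set output_uri on the controller task inside the pipeline body:

```python
from clearml import Task
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.pipeline(name="pipeline", project="examples", version="0.1")
def my_pipeline():
    # Redirect artifact storage for the controller task; the bucket is an example
    Task.current_task().output_uri = "s3://my-bucket/artifacts"
    ...
```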

2 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn'T It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'M Using Clearml V1.1.3Rc0 And Clearml-Agent 1.1.0

Or maybe you could bundle some parameters that belong to PipelineDecorator.component into a high-level configuration variable (something like PipelineDecorator.global_config?)

2 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn'T It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'M Using Clearml V1.1.3Rc0 And Clearml-Agent 1.1.0

Well, I can see the difference here. With the new pipeline generation, the user has the flexibility to play with the returned values of each step. We can process those values before passing them to the next step, so maybe it makes little sense to include those callbacks in this case.

2 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn'T It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'M Using Clearml V1.1.3Rc0 And Clearml-Agent 1.1.0

I think it could be a convenient approach. The new parameter abort_on_failed_steps could be a list containing the names of the steps for which the pipeline should stop its execution if any of them fails (so that we can ignore other steps that are not crucial to continuing the pipeline execution).

2 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn'T It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'M Using Clearml V1.1.3Rc0 And Clearml-Agent 1.1.0

Or perhaps the complementary scenario, with a continue_on_failed_steps parameter that would be a list containing only the steps that can be ignored in case of failure.
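
Purely hypothetical usage to make the proposal concrete (continue_on_failed_steps is the suggested parameter, not an existing ClearML argument, so this will not run as-is):

```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.pipeline(
    name="pipeline", project="examples", version="0.1",
    continue_on_failed_steps=["non_critical_report_step"],  # proposed, does not exist
)
def my_pipeline():
    ...
```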

2 years ago
0 Hi All! I Noticed When A Pipeline Fails, All Its Components Continue Running. Wouldn'T It Make More Sense For The Pipeline To Send An Abort Signal To All Tasks That Depend On The Pipeline? I'M Using Clearml V1.1.3Rc0 And Clearml-Agent 1.1.0

I totally agree with the PipelineController/decorator part. Regarding the proposal for the component parameter, I also think it would be a good feature, although it might obscure the fact that there will be times when the pipeline fails because an intrinsically crucial step fails, so it doesn't matter whether 'continue_pipeline_on_failure' is set to True or False. Anyway, I can't think of a better way to deal with that right now.

2 years ago
0 Hi, Is There A Simple Way To Make

I see, but I don't understand the part where you talk about passing the task ID to the child processes. Sorry if it's something trivial. I recently started working with ClearML.

2 years ago