Yes, when the parameters that are connected do not have nested dictionaries, everything works fine. The problem comes when I try to do something like this:
` from clearml import Task
task = Task.init(project_name="Examples", task_name="task with connected dict")
args = {}
args["period"] = {"start": "2020-01-01 00:00", "end": "2020-12-31 23:00"}
task.connect(args) `
and the clone task is like this:
` from clearml import Task
template_task = Task.get_task(task_id="<Your template task id>"...
Brilliant, that worked like a charm!
Perfect, that's exactly what I was looking for 🙂 Thanks!
Sure, but I mean, apart from labeling it as a local path, what's the point of renaming the original path if my goal is to access it later using the name I gave it?
Currently I'm working with v1.0.5. Anyway, I found that it is possible to connect the new argument if I store the dictionary returned by task.connect(args) in a variable. I expected that, since it is a mutable object, it would not be necessary to overwrite args, but apparently it is required in this version of ClearML.
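What I mean is something like this (a minimal sketch; the project/task names are placeholders):
` from clearml import Task

task = Task.init(project_name="Examples", task_name="connect returns the updated dict")

args = {"period": {"start": "2020-01-01 00:00", "end": "2020-12-31 23:00"}}
# Reassign: the (possibly updated) values live in the returned dict,
# not necessarily in the original mutable object
args = task.connect(args)
print(args) `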
Or maybe you could bundle some of the parameters that belong to PipelineDecorator.component into a high-level configuration variable (something like PipelineDecorator.global_config?)
That's right, I don't know why I was trying to make it so complicated 😅
But this path actually does not exist in my system, so how should I fix that?
Now it's okay. I have found a more intuitive way to work around it. I was facing the classic 'XY problem' :)
Sure, just by changing a few things from the previous example:
` from clearml import Task

task = Task.init()
task.connect({"metrics": ["nmae", "bias", "r2"]})

# The connected list comes back as its string representation
metrics_names = task.get_parameter("General/metrics")
print(metrics_names)        # ['nmae', 'bias', 'r2']
print(type(metrics_names))  # <class 'str'> `
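Since the value comes back as a string, I can recover the original list with something like this (a sketch, assuming the stored string is a valid Python literal):
` import ast

# metrics_names comes from the previous snippet
metrics = ast.literal_eval(metrics_names)
print(metrics)        # ['nmae', 'bias', 'r2']
print(type(metrics))  # <class 'list'> `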
I have found it is not possible to start a pipeline B after a pipeline A. Following the previous example, I have added one more pipeline to the script:
` from clearml import Task
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["msg"], execution_queue="model_trainings")
def step_1(msg: str):
    msg += "\nI've survived step 1!"
    return msg

@PipelineDecorator.component(return_values=["msg"], execution_queue="model_trainings")
def st...
How can I tell clearml I will use the same virtual environment in all steps, so there is no need to waste time re-installing all packages for each step?
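What I have in mind is the agent-side virtual environment cache, which, if I understand correctly, is enabled in the agent's clearml.conf roughly like this (a sketch; the values are the sample defaults):
` # clearml.conf (agent section)
agent {
    venvs_cache: {
        max_entries: 10
        free_space_threshold_gb: 2.0
        # uncommenting the path enables the cache
        path: ~/.clearml/venvs-cache
    }
} `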
Since I have the ability to clone and modify the same task over and over again, in principle I would no longer need the multi_instance support feature from PipelineDecorator.pipeline. Is this correct, or are they different things?
I don't know if you remember the need I had some time ago to launch the same pipeline through configuration. I've been thinking about it and I think PipelineController fits my needs better than PipelineDecorator in that respect.
Exactly!! That's what I was looking for: creating the pipeline without launching it. Thanks again AgitatedDove14
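For reference, the pattern that does what I wanted looks roughly like this (a sketch; project, task and queue names are placeholders):
` from clearml.automation.controller import PipelineController

# Build the pipeline object; nothing is executed yet
pipe = PipelineController(name="my-pipeline", project="Examples", version="1.0.0")
pipe.add_step(
    name="step_1",
    base_task_project="Examples",
    base_task_name="task with connected dict",
    parameter_override={"General/period/start": "2021-01-01 00:00"},
)
# The pipeline only runs once this is called:
# pipe.start(queue="services") `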
Sure, it would be very intuitive if the command to stop an agent were as easy as:
` clearml-agent daemon --stop AGENT_PID `
Sure! That definitely makes sense. Where can I specify callbacks in the PipelineDecorator API?
From what I understood, ClearML creates a virtual environment from scratch for each task it runs. To detect the dependencies of each script, it apparently inspects the script for imports and for packages specified via Task.add_requirements. You mean that's not the convenient way for ClearML to create the environments for each task? What is the right way to proceed in this case?
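To be concrete, the pattern I'm referring to is this (a sketch; the packages are just examples):
` from clearml import Task

# Must be called *before* Task.init for the requirements to be picked up
Task.add_requirements("scikit-learn")
Task.add_requirements("tensorflow", "2.4.0")

task = Task.init(project_name="Examples", task_name="explicit requirements") `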
I see the point. The reason I'm using PipelineController now is that I've realised that in the code I only send IDs from one step of the pipeline to another, and not artefacts as such. So I think it makes more sense in this case to work with the former.
Mmm I see. So the agent is taking the parameters from the base task registered in the server. Then, if I call task.get_parameters_as_dict for a task that has not been executed by an agent, should I get the original types of the values?
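Concretely, I mean something like this (a sketch; the task id is a placeholder):
` from clearml import Task

task = Task.get_task(task_id="<your task id>")
params = task.get_parameters_as_dict()
# e.g. {"General": {...}} – do the values keep their original Python types?
print(params) `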
Mmm, that's weird, because I can see the type hints in the arguments of the automatically generated script. So maybe I'm doing something wrong, or it's a bug, since they have been passed to the created step (I'm using clearml version 1.1.2 and clearml-agent version 1.1.0).
Hi AnxiousSeal95!
That's it. My idea is that artifacts can be linked to the model. These artifacts are typically links to serialized objects (such as datasets or scalers). They are usually directories or temporary files in mount units that I want to be loaded as artifacts of the task and then removed (as they are temporary), so that later I can get a new local path via task.artifacts["scalers"].get_local_copy(). I think this way the model's dependence on the task that created it could be re...
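Schematically, the workflow I have in mind (a sketch; the joblib serialization and the paths are just illustrative):
` import joblib
from clearml import Task
from sklearn.preprocessing import MinMaxScaler

task = Task.init(project_name="Examples", task_name="train with scaler artifact")

# Fit and serialize the scaler to a temporary file, then upload it
scaler = MinMaxScaler().fit([[0.0], [1.0]])
joblib.dump(scaler, "scaler.pkl")
task.upload_artifact(name="scalers", artifact_object="scaler.pkl")

# Later, from the model side, retrieve it through the creating task
train_task = Task.get_task(task_id="<training task id>")
local_path = train_task.artifacts["scalers"].get_local_copy() `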
I mean the agent that will run the function (which represents a pipeline step) should clone the repo in order to find the location of the project modules that are required for the function to be executed. Also, I have found that clearml does not automatically detect the imports specified within the function decorated with PipelineDecorator.component (even though I followed a scheme similar to the one in the example https://github.com/allegroai/clearml/blob/master/examples/pipeline/pi...
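The workaround I'm testing is moving the imports inside the decorated function, since each component seems to run as a standalone script (a sketch):
` from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["scaled"], execution_queue="model_trainings")
def scale_step(values):
    # Imports inside the body: module-level imports of the main script
    # are not available when the step runs in isolation
    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    scaled = MinMaxScaler().fit_transform(np.asarray(values, dtype=float).reshape(-1, 1))
    return scaled `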
I'm using the latest commit. I'm just fitting a scikit-learn MinMaxScaler object to a dataset of type tf.data.Dataset inside a function (which represents the model training step) decorated with PipelineDecorator.component. The function does not even return the scaler object as an artifact. However, the scaler object is logged as an artifact of the task, as shown in the image below.
Sure, it's already enabled. I noticed another parameter related to environment caching in the ClearML agent configuration, named venv_update (I believe it's still in beta). Do you think enabling this parameter significantly helps to build environments faster?
Yes, I guess. Since pipelines are designed to be executed remotely, it may be pointless to enable an output_uri parameter in PipelineDecorator.component. Anyway, could another task be initialized in the same scr...