Sure, just by changing a few things from the previous example:
`from clearml import Task
task = Task.init()
task.connect({"metrics": ["nmae", "bias", "r2"]})
metrics_names = task.get_parameter("General/metrics")
print(metrics_names)
print(type(metrics_names))`
Perfect, that's exactly what I was looking for 🙂 Thanks!
Hi ExasperatedCrab78 ,
Sure! Sorry for the delay. I'm using Chrome Version 98.0.4758.102 (Official Build) (64-bit)
Hi AgitatedDove14
Using task.get_parameters
I get the parameters in a dictionary, but the values are still of type 'string'. The quickest solution I can think of is parsing with the built-in eval. wdyt?
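As a side note, a safer alternative to eval for this would be ast.literal_eval, which only parses Python literals and can't execute arbitrary code. A minimal sketch, assuming the parameter comes back as the string representation of the original list:

```python
import ast

# get_parameters() returns values as strings, e.g. "['nmae', 'bias', 'r2']"
# (this raw string is an assumption about how the list is serialized)
raw = "['nmae', 'bias', 'r2']"

# ast.literal_eval safely parses Python literals (lists, dicts, numbers)
# without evaluating arbitrary expressions, unlike eval
metrics = ast.literal_eval(raw)
print(metrics)        # ['nmae', 'bias', 'r2']
print(type(metrics))  # <class 'list'>
```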
Where can I find this documentation?
But what is the name of that API library, so I can access those commands from the Python SDK?
My guess is to manually read and parse the string that clearml-agent list returns, but I'm pretty sure there's a cleaner way to do it, isn't there?
So I assume that you mean to report not only the agent's memory usage, but also of all the subprocesses the agent spawns (?)
Hi AgitatedDove14 , gotcha. So how can I temporarily fix it? I can't find anything like task.set_output_uri() in the official docs. Or do you plan to solve this problem in the very short term?
But I cannot go back to version v1.1.3 because there is another bug related to the Dataset tags
If I try to connect a dictionary of type dict[str, list] with task.connect, then when retrieving it with task.get_parameter I get another dictionary of type dict[str, str]. Therefore, I see the same behavior using task.connect :/
Ok, so it doesn't follow the exact same rules as Task.init? I was afraid all the logs and outputs of a hyperparameter optimization task would be deleted just because no artifacts were created.
My idea is to take advantage of the capability of getting parameters connected to a task from another task to read the path where the artifacts are stored locally, so I don't have to define it again in each script corresponding to a different task.
Sure, but I mean, apart from labeling it as a local path, what's the point of renaming the original path if my goal is to access it later using the name I gave it?
But this path actually does not exist in my system, so how should I fix that?
Well, I am thinking of the case where there are several pipelines in the system, so that when filtering a task by its name and project I could get several tasks. How could I build a filter for Task.get_task(task_filter=...) that returns only the task whose parent task is the pipeline task?
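A rough sketch of what I have in mind below. Note that passing the 'parent' field through task_filter is an assumption on my part (based on the backend tasks.get_all fields), not something I've confirmed in the docs, and the helper name and project/task names are placeholders:

```python
# Hypothetical helper: build the task_filter dict that restricts results
# to children of a given pipeline task. The 'parent' key is an assumption
# about the fields accepted by the backend tasks.get_all call.
def build_parent_filter(pipeline_task_id):
    return {"parent": pipeline_task_id}

# Intended usage (names are placeholders, requires a ClearML server):
# pipeline = Task.get_task(project_name="my-project", task_name="my-pipeline")
# tasks = Task.get_tasks(project_name="my-project",
#                        task_name="my-step",
#                        task_filter=build_parent_filter(pipeline.id))
print(build_parent_filter("abc123"))  # {'parent': 'abc123'}
```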
That's right, I don't know why I was trying to make it so complicated 😅
Hi AgitatedDove14 , just one last thing before closing the thread. I was wondering what the use of PipelineController.create_draft is if you can't use it to clone and run tasks, as we have seen.
Great, thank you very much TimelyPenguin76
Mmm that's weird. Because I can see the type hints in the function's arguments of the automatically generated script. So, maybe I'm doing something wrong or it's a bug, since they have been passed to the created step (I'm using clearml version 1.1.2 and clearml-agent version 1.1.0).
AgitatedDove14 Oops, something still seems to be wrong. When trying to retrieve the dataset using get_local_copy() I get the following error:
` Traceback (most recent call last):
File "/home/user/myproject/lab.py", line 27, in <module>
print(dataset.get_local_copy())
File "/home/user/.conda/envs/myenv/lib/python3.9/site-packages/clearml/datasets/dataset.py", line 554, in get_local_copy
target_folder = self._merge_datasets(
File "/home/user/.conda/envs/myenv/lib/python3.9/site-p...
By adding the slash I have been able to see that the dataset is indeed stored in output_url. However, when calling finalize, I get the same error. And yes, I have installed the version corresponding to the last commit :/
Mmm, but what if the dataset size is too large to be stored in the .cache path? Will it be stored there anyway?
Well I tried several things but none of them have worked. I'm a bit lost
AgitatedDove14 In the 'status.json' file I could see that the 'is_dirty' flag is set to True
Yeah, but after doing that a message pops up showing a list of artifacts from the task that could not be deleted
Makes sense, thanks!
Now it's okay. I have found a more intuitive way to get around it. I was facing the classic 'xy' problem :)