Yes, it is a common case, but I think pipe.start_locally(run_pipeline_steps_locally=False)
solved it. I started running it again, and it seems to have passed the phase where it failed last time.
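For context, here is a minimal sketch of what that call looks like on a PipelineController; the project, pipeline, and step names below are placeholders, not the actual ones from my setup:

```python
from clearml import PipelineController

# Placeholder names; replace with the real project / task names.
pipe = PipelineController(
    name="monthly_predictions",
    project="examples",
    version="1.0.0",
)
pipe.add_step(
    name="preprocessing",
    base_task_project="examples",
    base_task_name="preprocessing_task",
)

# Run the controller logic locally, while the individual steps are still
# enqueued and executed by ClearML agents (run_pipeline_steps_locally=False).
pipe.start_locally(run_pipeline_steps_locally=False)
```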
Hi Martin,
I upgraded ClearML to version 1.1.1 and updated the pipeline code to v2 as you wrote here, and I got a new error which I hadn't seen before.
Just noting that I did a git push beforehand.
Do you know what can cause this error?
Thanks!
```
version_num = 1c4beae41a70c526d0efd064e65afabbc689c429
tag =
docker_cmd = ubuntu:18.04
entry_point = tasks/pipelines/monthly_predictions.py
working_dir = .
Warning: could not locate requested Python version 3.8, reverting to version 3.6
c...
```
And I'm getting this in the command line when the process fails:
/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 73 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
I'm not sure how to extract this information.
I'm trying to get the "completed at" information (in my case, Mar 3 2022 12:14, as shown in the picture I attached) using the API in my Python script.
The closest thing I found is by doing the following:
task = clearml.Task.get_task(task_id=<task_id>)
task.comment
which gives me the result: 'Auto-generated at 2022-03-01 06:27:52 UTC by havi@dgroup'
This solution could work too, but if there is a way to extract the specific information I...
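If the goal is the completion timestamp itself rather than the comment, this is a rough sketch of one way to read it from the task's backend record, assuming that record exposes a completed field (worth verifying against the server/SDK version); <task_id> is a placeholder:

```python
from clearml import Task

# <task_id> is a placeholder for the real task ID.
task = Task.get_task(task_id="<task_id>")

# task.data is the raw backend task record; if the task has finished,
# its `completed` field should hold the completion time (UTC datetime).
completed_at = task.data.completed
print(completed_at)
```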
Hi SuccessfulKoala55 CostlyOstrich36 ,
Thanks for your answers!
The code you wrote here was very helpful; I used it and succeeded in extracting the relevant table from its output directly into my notebook :). There are cases where I want to investigate the outputs I saved in ClearML. In my case it was a table that I wanted to filter and get specific values from, so this functionality helps me analyze the results (I know I can download the JSON to my computer and then upload it back to my no...
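As an illustration of the kind of workflow I mean, a rough sketch of pulling a table stored as an artifact straight into the notebook as a DataFrame (the task ID, artifact name, and column name are placeholders; whether this matches the snippet referenced above depends on how the table was saved):

```python
from clearml import Task

# Placeholder task ID and artifact name.
task = Task.get_task(task_id="<task_id>")
df = task.artifacts["data"].get()  # deserializes the stored object (e.g. a DataFrame)

# Filter and inspect specific values directly in the notebook,
# without downloading a file and re-uploading it manually.
filtered = df[df["score"] > 0.9]  # 'score' is a hypothetical column
print(filtered.head())
```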
I would like to run a pipeline controller from a ClearML draft. One of the arguments I want to define is a flag that tells me whether or not to include external data. I know I could use a str argument for it, but I think a boolean argument would be more correct here.
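A rough sketch of how such a flag could be declared on the controller so it shows up in the draft's configuration (the controller, step, and parameter names here are placeholders):

```python
from clearml import PipelineController

# Placeholder names.
pipe = PipelineController(name="monthly_predictions", project="examples", version="1.0.0")

# Declare a boolean pipeline parameter; it becomes editable when the
# controller draft is cloned/enqueued from the UI.
pipe.add_parameter(
    name="include_external_data",
    default=False,
    description="Whether to include the external data source",
)

# A step can reference the pipeline parameter via parameter_override.
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="training_task",
    parameter_override={"General/include_external_data": "${pipeline.include_external_data}"},
)
```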
It works, thanks!
```
Collecting jupyter-core==4.7.1
Using cached jupyter_core-4.7.1-py3-none-any.whl (82 kB)
Collecting jupyter-highlight-selected-word==0.2.0
Using cached jupyter_highlight_selected_word-0.2.0-py2.py3-none-any.whl (11 kB)
Collecting jupyter-latex-envs==1.4.6
Using cached jupyter_latex_envs-1.4.6.tar.gz (861 kB)
Collecting jupyter-nbextensions-configurator==0.4.1
Using cached jupyter_nbextensions_configurator-0.4.1.tar.gz (479 kB)
Collecting jupyterlab-pygments==0.1.2
Using cached jupy...
```
Yes, I meant the Jupyter notebook. For example, I can download an artifact from ClearML as a local copy using:
preprocess_task = Task.get_task(task_id='preprocessing_task_id')
local_csv = preprocess_task.artifacts['data'].get_local_copy()
and then I can load it again in my notebook.
Is there a similar code line for also downloading the plots and scalars information that exists in the results?
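In case it helps, a sketch of the direction I had in mind, assuming the SDK exposes getters for reported results (get_reported_scalars / get_reported_plots; the task ID is a placeholder):

```python
from clearml import Task

# Placeholder task ID.
task = Task.get_task(task_id="preprocessing_task_id")

# Scalars are expected back as a nested dict: {title: {series: {"x": [...], "y": [...]}}}
scalars = task.get_reported_scalars()

# Plots are expected back as a list of plot dicts (Plotly-style JSON).
plots = task.get_reported_plots()

print(list(scalars.keys()))
print(len(plots))
```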
I just replaced the project name with XXXXX when I copied and pasted the error here, but the original project name is the same as the name under [tool.poetry] in pyproject.toml.
Hi, we succeeded in solving this issue. The problem was that the src main folder of the project was defined in the Poetry requirements file, and ClearML was looking for this "package".
I'm using Poetry in order to install the packages and it installs them from a pyproject.toml file
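For illustration, a hedged sketch of the kind of pyproject.toml entry involved (the project name, version, and include path are placeholders; the real file differs):

```toml
[tool.poetry]
name = "my-project"        # placeholder; the real name matches what the error referenced
version = "0.1.0"
description = ""
# This line makes Poetry treat the local `src` folder as a package to install;
# this is the declaration that led ClearML to look for such a "package".
packages = [{ include = "src" }]

[tool.poetry.dependencies]
python = "^3.8"
clearml = "^1.1.1"
```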