
For HTTPS cloning, a deploy token is needed
where is it in the docs?
More specifically, there are 2 tasks with almost identical docker commands. The only difference is the image itself. The task with one image works, and with the other image it fails. Both images are valid and launch nicely on my laptop, and both exist in the registry. Maybe you have some ideas what could possibly be wrong here?
AgitatedDove14 I ran into this problem again. Are there any known issues about it? I don't remember what helped last time
Solved. The problem was a trailing space before the image name in the Image section in the web UI. I think you should probably strip the string before proceeding to the environment-building step, to prevent this annoying stuff from happening. Of course, users could check twice before launching, but this thing will come up every once in a while regardless
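Something as simple as this on the server side would do (a sketch; just stripping whatever comes out of the Image field before it is used):

```
# Hypothetical sanitization before the environment-building step:
# strip stray whitespace around the image name taken from the web UI field.
raw_image = ' registry.gitlab.com/cherrylabs/ml/clearml-demo:clearml'
image = raw_image.strip()
assert image == 'registry.gitlab.com/cherrylabs/ml/clearml-demo:clearml'
```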
of course, I use custom images all the time, the question was how to do it for a pipeline 😆 setting private attributes directly doesn't look like good practice
Maybe displaying 9 or 10 by default would be enough, plus a clearly visible, thick scrollbar on the right
SuccessfulKoala55 sorry, that was a bug on my side. It was just referring to another class named Model
It doesn't install anything with pip during launch; I'm assuming it should take everything from the container itself (otherwise there would be a huge overhead). It simply fails trying to import things in the script:
File "preprocess.py", line 4, in <module> from easydict import EasyDict as edict ModuleNotFoundError: No module named 'easydict'
When I launch tasks with a pipeline, they keep complaining about missing pip packages. I run them inside a docker container, and I'm sure these packages are present inside it (when I launch the container locally, run python3 and import them, it works like a charm). Any ideas how to fix this?
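For reference, this is what "works like a charm" means: inside the container started locally (python3 launched via docker run), the very import that fails on the agent succeeds:

```
# Run inside the container started locally with docker:
from easydict import EasyDict as edict

d = edict({'foo': 3})
print(d.foo)  # prints 3 -- the package is clearly baked into the image
```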
I have a base task for each pipeline step. When I initialize a pipeline, for each step I clone the corresponding task, modify it, and add it as a step. Tasks are launched from the pipeline, not the CLI. I'm absolutely sure the docker argument is not empty (I specify it with `export_data['container']['image'] = 'registry.gitlab.com/cherrylabs/ml/clearml-demo:clearml '`, and it shows up in the web UI)
Sorry for the delay
Not reproduced, but I caught another error when running pipeline_from_tasks.py:
```
Traceback (most recent call last):
  File "pipeline_from_tasks.py", line 31, in <module>
    pipe.add_step(name='stage_data', base_task_project='examples', base_task_name='pipeline step 1 dataset artifact')
  File "/home/kirillfish/.local/lib/python3.6/site-packages/clearml/automation/controller.py", line 276, in add_step
    base_task_project, base_task_name))
ValueError: Could not find ...
```
OK, I managed to launch the example and it works
The pipeline controller itself is stuck in running mode forever; all step tasks are created but never enqueued
I can share some code
The pipeline is initialized like this:

```
pipe = PipelineController(project=cfg['pipe']['project_name'],
                          name='pipeline-{}'.format(name_postfix),
                          version='1.0.0',
                          add_pipeline_tags=True)
pipe.set_default_execution_queue('my-queue')
```
Then, for each step, I have a base task which I want to clone:

```
step_base_task = Task.get_task(project_name=cfg[name]['base_project'],
                               task_name=...
```
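The rest of the flow, filled in as a sketch (the cfg values, step_names, and the add_step wiring here are my placeholders, not the exact code; the image override is the one I mentioned earlier):

```
from clearml import Task
from clearml.automation import PipelineController

# Placeholders standing in for my real config (hypothetical values):
cfg = {'pipe': {'project_name': 'my_project'},
       'preprocess': {'base_project': 'my_project',
                      'base_task_name': 'preprocess-base'}}
name_postfix = 'demo'
step_names = ['preprocess']

pipe = PipelineController(project=cfg['pipe']['project_name'],
                          name='pipeline-{}'.format(name_postfix),
                          version='1.0.0',
                          add_pipeline_tags=True)
pipe.set_default_execution_queue('my-queue')

for name in step_names:
    # Clone the base task for this step, then modify the clone
    step_base_task = Task.get_task(project_name=cfg[name]['base_project'],
                                   task_name=cfg[name]['base_task_name'])
    cloned = Task.clone(source_task=step_base_task, name=name)

    # The container image override from before (no stray whitespace,
    # after the trailing-space incident above)
    export_data = cloned.export_task()
    export_data['container']['image'] = 'registry.gitlab.com/cherrylabs/ml/clearml-demo:clearml'
    cloned.update_task(export_data)

    pipe.add_step(name=name, base_task_id=cloned.id)

pipe.start()
```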
The pipeline launches on the server anyway (it appears in the web UI)
You are right, I had `[None]` as the parents in one of the tasks. Now this error is gone
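In other words, the bug was of this shape (the step name and task here are placeholders):

```
# What I had -- parents must be a list of step names (or omitted), not [None]:
pipe.add_step(name='train', base_task_id=cloned.id, parents=[None])  # error
# Fixed -- this step simply has no dependencies:
pipe.add_step(name='train', base_task_id=cloned.id, parents=[])
```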
creates all the step tasks in draft mode and then gets stuck
AgitatedDove14 I still have the name `my_name`, but the project name is `my_project/.datasets/my_name` rather than `my_project/.datasets`
The refactoring is there to account for the new project names, and also to resolve the project name depending on the client version
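Roughly this kind of resolution, I mean (a sketch; the helper name and the version flag are hypothetical, only the `.datasets` naming comes from the project names above):

```
def resolve_dataset_project(project_name, dataset_name, new_style_client):
    # Newer clients nest datasets under a hidden ".datasets" sub-project,
    # which is where "my_project/.datasets/my_name" comes from.
    if new_style_client:
        return '{}/.datasets/{}'.format(project_name, dataset_name)
    return project_name
```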