It fails during the add_step stage for the very first step, because task_overrides contains invalid keys
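For context, roughly what the call looks like — a minimal sketch, with the step name and override path made up for illustration (task_overrides expects dotted paths into actual task fields):
```python
# Illustrative only: 'script.branch' is an example of a valid dotted
# task-field path; an arbitrary key here would be rejected as invalid.
pipe.add_step(
    name='preprocess',
    base_task_id=step_base_task.id,
    clone_base_task=False,
    task_overrides={'script.branch': 'main'},
)
```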
It doesn't install anything with pip during launch, I'm assuming it should take everything from the container itself (otherwise there would be a huge overhead). It simply fails trying to import things in the script
File "preprocess.py", line 4, in <module> from easydict import EasyDict as edict ModuleNotFoundError: No module named 'easydict'
For datasets it's easily done with a dedicated project, a separate task per dataset, and Artifacts tab within it
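Roughly what I mean — a minimal sketch, with the project, task, and artifact names made up for illustration:
```python
from clearml import Task

# One task per dataset version inside a dedicated datasets project;
# the files themselves end up in the task's Artifacts tab.
task = Task.init(project_name='my_project/datasets', task_name='my_dataset_v1')
task.upload_artifact(name='data', artifact_object='/path/to/data_dir')
task.close()
```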
@<1523701435869433856:profile|SmugDolphin23> about ignore_parent_datasets? I renamed it the same day you added that comment. Please let me know if there is anything else I need to pay attention to
The refactoring is to account for the new project names, and also to resolve the project name depending on the client version
creates all the step tasks in draft mode and then gets stuck
Before the code I shared, there were some lines like this
```python
step_base_task = Task.get_task(project_name=cfg[name]['base_project'],
                               task_name=cfg[name]['base_name'])
export_data = step_base_task.export_task()
# ... modify export_data in-place ...
task = Task.import_task(export_data)
pipe.add_step(base_task_id=task.id, clone_base_task=False, ...)
```
of course, I use custom images all the time, the question was how to do it for a pipeline 😆 setting private attributes directly doesn't look like good practice
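A public-API alternative I was considering — just a rough sketch, assuming the imported step task from the snippet above, with an arbitrary example image:
```python
# Set the container image on the step task via the public API
# instead of touching private attributes (image name is illustrative).
task.set_base_docker("nvidia/cuda:11.4.2-runtime-ubuntu20.04")
pipe.add_step(base_task_id=task.id, clone_base_task=False)
```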
add_files. There is no upload call, because add_files uploads files by itself, if I understood it correctly
AgitatedDove14 yeah, makes sense, that would require some refactoring in our projects though...
But why is my_name a subproject? Why not just my_project/.datasets ?
Maybe displaying 9 or 10 by default would be enough, plus a clearly visible, thick scrollbar on the right
Yes, it works, thank you! The question remains though: why won't docker containers launch on the services queue?
pipeline launches on the server anyway (appears on the web UI)
You can try to spin the "services" queue without docker support; if there is no need for containers, it will speed up the process.
With pipe.start(queue='services'), it still tries to run some docker for some reason
```
1633799714110 kirillfish-ROG-Strix-G512LW-G512LW info ClearML Task: created new task id=a4b0fbc6a1454947a06be4e48eda6740
ClearML results page:
1633799714974 kirillfish-ROG-Strix-G512LW-G512LW info ClearML new version available: upgrade to v1.1.2 is recommended!
...
```
I initialize tasks not as functions, but as scripts from different repositories, with different images
There are some questions in this channel already regarding pipeline V2. Is there any tutorial or changelog or examples I can refer to?
SparklingElephant70 Try specifying the full path to the script (relative to the working dir)
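Something along these lines — a rough sketch only, with the repo URL, paths, and names as placeholders, assuming the step task is created from a repository script:
```python
from clearml import Task

# Illustrative only: the script path is given relative to working_directory
step_task = Task.create(
    project_name='my_project',
    task_name='preprocess_step',
    repo='https://github.com/example/repo.git',
    branch='main',
    working_directory='pipelines/preprocessing',
    script='preprocess.py',
)
```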
The pipeline is initialized like this
```python
pipe = PipelineController(project=cfg['pipe']['project_name'],
                          name='pipeline-{}'.format(name_postfix),
                          version='1.0.0',
                          add_pipeline_tags=True)
pipe.set_default_execution_queue('my-queue')
```
Then for each step I have a base task which I want to clone
```python
step_base_task = Task.get_task(project_name=cfg[name]['base_project'],
                               task_name=...
```
What exactly do we need to copy? I believe we have already copied everything, but it keeps throwing a "Fetch experiment failed" error
I still haven't figured out how to make files downloaded this way visible for future get_local_copy calls though
@<1523701435869433856:profile|SmugDolphin23> could you please give me a link to it? I can't find it on github... Here I see only one comment
None
CostlyOstrich36 hi! yes, as I expected, it doesn't see any files unless I call add_files first
But add_files has no output_url parameter and tries to upload to the default place. This returns a 413 Request Entity Too Large error because there are too many files, so using the default location is not an option. Could you please help with this?
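For reference, the pattern I am trying to get to — a minimal sketch, with the dataset names and storage URL as placeholders, assuming the destination is passed to upload rather than add_files:
```python
from clearml import Dataset

dataset = Dataset.create(dataset_name='my_dataset', dataset_project='my_project')
dataset.add_files('.')                                # register local files
dataset.upload(output_url='s3://my-bucket/datasets')  # push them to custom storage
dataset.finalize()
```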

