It has the same effect as start/wait/stop, kinda weird
It creates all the step tasks in draft mode and then gets stuck
For datasets it's easily done with a dedicated project, a separate task per dataset, and the Artifacts tab within it
We digressed a bit from the original thread topic though 😆 About clone_base_task=False.
I ended up using task_overrides for every change, and this way I only need 2 tasks (a base task and a step task), so I use clone_base_task=True and it works as expected - yay!
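For reference, a minimal sketch of that overrides-only pattern (the project, task, and queue names below are placeholders, not from this thread; only the kwargs construction runs here, the actual ClearML calls are left commented because they need a server):

```python
# Sketch: push every per-step change into task_overrides, so the base task
# is cloned per step (clone_base_task=True) and never mutated.

def step_kwargs(name, parents):
    """Build add_step arguments with all changes expressed as task_overrides."""
    return {
        "name": name,
        "parents": parents,
        "task_overrides": {
            "script.branch": "main",
            "script.version_num": "",
        },
        "execution_queue": "default",  # placeholder queue name
        "cache_executed_step": True,
        "clone_base_task": True,
    }

# Real usage would look roughly like this (commented out, needs a ClearML server):
# from clearml.automation import PipelineController
# pipe = PipelineController(name="demo-pipeline", project="demo", version="1.0")
# pipe.add_step(base_task_project="demo", base_task_name="base-step",
#               **step_kwargs("train", parents=[]))
```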
So, the problem I described in the beginning can be reproduced only this way:
- to have a base task
- export_data - modify - import_data - have a second task
- pass the second task to add_step with `cl...
CostlyOstrich36 so it's the same problem as https://clearml.slack.com/archives/CTK20V944/p1636373198353700?thread_ts=1635950908.285900&cid=CTK20V944
In short, what helped is gitlab+deploy-token in the GitLab URL
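A hedged sketch of how that can look (token id and secret below are placeholders; the agent picks up git credentials from agent.git_user / agent.git_pass in clearml.conf):

```
# clearml.conf (agent section) — placeholder credentials
# with a GitLab deploy token, the token username goes into git_user
# and the token secret into git_pass
agent {
    git_user: "gitlab+deploy-token-12345"
    git_pass: "s3cr3t-token"
}

# equivalently, embedded directly in the https clone URL:
# https://gitlab+deploy-token-12345:s3cr3t-token@gitlab.com/group/repo.git
```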
You can try to spin up the "services" queue without docker support; if there is no need for containers, it will accelerate the process.
With pipe.start(queue='services'), it still tries to run some docker for some reason:
1633799714110 kirillfish-ROG-Strix-G512LW-G512LW info ClearML Task: created new task id=a4b0fbc6a1454947a06be4e48eda6740
ClearML results page: `
1633799714974 kirillfish-ROG-Strix-G512LW-G512LW info ClearML new version available: upgrade to v1.1.2 is recommended!
...
yeah, I mean I need the model to get its ID, but I need the ID to get the model
SparklingElephant70 in WebUI Execution/SCRIPT PATH
@<1523701070390366208:profile|CostlyOstrich36> on a remote agent, yes, running the task from the interface
SparklingElephant70 Try specifying the full path to the script (relative to the working dir)
I see the task in the web UI, but I get Fetch experiment failed when I click on it, as I described. It even fetches the correct ID by its name. I'm almost sure it's present in MongoDB
I have a base task for each pipeline step. When I initialize a pipeline, I clone the corresponding task for each step, modify it, and add it as a step. Tasks are launched from the pipeline, not the CLI. I'm absolutely sure the docker argument is not empty (I specify it with export_data['container']['image'] = 'registry.gitlab.com/cherrylabs/ml/clearml-demo:clearml', and it shows in the Web UI)
task = Task.import_task(export_data)
pipe.add_step(
name=name,
base_task_id=task.id,
parents=parents,
task_overrides={'script.branch': 'main', 'script.version_num': '', },
execution_queue=pipe_cfg['step_queue'],
cache_executed_step=True,
clone_base_task=False
)
The lower task is created during import_task; the upper one during the actual execution of the step, several minutes later
for https cloning, a deploy token is needed
I think it's still an issue, not critical though, because we have another way to do it and it works
maybe being able to change 100% of things with task_overrides would be the most convenient way
In principle, I can modify almost everything with task_overrides, omitting the export part, and it's fine. But it seems that by exporting I can change more things, for example project_name
Before the code I shared, there were some lines like this:
step_base_task = Task.get_task(project_name=cfg[name]['base_project'],
                               task_name=cfg[name]['base_name'])
export_data = step_base_task.export_task()
# ... modify export_data in-place ...
task = Task.import_task(export_data)
pipe.add_step(base_task_id=task.id, clone_base_task=False, ...)
I suspect this has something to do with task.output_model
this is the same thing as in the previous thread. I suggest we move there