Eureka! I found out this happens with any other image except the default one, regardless of whether I set it with pipe._task.set_base_docker
The image is not needed to run the pipeline logic; I do it just to reduce overhead. Otherwise it would take too long to build the default image on every launch
You can try to spin up the "services" queue without docker support (e.g. run the agent with `clearml-agent daemon --queue services`, without the `--docker` flag); if there is no need for containers it will speed up the process.
With pipe.start(queue='services'), it still tries to run some docker for some reason:
`
1633799714110 kirillfish-ROG-Strix-G512LW-G512LW info ClearML Task: created new task id=a4b0fbc6a1454947a06be4e48eda6740
ClearML results page:
1633799714974 kirillfish-ROG-Strix-G512LW-G512LW info ClearML new version available: upgrade to v1.1.2 is recommended!
...
`
AgitatedDove14 thank you. Maybe you know about an OutputModel.remove method or something like that?
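(For anyone landing here: recent clearml versions expose a Model.remove classmethod that deletes a model entry, and optionally its weights file, without touching the training task. A minimal sketch, assuming that API is available in your clearml version; the task id is a placeholder:)
`
from clearml import Task, Model

# the training task whose output models should be removed
task = Task.get_task(task_id='...')

for model in task.models['output']:
    # removes the model from the registry (and deletes the stored
    # weights file); the training task itself is left untouched
    Model.remove(model, delete_weights_file=True)
`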
AgitatedDove14
`
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
error: Could not fetch origin
Repository cloning failed: Command '['git', 'fetch', '--all', '--recurse-submodules']' returned non-zero exit status 1.
clearml_agent: ERROR: Failed cloning repository.
- Make sure you pushed the requested commit:
(repository='git@...', branch='main', commit_id='...', tag='', docker_cmd='registry.gitlab.com/...:...', en...
`
In order to work with SSH cloning, it looks like one has to manually install openssh-client into the Docker image (e.g., for Debian-based images, adding `RUN apt-get update && apt-get install -y openssh-client` to the Dockerfile)
AgitatedDove14 clearml 1.1.1
Yeah, of course it is in draft mode (Task.import_task creates a task in draft mode; it is the lower task on the screenshot)
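(For context, the export/import flow looks roughly like this; a sketch, with project/task names as placeholders:)
`
from clearml import Task

# export the full task definition of an existing task as a dict
source = Task.get_task(project_name='demo', task_name='step_base')
task_data = source.export_task()

# importing it back creates a new task, which starts out in draft mode
imported = Task.import_task(task_data)
print(imported.id, imported.status)
`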
The pipeline launches on the server anyway (it appears in the web UI)
I initialize tasks not as functions, but as scripts from different repositories, with different images
Hm, it's not quite clear how it is implemented. For example, this is how I do it now (explicitly):
AgitatedDove14 by "task" do you mean the training task, or the separate task corresponding to the model itself? The former won't work, since I don't want to delete the training task, only the models
You are right, I had [None] as parents in one of the tasks. Now this error is gone
CostlyOstrich36 hi! Yes, as I expected, it doesn't see any files unless I call add_files first.
But add_files has no output_url parameter and tries to upload to the default place. This returns a 413 Request Entity Too Large error because there are too many files, so using the default location is not an option. Could you please help with this?
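(For the record, in the Dataset API the storage destination is passed to upload() rather than add_files(). A minimal sketch, with the dataset names and bucket path as placeholders:)
`
from clearml import Dataset

ds = Dataset.create(dataset_project='demo', dataset_name='my_dataset')

# add_files only registers the local files; nothing is uploaded yet
ds.add_files(path='/data/my_dataset')

# the storage destination is given here, not in add_files
ds.upload(output_url='s3://my-bucket/datasets')
ds.finalize()
`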
The lower task is created during import_task, the upper one during the actual execution of the step, several minutes later
I have a base task for each pipeline step. When I initialize a pipeline, for each step I clone the corresponding task, modify it and add it as a step. Tasks are launched from a pipeline, not the CLI. I'm absolutely sure the docker argument is not empty (I specify it with export_data['container']['image'] = 'registry.gitlab.com/cherrylabs/ml/clearml-demo:clearml', and it shows up in the web UI)
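(A sketch of that clone-modify-add flow; names are placeholders, and set_base_docker is the SDK-level way to set the container image on the cloned draft, roughly what editing export_data['container']['image'] does by hand:)
`
from clearml import Task
from clearml.automation import PipelineController

pipe = PipelineController(name='demo_pipeline', project='demo', version='0.1')

# clone the base task for this step; the clone is created in draft mode
base = Task.get_task(project_name='demo', task_name='step_base')
step_task = Task.clone(source_task=base, name='step_1')

# set the container image on the clone (recent clearml versions accept
# a docker_image keyword here)
step_task.set_base_docker(docker_image='registry.gitlab.com/cherrylabs/ml/clearml-demo:clearml')

# add the modified clone to the pipeline as a step
pipe.add_step(name='step_1', base_task_id=step_task.id)
`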
CostlyOstrich36 Yes, I'm self-deployed, and the company I want to share it with is also self-deployed
AnxiousSeal95 We can make a nested pipeline, right? Like if the top pipeline calls add_step to create steps from tasks, and then we decompose any single step further and create a sub-pipeline from decorators there. We should be able to do that, because PipelineController is itself a task, right?
Also, is there a way to unfold such a nested pipeline into a flat pipeline, so that only a single pipeline task is created and it draws a single detailed DAG in the PLOTS tab?
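(The nesting idea, as a hypothetical sketch; it relies only on the fact that a pipeline controller run is itself a task, and all names are placeholders:)
`
from clearml.automation import PipelineController

top = PipelineController(name='top_pipeline', project='demo', version='0.1')

# a regular step, cloned from an existing base task
top.add_step(name='step_a', base_task_project='demo', base_task_name='step_a_base')

# a pipeline controller run is itself a task, so an already-created
# sub-pipeline (e.g. one built from decorators) can be referenced
# like any other base task
top.add_step(name='sub_pipeline', parents=['step_a'],
             base_task_project='demo', base_task_name='sub_pipeline_controller')

top.start(queue='services')
`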
SmugDolphin23 could you please give me a link to it? I can't find it on github... Here I see only one comment:
None
Very nice for small pipelines (where every step could be put into a function in a single repository)
But at the same time, it contains some keys that cannot be modified with task_overrides, for example project_name
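(For context, task_overrides is the add_step argument in question. A sketch of the kind of overrides that do work, dotted paths into the task definition, with placeholder values:)
`
from clearml.automation import PipelineController

pipe = PipelineController(name='demo_pipeline', project='demo', version='0.1')

# task_overrides takes dotted paths into the task definition, e.g. the
# git branch/commit of the step's script
pipe.add_step(
    name='step_1',
    base_task_project='demo',
    base_task_name='step_base',
    task_overrides={'script.branch': 'main', 'script.version_num': ''},
)
# ...but, as noted above, some keys such as project_name are not part of
# the task definition and cannot be changed this way
`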
SparklingElephant70 in the web UI, under Execution / SCRIPT PATH
I see the task in the web UI, but get Fetch experiment failed when I click on it, as I described. It even fetches the correct ID by its name. I'm almost sure it will be present in MongoDB
And I don't see any new projects / subprojects where that dataset creation Task is stored
Previously I had a separate, manually created project where I stored all newly created datasets for my main project. Very neat
Now the task is visible only in the "All Experiments" section, but there is no separate project in the web UI where I could see it...
CostlyOstrich36 idk, I need to share it to see.
How do I share it?