CostlyOstrich36 Sorry, I don't understand what you meant when you mentioned "with same file".
Additional information
When I leave the packages argument at its default None value and use debug_pipeline to run the pipeline, everything works as expected.
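For context, a minimal sketch of what works for me (step and pipeline names are just placeholders):

from clearml import TaskTypes
from clearml.automation.controller import PipelineDecorator

# leaving `packages` at its default None lets ClearML auto-detect the imports
@PipelineDecorator.component(cache=False, task_type=TaskTypes.data_processing)
def step_1():
    import os
    return os.getcwd()

@PipelineDecorator.pipeline(name='demo-pipeline', project='demo', version='0.0.1')
def run_pipeline():
    print(step_1())

if __name__ == '__main__':
    # run all steps locally in-process instead of launching remote tasks
    PipelineDecorator.debug_pipeline()
    run_pipeline()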
AgitatedDove14 This makes sense. Hope we can have this argument in the next ClearML version.
So, I have to package all the modules first and find a way to install that package at the beginning of the pipeline execution to be able to use these modules, am I right?
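Something like this, I assume (the package name is hypothetical):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(
    # hypothetical: my modules bundled as an installable package,
    # installed into the step's environment before it runs
    packages=['my_project_utils==0.1.0'],
)
def step_1():
    # importable here once the package is installed
    from my_project_utils import module3
    ...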
CostlyOstrich36 I haven't tried.
The error above occurs when I try to build a pipeline with the decorator.
I'll try to reproduce this scenario to confirm no problem occurs during the generation of these datasets.
@<1523701205467926528:profile|AgitatedDove14> The only reason I want to change the destination
is because of an unforeseeable mistake in the past. Now I must change the old destination
(a private IP address) of my past datasets to the new alias (labserver-2) to be able to download them using the Python script.
@<1523701205467926528:profile|AgitatedDove14> do you have any documents and/or instructions on how to do so?
AgitatedDove14 GreasyPenguin14 Awesome!
I published the task through the UI.
CostlyOstrich36
Great to hear that!
In short, we hope the ClearML server can act as a bridge connecting local servers and cloud infrastructure. (Local servers for development, cloud for deployment and monitoring.)
For example,
- We want to deploy ClearML somewhere on the Internet.
- Then use this service to track experiments, orchestrate workflows, etc. on our local servers.
- After the experiments finish, we retrieve the returned artifacts and save them somewhere, e.g. on local disk or in the cloud.
-...
CostlyOstrich36 Maybe because I don't clearly understand how ClearML works.
When I use PipelineV1 (using Task), by default all artifacts are uploaded to the ClearML file server if I configure output_uri=True. If I want to use an S3 bucket instead, I must point output_uri to the URI of that bucket.
Back to PipelineV2, I cannot find a place where I could put my S3 bucket URI.
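For reference, this is what I mean in PipelineV1 (the bucket URI is hypothetical); for PipelineV2 the only knob I found so far is the global default in clearml.conf:

from clearml import Task

# PipelineV1: send artifacts to an S3 bucket instead of the ClearML file server
task = Task.init(
    project_name='demo',
    task_name='step',
    output_uri='s3://my-bucket/artifacts',  # hypothetical bucket URI
)

# PipelineV2 (decorators): the closest I found is the global default in
# ~/clearml.conf:
#   sdk.development.default_output_uri: "s3://my-bucket/artifacts"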
Nice, I'll try this out.
AgitatedDove14 Nice! I'll try this out.
I face the same problem.
When running the pipeline, some tasks that use multiprocessing never complete.
...
@PipelineDecorator.component(
    parents=['step_1'],
    packages='../requirements.txt',
    cache=False,
    task_type=TaskTypes.data_processing,
    repo='.',
)
def step_2():
    import os
    ...
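A minimal sketch of the multiprocessing pattern that hangs for me inside a component (the worker function is hypothetical):

import multiprocessing
from clearml.automation.controller import PipelineDecorator

def square(x):
    # hypothetical worker function
    return x * x

@PipelineDecorator.component(cache=False, helper_functions=[square])
def step_mp():
    import multiprocessing
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    # the function returns, but the task never flips to "completed"
    return results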
CostlyOstrich36
Awesome! Thanks! SuccessfulKoala55
AgitatedDove14 Sorry for the confusing question. I mean I cannot use relative imports inside the "wrapping" function.
In detail, my project has this directory structure:
project
├── package1
│   ├── build_pipeline.py
│   ├── module1.py
│   └── module2.py
└── package2
    ├── __init__.py
    ├── module3.py
    ├── module4.py
    └── subpackage1
        └── module5.py
From build_pipeline.py, inside each "wrapping" function, I cannot import module...
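Concretely, this is the kind of import that fails inside the component (module names from the tree above, the called function is hypothetical):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(repo='.')
def step():
    # a relative import fails here, because the function body is executed
    # as a standalone task with no parent package to be relative to:
    # from ..package2 import module3   # -> ImportError
    # an absolute import only works if package2 is importable on the agent:
    from package2 import module3
    module3.do_something()  # hypothetical call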
TimelyPenguin76 As I remember, I closed all datasets right after uploading the data to the ClearML server.
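The order was roughly this (names are placeholders):

from clearml import Dataset

ds = Dataset.create(dataset_name='my-data', dataset_project='demo')
ds.add_files('./data')
ds.upload()    # push the files to the server / storage
ds.finalize()  # close the dataset right after the upload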
Yes, basically like this