Hi UnevenDolphin73, when you say the pipeline itself, do you mean the controller? The controller is only in charge of handling the components. Let's say you have a pipeline with many parts: if you had a single global environment, it would force a lot of redundant installations across the pipeline. What is your use case?
Well, the individual tasks do not seem to get the expected environment.
We have an internal mono-repo and some of its packages are required - they're all available correctly for the controller, and only some are required for the individual tasks, but the "magic" doesn't happen 😞
That is, the controller does not identify them as requirements, so they're not installed in the tasks' environments.
We also wanted this; we preferred to create a docker image with all we need, and let the pipeline steps use that docker image
That way you don't rely on ClearML capturing the local env, and you can control exactly what exists in the env
PricklyRaven28 That would be my fallback, but it would make development much slower (having to build containers for every small change)
It's just that for the `packages` argument, ClearML says:
If not provided, packages are automatically added based on the imports used inside the wrapped function.
So… 🤔
Not sure about this; we really like being in control of reproducibility and not depending on the invoking machine… maybe that's not what you intend
We'd be happy if ClearML captured that (since it uses e.g. pip, we'd then have the git + commit hash for reproducibility), as it claims it would 😅
Any thoughts, CostlyOstrich36?
Pinging about this still, unresolved 🤔
ClearML does not capture our internal libraries and so our functions (pipeline steps) crash with missing modules.
Still, anyone? 🥹 @<1523701070390366208:profile|CostlyOstrich36> @<1523701205467926528:profile|AgitatedDove14>
Hi @<1523701083040387072:profile|UnevenDolphin73>
How can I ensure tasks in a pipeline have the same environment as the pipeline itself?
...
but the tasks (executed remotely) do not use that same environment?
Just verifying, we are talking about pipeline decorators?
We also wanted this; we preferred to create a docker image with all we need, and let the pipeline steps use that docker image
You can specify the Docker image on the decorator itself:
None
Regarding capturing the packages: if you import them inside the decorated function, they will be captured based on what is installed in the local (i.e. initial) environment. The idea is that the components are not the same as the logic; basically, the logic of the pipeline should not have any real package requirements, only the components (which actually do something) should. What am I missing?
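Something along these lines (just a sketch; the docker image, package names and versions below are placeholders, not taken from your setup):

from clearml import PipelineDecorator

@PipelineDecorator.component(
    return_values=["n_rows"],
    docker="python:3.9-slim",      # optional: per-component container
    packages=["pandas>=1.5"],      # optional: explicitly override auto-detection
)
def step_one(data_path):
    # without an explicit `packages` list, imports made inside the
    # function body are what get auto-detected for this component
    import pandas as pd
    return len(pd.read_csv(data_path))


@PipelineDecorator.pipeline(name="example pipeline", project="examples", version="0.1")
def run_pipeline(data_path):
    # the pipeline logic itself should need (almost) no packages
    return step_one(data_path)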
Hey @<1523701205467926528:profile|AgitatedDove14> , thanks for the reply!
We would like to avoid dockerizing all our repositories. For the time being we have not used the decorators, but we could do that too.
The pipeline is built dynamically at the moment.
The issue is that the components do not get their dependencies. For example:
def step_one(...):
    from internal.repo import private
    # do stuff
When `step_one` is added as a component to the pipeline, it does not include "internal.repo" as a package dependency, so it crashes.
it does not include "internal.repo" as a package dependency, so it crashes.
understood
And for the time being we have not used the decorators,
So how are you building the pipeline component?
Using `PipelineController` with `add_function_step`
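Roughly like this (a simplified sketch; the project/step names and the internal call are placeholders):

from clearml import PipelineController

def step_one(data_path):
    from internal.repo import private   # internal mono-repo package
    return private.do_stuff(data_path)  # placeholder call

pipe = PipelineController(name="my pipeline", project="examples", version="0.1")
pipe.add_function_step(
    name="step_one",
    function=step_one,
    function_kwargs=dict(data_path="data.csv"),
    function_return=["result"],
)
pipe.start()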
If you use this one, for example, will the component have pandas as part of its requirements?
None
def step_two(...):
    import pandas as pd
    # do stuff
If so (and it should), what's the difference? How is "internal.repo" different from pandas?
I have no idea what’s the difference, but it does not log the internal repository 😞 If I knew why, I would be able to solve it myself… hehe
The only thing I could think of is that the output of pip freeze would be a URL?
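For example, instead of a plain `internal-repo==1.2.3`, I'd guess pip freeze lists it as something like `internal-repo @ git+ssh://.../internal-repo.git@<commit>` (or `-e git+...#egg=internal-repo` for an editable install) - maybe that's what trips up the detection? Just guessing at the format here.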
is this repo installed on the machine creating the pipeline?
You can also manually add it here: `packages=["link_to_internal_python_package"]`
None
It is. In what format should I specify it? Would this enforce that package on various components? Would it then no longer capture import statements?
what format should I specify it
requirements.txt format e.g. ["package >= 1.2.3"]
Would this enforce that package on various components
This is per-component control, so you can have different packages / containers for each component
Would it then no longer capture import statements?
This replaces the auto-detected packages - but obviously auto-detection fails to pick up your internal repo package, which is the main issue here.
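e.g. something like this (a sketch; the git URL is a placeholder and `step_one` is the function from before):

pipe.add_function_step(
    name="step_one",
    function=step_one,
    function_return=["result"],
    # explicit per-component requirements (requirements.txt format);
    # this replaces the auto-detected list for this component only
    packages=[
        "pandas>=1.5",
        "internal-repo @ git+ssh://git@example.com/org/internal-repo.git@<commit>",  # placeholder URL
    ],
    docker="python:3.9-slim",   # optional: a different container per component
)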
How is "internal package" installed, in other words can you send the pip freeze of th machine creating the pipeline ? because this is where the packages are detected (if packages are not installed you cannot infer the actual package name nor the version just from the import statement)
And is this repo installed on the machine creating the pipeline?
Basically I'm asking how come it did not automatically detect it?
It is installed on the machine creating the pipeline.
I have no idea why it did not automatically detect it 😞
I think this is the main issue. Is this reproducible? How can we test it?
How or why is this the issue? I guess something is getting lost in translation :D
On the local machine, we have all the packages needed. The code gets sent for remote execution, and all the local packages are frozen correctly with pip.
The pipeline controller task is then generated and executed remotely, and it has all the relevant packages.
Each component it launches, however, is missing the internal packages available earlier :(
How or why is this the issue?
The main issue is a missing requirement on the Task component, and this is why it is failing.
You can, however, manually specify the package (and I'm assuming this will solve the issue), but it should have been auto-detected, no?
Exactly, it should have auto-detected the package.
I'd like to refrain from manually specifying the dependencies, since it adds a lot of overhead every time we extend the pipeline