Yes. Though again, the naming of foo-mod is arbitrary. The actual module is simply a folder structured as an implicit namespace:
foo/
    mod/
        __init__.py
        # stuff
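For clarity: since only foo/mod/ has an __init__.py here, foo itself is resolved as an implicit namespace package (PEP 420). A minimal import sketch, with a placeholder name:

# foo/ has no __init__.py of its own, so Python treats it as a namespace package;
# foo/mod/ is the actual regular package
from foo.mod import something  # "something" is a hypothetical attribute of foo/mod/__init__.py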
FWIW, for the time being I'm just setting the packages to all the packages the pipeline tasks see, with:
packages = get_installed_pkgs_detail()
packages = [f"{name}=={version}" if version else name for name, version in packages.values()]
packages = task.data.script.require...
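For what it's worth, a self-contained sketch of the same idea without the internal helper - pin every installed distribution explicitly before Task.init(), instead of relying on import-statement autodetection (project/task names below are placeholders, not our actual setup):

from importlib.metadata import distributions  # stdlib stand-in for get_installed_pkgs_detail()
from clearml import Task

# Task.add_requirements() must be called before Task.init() for the pins to take effect
for dist in distributions():
    Task.add_requirements(dist.metadata["Name"], package_version=dist.version)

task = Task.init(project_name="examples", task_name="pipeline-step")  # placeholder names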
It should store it on the fileserver, perhaps you're missing a configuration option somewhere?
I'd like to refrain from manually specifying the dependencies, since it adds a lot of overhead to extend.
Still; anyone? @<1523701070390366208:profile|CostlyOstrich36> @<1523701205467926528:profile|AgitatedDove14>
We have the following, works fine (we also use internal zip packaging for our models):
model = OutputModel(task=self.task, name=self.job_name, tags=kwargs.get('tags', self.task.get_tags()), framework=framework)
model.connect(task=self.task, name=self.job_name)
model.update_weights(weights_filename=cc_model.save())
We have an internal monorepo and some of its packages are required - they're all available correctly for the controller, and only some are required for the individual tasks, but the "magic" doesn't happen.
That is, the controller does not identify them as a requirement, so theyโre not installed in the tasks environment.
I have seen this quite frequently as well tbh!
I can navigate through the projects, but selecting one task in one project, then navigating to another project and selecting a different task -> there is no suggestion to compare the tasks.
In the projects page if I show all - I just see the projects. If I search for a task of similar name, I get results, but I can't compare them via the UI.
The only way I managed so far was to create a pseudo-comparison between unrelated tasks in the same project, then remove one task from the comparison, and u...
I can't seem to manage the first way around. If I select tasks in different projects, I don't get the bottom bar offering to compare between them
Unfortunately I can't take a photo of not being able to compare tasks by navigating around the WebUI...
Does that clarify the issue, CostlyOstrich36?
Not sure I understand your comment - why not let the user start with an empty comparison page and add tasks from the "Add Experiment" button as well?
I'm not sure how the decorators achieve that; from the available examples and the trials I've done, it seems that:
- Components need to be available when you define the pipeline controller/decorator anyway, i.e. the same codebase
- The component code still needs to be self-contained (or the function component can become quite complex)
- Decorators do not allow any dynamic build, because you must know how the components are connected at decoration time
With that said, it could be that the provided example...
You could probably either:
- Start the task first (using Task.init), and then set the parameters if needed
- Attach the dataset to the task itself
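A rough sketch of the first option, assuming the dataset is simply referenced by ID through a task parameter (project, task, and parameter names below are made up for illustration):

from clearml import Task, Dataset

# Start the task first, then adjust parameters afterwards if needed
task = Task.init(project_name="examples", task_name="train")  # placeholder names
task.set_parameters({"General/dataset_id": "<your-dataset-id>"})

# Resolve the dataset referenced by the parameter and work off its local copy
dataset = Dataset.get(dataset_id=task.get_parameters()["General/dataset_id"])
data_path = dataset.get_local_copy()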
I've updated my feature request to describe that as well. A textual description is not necessarily a preview. For now I'll use the debug samples.
These kinds of things definitely show how ClearML was originally designed only for neural networks tbh, where images are almost always part of the dataset. Same goes for the consistent use of "iteration" everywhere.
Using the PipelineController with add_function_step
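Roughly the shape of it, as a minimal sketch - the step/function/queue names and the package pin are placeholders, not our actual pipeline:

from clearml import PipelineController

def preprocess(dataset_id: str):
    # placeholder step body; in practice this is a self-contained function
    return "/tmp/processed"

pipe = PipelineController(name="example-pipeline", project="examples", version="1.0.0")
pipe.add_parameter(name="dataset_id", default="")
pipe.add_function_step(
    name="preprocess",
    function=preprocess,
    function_kwargs=dict(dataset_id="${pipeline.dataset_id}"),
    function_return=["processed_path"],
    packages=["pandas==2.2.0"],  # explicit per-step requirements, as discussed above
)
pipe.start(queue="default")  # or pipe.start_locally() for local runs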
SuccessfulKoala55 The changelog wrongly cites https://github.com/allegroai/clearml/issues/400 btw. It is not implemented and is not related to being able to save CSVs.
Sure SuccessfulKoala55 , and thanks for looking into it.
As an alternative (for now, or in general), we could consider reverting back to pip. The issue we encounter is that we have a monorepo, so frozen requirements should specify relative paths, but pip freeze does not seem to do that, so ClearML also fails in pip mode.
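One possible workaround sketch (untested here): keep a hand-maintained requirements file that contains the monorepo's relative paths and tell ClearML to use it verbatim instead of the pip freeze output - the file name below is hypothetical:

from clearml import Task

# Use an explicit requirements file (which can list relative/editable monorepo paths)
# instead of pip freeze; must be called before Task.init()
Task.force_requirements_env_freeze(force=True, requirements_file="requirements-task.txt")
task = Task.init(project_name="examples", task_name="train")  # placeholder names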
If everything is managed with a git repo, does this also mean PRs will have a messy metadata file attached to them?
I guess it depends on what you'd like to configure.
Since we let the user choose parents, component name, etc - we cannot use the decorators. We also infer required packages at runtime (the autodetection based on import statements fails with a non-trivial namespace) and need to set that to all components, so the decorators do not work for us.
There's code that strips the type hints from the component function; I just think it should be applied to the helper functions too :)
Heh, my bad, the term "user" is very much ingrained in our internal way of working. You can think of it as basically any technically-inclined person in your team or company.
Indeed the options in the WebUI are too limited for our use case, so we've developed "apps" that take a YAML configuration file and build a matching pipeline.
With that, our users do not need to code directly, and we can offer much more fine control over the pipeline.
As for the imports, what I meant is that I encounter...
Okay trying again without detached
Ah okay, was confused by what you quoted haha
Also, creating from functions allows dynamic pipeline creation without requiring the tasks to pre-exist in ClearML, which is IMO the strongest point to make about it
So basically what I'm looking for and what I have now is something like the following:
(Local) I have a well-defined aws_autoscaler.yaml that is used to run the AWS autoscaler. That same autoscaler is also run with CLEARML_CONFIG_FILE=....
(Remotely) The autoscaler launches, listens to the predefined queue, and is able to launch instances as needed. I would run a remote execution task object that's appended to the autoscaler queue. The autoscaler picks it up, launches a new instanc...
The deferred_init input argument to Task.init is bool by default, so checking type(deferred_init) == int makes no sense to begin with, and is altering the flow.
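A minimal standalone illustration of the type relationship (not the clearml source itself):

# bool is a subclass of int, but an exact type() comparison rejects it,
# so a `type(deferred_init) == int` guard never matches the default True/False values
deferred_init = False
print(type(deferred_init) == int)      # False
print(isinstance(deferred_init, int))  # True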
@<1523701205467926528:profile|AgitatedDove14> this
SuccessfulKoala55 CostlyOstrich36 actually it is the import statement, just finally got around to the traceback:
`  File "/home/.../ccmlp/configs/mlops.py", line 4, in <module>
    from clearml import Task
  File "/home/.../.venv/lib/python3.8/site-packages/clearml/__init__.py", line 4, in <module>
    from .task import Task
  File "/home/.../.venv/lib/python3.8/site-packages/clearml/task.py", line 31, in <module>
    from .backend_interface.metrics import Metrics
  File "/home/......
Also (sorry for all of these!) - could be nice to have a direct "task comparison" link in the UI somewhere, that would open a comparison with no tasks and the user can add them manually using the "add experiments" button. :)