Or is it just integrated in the ClearML Slack space, and for some reason it's showing the clearml address then?
Also full disclosure - I'm not part of the ClearML team and have only recently started using pipelines myself, so all of the above is just learnings from my own trials 😅
Not sure I understand your comment - why not let the user start with an empty comparison page and add experiments from the "Add Experiment" button as well?
My current approach with pipelines basically looks like a GitHub CI/CD YAML config, btw: I give the user a lot of control over which steps to run, why, and how, and the default simply caches all results so as to minimize the number of reruns.
The user can then override and choose exactly what to do (or not do).
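To make that concrete, here's a rough, hypothetical sketch of the kind of control surface I mean (the step names and the `overrides` dict are made up for illustration; in the real pipeline each step is a ClearML task):

```python
# Hypothetical sketch: every step runs and is cached by default, and the
# user can selectively force a rerun (cache off) or skip a step entirely.
DEFAULT_STEPS = {
    "fetch_data": {"run": True, "cache": True},
    "train": {"run": True, "cache": True},
    "evaluate": {"run": True, "cache": True},
}

def resolve_steps(overrides=None):
    """Merge user overrides (e.g. {'train': {'cache': False}}) onto the defaults."""
    steps = {name: dict(cfg) for name, cfg in DEFAULT_STEPS.items()}
    for name, cfg in (overrides or {}).items():
        steps[name].update(cfg)
    return steps

# User forces a fresh training run and skips evaluation:
steps = resolve_steps({"train": {"cache": False}, "evaluate": {"run": False}})
```

The point is just that the default is "cache everything", and any deviation is an explicit, per-step user decision.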
I think -
- Creating a pipeline from tasks is useful when you already ran some of these tasks in a given format, and you want to replicate the exact behaviour (ignoring any new code changes for example), while potentially changing some parameters.
- From decorators - when the pipeline logic is very straightforward and you'd like to mostly leverage pipelines for parallel execution of computation graphs
- From functions - as I described earlier :)
Sounds like incorrect parsing on ClearML's side then, doesn't it? At least, it doesn't fully support MinIO then.
I don't imagine AWS users get a new folder named `aws-key-region-xyz-bucket-hostname` when they `download_folder(...)` from an AWS S3 bucket, or do they? 🤔
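For comparison, here's a toy sketch of the local layout I'd naively expect (the helper is hypothetical, not ClearML code): keep only the bucket/key part of a MinIO-style `s3://host:port/bucket/key` URI, and drop the host and credentials entirely.

```python
from urllib.parse import urlparse

def expected_local_subfolder(uri: str) -> str:
    # Hypothetical: for MinIO-style URIs (s3://host:port/bucket/key...),
    # the first path segment is the bucket; keep only bucket/key, so no
    # host, port, key, or region ever leaks into the local folder name.
    return urlparse(uri).path.strip("/")

expected_local_subfolder("s3://127.0.0.1:9000/my-bucket/datasets/train")
```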
I'm not sure, I'm not getting anything (this is the only thing I could find that's weird about this project).
It has a space in the name, has no subprojects, and it just doesn't show up anywhere 🤔
I will! (Once our infra guy comes back from holiday and updates the install; for some reason they set up server 1.1.1???)
Meanwhile wondering where I got a random worker from
I'm aware, but it would be much cleaner to define them in the worker's `clearml.conf` and let ClearML expose them locally to running tasks.
EDIT: Also the above is specifically about serving, which is not the target here 🤔 At least not yet 😄
AgitatedDove14 I will try! I remember there were some issues with it, where I had to resort to this method first, but maybe things have changed since :)
Any simple ways around this for now? @<1523701070390366208:profile|CostlyOstrich36>
But... which queue does it listen to, which instance types will it use, etc.?
So a missing bit of information that I see I forgot to mention: we named our packages as `foo-mod` in `pyproject.toml`. That hyphen then gets rewritten as `foo_mod.x.y.z-distinfo`.
`foo-mod @ git+`
I think you're interested in the `Monitor` class :)
Thanks SuccessfulKoala55 , I made https://github.com/allegroai/clearml-agent/issues/126 as a suggestion.
Do you have any thoughts on how to expose these... manually?
It does so already for environment variables that are prefixed with `CLEARML_`, so it would be nice to have some control over that.
Following up on that (I don't think the K8s helm chart for 1.7.0 is out yet SlimyDove85, is it?) - but what's the recommended way to back up the mongodb before upgrading on K8s?
IIRC, `get_local_copy()` downloads a local copy and returns the path to the downloaded file. So you might be interested in e.g. `local_csv = pd.read_csv(a_task.artifacts['train_data'].get_local_copy())`
With the models, you're looking for `get_weights()`. It acts the same as `get_local_copy()`, so it returns a path.
EDIT: I think `get_local_copy()` should also work for a model 👍
You can use `logger.report_scalar` and pass a single value.
And `task = Task.init(project_name=conf.get("project_name"), ...)` is basically a no-op in remote execution, so it does not matter if `conf` is empty, right?
Yes that's what I thought, thanks for confirming.
Maybe they shouldn't be placed under `/tmp` if they're mission critical, but rather in the ClearML cache folder? 🤔
BTW AgitatedDove14, following this discussion I ended up doing the regex way myself to sync these, so our code has something like the following. We abuse the object description here to store the desired file path.
```python
config_path = task.connect_configuration(configuration=config_path, name=config_fname)
included_files = find_included_files_in_source(config_path)
while included_files:
    file_to_include = included_files.pop()
    sub_config = task.connect_configuration(
        configurat...
```
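For reference, `find_included_files_in_source` above is our own helper, not a ClearML API. A minimal sketch of it (assuming the includes look like HOCON-style `include "path"` statements, which is what our configs use) could be:

```python
import re

# Hypothetical helper: scan a config file for HOCON-style `include "path"`
# statements and return the referenced paths, one per include line.
INCLUDE_RE = re.compile(r'^\s*include\s+"([^"]+)"', re.MULTILINE)

def find_included_files_in_source(config_path: str) -> list[str]:
    with open(config_path) as f:
        return INCLUDE_RE.findall(f.read())
```

The regex is intentionally simple; it won't handle HOCON's `include url(...)` or `include file(...)` forms, which we don't use.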
Thanks SuccessfulKoala55 ! Is this listed anywhere in the documentation?
Could I set an environment variable there and then refer to it internally in the config with the `${...}` notation?
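What I have in mind is HOCON-style environment substitution, something like the fragment below (assuming ClearML's config parser honors `${?VAR}` the way stock HOCON does, which is exactly the part I'm unsure about):

```hocon
# hypothetical clearml.conf fragment
sdk {
    aws {
        s3 {
            # pull the secret from the agent machine's environment, if set
            secret: ${?MY_S3_SECRET}
        }
    }
}
```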
I see https://github.com/allegroai/clearml-agent/blob/d2f3614ab06be763ca145bd6e4ba50d4799a1bb2/clearml_agent/backend_config/utils.py#L23 but not where it's called 🤔
Now, the original pyhocon does support include statements as you mentioned - https://github.com/chimpler/pyhocon