BTW AgitatedDove14 following this discussion I ended up doing the regex way myself to sync these, so our code has something like the following. We abuse the object description here to store the desired file path.
```python
config_path = task.connect_configuration(configuration=config_path, name=config_fname)
included_files = find_included_files_in_source(config_path)
while included_files:
    file_to_include = included_files.pop()
    sub_config = task.connect_configuration(
        configurat...
```
I'll try upgrading to 1.1.5, one moment
Yeah that works fine 🙂 I just fetch it once to map argparse users to their IDs for later filtering.
The Slack Monitoring example should be updated btw, as they now use `slack_sdk` instead of `slack` (in the import statements).
That's fine as well - the code simply shows the name of the environment variable, not its value, since that's taken directly from the agent listening to the services queue (and which is then running the scaler).
I think this is maybe about the `credential.helper` used.
Maybe it's the missing `.bashrc` file, actually. I'll look into it.
Holy crap this was a light-bulb moment, is this listed somewhere in the docs?
It solves so many of my issues xD
I think you're interested in the `Monitor` class :)
This seems to be fine for now. If any future lookup finds this thread, btw: `with mock.patch('clearml.datasets.dataset.Dataset.create'): ...`
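The mechanics of that patch, sketched against a stand-in class so the snippet runs without a ClearML server. The patch target string `clearml.datasets.dataset.Dataset.create` is from the message above; in real tests you would pass that dotted path straight to `mock.patch`:

```python
from unittest import mock

# Stand-in for clearml's Dataset, so the sketch runs with no server.
# In real tests, patch the dotted path instead:
#   with mock.patch('clearml.datasets.dataset.Dataset.create'): ...
class Dataset:
    @staticmethod
    def create(name):
        raise RuntimeError("would contact the ClearML server")

with mock.patch.object(Dataset, "create", return_value="fake-dataset") as create:
    ds = Dataset.create(name="unit-test")  # intercepted; no server call

assert ds == "fake-dataset"
create.assert_called_once_with(name="unit-test")
```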
I'm using 1.1.6 (upgraded from 1.1.6rc0) - should I try 1.1.7rc0 or smth?
Sorry AgitatedDove14 , forgot to get back to this.
I've been trying to convince my team to drop poetry 🙂
I think so, it was just missing from the official documentation 🙂 Thanks!
Honestly I wouldn't mind building the image myself, but the glue-k8s setup is missing some documentation so I'm not sure how to proceed
Yes it would be 🙂
Visualization is always a difficult topic... I'm not sure about that, but a callback would be nice.
One idea that comes to mind (this is of course limited to DataFrames), but think of the git diff, where I imagine 3 independent sections:
- Removed columns (+ truncated preview of removed values) (see below)
- Added columns (+ truncated preview of added values)
- The middle section is then a bit complicated, but I would see some kind of "shared columns" dataframe, where each ...
Just because it's handy to compare differences and see how the data changed between iterations, but I guess we'll work with that 🙂
We'll probably do something like:
- When creating a new dataset with a parent (or parents), look at immediate parents for identically-named files
- If those exist, load them with a matching framework (pyarrow, pandas, etc.) and log differences to the new dataset 🙂
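A dependency-free sketch of that plan's diff step. File contents are represented here as plain `{column: values}` mappings to keep the example runnable; the real implementation would load DataFrames via pyarrow/pandas as mentioned, and every name below is invented:

```python
def diff_shared_files(parent_files, new_files):
    """Compare identically-named files between parent and new dataset versions.

    Both arguments map file names to {column_name: list_of_values} dicts
    (a stand-in for DataFrames, so the sketch has no dependencies).
    """
    diffs = {}
    for name in set(parent_files) & set(new_files):
        old, new = parent_files[name], new_files[name]
        diffs[name] = {
            "added_columns": sorted(set(new) - set(old)),
            "removed_columns": sorted(set(old) - set(new)),
            "shared_columns": sorted(set(old) & set(new)),
        }
    return diffs

# Example: one shared file where column "b" was dropped and "c" was added.
result = diff_shared_files(
    {"train.csv": {"a": [1, 2], "b": [3, 4]}},
    {"train.csv": {"a": [1, 2, 3], "c": [7, 8]}},
)
```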
Right so this is checksum based? Are there plans to only store delta changes for files (i.e. store the changed byte instead of the entire file)?
I opened a GH issue shortly after posting here. @<1523701312477204480:profile|FrothyDog40> replied (hoping I tagged the right person).
We need to close the task. This is part of our unittests for a framework built on top of ClearML, so every test creates and closes a task.
Any follow up thoughts SuccessfulKoala55 or CostlyOstrich36 ?
Any updates @<1523701087100473344:profile|SuccessfulKoala55> ? 🙂
I'm guessing that's not on pypi yet?
The instance that took a while to terminate (or has taken a while to disappear from the idle workers)
Should this be under the `clearml` or `clearml-agent` repo?
I'm working on the config object references 🙂
From the log you shared, the task is picked up by the `worker_d1bd92a3b039400cbafc60a7a5b1e52b_4e831c4cbaf64e02925b918e9a3a1cf6_<hostname>:gpu0,1` worker.
I can try and target the default one if it helps..?