
Not sure how to check that tbh. Does this help:
`
root@aea5d96a8ed3:/usr/agent# clearml-agent --version
CLEARML-AGENT version 1.0.0
`
Would be nice to display this info maybe somewhere in here:
Hi Martin, to expand on my previous comments: the template for _Driver already exists; I'm suggesting to make it public. Consequently, StorageHelper should accept a driver parameter to __init__, defaulting to None. Only when its value is not provided by the user should the library go out of its way to do the right thing and check all the known storage providers, fetch credentials, and whatnot - stuff that will not work for most users, most of the time (even if you ...
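To make the suggestion concrete, here's a rough sketch of the signature I have in mind (illustrative names only, not ClearML's actual internals):
`
class StorageHelper:
    """Sketch only - names here are mine, not the library's."""

    def __init__(self, url: str, driver=None):
        if driver is not None:
            # The caller supplied a driver explicitly: trust it, skip autodetection.
            self._driver = driver
        else:
            # Only in the default case should the library probe all known
            # storage providers and fetch credentials on the user's behalf.
            self._driver = self._autodetect_driver(url)

    def _autodetect_driver(self, url):
        # Placeholder for the existing provider/credential discovery logic.
        raise NotImplementedError
`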
Hi Jake, thanks for the reply. I've tried the account key method, works fine - but unfortunately clearml expects an old version of azure-storage-blob (<2.1), which is incompatible with the recent versions (^12). Any clue how we could work around this one? Thanks again.
Try updating to 1.1.0?
Hi AgitatedDove14, this is what our calls look like:
`
from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger(save_dir=".", name="debug plotting", version=1)
logger.experiment.add_histogram("A", data[data.by == 0])
logger.experiment.add_histogram("B", data[data.by == 1])
`
the result of which is shown in my post above.
This is some test data, and how we'd like things to look:
`
def make_data(size: int = 10000, n: int = 5) -> pd.DataFrame:
    x = np.abs(np.random.normal(siz...
'scikit' worked nicely, thanks again
If we decide to go forward with clearml we'll probably do just that 🙂
The UI shows the log as is (and as pasted above). In the console I'm getting correct output (a single tqdm progress line):
`
[2021-09-17 13:29:51,860][pytorch_lightning.utilities.distributed][INFO] - GPU available: True, used: True
[2021-09-17 13:29:51,862][pytorch_lightning.utilities.distributed][INFO] - TPU available: False, using: 0 TPU cores
[2021-09-17 13:29:51,862][pytorch_lightning.utilities.distributed][INFO] - IPU available: False, using: 0 IPUs
[2021-09-17 13:29:51,866][pytorch_ligh...
`
radu on vm-aimbrain-01 in experiments/runners/all via 🐍 v3.8.5 via C volt
❯ grep flush ~/clearml.conf
# Carriage return (\r) support. If zero (0) \r treated as \n and flushed to backend
# Carriage return flush support in seconds, flush consecutive line feeds (\r) every X (default: 10) seconds
console_cr_flush_period: 600
`
Hi Martin, it is a tqdm parameter (the default ProgressBar in pytorch lightning unfortunately relies on tqdm). This is from the tqdm docs:
`
dynamic_ncols : bool, optional
    If set, constantly alters ncols and nrows to the
    environment (allowing for window resizes) [default: False].
nrows : int, optional
    The screen height. If specified, hides nested bars outside this
    bound. If unspecified, attempts to use environment...
`
I don't control tqdm (otherwise I would have already gone for Stef's suggestion) - pytorch-lightning does in this particular script 😞.
This is adapted from one of the methods in their ProgressBar class:
`
import sys
import time

from tqdm import tqdm

bar = tqdm(
    desc="Training",
    initial=1,
    position=1,
    disable=False,
    leave=False,
    dynamic_ncols=True,
    file=sys.stderr,
    smoothing=0,
)
with bar:
    for i in range(10):
        time.sleep(0.1)
        bar.update()
print('done')
`
In the console this works as expected, but in a jupyter notebook this produces a scrolling log (because of the position=1 argument, which happens whenever the bar is not th...
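As an aside: when one does control the tqdm construction, tqdm.auto is the usual way to avoid this in notebooks - it picks a notebook-friendly widget implementation under Jupyter and falls back to the console bar elsewhere. A minimal sketch (not what pytorch-lightning actually does):
`
import time

# tqdm.auto selects the right bar implementation for the environment.
from tqdm.auto import tqdm

for i in tqdm(range(10), desc="Training"):
    time.sleep(0.1)
`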
Interesting, I don't get newlines in any of my consoles:
`
ClearML Task: overwriting (reusing) task id=38cc10401fcc43cfa432b7ceed7df0cc
2021-10-08 14:57:53,704 - clearml.Task - INFO - No repository found, storing script code instead
ClearML results page:
...
`
`
# Development mode worker
worker {
# Status report period in seconds
report_period_sec: 2
# ping to the server - check connectivity
ping_period_sec: 30
# Log all stdout & stderr
log_stdout: true
# Carriage return (\r) support. If zero (0) \r treated as \n and flushed to backend
# Carriage return flush support in seconds, flush consecutive line feeds (\r) every X (default: 10) s...
This is what a configuration item looks like:
`
<tasks.ConfigurationItem: {
    "name": "filter",
    "value": "inference = [{\n type = \"StreamFilter\"\n params {\n context = \"full\"\n op = \"or\"\n lower_bounds {\n key = 16\n mouse = 32\n }\n }\n }]\ntrain {\n users {\n op = \"and\"\n lower_bounds {\n min_sessions = 32\n }\n }\n}",
    "type": "dictionary"
}>
`
The value is a string that prints pretty, but I'm not sure how to p...
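For what it's worth, since the value is HOCON (the same format ClearML's own config files use), pyhocon can turn it back into a dict-like object. A sketch, with a short stand-in string instead of the full value above:
`
from pyhocon import ConfigFactory

# hocon_value stands in for the item's .value string shown above
hocon_value = 'train {\n users {\n lower_bounds {\n min_sessions = 32\n }\n }\n}'
conf = ConfigFactory.parse_string(hocon_value)
print(conf["train"]["users"]["lower_bounds"]["min_sessions"])  # -> 32
`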
This is what the links to the artifacts look like (the part I blurred out is the last part of the secret, which is working fine since the task was able to upload those correctly to storage - I can check that):
Hi SweetBadger76, thanks, I think I've made it work. The main point of confusion was dealing with the different types of Task objects (e.g. the clearml.backend_api.services.v2_13.tasks.Task objects returned by get_all, which don't have any of those methods).
Interestingly, set_parameters didn't just work as expected; I had to flatten the dicts myself (which clearml apparently does on its own when I call set_parameters on a new task).
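In case it helps anyone, this is roughly the manual flattening I mean (the helper and the '/' separator are my own choices, not necessarily what clearml does internally):
`
def flatten(d: dict, prefix: str = "") -> dict:
    # Recursively turn {"a": {"b": 1}} into {"a/b": 1}.
    flat = {}
    for key, value in d.items():
        full_key = f"{prefix}/{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, full_key))
        else:
            flat[full_key] = value
    return flat

params = flatten({"train": {"users": {"min_sessions": 32}}})
print(params)  # {'train/users/min_sessions': 32}
# then: task.set_parameters(params) on an existing clearml.Task instance
`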
Thank you all. 🙏
Thanks Natan, I was pretty much expecting that. Is there any way to change the value of user without generating new credentials? I'm guessing no 🙂.
ah nice, I'll try auto_connect_frameworks (probably with {'joblib': False}? - we don't use scikit-learn)
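(For reference, this is how I'd expect to pass it - Task.init accepts an auto_connect_frameworks dict; the project/task names below are made up:)
`
from clearml import Task

task = Task.init(
    project_name="examples",        # placeholder names
    task_name="no-joblib-logging",
    # Disable only joblib auto-logging; other frameworks keep their defaults.
    auto_connect_frameworks={"joblib": False},
)
`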
I forgot to say I've set up a local server - we are still in the testing phase. I've created credentials for them because they couldn't generate them for themselves (they did clearml-init, and each has a clearml.conf file, but the ADD CREDENTIALS part didn't show up for them).