Yes, I find myself trying to select "points" on the overview tab. And I find myself wanting to see more interesting info in the tooltip.
`
✦2 ❯ git remote show
github
`
Hi again. After looking into the matter a little bit, I realise I'd have liked the option of using a StoreManager ABC, which I would implement myself using whatever storage provider I happen to use and whatever package versions happen to support it. To put it differently, instead of you implementing managers for gcs, azure, aws, etc., it would be a much nicer alternative (for me, and I suspect eventually for you too) for clearml's store manager to wrap whatever object the user pr...
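To make the idea concrete, here's a rough sketch of what I have in mind; the StorageAdapter name and its methods are entirely hypothetical, not an existing clearml interface:
`
from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """Hypothetical user-implemented storage interface that clearml's
    StorageManager could wrap, instead of shipping its own gcs/azure/aws code."""

    @abstractmethod
    def upload(self, local_path: str, remote_uri: str) -> None:
        """Upload a local file to the given remote URI."""

    @abstractmethod
    def download(self, remote_uri: str, local_path: str) -> None:
        """Download a remote object to a local path."""

class MyAzureAdapter(StorageAdapter):
    """Example implementation backed by whichever azure-storage-blob
    version I happen to have pinned."""

    def upload(self, local_path: str, remote_uri: str) -> None:
        ...  # call into azure.storage.blob here

    def download(self, remote_uri: str, local_path: str) -> None:
        ...  # ditto
`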
We can't really know (possibly ever 🙂 ), but if the bug happens again I'll be sure to report it here.
That works fine:
`
1631895370729 vm-aimbrain-01 info ClearML Task: created new task id=cfed3ea8512d4d9f858d085bd79e62e8
2021-09-17 16:16:10,744 - clearml.Task - INFO - No repository found, storing script code instead
ClearML results page:
1631895370892 vm-aimbrain-01 info start
1631895370896 vm-aimbrain-01 error 0%| | 0/100 [00:00<?, ?it/s]
1631895471026 vm-aimbrain-01 error 100%|████...
`

`
radu on vm-aimbrain-01 in volt on rg/dev [$] is 📦 v7.0.1 via 🐍 v3.8.5 via C volt
✦2 ❯ git status
On branch rg/dev
nothing to commit, working tree clean
radu on vm-aimbrain-01 in volt on rg/dev [$] is 📦 v7.0.1 via 🐍 v3.8.5 via C volt
✦ ❯ du -sh .
35M .
`
I'll let you know asap
`
✦ ❯ git remote -v
github  git@github.com:biocatchltd/volt.git (fetch)
github  git@github.com:biocatchltd/volt.git (push)
`
AgitatedDove14 Yes! That would be exactly what I want (i.e. get_configuration_as_dict). Alas, no such thing exists in 1.4.1. Is that supposed to come in a future version?
CostlyOstrich36
` {"meta":{"id":"3cceedbbc65d480096ebb02b5aba5902","trx":"3cceedbbc65d480096ebb02b5aba5902","endpoint":{"name":"tasks.get_configurations","requested_version":"2.17","actual_version":"1.0"},"result_code":200,"result_subcode":0,"result_msg":"OK","error_stack":"","error_data":{}},"data":{"configurations"...
I don't control tqdm (otherwise I would have already gone for Stef's suggestion); pytorch-lightning does in this particular script 😞.
I found out that the lightning trainer has a progress_bar_refresh_rate argument (default set to 1), which produces the spamming logs. If I set it to 10, I get 1/10th of the spam (but a janky progress bar in the console). I could set it to 0 to disable the bar, but that's not really a fix. What I'd really want is the same behaviour in the console (one smooth progress bar) and one line per epoch in the logs; high hopes, right? 😊
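For reference, this is all it takes to try; a minimal sketch assuming the pytorch-lightning 1.x Trainer API current at the time (later releases replaced the argument with a callback-based setting):
`
from pytorch_lightning import Trainer

# Refresh the progress bar every 10 batches instead of every batch,
# cutting the number of \r lines the logger captures tenfold.
# Setting it to 0 disables the bar entirely.
trainer = Trainer(progress_bar_refresh_rate=10)
`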
ah nice, I'll try auto_connect_frameworks (probably with {'joblib': False}? We don't use scikit-learn)
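For anyone finding this later, this is the call shape I mean; the project and task names are made up, and the exact framework key ('joblib' vs 'scikit') apparently depends on the clearml version:
`
from clearml import Task

# Keep all framework auto-logging except the joblib/scikit-learn hook.
task = Task.init(
    project_name="volt",   # hypothetical
    task_name="train",     # hypothetical
    auto_connect_frameworks={"joblib": False},  # 'scikit' is what ended up working for me, see below
)
`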
`
# Development mode worker
worker {
    # Status report period in seconds
    report_period_sec: 2

    # ping to the server - check connectivity
    ping_period_sec: 30

    # Log all stdout & stderr
    log_stdout: true

    # Carriage return (\r) support. If zero (0) \r treated as \n and flushed to backend
    # Carriage return flush support in seconds, flush consecutive line feeds (\r) every X (default: 10) s...
`
'scikit' worked nicely, thanks again
Hi Jake, thanks for the reply. I've tried the account key method, works fine - but unfortunately clearml expects an old version of azure-storage-blob (<2.1), which is incompatible with the recent versions (^12). Any clues of how we could work around this one? Thanks again.
In case anyone is interested, the minimum-effort workaround I found is to edit pytorch_lightning/callbacks/progress.py and change all occurrences of dynamic_ncols=True to dynamic_ncols=False in the calls to tqdm. One could of course implement a custom callback inheriting from their ProgressBar class, as sketched below.
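Something like this, adapted from the pytorch-lightning 1.x ProgressBar internals; treat it as a sketch for that version (the attribute names train_batch_idx, process_position, and is_disabled come from its implementation):
`
import sys
from tqdm import tqdm
from pytorch_lightning.callbacks import ProgressBar

class FixedWidthProgressBar(ProgressBar):
    """Stock progress bar, but with dynamic_ncols=False so tqdm stops
    re-measuring the console width - the same one-line change as the
    edit to progress.py above, without patching the installed package."""

    def init_train_tqdm(self) -> tqdm:
        return tqdm(
            desc="Training",
            initial=self.train_batch_idx,
            position=(2 * self.process_position),
            disable=self.is_disabled,
            leave=True,
            dynamic_ncols=False,  # the only change from the default bar
            file=sys.stderr,
            smoothing=0,
        )
`
An instance passed via Trainer(callbacks=[FixedWidthProgressBar()]) should then replace the default bar.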
The template appears to be <alias> <url> <fetch|push>.
The .git/config file has sections for each remote too. Example:
`
[remote "github"]
    url = git@github.com:biocatchltd/volt.git
    fetch = +refs/heads/*:refs/remotes/github/*
`
Would be nice to report which remote the checked out branch actually tracks.
`
radu on vm-aimbrain-01 in experiments/runners/all via 🐍 v3.8.5 via C volt
❯ grep flush ~/clearml.conf
# Carriage return (\r) support. If zero (0) \r treated as \n and flushed to backend
# Carriage return flush support in seconds, flush consecutive line feeds (\r) every X (default: 10) seconds
console_cr_flush_period: 600
`
Hi SweetBadger76, thanks, I think I've made it work. The main point of confusion was dealing with the different types of Task objects (i.e. clearml.backend_api.services.v2_13.tasks.Task returned by get_all, which don't have any of those methods).
Interestingly, set_parameters didn't just work as expected; I had to flatten the dicts myself (which clearml apparently does on its own when I call set_parameters on a new task), as in the sketch below.
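This is the kind of flattening I mean; a minimal sketch, and the '/' separator is my assumption about how clearml joins nested keys:
`
def flatten(d: dict, prefix: str = "", sep: str = "/") -> dict:
    """Flatten a nested dict into {'a/b': value} form for set_parameters."""
    flat = {}
    for key, value in d.items():
        full_key = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, full_key, sep))
        else:
            flat[full_key] = value
    return flat

# flatten({"optimizer": {"lr": 1e-3, "momentum": 0.9}})
# -> {"optimizer/lr": 0.001, "optimizer/momentum": 0.9}
`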
Thank you all. 🙏
This is adapted from one of the methods in their ProgressBar class:
`
import sys
import time
from tqdm import tqdm

bar = tqdm(
    desc="Training",
    initial=1,
    position=1,
    disable=False,
    leave=False,
    dynamic_ncols=True,
    file=sys.stderr,
    smoothing=0)
with bar:
    for i in range(10):
        time.sleep(0.1)
        bar.update()
print('done')
`
In the console this works as expected, but in a jupyter notebook this produces a scrolling log (because of the position=1 argument, which happens whenever the bar is not th...
Unfortunately it still happens 😞 :
`
Epoch 51: 100%|███████████████████████████████████████████████████████████| 361/361 [02:52<00:00, 2.10it/s, loss=0.169, v_num=9-29]
2021-09-17 09:58:22,253 - clearml.Task - INFO - Waiting for repository detection and full package requirement analysis
2021-09-17 10:03:22,254 - clearml.Task - INFO - Repository and package analysis timed out (300.0 sec), giving up
2021-09-17 10:03:22,313 - clearml.Task - WARNING - Failed auto-det...
`
The UI shows the log as is (and as pasted above). In the console I'm getting correct output (a single tqdm progress line):
`
[2021-09-17 13:29:51,860][pytorch_lightning.utilities.distributed][INFO] - GPU available: True, used: True
[2021-09-17 13:29:51,862][pytorch_lightning.utilities.distributed][INFO] - TPU available: False, using: 0 TPU cores
[2021-09-17 13:29:51,862][pytorch_lightning.utilities.distributed][INFO] - IPU available: False, using: 0 IPUs
[2021-09-17 13:29:51,866][pytorch_ligh...
`