Well, it's not just the dash in front of queue, but also the ml missing in clearml-agent
- the first dash in front of queue is too long
Indeed, it does what's stated in the docs, however I think it's a bit odd, as .report_scalar() works quite differently in this case compared to the normal case, and iteration is not an optional param but will be ignored anyway
Self hosted
Not much more:
Uploading compressed dataset changes (1 files, total 7.56 KB) to None
Upload completed (7.56 KB)
Traceback (most recent call last):
File "/mnt/c/Users/tbudras/Documents/local-development-code/test_data.py", line 11, in <module>
dataset.close()
AttributeError: 'Dataset' object has no attribute 'close'
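The traceback above fails because `Dataset` has no `close()`; in the clearml versions I'm aware of, a dataset is sealed with `finalize()` (after `upload()`). Since this can't talk to a real server here, the snippet below uses a stand-in class purely to illustrate the corrected call order; the method names `upload()`/`finalize()` are the assumed real clearml API, so double-check against your installed clearml version:

```python
# Stand-in for clearml.Dataset, to illustrate the corrected call order only.
# Assumption: the real clearml Dataset seals a version with finalize(), not close().
class Dataset:
    def __init__(self):
        self.uploaded = False
        self.finalized = False

    def upload(self):
        # Real clearml: pushes the compressed dataset changes to storage.
        self.uploaded = True

    def finalize(self):
        # Real clearml: marks the dataset version as complete/closed.
        if not self.uploaded:
            raise RuntimeError("upload() must be called before finalize()")
        self.finalized = True

ds = Dataset()
ds.upload()
ds.finalize()  # instead of ds.close(), which does not exist
```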
That means if the experiment is too short there might not be a report?
clearml 1.0.2
Thanks for your help Martin!
Sure!
Before:
{'model': {'accuracy': {'name': 'accuracy', 'x': [0, 1], 'y': [0.5789473652839661, 1.0]}}}
After:
{'model': {'accuracy': {'name': 'accuracy', 'x': [0, 1, 2], 'y': [0.5789473652839661, 1.0, 2.0]}}}
Expected:
{'model': {'accuracy': {'name': 'accuracy', 'x': [0, 1], 'y': [2.0, 1.0]}}}
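The Before/After dicts above show each reported scalar being appended as a new (iteration, value) point to the series' x/y lists. A minimal self-contained sketch of that accumulation using plain dicts (this models the observed behavior only, not clearml's actual internals):

```python
# Minimal model of how a scalar series accumulates points:
# each report appends (iteration, value) to the series' x/y lists.
def report_scalar(store, title, series, value, iteration):
    s = store.setdefault(title, {}).setdefault(
        series, {"name": series, "x": [], "y": []}
    )
    s["x"].append(iteration)
    s["y"].append(value)

store = {}
report_scalar(store, "model", "accuracy", 0.5789473652839661, 0)
report_scalar(store, "model", "accuracy", 1.0, 1)
# store now matches the "Before" dict above
report_scalar(store, "model", "accuracy", 2.0, 2)
# store now matches the "After" dict: x == [0, 1, 2]
```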
Well I figured it out, that's what your - was for :D
Is this still the best way? It's not working for me 😞
But now you're talking about the case where two Tasks have the same metric, right?
I mean in general: is the task with the largest metric first, or the smallest? I'd need the largest metric first, but to me it seems like the smallest metric comes first. Can the order be reversed?
I'll try to test in a new project. Will they get ordered ascending or descending? Does documentation besides your example exist?
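On ascending vs descending: a small self-contained sketch of the sort convention, under the assumption (matching the `order_by`-style syntax from the example mentioned) that a bare field name sorts ascending and a `-` prefix reverses it. The `order_tasks` helper and the task dicts are hypothetical, for illustration only:

```python
# Sort task-like dicts by a metric field.
# Assumed convention (as in clearml-style order_by filters):
#   "metric"  -> ascending  (smallest metric first)
#   "-metric" -> descending (largest metric first)
def order_tasks(tasks, order_by):
    reverse = order_by.startswith("-")
    key = order_by.lstrip("-")
    return sorted(tasks, key=lambda t: t[key], reverse=reverse)

tasks = [
    {"id": "a", "metric": 0.3},
    {"id": "b", "metric": 0.9},
    {"id": "c", "metric": 0.6},
]
order_tasks(tasks, "metric")   # ascending:  a, c, b
order_tasks(tasks, "-metric")  # descending: b, c, a
```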
Still feels super hacky though; I think it would be nice to have a simpler way, or at least some nice documentation 🙂
Thanks for your help!