Thanks for your help, Martin!
- the first dash in front of queue is too long
Sure!
Before:
{'model': {'accuracy': {'name': 'accuracy', 'x': [0, 1], 'y': [0.5789473652839661, 1.0]}}}
After:
{'model': {'accuracy': {'name': 'accuracy', 'x': [0, 1, 2], 'y': [0.5789473652839661, 1.0, 2.0]}}}
Expected:
{'model': {'accuracy': {'name': 'accuracy', 'x': [0, 1], 'y': [2.0, 1.0]}}}
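For context, these dicts look like the structure returned by Task.get_reported_scalars(), keyed as {title: {series: {'name', 'x', 'y'}}}. A minimal sketch of retrieving them, with a placeholder task id:

from clearml import Task

# Placeholder task id; get_reported_scalars() returns the reported scalar
# series shaped like {'<title>': {'<series>': {'name': ..., 'x': [...], 'y': [...]}}}
task = Task.get_task(task_id='aabbccdd11223344')
scalars = task.get_reported_scalars()
print(scalars)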
Self-hosted
Is this still the best way? It's not working for me 😞
Indeed, it does what's stated in the docs. However, I think it's a bit odd, as .report_scalar() works quite differently in this case compared to the normal case, and iteration is not an optional param but will be ignored anyway
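For reference, a minimal sketch of the normal explicit-reporting case, assuming the standard clearml Logger API (the project, task, title, and series names are just placeholders):

from clearml import Task

task = Task.init(project_name='examples', task_name='scalar-demo')
logger = task.get_logger()
# In the normal case, iteration places each value on the x-axis of the scalar plot
for i in range(10):
    logger.report_scalar(title='accuracy', series='train', value=0.5 + i * 0.05, iteration=i)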
Well, it's not just the dash in front of queue, but also the ml missing in clearml-agent
That means if the experiment is too short there might not be a report?
Not much more:
Uploading compressed dataset changes (1 files, total 7.56 KB) to None
Upload completed (7.56 KB)
Traceback (most recent call last):
File "/mnt/c/Users/tbudras/Documents/local-development-code/test_data.py", line 11, in <module>
dataset.close()
AttributeError: 'Dataset' object has no attribute 'close'
clearml 1.0.2
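If it helps, I don't think Dataset has a close() method; the flow I know finishes with finalize(). A minimal sketch, with placeholder dataset/project names and file path:

from clearml import Dataset

# Placeholder names and path for illustration
dataset = Dataset.create(dataset_name='my-dataset', dataset_project='my-project')
dataset.add_files(path='data/file.csv')
dataset.upload()    # uploads the compressed dataset changes
dataset.finalize()  # marks the dataset version as completed; replaces the close() call above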
Still feels super hacky though; I think it would be nice to have a simpler way, or at least some nice documentation 🙂
Thanks for your help!
I'll try to test in a new project. Will they get ordered ascending or descending? Does any documentation besides your example exist?
Well, I figured it out, that's what your - was for :D
But now you're talking about the case where two Tasks have the same metric, right?
I mean in general, whether the task with the largest metric is first, or the smallest, because I'd need the largest metric first, but to me it seems like the smallest metric is first. Can the order be reversed?
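In case it helps later readers, this sounds like sorting tasks by a metric via Task.get_tasks. A minimal sketch, where the order_by field string is a placeholder for the actual metric field (which I believe uses hashed title/series ids), and the leading - flips the sort to descending, i.e. largest metric first:

from clearml import Task

# Placeholder project name and metric field; without the leading '-' the sort
# is ascending (smallest metric first), with it the sort is descending
tasks = Task.get_tasks(
    project_name='examples',
    task_filter={'order_by': ['-last_metrics.<metric_hash>.<variant_hash>.value']},
)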