Sounds like a better solution. Fewer clicks required to get to my page. Thanks 🙂
I think I’d expect to be able to view the page I’m uploading inline, or at least a thumbnail of it.
To clarify - the HTML I'm creating is an experiment summary that includes metrics/scalars from my experiment, as well as dataset statistics. I'm not sure it should go to the "debug images" tab - that's just where it ended up when I used report_media
Thanks for the prompt response SuccessfulKoala55 ! exactly what I was looking for 🙂
AgitatedDove14 - your suggested solution looks a bit more generic, and a proper .edit method is a good idea for that. However, it doesn't resolve the API version issue I had (that is, unless you also suggest adding a method to set execution.script, which isn't initialized when I use Task.create)
Oh, it is indeed misleading.
What should I assign to internal_task_representation.execution.script then (per your example snippet)? A dictionary?
Thanks AgitatedDove14
Using your snippet caused the following error: TypeError: __init__() missing 1 required positional argument: 'metrics'
When I used client.events.debug_images(task='aabbcc', metrics=None) I got the following: ValueError: Unsupported keyword arguments: task
Also - should the value passed to task be the task ID?
Exactly what I was looking for!
So I just report scalars as I would anyway and then set the display to wall-time, right?
AgitatedDove14 My use case was a bit different. I only populate the repository URL, entry point, and commit SHA in the Script object.
We wanted something informative enough in our task on one hand, but not loaded with redundant data on the other - so a commit SHA made perfect sense.
I ended up using from trains.backend_api.services import tasks and then initializing with tasks.Script(…)
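For context, a sketch of what I populated (field names assumed from the tasks.Script definition in trains.backend_api.services; all values here are made up):

```python
# Sketch of the Script fields I populate. The field names are assumed from
# trains.backend_api.services.tasks.Script; all values are placeholders.
script_fields = {
    "repository": "git@example.com:org/repo.git",  # hypothetical repository URL
    "entry_point": "train.py",                     # hypothetical entry point
    "version_num": "0123abcd",                     # commit SHA placeholder
}

# With trains installed, the construction would be roughly:
# from trains.backend_api.services import tasks
# script = tasks.Script(**script_fields)
```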
found it.
```
fpr, tpr, thresholds = my_roc_calc(my_input)
t = Task.get_task(my_task_id)
t.get_logger().report_scatter2d("ROC", "MyModel", iteration=1, scatter=list(zip(fpr, tpr)))
```
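In case it helps anyone reading along: the scatter argument is just the (fpr, tpr) points zipped into pairs. A tiny stdlib sketch of that shaping, with made-up values:

```python
# The scatter input to report_scatter2d is a list of (x, y) pairs;
# here x is the false-positive rate and y the true-positive rate.
# These values are made up for illustration.
fpr = [0.0, 0.1, 0.4, 1.0]
tpr = [0.0, 0.6, 0.9, 1.0]
scatter = list(zip(fpr, tpr))
print(scatter)  # [(0.0, 0.0), (0.1, 0.6), (0.4, 0.9), (1.0, 1.0)]
```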
AgitatedDove14 I was trying to get all tasks that hold a specific value/pattern in a hyperparameter. e.g. - after performing grid-search, get all training tasks with a specific learning rate
Found a reasonable solution. I had to specify execution.parameters.my_hyperparameter in the fields list.
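For anyone with the same question, the fields list looked roughly like this (my_hyperparameter is a placeholder for your actual parameter name):

```python
# Sketch of the fields list used to narrow the tasks query so the
# hyperparameter of interest comes back. "my_hyperparameter" is a placeholder.
fields = ["id", "name", "execution.parameters.my_hyperparameter"]
```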
Thanks SuccessfulKoala55 , dropping irrelevant fields did improve the response time.
Sweet! Will check it out and try to adopt 🙂
I have a template that I populate in another process I wrote, which runs once my experiment is over.
A report generation functionality sounds great. add my vote for it 🙂