Same issue. That said, good point, maybe with pipelines we should somehow make that a default?
I see, ok!
I will try that out.
Another thing I noticed: none of my pipeline tasks are reporting these graphs, regardless of runtime. I guess this line would also fix that?
Hi ElegantCoyote26
is there a way to get a Task's docker container id/name?
You mean like Task.get_task("task_id_here").get_base_docker()?
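(A minimal sketch of that call, assuming the ClearML Python SDK; "task_id_here" is a placeholder. Note that this should return the base docker image/arguments configured for the task rather than the id of a running container:)

from clearml import Task

# Fetch an existing task by id and read its configured base docker settings.
task = Task.get_task(task_id="task_id_here")  # placeholder task id
print(task.get_base_docker())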
Oh, a Task's results page also has a plot for this, but I guess it's at the machine level and not the task level?
This is actually at the container level, meaning it is checked from inside the container. It should be what you are looking for.
um, this line is not doing anything for me 🤔
controller_clearml_task = Task.current_task()
controller_clearml_task.set_resource_monitor_iteration_timeout(seconds_from_start=10)
Oh, yes, that might be (threshold is 3 minutes if no reports) but you can change that:
task.set_resource_monitor_iteration_timeout(seconds_from_start=10)
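(A minimal sketch of where that call would go for a plain task, assuming the ClearML Python SDK; the project/task names are made up:)

from clearml import Task

task = Task.init(project_name="examples", task_name="short task")  # hypothetical names
# Lower the resource-monitor threshold so machine/GPU scalars are reported
# even if the task finishes within the default no-report window.
task.set_resource_monitor_iteration_timeout(seconds_from_start=10)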
For a hacky way, you can do docker ps
and see the docker run command. I believe it contains the task id, so you can grep by the task id.
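(A rough sketch of that hack, assuming the docker CLI is on the PATH; "task_id_here" is a placeholder:)

import subprocess

task_id = "task_id_here"  # placeholder
# List running containers with their untruncated run commands,
# then keep only the lines that mention the task id.
out = subprocess.run(
    ["docker", "ps", "--no-trunc", "--format", "{{.ID}} {{.Names}} {{.Command}}"],
    capture_output=True, text=True, check=True,
).stdout
print("\n".join(line for line in out.splitlines() if task_id in line))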
I have this inside my pipeline defined with a decorator
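(Roughly what that placement looks like, as a sketch assuming a PipelineDecorator-based pipeline; the pipeline name/project are made up:)

from clearml import Task
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.pipeline(name="my pipeline", project="examples", version="0.1")  # hypothetical names
def pipeline_logic():
    # The two lines from the snippet above, placed inside the decorated pipeline logic.
    controller_clearml_task = Task.current_task()
    controller_clearml_task.set_resource_monitor_iteration_timeout(seconds_from_start=10)
    # ... pipeline steps would be invoked here ...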
AgitatedDove14 I noticed a lot of my tasks don't contain these graphs though...
they are taking longer than 30 secs, but admittedly not much longer: 1-3 minutes
ElegantCoyote26 could be, if the Task run is under 30sec?!