Cool! Will have a look at the fix when it is done. Thanks a lot AgitatedDove14
AgitatedDove14 Yes, as I found, once Kedro's pipeline starts running, the log is no longer sent to the UI Console. I tried calling Task.init both before and after the start of the Kedro pipeline, and the result is the same: the log is missing from the UI, but the Kedro logger still prints to sys.stdout in my local terminal.
Sorry for the late reply AgitatedDove14
The code that inits the Task is placed inside the first node. https://github.com/noklam/allegro_test/blob/6be26323c7d4f3d7e510e19601b34cde220beb90/src/allegro_test/pipelines/data_engineering/nodes.py#L51-L52
repo: https://github.com/noklam/allegro_test
commit: https://github.com/noklam/allegro_test/commit/6be26323c7d4f3d7e51...
I have tried adding the line to the conf, but it doesn't seem to work either... are you able to run with proper logging?
I tried stepping through with the debugger, but I can't see the clearml handlers in logging._handlers; the dict is empty. Where is the clearml handler stored? AgitatedDove14
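For what it's worth, a minimal sketch (plain Python logging, no clearml involved) of where handlers actually live: `logging._handlers` only tracks handlers that were given a *name* (e.g. via dictConfig), while handlers attached at runtime sit on each logger's `.handlers` list, which may explain the empty dict:

```python
import logging

# Handlers attached programmatically live on logger.handlers;
# logging._handlers only holds handlers registered under a name
# (e.g. via dictConfig), so it can be empty even when handlers exist.
root = logging.getLogger()
handler = logging.StreamHandler()
root.addHandler(handler)

# Walk every known logger and print the ones that carry handlers
for name, obj in logging.Logger.manager.loggerDict.items():
    if isinstance(obj, logging.Logger) and obj.handlers:
        print(name, obj.handlers)

print("root handlers:", root.handlers)
```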
I think it's related to the fix that uses "incremental: true". This seems to fix one problem, but at the same time it ignores all other handlers.
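For context, this matches how Python's own logging.config.dictConfig behaves: with "incremental": True, existing handlers are left untouched, but the config can only adjust logger levels and propagation and cannot attach new handlers. A minimal sketch (not Kedro's actual conf, just the stdlib semantics):

```python
import logging
import logging.config

# With incremental=True, dictConfig leaves existing handlers alone
# (so handlers attached earlier survive), but it can only change
# logger levels/propagation; it cannot add new handlers.
logging.config.dictConfig({
    "version": 1,
    "incremental": True,
    "loggers": {"kedro": {"level": "INFO"}},
})

print(logging.getLogger("kedro").level)  # 20 == logging.INFO
```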
Let me know how I can provide better debug messages
This log does not always show up, even though it is logged when I run it on Machine B. In contrast, when I run it on Machine A, this message did show up, but nothing is logged.
```
2020-09-10 09:15:06,914 - trains.Task - INFO - Waiting for repository detection and full package requirement analysis
======> WARNING! UNCOMMITTED CHANGES IN REPOSITORY origin <======
2020-09-10 09:15:10,378 - trains.Task - INFO - Finished repository detection and package ...
```
that can't be done easily, I have no control over that
one does record the package, the other does not
Thanks for your help. I will stick with task.connect() first. I have submitted a GitHub issue, thanks again AgitatedDove14
as I have a wandb docker set up on the same VM for testing
hmmm... you mentioned that plt.show() or plt.savefig() will both trigger Trains to log it.
plt.savefig() does not trigger logging for me; only plt.show() does. But if you run plt.show() in a Python script, it pops up a new matplotlib window and blocks the entire program unless you manually close it.
(On a Windows machine, at least)
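A minimal sketch of the difference (plain matplotlib, hypothetical file name): with a non-interactive backend such as Agg, savefig writes straight to disk and plt.show() never blocks, which is one common workaround for the window-blocking behaviour:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend: no window, never blocks
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [1, 4, 9])

# savefig writes straight to disk and returns immediately
fig.savefig("metrics.png")

# With an interactive backend (e.g. TkAgg on Windows), plt.show()
# opens a window and blocks until it is closed manually; under Agg
# it is effectively a no-op.
plt.show()
plt.close(fig)
```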
Do you know what the "dataset management" is for the open-source version?
Not sure why my elasticsearch & mongodb crashed. I had to remove and recreate all the Docker containers; then clearml-agent works fine too
I couldn't report it to the demo server, since this involves internal stuff...
```
Digest: sha256:407714e5459e82157f7c64e95bf2d6ececa751cca983fdc94cb797d9adccbb2f
Status: Downloaded newer image for nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.
```
Those charts are saved locally, so I am sure they are not empty charts
And the plotting area is completely empty, only some chart titles show up on the left.
I am abusing the "hyperparameters" to hold a "summary" dictionary that stores my key metrics, due to the nicer behaviour of diffing across experiments.
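The diffing win can be sketched in plain Python (hypothetical metric names; the real comparison happens in the ClearML experiment-comparison UI): flat key/value summaries line up, so only the values that actually changed stand out.

```python
# Two flat "summary" dicts, one per experiment (hypothetical metrics)
exp_a = {"val_auc": 0.91, "val_loss": 0.23, "epochs": 10}
exp_b = {"val_auc": 0.88, "val_loss": 0.25, "epochs": 10}

# Flat key/value pairs diff cleanly: keep only keys whose values differ
diff = {k: (exp_a[k], exp_b[k]) for k in exp_a if exp_a[k] != exp_b[k]}
print(diff)  # {'val_auc': (0.91, 0.88), 'val_loss': (0.23, 0.25)}
```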
It would be nice if there were an "export" function to export all/selected rows of the experiment table view
```python
task_reporting = Task.init(project_name='project', task_name='report')
tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')
for t in tasks:
    t.get_last_scalar_metrics()
task_reporting.get_logger().report_something
```
Instead of get_last_scalar_metrics(), I am using t._data.hyperparams['summary'] to get the metrics I need
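In the meantime, the "export" can be approximated by hand: once the per-task summaries are collected, dumping them as CSV takes only the standard library (a sketch with hypothetical task names and metric fields):

```python
import csv
import io

# Hypothetical rows: one dict per task, built from the "summary" params
rows = [
    {"task": "exp_001", "val_auc": 0.91, "val_loss": 0.23},
    {"task": "exp_002", "val_auc": 0.88, "val_loss": 0.25},
]

# Write to an in-memory buffer; swap in open("export.csv", "w") for a file
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["task", "val_auc", "val_loss"])
writer.writeheader()
writer.writerows(rows)

csv_text = buf.getvalue()
print(csv_text)
```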