hmmm... you mention plt.show() or plt.savefig() will both trigger Trains to log it.
plt.savefig() does not trigger logging for me; only plt.show() does. If you run plt.show() in a Python script, it pops up a new matplotlib window and blocks the entire program until you manually close it.
(On a Windows machine, at least)
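A possible workaround for the blocking-window problem, assuming the non-interactive Agg backend is acceptable for a headless script (this is a sketch, not an official Trains recipe):

```python
# Save figures without opening a blocking GUI window by selecting the
# non-interactive Agg backend.  The backend must be set before pyplot
# is imported.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig.savefig("plot.png")  # writes the file; no window, no blocking
plt.close(fig)           # free the figure instead of calling plt.show()
```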
we will have a dedicated VM to host the Trains-related Docker containers; do I need to set up a file server? (I saw an earlier thread mention MinIO)
oh, this is a bit different from my expectation. I thought I could use artifacts for dataset or model version control.
It would be nice if there were an "export" function to export all/selected rows of the experiment table view.
i.e. some files live on a shared drive, then someone silently updates the files, all the experiments become invalid, and no one knows when that happened.
```
conda create -n trains python==3.7.5
pip install trains==0.16.2.rc0
```
hmmmm, maybe I missed some UI element, but I can't locate any ID
The "incremental" config does not seem to work well if I add handlers in the config. This snippet fails with the incremental flag.
```python
import logging
import logging.config

from clearml import Task

conf_logging = {
    "version": 1,
    "incremental": True,
    "formatters": {
        "simple": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        }
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "simple",
        }
    },
}
# ... (snippet truncated in the original message)
```
TimelyPenguin76 No, I didn't see it.
I think it's related to the fix that uses "incremental": true; this seems to fix one problem, but at the same time it ignores all other handlers.
I am not sure what the difference is between logging with "configuration" and "hyperparameters". For now I am only using it for logging; I guess hyperparameters have a special meaning if I want to use Trains for some other features.
this is a bit weird, I have two Windows machines, and both point to the public server.
EnviousStarfish54 quick update: regardless of the logging.config.dictConfig issue, I will make sure that even when the logger is removed, the clearml logging will continue to function 🙂
The commit will be synced after the weekend
Will the new fix avoid this issue, and does it still require the incremental flag?
From the logging documentation:
"Thus, when the incremental key of a configuration dict is present and is True, the system will completely ignore any formatters and filters entries, and process only the level settings in the handlers entries, and the level and propagate settings in the loggers and root entries."
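A small stdlib-only demonstration of that documented behaviour: with "incremental": True, dictConfig does not create new handlers, so a handler name it has never seen in a previous full configuration raises ValueError instead of being attached (which matches the failure described above).

```python
import logging.config

# With "incremental": True, dictConfig ignores "formatters"/"filters" and
# only adjusts the level of handlers it already knows by name.  "console"
# was never configured in a full (non-incremental) pass, so this raises.
try:
    logging.config.dictConfig({
        "version": 1,
        "incremental": True,
        "handlers": {
            "console": {"class": "logging.StreamHandler", "level": "INFO"}
        },
    })
    outcome = "configured"
except ValueError as exc:
    outcome = f"ValueError: {exc}"

print(outcome)  # ValueError: No handler found with name 'console'
```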
repository detection is fine
Sorry for the late reply. You mentioned there will be a built-in way to version data; may I ask if there is a release date for it?
I just need a way to check if the web/app host is configured.
If yes, go ahead; if not, go offline or throw an error.
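A rough sketch of such a check. The CLEARML_WEB_HOST environment variable and the ~/clearml.conf location are my assumptions about where the host can be configured, and the naive text scan stands in for proper config parsing:

```python
import os
from pathlib import Path


def web_host_configured(conf_path: str = "~/clearml.conf") -> bool:
    """Best-effort check: is a web server host configured?

    Checks the CLEARML_WEB_HOST environment variable first, then does a
    naive text scan of the config file for a ``web_server`` entry.
    The path and key names here are assumptions, not an official API.
    """
    if os.environ.get("CLEARML_WEB_HOST"):
        return True
    path = Path(conf_path).expanduser()
    if not path.is_file():
        return False
    return "web_server" in path.read_text()


if not web_host_configured():
    print("no web host configured -> stay offline or raise")
```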
Sorry for the late reply, AgitatedDove14
The code that inits the Task is inside the first node. https://github.com/noklam/allegro_test/blob/6be26323c7d4f3d7e510e19601b34cde220beb90/src/allegro_test/pipelines/data_engineering/nodes.py#L51-L52
repo: https://github.com/noklam/allegro_test
commit: https://github.com/noklam/allegro_test/commit/6be26323c7d4f3d7e51...
my workaround is converting it into a string beforehand -> but it will also break if I use trains-agent, since the agent will pass back a string parameter instead of a datetime
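A sketch of how the round-trip could be made symmetric so the same code works locally and under trains-agent. The helper names (`to_param`, `from_param`) are hypothetical, and the commented-out `task.connect` stands in for the real call:

```python
from datetime import datetime


def to_param(value):
    """Serialize datetimes to ISO-8601 strings before connecting params."""
    return value.isoformat() if isinstance(value, datetime) else value


def from_param(value):
    """Parse ISO-8601 strings back into datetimes; pass others through."""
    try:
        return datetime.fromisoformat(value)
    except (TypeError, ValueError):
        return value


params = {"run_date": to_param(datetime(2020, 10, 1, 12, 30))}
# task.connect(params)  # the agent may hand back a plain string here
restored = from_param(params["run_date"])
print(restored)  # 2020-10-01 12:30:00
```

Parsing on the way back means it no longer matters whether the value arrives as a datetime (local run) or a string (agent run).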
I can confirm this seems to fix the issue, and I have reported it to the Kedro team to see what their view on it is. So it seems like it did remove the TaskHandler from the _handler_lists
is it possible to overwrite it if trains.conf already exists?
```
Digest: sha256:407714e5459e82157f7c64e95bf2d6ececa751cca983fdc94cb797d9adccbb2f
Status: Downloaded newer image for nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown.
```
AgitatedDove14 Yes, as I found, once Kedro's pipeline starts running, the log is no longer sent to the UI Console. I tried calling Task.init both before and after the start of the Kedro pipeline, and the result is the same: the log is missing, but the Kedro logger still prints to sys.stdout in my local terminal.
Not sure why my Elasticsearch & MongoDB crashed. I had to remove and recreate all the Docker containers; after that, clearml-agent works fine too.
it seems that if I don't use plt.show() it won't show up in Allegro; is this a must?
Could you give me some pointers on where ClearML auto-captures logs/stdout? I suspect that Kedro has its own logging configuration and ClearML somehow fails to catch it.
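For context, console capture of this kind is usually done by wrapping sys.stdout (and hooking logging handlers). Below is a simplified illustration of the general tee pattern, not ClearML's actual implementation; it also shows why a framework that re-binds sys.stdout or replaces logging handlers after Task.init can bypass the capture:

```python
import io
import sys


class TeeStdout:
    """Duplicate writes to the wrapped stream and to a capture buffer.

    Illustrative only; real capture code also handles flushing, encoding,
    and thread safety.
    """

    def __init__(self, stream):
        self.stream = stream
        self.captured = io.StringIO()

    def write(self, text):
        self.stream.write(text)      # still reaches the terminal
        self.captured.write(text)    # and is recorded for upload

    def flush(self):
        self.stream.flush()


sys.stdout = TeeStdout(sys.stdout)
print("hello")
captured = sys.stdout.captured.getvalue()
sys.stdout = sys.stdout.stream  # restore the original stream
print(repr(captured))
```

If a library later does `sys.stdout = <its own stream>` or points its handlers at the saved original stream, writes no longer pass through the tee, which would match the missing-console-log symptom.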
