ok, it makes sense. Is there a way to let Trains save it without blocking the program?
I couldn't report it to the demo server, since this involves internal stuff...
AgitatedDove14 I believe you mean plt.savefig? I used this function to save my charts, but they do not show up either.
I created a fresh conda env and installed Python on both machines
I use YAML configs for data and model. Each of them is a nested YAML (could be more than 2 levels deep), so it won't be a flexible solution and I would need to manually flatten the dictionary
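Something like this is what I mean by flattening (the flatten() helper is my own sketch, not part of Trains):

from trains import Task

def flatten(d, parent_key='', sep='/'):
    # Recursively collapse a nested dict into a single level: {'a/b/c': value}
    items = {}
    for k, v in d.items():
        key = parent_key + sep + str(k) if parent_key else str(k)
        if isinstance(v, dict):
            items.update(flatten(v, key, sep))
        else:
            items[key] = v
    return items

task = Task.init(project_name='project', task_name='experiment')
config = {'data': {'path': 'data.csv', 'split': {'train': 0.8, 'test': 0.2}}}
task.connect(flatten(config))  # becomes {'data/path': ..., 'data/split/train': ..., ...}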
It would be nice if there were an "export" function to just export the full/selected experiment table view
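In the meantime, a rough workaround sketch via the Python API (the columns here are just the fields I'd pull, not an official export):

import csv
from trains import Task

tasks = Task.get_tasks(project_name='project')
with open('experiments.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['id', 'name'])
    for t in tasks:
        writer.writerow([t.id, t.name])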
TimelyPenguin76 No, I didn't see it.
Oh, I did not realize I asked this in an old thread, sorry about that.
one does record the package, the other does not
Those charts are saved locally, so I am sure they are not empty charts
conda create -n trains python==3.7.5
pip install trains==0.16.2.rc0
hmmm... you mentioned plt.show() or plt.savefig() will both trigger Trains to log it.
plt.savefig() does not trigger logging for me; only plt.show() does. If you run plt.show() in a Python script, it pops up a new window for the matplotlib figure and blocks the entire program unless you manually close it.
(On a Windows machine, at least)
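In case it helps anyone, a minimal sketch of a workaround: switch to a non-interactive backend so nothing pops up, or ask show() not to block (this is just standard matplotlib, nothing Trains-specific):

import matplotlib
matplotlib.use('Agg')        # headless backend: no pop-up window at all
import matplotlib.pyplot as plt

plt.plot([1, 2, 3])
plt.savefig('chart.png')     # saves locally without blocking the script

# Or, with an interactive backend:
# plt.show(block=False)      # returns immediately instead of blocking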
Thanks for your help. I will stick with task.connect() first. I have submitted a GitHub issue, thanks again AgitatedDove14
SuccessfulKoala55 task.connect()
lol...... mine is best_model_20210611_v1.pkl
and better_model_20210611_v2.pkl
or best_baseline_model_with_more_features.pkl
matplotlib.__version__
'3.1.3'
AgitatedDove14 I get this log, but nothing shows up in the UI.
2020-09-10 09:15:06,914 - trains.Task - INFO - Waiting for repository detection and full package requirement analysis
======> WARNING! UNCOMMITTED CHANGES IN REPOSITORY origin <======
2020-09-10 09:15:10,378 - trains.Task - INFO - Finished repository detection and package analysis
task_reporting = Task.init(project_name='project', task_name='report')
tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')
for t in tasks:
    t.get_last_scalar_metrics()
task_reporting.get_logger().report_something
Instead of get_last_scalar_metrics(), I am using t._data.hyperparams['summary'] to get the metrics I needed
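For context, roughly what the reporting loop looks like (a sketch, assuming get_last_scalar_metrics() returns the usual {title: {series: {'last': ..., 'min': ..., 'max': ...}}} mapping):

from trains import Task

task_reporting = Task.init(project_name='project', task_name='report')
for t in Task.get_tasks(project_name='project', task_name='partial_task_name_here'):
    # re-log each source task's final scalar values into the reporting task
    for title, series_dict in t.get_last_scalar_metrics().items():
        for series, values in series_dict.items():
            task_reporting.get_logger().report_scalar(
                title=title, series=series, value=values['last'], iteration=0)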
In this case, I would rather use task.connect(); a line-by-line diff is probably not useful for my data config. As shown in the example, shifting one line would make all the remaining lines show up as different.
But this also means I have to load all the configuration into a dictionary first.
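i.e. something along these lines (a sketch, assuming PyYAML and a config.yaml on disk):

import yaml
from trains import Task

task = Task.init(project_name='project', task_name='experiment')
with open('config.yaml') as f:
    config = yaml.safe_load(f)  # the whole nested YAML has to land in a dict first
config = task.connect(config)   # connect() returns the dict with any UI overrides applied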
AnxiousSeal95 At first sight, the pipeline logic of ClearML seems quite tightly bound to ClearML. Back then I figured I needed something that could convert easily to a production pipeline (e.g. Airflow DAGs); since we need pipelines not just for experiments, Airflow seems to be the default one.
Also, clearml-data was not available when we started developing our internal framework. As for clearml-agent, from my previous experience it sometimes does not work great with Windows, and als...
Cool, versioning the difference is useful. It also depends on the kind of data. For example, for tabular data a database might be a natural choice; however, integrating it and keeping track of the metadata could be tricky. Images, on the other hand, are probably better suited to blob storage or a per-file basis.
Great, as long as it continues to work with S3 (MinIO), it's good for me. I am already using MinIO with Trains (an older version).
I was planning to upgrade soon.
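For reference, this is roughly how I point Trains at MinIO (a sketch; the endpoint host/key/secret live in trains.conf under sdk.aws.s3, and the host/bucket names here are made up):

from trains import Task

# output_uri routes artifacts and models to the MinIO bucket;
# the S3 credentials for the endpoint are configured in trains.conf
task = Task.init(project_name='project', task_name='experiment',
                 output_uri='s3://my-minio-host:9000/bucket-name')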
So I found that if I change this plot, the change seems to apply across all the experiments too? How is this setting cached? Could it be shared among users, or is it per user, or is it actually cached by the browser only?
e.g. some files on a shared drive; someone silently updates the files, all the experiments become invalid, and no one knows when that happened.
https://github.com/quantumblacklabs/kedro-examples/blob/master/kedro-tutorial/conf/base/catalog.yml
I am actually using Kedro (a pipeline library); you can check out the YAML config here. There will be a lot of cases where I need to insert a new argument or dataset in between
repository detection is fine
the order is reset.
This is a bit weird; I have two Windows machines, and both point to the public server.
No, I mean it captured the plot somehow; as you can see on the left side there is a list of plots, but the plot itself does not show up.