
task_reporting = Task.init(project_name='project', task_name='report')
tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')
for t in tasks:
    t.get_last_scalar_metrics()
task_reporting.get_logger().report_something
Instead of get_last_scalar_metrics(), I am using t._data.hyperparams['summary'] to get the metrics I need.
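For context, the pattern I'm going for is roughly this (a rough sketch; Logger.report_scalar is what I'd use in place of report_something above, and the iteration indexing is just my guess):

from trains import Task

task_reporting = Task.init(project_name='project', task_name='report')
logger = task_reporting.get_logger()

tasks = Task.get_tasks(project_name='project', task_name='partial_task_name_here')
for i, t in enumerate(tasks):
    # returns a nested dict: {title: {series: {'last': ..., 'min': ..., 'max': ...}}}
    metrics = t.get_last_scalar_metrics()
    for title, series_dict in metrics.items():
        for series, values in series_dict.items():
            logger.report_scalar(title=title, series=series,
                                 value=values['last'], iteration=i)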
I use YAML configs for the data and the model. Each of them is a nested YAML (possibly more than 2 layers deep), so that won't be a flexible solution: I would need to manually flatten the dictionary.
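For illustration, the kind of manual flattening I'd have to do (a generic helper, nothing ClearML-specific):

def flatten(d, parent_key='', sep='/'):
    # recursively flatten a nested dict into single-level {'a/b/c': value} form
    items = {}
    for k, v in d.items():
        key = f'{parent_key}{sep}{k}' if parent_key else k
        if isinstance(v, dict):
            items.update(flatten(v, key, sep=sep))
        else:
            items[key] = v
    return items

# e.g. flatten({'model': {'optimizer': {'lr': 0.01}}})
# -> {'model/optimizer/lr': 0.01}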
that can't be done easily; I have no control over that
So I found that if I change this plot, the change seems to apply across all the experiments too? How is this setting cached? Could it be shared among users, or is it per user, or is it actually cached by the browser only?
oh, this is a bit different from my expectation. I thought I could use artifacts for dataset or model version control.
Oh, I did not realize I asked this in an old thread, sorry about that.
ok, it makes sense. Is there a way to let trains save it without blocking the program?
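Something like this is what I'm hoping for (a sketch based on my assumption that uploads can run in a background thread; the exact behavior may differ in the version I'm on):

from trains import Task

task = Task.init(project_name='project', task_name='async upload test')
# ideally this returns immediately and the file is uploaded in the background,
# so the training loop is not blocked while the artifact is being sent
task.upload_artifact(name='dataset', artifact_object='./data.csv')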
Those charts are saved locally, so I am sure they are not empty charts
for the open source version, if I use an artifact and I already have a local copy of the file, does it know to skip the download, or will it always replace the file? My dataset is large (~100 GB), so I cannot afford to have it re-downloaded every time.
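To spell out what I'd expect (an assumption on my part; task and artifact names are made up): if the artifact was already fetched once, get_local_copy should return the cached path instead of pulling the ~100 GB again.

from trains import Task

source_task = Task.get_task(project_name='project', task_name='dataset task')
# hoping this resolves to the local cache if the file is already there,
# rather than re-downloading ~100 GB on every run
local_path = source_task.artifacts['dataset'].get_local_copy()
print(local_path)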
VivaciousPenguin66 What's your take on Prefect? There are so many pipeline libraries and I wasn't sure how they differ. I have experience with Airflow. With Kedro, our hope was that data scientists would write the pipelines themselves, with minimal effort needed to hand them over to another engineer. For serious production (where we need to scale), we considered converting Kedro pipelines to Airflow; there are plugins for that, though I am not sure how mature they are.
hmmmm, maybe I missed some UI element; I can't locate any ID
Sorry, let me get back to you tomorrow. Maybe I did something wrong; now the entire UI crashes.
Yup, I am only really familiar with the experiment tracking part, so I don't think I'll have a good understanding until I have reasonable knowledge of the entire ClearML system.
VivaciousPenguin66 How are you using the dataset tool? I'd love to hear more about that.
No, I mean it captured the plot somehow; as you can see, there is a list of plots on the left side, but the plot itself does not show up.
It's good that you version your datasets by name; I have seen many trained models where people just replaced the dataset directly.
matplotlib.__version__
'3.1.3'
may I ask if there is a planned release date?
let me know how I can provide better debug messages
This log does not always show up, even though logging happens when I run it on Machine B. In contrast, when I run it on Machine A, this message did show up, but nothing was logged.
2020-09-10 09:15:06,914 - trains.Task - INFO - Waiting for repository detection and full package requirement analysis
======> WARNING! UNCOMMITTED CHANGES IN REPOSITORY origin <======
2020-09-10 09:15:10,378 - trains.Task - INFO - Finished repository detection and package ...
lol...... mine is best_model_20210611_v1.pkl
and better_model_20210611_v2.pkl
or best_baseline_model_with_more_features.pkl
really appreciate the help along the way... I have taken way too much of your time
the order is reset.
seems like not all settings are stored? for example, if I add a custom column in hyperparameters and do a refresh
AgitatedDove14 Git is fine, I just created a local repository for this. The code is two lines:
from trains import Task
task = Task.init(project_name="my project", task_name="my task3")
Hi, I think I can confirm this is a bug in Trains. Is it ok if I submit a PR to fix it?