Ok, sorry, this is my mistake, it's actually inside a loop, so this makes sense.
It seems that if I don't use plt.show() it won't show up in Allegro. Is this a must?
I am not sure what those example/1/2/3 entries are; I only have one chart.
Cool, versioning the difference is useful. It also depends on the kind of data: for tabular data a database might be a natural choice, though integrating it and keeping track of the metadata could be tricky, while images are probably better suited to blob storage or a per-file basis.
It's good that you version your dataset by name; I have seen many trained models where people just replace the dataset directly.
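To make the per-file idea concrete, here is a minimal sketch of a dataset manifest: hash every file so a silent in-place replacement is at least detectable. The paths, version string, and helper name are all illustrative, not any Trains feature.
` import hashlib
import json
from pathlib import Path

def build_manifest(dataset_dir, version):
    # record a content hash per file so silent replacement is detectable
    dataset_dir = Path(dataset_dir)
    files = {
        str(p.relative_to(dataset_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(dataset_dir.rglob('*'))
        if p.is_file()
    }
    return {"version": version, "files": files}

manifest = build_manifest('data/train', version='20210611_v1')  # hypothetical paths
Path('train.manifest.json').write_text(json.dumps(manifest, indent=2)) `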
Oh, I did not realize I asked this in an old thread, sorry about that.
Sorry for the late reply. You mentioned there will be a built-in way to version data; may I ask if there is a release date for it?
I am interested in machine learning experiment management tools.
I understand Trains already handles a lot of things on the model side, e.g. hyperparameters, logging, metrics, and comparing two experiments.
I also want it to help with reproducibility. To achieve that, I need code/data/configuration all tracked.
For code and configuration I am happy with the current Trains solution, but I am not sure about the data versioning.
So if you have more details about the dataset versioning in the enterprise offering...
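For context, the model-side tracking I mean looks roughly like this (Task.init and task.connect are the standard Trains calls; the parameter names and values are just an example):
` from trains import Task

# one task per experiment; git diff, installed packages and console
# output are captured automatically
task = Task.init(project_name='examples', task_name='experiment tracking')

params = {'lr': 0.001, 'batch_size': 32}  # example values
params = task.connect(params)  # logged, and editable from the web UI `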
Btw, I was able to isolate the code causing the problem. It may be easier for you to debug.
` import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, set before importing pyplot
import matplotlib.pyplot as plt
import seaborn as sns
from trains import Task

task = Task.init(project_name='examples', task_name='Matplotlib example')

x = [1, 2, 3]
y = [1, 2, 3]
f, ax = plt.subplots(figsize=(50, 0.7 * len(x)))
sns.barplot(y, x, orient="h", ax=ax)
plt.show() `
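One workaround I would try: hand the figure to the logger explicitly instead of relying on plt.show() being patched. This assumes the logger exposes report_matplotlib_figure (ClearML's Logger does; I have not checked older Trains releases), and the title/series names are placeholders:
` # continues the snippet above: report the figure explicitly
task.get_logger().report_matplotlib_figure(
    title='barplot', series='example', iteration=0, figure=f
) `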
One does record the packages; the other does not.
No, I mean it captures the plot somehow; as you can see on the left side there is a list of plots, but they do not show up.
lol...... mine is best_model_20210611_v1.pkl
and better_model_20210611_v2.pkl
or best_baseline_model_with_more_features.pkl
This is a bit weird; I have two Windows machines, and both point to the public server.
The "incremental" config seems does not work well if I add handlers in the config. This snippets will fail with the incremental
flag.
` import logging
import logging.config
from clearml import Task

conf_logging = {
    "version": 1,
    "incremental": True,
    "formatters": {
        "simple": {"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"}
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "simple",
        }
    },
}

# fails: ValueError, incremental mode cannot create the new "console" handler
logging.config.dictConfig(conf_logging) `
I see, I will look into the documentation of it, thanks Jake.
SuccessfulKoala55 task.connect()
I just need a way to check whether the web/app host is configured.
If yes, go ahead; if not, go offline or throw an error.
AgitatedDove14 Thanks! This seems to be a more elegant solution
I need this as I want to write a wrapper for internal use.
I need to block the default behavior that links to the public server automatically when the user has no configuration file.
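A minimal sketch of the check I mean, assuming the default config location (~/clearml.conf) and the standard CLEARML_* environment variables; Task.set_offline is what I would use for the fallback:
` import os
from pathlib import Path
from clearml import Task

def init_task(project, name):
    # proceed only if a server was explicitly configured
    configured = (
        Path('~/clearml.conf').expanduser().exists()
        or os.getenv('CLEARML_CONFIG_FILE')
        or os.getenv('CLEARML_WEB_HOST')
    )
    if not configured:
        Task.set_offline(offline_mode=True)  # or raise an error instead
    return Task.init(project_name=project, task_name=name) `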
and the 8 charts are actually identical
I use YAML configs for data and model. Each of them is a nested YAML (could be more than 2 layers), so that would not be a flexible solution; I would need to manually flatten the dictionary.
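The manual flattening would look something like this ('/' as the key separator is arbitrary, and model.yaml is a hypothetical file):
` import yaml
from clearml import Task

def flatten(d, prefix=''):
    # turn a nested dict into a flat {'a/b/c': value} mapping
    flat = {}
    for key, value in d.items():
        path = f'{prefix}/{key}' if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

task = Task.init(project_name='examples', task_name='yaml config')  # illustrative
with open('model.yaml') as f:
    cfg = yaml.safe_load(f)
task.connect(flatten(cfg)) `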
And the plotting area is completely empty; only some chart titles show up on the left.
AgitatedDove14 I believe you mean plt.savefig? I used that function to save my charts, but they do not show up either.