It seems that if I don't use plt.show(), the plot won't show up in Allegro. Is this a must?
Btw, I was able to isolate the code that causes the problem. It may be easier for you to debug.
```python
import matplotlib
matplotlib.use('Agg')  # non-GUI backend: plt.show() has no window to open
import matplotlib.pyplot as plt
import seaborn as sns
from trains import Task

task = Task.init(project_name='examples', task_name='Matplotlib example')

x = [1, 2, 3]
y = [1, 2, 3]
f, ax = plt.subplots(figsize=(50, 0.7 * len(x)))
sns.barplot(y, x, orient="h", ax=ax)
plt.show()
```
Great discussion, I agree with you both. As for me, we are not using clearml-data, so I am a bit curious how a "published experiment" locks everything, including the input (I assume someone could still just go inside the S3 bucket and delete the file without ClearML noticing).
From my experience, absolute reproducibility is code + data + parameters + execution sequence. For example, a random seed or some parallelism can cause different results and can be tricky to deal with sometimes. We did bu...
As a workaround, I wrote a function to recursively cast my config dictionary into strings where needed.
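A minimal sketch of that helper (my own code, not a trains API; it assumes datetime is the main offender):

```python
from datetime import datetime

def stringify_config(obj):
    # Recursively walk the config and cast values that trains
    # cannot serialize natively (e.g. datetime) into strings.
    if isinstance(obj, dict):
        return {k: stringify_config(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [stringify_config(v) for v in obj]
    if isinstance(obj, datetime):
        return obj.isoformat()
    return obj
```

and then I call `task.connect(stringify_config(my_config))` instead of connecting the raw dict.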
I have been using this line to prevent experiments from accidentally being sent to the public server (I have my own self-hosted server): Task.set_credentials("PLACEHOLDER", "PLACEHOLDER", "PLACEHOLDER")
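A minimal sketch of the pattern (the placeholder strings are literal dummy values, not real keys):

```python
from trains import Task

# Set dummy credentials first; the expectation is that a real
# trains.conf replaces them, and otherwise Task.init fails instead
# of silently reporting to the public server.
Task.set_credentials("PLACEHOLDER", "PLACEHOLDER", "PLACEHOLDER")
task = Task.init(project_name='examples', task_name='guarded run')
```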
However, when I upgraded from 0.17.5 to >1.0.0, weird stuff happened.
Since upgrading from v0.17.5 to >1.0.0, it has an issue replacing the credentials.
Expected Behavior:
The conf should replace the "PLACEHOLDER" values if the conf file exists; otherwise it should fail the experiment.
What happened:
The ...
My workaround is casting it into a string beforehand, but that breaks if I use trains-agent too, since the task will then receive a string parameter instead of a datetime.
This will make the plotting fail.
I tried passing the dictionary, but the output is not ideal. I would want some nested dict, like the "execution" > "Source" layout.
As the number of parameters can be large, having some hierarchy in the UI would make comparison much easier.
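Something like this is what I have in mind (a sketch assuming a version where Task.connect accepts a name argument for the section):

```python
from trains import Task

task = Task.init(project_name='examples', task_name='nested params')

config = {
    'model': {'lr': 0.01, 'depth': 4},
    'data': {'path': 'data.csv', 'batch_size': 32},
}

# Connecting each sub-dict under its own name would give one UI
# section per group, similar to the "execution" > "Source" layout.
task.connect(config['model'], name='model')
task.connect(config['data'], name='data')
```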
It's for additional filtering only, right? My use case is to prevent users from accidentally querying the entire database.
I want to achieve something similar to what we would do in SQL:
`select * from user_query limit 100;`
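For instance (a sketch using the backend APIClient; I'm assuming the Python client exposes the same page/page_size arguments as the REST tasks.get_all endpoint):

```python
from trains.backend_api.session.client import APIClient

client = APIClient()
# Roughly "select * from tasks limit 100": fetch only the first
# page of 100 results instead of the entire database.
tasks = client.tasks.get_all(page=0, page_size=100)
```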
I want support for click as well; or is there any ad hoc solution?
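The ad hoc glue I have in mind would be something like this (my own code, not built-in click support):

```python
import click
from trains import Task

@click.command()
@click.option('--lr', default=0.01, type=float)
@click.option('--epochs', default=10, type=int)
def main(lr, epochs):
    task = Task.init(project_name='examples', task_name='click example')
    # Manually register the click options as hyperparameters so they
    # show up in the UI and can be overridden by trains-agent.
    params = task.connect({'lr': lr, 'epochs': epochs})
    lr, epochs = params['lr'], params['epochs']

if __name__ == '__main__':
    main()
```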
@<1523701087100473344:profile|SuccessfulKoala55> Thanks! I swear I saw it before, but somehow I just overlooked it.
AgitatedDove14 Is the data versioning completely different from the Trains artifact/storage solution, or is it some enhanced feature?
Ok, will prepare a PR and a script to reproduce the error.
GrumpyPenguin23 Yes, those features seem to relate to other infrastructure, not Trains (ML experiment management).
For the most common workflow, I may have some CSV files, which may be updated from time to time.
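In that case each run could simply re-upload the current CSV as an artifact, e.g. (a sketch; the file name is made up):

```python
from trains import Task

task = Task.init(project_name='examples', task_name='csv snapshot')
# Upload the current version of the CSV; each run's artifact then
# serves as a snapshot of the data at that point in time.
task.upload_artifact('dataset', artifact_object='data.csv')
```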
Let me know how I can provide a better debug message.
This log does not always show up, even though it is logged when I run it on Machine B. In contrast, when I run it on Machine A, this message did show up, but nothing was logged.
```
2020-09-10 09:15:06,914 - trains.Task - INFO - Waiting for repository detection and full package requirement analysis
======> WARNING! UNCOMMITTED CHANGES IN REPOSITORY origin <======
2020-09-10 09:15:10,378 - trains.Task - INFO - Finished repository detection and package ...
```
Ok, then maybe it can still be used as a data versioning solution, except that I have to manually track the task ids (of the tasks that generated the artifacts) for versioning myself.
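i.e. on the consuming side, something like this (a sketch; the task id is a placeholder I would have to record myself):

```python
from trains import Task

# The id of the run that produced the artifact version I want;
# I have to keep track of this mapping myself.
dataset_task = Task.get_task(task_id='<producing-task-id>')
csv_path = dataset_task.artifacts['dataset'].get_local_copy()
```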
I am not sure what the difference is between logging with "configuration" and with "hyperparameters". For now I am only using it for logging; I guess hyperparameters have a special meaning if I want to use Trains for some other features.
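For reference, the two calls I am comparing (as far as I understand, connect() feeds the hyperparameters section, while connect_configuration() feeds the configuration object):

```python
from trains import Task

task = Task.init(project_name='examples', task_name='config vs hyperparams')

# Appears under hyperparameters; an agent can override these values.
hyperparams = task.connect({'lr': 0.01, 'epochs': 10})

# Appears under the configuration section as an opaque config blob.
model_config = task.connect_configuration({'layers': [64, 64], 'activation': 'relu'})
```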
Yup, I am mostly familiar with the experiment tracking part, so I don't know if I can have a good understanding before I have reasonable knowledge of the entire ClearML system.
VivaciousPenguin66 How are you using the dataset tool? I'd love to hear more about that.
And the 8 charts are actually identical.
It will pop up a window like this, and the program only continues when I close this window.
AgitatedDove14 No, unless I close the window manually.
Disabling the matplotlib GUI does work.
It's good that you version your dataset by name; I have seen many trained models where people just replace the dataset directly.
Cool! Will have a look at the fix when it is done. Thanks a lot AgitatedDove14