The upload itself is in the background.
It should not take long to prepare the plot for sending. Are you experiencing a major delay?
I'm looking into the savefig issue; meanwhile you can disable the popup by adding the following at the top of your code:
import matplotlib
matplotlib.rcParams['backend'] = 'agg'
import matplotlib.pyplot
matplotlib.pyplot.switch_backend('agg')
Hi EnviousStarfish54
After the pop up do you see the plot on the web UI?
EnviousStarfish54 what's your matplotlib version?
EnviousStarfish54 thanks again for the reproducible code, it seems this is a Web UI bug, I'll keep you updated.
EnviousStarfish54
and the 8 charts are actually identical
Are you plotting the same plot 8 times?
Thanks EnviousStarfish54 !
Hi JitteryCoyote63 ,
When you shut down the task (manually with close(), or when the process finishes), it waits for the uploads...
Why do you need to specifically wait for all the artifacts upload? (currently you can stop the artifacts upload thread and wait for all the artifacts, but that seems like a bad hack)
I see. If you are creating the task externally (i.e. from the controller), you should probably call task.close(); it will return when everything is in order (including artifacts uploaded, and other async stuff).
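The wait-on-close behavior can be pictured with a plain-Python analogy (a rough sketch using a thread pool, not the trains internals):

```python
from concurrent.futures import ThreadPoolExecutor
import time

uploaded = []

def upload(name):
    # pretend this is a slow network transfer
    time.sleep(0.01)
    uploaded.append(name)

# artifact uploads run in the background, like the Task upload thread
pool = ThreadPoolExecutor(max_workers=4)
for name in ["model.pkl", "metrics.json"]:
    pool.submit(upload, name)

# the task.close() analogue: block until every pending upload completes
pool.shutdown(wait=True)
print(sorted(uploaded))  # → ['metrics.json', 'model.pkl']
```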
Will that work?
Hmm, not a bad idea 🙂
Could you please open a GitHub issue, so it will not get forgotten?
(btw: I'm not sure how trivial it is to implement, nonetheless obviously possible 😉)
Okay, wait, I'll see if I can come up with something.
Ohh, the controller task itself holds the artifacts ?
HandsomeCrow5 check the latest RC, I just ran the same code and it worked 🙂
Basically try with the latest RC 🙂
pip install trains==0.15.2rc0
Maybe different API version...
What's the trains-server version?
Hi FranticCormorant35
So Tasks have a parent field that links one to another.
Unfortunately there is no visual representation for it.
What we did with the hyper-parameter for example, was also to add a tag with the ID of the "parent" Task. This would make sense if you have multiple tasks all generated from the same "parent", like in hyper-parameter optimization.
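The tagging pattern above can be sketched with a tiny helper (illustrative only; the "parent: <id>" tag format and helper name are assumptions, not the trains API):

```python
# Mirror the pattern described above: mark every child Task generated
# from the same "parent" with a tag carrying the parent Task's ID, so
# related runs can be grouped/filtered later.
def tag_with_parent(child_tags, parent_id):
    """Return the child's tag list with a parent-ID tag appended."""
    return list(child_tags) + ["parent: {}".format(parent_id)]

tags = tag_with_parent(["hpo"], "abc123")
print(tags)  # → ['hpo', 'parent: abc123']
```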
What's your use case? Is it a single evaluation Task per training, or multiple, or cron-job alike?
Hi BroadMole98 ,
what's the current setup you have? And how do you launch jobs to Snakemake?
BTW: seems like conda doesn't support git+git:// packages
How about switching to pip? You can still run the entire thing from a conda env; it will just use pip & venv to install everything. Other than that it should work as expected.
This is why we recommend using pip and not conda ...
PunySquid88 after removing the "//github" package, is it working?
I'll make sure we have conda ignore git:// packages, and pass them to the second pip stage.
PunySquid88 RC1 is out with a fix:
pip install trains-agent==0.14.2rc1
Set force_analyze_entire_repo to True 🙂
(false is the default)
I think task.init flag would be great!
👍
Hi TrickyRaccoon92 , TB is automatically collected and converted into data stored on the system. The UI uses plotly to display the data itself (in your web browser).
You still have the original TB protobuf file, if you want to dive deeper and debug the data (it is not automatically uploaded, but some users do upload it as additional artifact on the experiment)
Make sense?
I guess I got confused since the color choices in
One of the most beloved features we added 🙂
TrickyRaccoon92 I'm not sure I follow: TB plots do show, and you want to add an additional plotly plot?
Hi LazyLeopard18 ,
So long story short, yes it does.
Longer version: to really accomplish full federated learning, with control over data at "compute points", you need some data abstraction layer. Without a data abstraction layer, federated learning is just averaging derivatives from different locations, which can easily be done with any distributed learning framework, such as Horovod, PyTorch distributed, or TF distributed.
If what you are after is, can I launch multiple experiments with the sam...
Okay, now let's try:
docker run -t --rm nvidia/cuda:10.1-base-ubuntu18.04 bash -c "echo 'Binary::apt::APT::Keep-Downloaded-Packages \"true\";' > /etc/apt/apt.conf.d/docker-clean && apt-get update && apt-get install -y git python3-pip && python3 -m pip install trains-agent && python3 -m trains-agent --help"