
As there are quite a few hparams, which also change depending on the experiment, I was hoping there was some automatic way of doing it?
For example, that it would try to find all dict entries that match `"yet_another_property_name": "some value"` and ignore those that don't.
The value has to be converted to a string, btw?
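Something like this is what I'm after, assuming the hparams live in a plain dict (a rough sketch; the dict name and keys are placeholders):
```python
from trains import Task

task = Task.init(project_name="my_project", task_name="my_experiment")

# Placeholder dict; in practice this would be the experiment's hparams.
hparams = {
    "yet_another_property_name": "some value",
    "learning_rate": 1e-3,
    "batch_size": 32,
}

# connect() registers every entry of the dict as a hyperparameter;
# values show up as text in the web UI, and edits made there flow back
# into the dict when the task is executed remotely.
hparams = task.connect(hparams)
```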
Exactly, so the remapping of port 8080 should not be the reason for this issue.
The only change I made in the .yml file was:
```
ports:
  - "8080:80"
```
to
```
ports:
  - "8082:80"
```
I already had something running on 8080, but since it's the trains-apiserver and not the webserver, this shouldn't be an issue.
In `/opt/trains/`:
```
$ ls -al
total 120
drwxrwsrwx  7 root miniconda 4096 Nov  2 18:15 .
drwxr-xr-x 15 root root      4096 Oct  5 15:12 ..
drwxrwxrwx 38 root miniconda 4096 Nov  2 18:15 agent
drwxrwxrwx  2 root miniconda 4096 Jun 19 14:43 config
drwxrwxrwx  8 root miniconda 4096 Nov  2 18:11 data
-rwxrwxrwx  1 root miniconda 4383 Jun 19 14:46 docker-compose_0.15.0.yml
-rwxrwxrwx  1 root miniconda 4375 Jun 26 15:06 docker-compose_0.15.1.yml
-rwxrwxrwx  1 root miniconda 4324 Nov  2 18:...
```
Is it possible it's not just about the root user, but also the root group?
AppetizingMouse58 If I run `sudo chmod 771 -R /opt/trains/` (taking all permissions away from "other" except execution), the file permission error comes back, even though everything is under the root user.
Ok, it was indeed something with permissions. When I chowned everything to root (1000) and chmodded it to 777, it worked. 777 is of course not desirable, so I'm going to narrow it down now.
Thank you for the reply! The migration indeed created this elastic_7 folder.
Hmm, after connecting with the VPN again and using Ctrl + F5, there is no complaint anymore. A colleague uploaded a Seaborn plot, though, and it's still not showing up, which I thought was fixed in the new version?
The plots page of that experiment is pure white, not the usual "No chart data" shown when no plot was uploaded.
Same problem with 775
First I tried without `--build`, but same problem. `--build` just means that it will re-download all layers instead of using the ones already cached.
TimelyPenguin76 The colleague is actually a her, but she replied that how it's looking now is correct? We're actually both already past our work time (weekend :D), so we'll take a look at it after the weekend. If there is still something wrong, I'll get back to you. Thanks for offering help though :)
Would have been nice if they had reached out to you guys/gals before removing Trains 😅
Aah, I couldn't find it under PLOTS, but indeed it's there under DEBUG SAMPLES.
Ah my bad, it seems I had to run `docker-compose -f /opt/trains/docker-compose.yml pull` once. I quickly tried Trains about half a year ago, so maybe it was still using the old images? However, I thought `--build` would take care of that.
Now it's working 🙂
With PyTorch Lightning, I only use this line at the beginning of a Jupyter Notebook: `Task.init(project_name=project_name, task_name=task_name)`.
The code to log the confusion matrix is in some .py file, though, that does not have any Trains code.
Is it possible to log it in a TB-compatible way that will be automatically picked up by Trains? I prefer to keep the .py Trains-free.
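Something like this is what I mean: plain TensorBoard in the .py with no trains import, relying on the notebook's `Task.init` to auto-capture it (a sketch; the helper name and arguments are placeholders):
```python
import matplotlib.pyplot as plt
from torch.utils.tensorboard import SummaryWriter

def log_confusion_matrix(writer: SummaryWriter, cm, step: int):
    """Render the matrix with matplotlib and log it as a TB image summary."""
    fig, ax = plt.subplots()
    ax.imshow(cm)  # placeholder rendering; style however you like
    writer.add_figure("confusion_matrix", fig, global_step=step)

# Usage in the training loop, assuming `cm` is an (n, n) array:
# writer = SummaryWriter()
# log_confusion_matrix(writer, cm, epoch)
```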
That's useful to know! But actually in this case I just want to test if the code works (run 2 epochs and see). I don't want this to be logged, so I don't call `Task.init` in those cases.
I don't want the code to crash on Trains in those cases.
I see that `Task.current_task()` returns None if no task is running, so I can use that with an if statement 🙂
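I.e. something like this (a sketch; `acc` and `epoch` are placeholder values):
```python
from trains import Task

acc, epoch = 0.93, 2  # placeholders for real training metrics

task = Task.current_task()  # None when Task.init() was never called
if task is not None:
    task.get_logger().report_scalar(
        title="val", series="accuracy", value=acc, iteration=epoch
    )
```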
AgitatedDove14 TB has the confusion matrix like this:
After a while I get the message:
New version available
Click the reload button below to reload the web page
I click the "RELOAD" button and the "newer version" message disappears. However, some plots still don't show up (supposedly fixed in 0.15.1). If I refresh the TRAINS web interface, the "newer version" message appears again.
It seems to be related to `trains-apiserver`, based on the log inside the Docker compose:
```
trains-apiserver | [2020-11-10 04:40:14,133] [8] [ERROR] [trains.service_repo] Returned 500 for queues.get_next_task in 20ms, msg=General data error: err=('1 document(s) failed to index.', [{'index': {'_index': 'queue_metrics_d1bd92a3b039400cbafc60a7a5b1e52b_2020-11', '_type': '_doc', '_id': 'rkh0sHUBwyiZSyeZUAov', 'status': 403, 'error': {'type': 'cluster_block_exception', 'reason': 'index [queu...
```
AgitatedDove14 Done!
Ok, it's that the group also has to be root. I ran the following: `sudo chmod 775 -R /opt/trains/` and `sudo chown -R root:root /opt/trains`, and it works.
It seems that it has to be 775 with both user and group as root. E.g. 771 does not work, because then the `docker` command has to be used with `sudo` (if I want to use my default sudo-user account).
Ah I see, it's based on a naming scheme, thanks. Sorry I forgot to link the tutorial I was looking at: https://allegro.ai/docs/examples/frameworks/pytorch/pytorch_tensorboard/
AgitatedDove14 There is only an `events.out.tfevents.1604567610.system.30991.0` file.
If I open it with a text editor, most of it is unreadable, but I do find the letters "PNG" close to the name of the confusion matrix. So it looks like the image is encoded inside the TB log file?
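A quick way to confirm that, assuming TensorFlow is installed (a sketch, not something from this thread):
```python
import tensorflow as tf

path = "events.out.tfevents.1604567610.system.30991.0"
for event in tf.compat.v1.train.summary_iterator(path):
    for value in event.summary.value:
        if value.HasField("image"):
            # PNG data starts with the magic bytes b'\x89PNG\r\n\x1a\n'
            print(value.tag, value.image.encoded_image_string[:8])
```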
Thank you 😉
So if I want it under plots, I would need to call e.g. `report_confusion_matrix`, right?
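For reference, a minimal sketch of that call (project/task names and matrix values are made up):
```python
import numpy as np
from trains import Task

task = Task.init(project_name="my_project", task_name="cm_under_plots")
cm = np.array([[5, 1], [2, 7]])  # placeholder 2x2 confusion matrix

# report_confusion_matrix renders the matrix as a plot, so it ends up
# under PLOTS rather than DEBUG SAMPLES.
task.get_logger().report_confusion_matrix(
    title="confusion matrix", series="validation", matrix=cm, iteration=0
)
```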
It's my colleague's experiment (with scikit-learn), so I'm not sure about the details.
Even when I do a "clean install" (renamed the `/opt/trains` folder) and follow the instructions to set up TRAINS, the error appears.
`trains (0.15.1-367)` appears to be the version, same as you. Thank you, it appears Trains is up to date.
Apparently there should be 6 of them:
Hi AgitatedDove14
Not using trains-agent yet. Just using PyTorch Lightning in a Jupyter Notebook with Trains as the logger.
So I'm talking about runtime and GPU usage in experiments.
What's the `abc` issue? Something the Lightning team is responsible for?