The log is produced after I go to the web UI and reproduce the "Failed to get Plot Charts" error.
Hi SuccessfulKoala55:
I have made sure that all my data are rounded to 4 decimal places, but my plotly data JSON is still very large. After checking the JSON, I found a lot of data with many digits; maybe those come from plotly?
Here is my code:
```python
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import numpy as np

def draw_pr(self, precisions, recalls, score, distance, dataset):
    score = np.round(score, 4)
    for i in range(4):
        pre = np.around(precisions[i], 4)
        recall...
```
SuccessfulKoala55 AppetizingMouse58 I deleted logs/apiserver.log and restarted the server, and here is the log. It shows it cannot connect to Elasticsearch.
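For a quick sanity check, the ES health endpoint can be queried directly (a sketch; this assumes port 9200 is reachable on the host, otherwise run the curl inside the ES container, whose name may differ per version):
```bash
# returns cluster status (green/yellow/red) if ES is reachable at all
curl -s http://localhost:9200/_cluster/health?pretty
```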
SuccessfulKoala55 Even with Logger.report_scatter2d() the result is still very large, and I found where the digits change: https://github.com/allegroai/trains/blob/master/trains/utilities/plotly_reporter.py#L122. The tolist() call changes the digits, but I haven't figured out why.
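A minimal sketch of what seems to be happening, assuming the data is float32: rounding a float32 array does not survive tolist(), because tolist() converts to Python doubles and exposes the float32 representation error; rounding after the conversion (or after casting up to float64) keeps the 4 digits:
```python
import numpy as np

# rounded while still float32 -- the 4 digits cannot be represented exactly
a = np.round(np.array([0.1234, 0.5678], dtype=np.float32), 4)
print(a.tolist())
# e.g. [0.12340000271797180, 0.56779998540878296] -- the extra digits are back

# rounding after the float32 -> Python float conversion keeps 4 digits
print([round(x, 4) for x in a.tolist()])           # [0.1234, 0.5678]
print(np.round(a.astype(np.float64), 4).tolist())  # [0.1234, 0.5678]
```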
I want to use ClearML; the old server's MongoDB is 3.6.5...
Hi AgitatedDove14:
2. I mean, if my server breaks down and I start a new server on another machine, can I migrate my backed-up experiments to the new server?
3. Beyond changing the info in the web UI, can I connect to the old experiment and report a new graph to it? (See the sketch after this list.)
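For point 3, a minimal sketch, assuming a clearml version that supports continue_last_task (project and task names below are placeholders):
```python
from clearml import Task

# re-attach to an existing experiment instead of creating a new one
task = Task.init(project_name='my_project', task_name='old_experiment',
                 continue_last_task=True)  # or pass the old task id as a string

# report an additional graph to the old experiment
task.get_logger().report_scatter2d(
    title='pr_curve', series='run2', iteration=0,
    scatter=[(0.1, 0.9), (0.2, 0.8)], xaxis='recall', yaxis='precision')
```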
SuccessfulKoala55 I got all the Trains server experiment records into the new ClearML server. Maybe it was due to vm.max_map_count, access permissions on /opt/clearml, or the failed ES upgrade...
OK, I will start a new experiment to see if the error is still there. Sorry, I don't really get how to show the trains-apiserver log.
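For viewing that log, something like this should work (a sketch, assuming the default container and service names from the docker-compose file):
```bash
# follow the last 100 lines of the apiserver container's log
docker logs --tail 100 -f trains-apiserver
# or via the compose file:
docker-compose -f /opt/trains/docker-compose.yml logs apiserver
```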
Can I back up my experiments?
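A minimal backup sketch, assuming the default /opt/clearml layout and that the server is stopped first:
```bash
# stop the server, archive the data and config folders, then restart
docker-compose -f /opt/clearml/docker-compose.yml down
sudo tar czvf ~/clearml_backup_data.tgz -C /opt/clearml/data .
sudo tar czvf ~/clearml_backup_config.tgz -C /opt/clearml/config .
docker-compose -f /opt/clearml/docker-compose.yml up -d
```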
AppetizingMouse58 Great, thanks so much! You have done great work.
Another question: how do I configure Elasticsearch to run as a cluster with 2 or more nodes, on the same or different machines? 😅
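A minimal sketch of a two-node setup using ES 7.x settings (cluster name, node names, and hosts below are examples; each node gets its own elasticsearch.yml):
```yaml
# node-1's elasticsearch.yml (node-2 is symmetric, with node.name: node-2)
cluster.name: clearml-es
node.name: node-1
network.host: 0.0.0.0
# where to find the other cluster members
discovery.seed_hosts: ["es-host-1", "es-host-2"]
# only used for the very first cluster bootstrap
cluster.initial_master_nodes: ["node-1", "node-2"]
```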
AppetizingMouse58 Thanks so much! Could you tell me why this happens? If it happens next time, is there any other solution?
Can you perhaps send the docker-compose from the current server?
Sorry, I don't understand what you mean...
Sorry, my poor English; I mean upgrading via the script.
Good news: it's fine now. I tried to upgrade ES (although it failed) and went through the necessary steps in this: https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_linux_mac
SuccessfulKoala55 And I tried to create ~/trains.conf with verify_certificate = False, but I still cannot init a task; it seems not to work for the version I'm using.
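For reference, a sketch of the file, assuming the flag needs to live under the api section as in recent SDK versions:
```
# ~/trains.conf
api {
    verify_certificate = false
}
```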
I tried the steps below, and it works fine (see the command sketch after this list):
1. Copy (scp) /opt/trains to the target machine
2. Upgrade MongoDB following this: https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_mongo44_migration/
3. mv /opt/trains /opt/clearml
4. Increase vm.max_map_count; grant access to the Dockers
5. Get the latest docker-compose.yml and pull/up
But the web UI dashboard is still weird; anyway, it works now.
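A command-line sketch of steps 3 to 5 (paths and values follow the ClearML deployment docs; adjust as needed):
```bash
sudo mv /opt/trains /opt/clearml
# Elasticsearch 7 needs a higher mmap count
sudo sysctl -w vm.max_map_count=262144
# grant the dockers access to the data folders
sudo chown -R 1000:1000 /opt/clearml
# fetch the latest compose file, then pull and start
sudo curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
docker-compose -f /opt/clearml/docker-compose.yml pull
docker-compose -f /opt/clearml/docker-compose.yml up -d
```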
SuccessfulKoala55 OK, thanks, I will give it a try.
And actually the problem here is that rounding doesn't take effect before tolist.
AgitatedDove14 OK, I see, thanks so much!
AgitatedDove14
Hi, after rounding down the numbers, the plot size decreased from 11 MB to 300 KB. This really works, thanks!
SuccessfulKoala55 Thanks for your reply. How can I fix this?
I found that Plotly Dash cannot be exported to an HTML file, so it may not be usable here.
SuccessfulKoala55 Thanks. If restarting the server won't stop running experiments, then what I said is not necessary!
My docker-compose is the latest, downloaded with: curl https://raw.githubusercontent.com/allegroai/clearml-server/master/docker/docker-compose.yml -o /opt/clearml/docker-compose.yml
OK, I will try again; this time I won't upgrade ES, but I will increase vm.max_map_count and open access to /opt/clearml. The reason I'm trying again is that the web UI is fine on the deployed machine, but it looks weird on my local machine:
SuccessfulKoala55 I manually keep the amount of data under 800K, because I found the budget would be 0 if len(series_sizes) == 1: https://github.com/allegroai/trains/blob/master/trains/utilities/plotly_reporter.py#L101
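A sketch of that manual control (a hypothetical downsampling helper, not part of the ClearML API; the point budget is an example value):
```python
import numpy as np

def downsample(series, max_points=100_000):
    """Thin out a series by striding so the reported data stays under budget."""
    series = np.asarray(series)
    if series.shape[0] <= max_points:
        return series
    step = int(np.ceil(series.shape[0] / max_points))
    return series[::step]
```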


