SuccessfulKoala55 , here is the output of "docker inspect trains-webserver" (attached).
Here is a snapshot of the blank screen:
Here is the developer tool Network screen capture after refreshing the page and trying to login.
I get an empty list for the 'XHR' filter.
Many errors ☹️. Any idea what they mean?
In docker-compose.yml I replaced all occurrences of /opt/clearml/data/elastic_7 with /home/orpat/clearml/data/elastic_7.
Where do I see the agent printouts?
I am using an old version. It's a trains server of version 0.16.3.
No other error messages but the dashboard screen is blank.
However, there is a breakthrough: I can run the dashboard from Safari (Mac browser). So the problem is only in Chrome.
The upgrade is from /home/orpat/trains/data/elastic to /home/orpat/trains/data/elastic_7. Do you see different paths in the log? Where?
Yes, I am using the trains server. We never took the time to update it to clearml.
The version (according to pip freeze) is 0.16.3.
AgitatedDove14 SuccessfulKoala55 , after I ran elastic_update.py (stage 5 as described above), I saw there was a new folder named data/mongo_4. Doesn't it mean mongodb was already migrated?
As you suggested, I tried with a git repository. Got a completely different error. Attached is the log file. Any idea what's wrong?
I clicked Fetch/XHR and got the following (after another reboot)
I am running my own server. Those are not example experiments.
Woohoo! 🎉
The instructions at https://superuser.com/questions/278948/clear-cache-for-specific-domain-name-in-chrome/444881#444881 were not accurate, but they brought me close enough.
Here is the exact sequence of operations:
F12 --> Application tab --> Storage --> Clear site data --> refresh login screen
Thanks everyone for your help!
AgitatedDove14 , thank you so much for your help.
I had a long video session today with the Israeli clearml engineers. There were plenty of things I had to do, and the two major ones were to define the environment variable CLEARML_AGENT_SKIP_PIP_VENV_INSTALL so it points to my conda environment's python, and to call 'import clearml' at the top of my file (it was previously called from inside a method).
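For anyone hitting the same cloning issue, the two fixes boiled down to something like this. The conda path is a placeholder for your own environment, and the variable is normally exported in the shell that runs the agent — it is shown in Python here just to illustrate the value:

```python
import os

# Point the agent at an existing interpreter instead of letting it build a
# fresh virtualenv. The conda path below is a placeholder for your own env.
os.environ["CLEARML_AGENT_SKIP_PIP_VENV_INSTALL"] = (
    "/home/orpat/miniconda3/envs/myenv/bin/python"
)

# The other fix: `import clearml` must sit at module top level in the
# training script, not inside a method, so it can hook in before any
# work starts.
```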
So now I can clone 🎉
No. I put a break point in my python script, and examined os.environ. The only environment variable with 'CLEARML' in its name is CLEARML_PROC_MASTER_ID, whose value is '16188:' (maybe it means something to you?)
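This is roughly how I checked at the breakpoint — a small sketch, nothing ClearML-specific:

```python
import os

def clearml_env() -> dict:
    """Return every environment variable whose name mentions CLEARML."""
    return {k: v for k, v in os.environ.items() if "CLEARML" in k}

for name, value in sorted(clearml_env().items()):
    print(f"{name}={value}")
```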
ok, so ~/clearml.conf points to ~/.clearml/cache, and such a file does not exist.
CostlyOstrich36 , I cleared the local cache and everything turned black (I guess it's not related to the cache). So I can't even see the list of experiments now.
I am not sure it matters for the following output, but anyway please note that the clearml dockers are down right now.
sigalr@momo : ~ $ curl -XGET http://localhost:9200/_cat/indices
yellow open queue_metrics_d1bd92a3b039400cbafc60a7a5b1e52b_2022-06 2F6APbQWSvajTZQ5JxXY1Q 1 1 59 0 26.2kb 26.2kb
yellow open events-plot-d1bd92a3b039400cbafc60a7a5b1e52b bZMKKCaKRXCys6VD_9oDDw 1 1 8556 0 4.1mb 4.1mb
yellow open worker_stats_d1bd92a3b039400cbafc60a7a5b1e52b_2022-06 c85DhB...
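In case it helps others read this output: each line of `_cat/indices` is `health status index uuid pri rep docs.count docs.deleted store.size pri.store.size`. A small sketch (hypothetical helper, assuming that column order) to pull out the health of each index:

```python
def index_health(cat_indices_output: str) -> dict:
    """Map index name -> health from Elasticsearch `_cat/indices` output.

    Assumed column order:
    health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
    """
    health = {}
    for line in cat_indices_output.strip().splitlines():
        fields = line.split()
        if len(fields) >= 3:
            health[fields[2]] = fields[0]
    return health
```

On a single-node setup, `yellow` usually just means the replica shards have no second node to live on, not that data is missing.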
I don't see a cache related to clearml:
(base) sigalr@rack-bermano-g03:~$ find . -name *cache* -not -name __pycache*
./.pycharm_helpers/python_stubs/cache
./.cache
./.conda/pkgs/cache
The 1st and last are obviously unrelated, and the middle one contains files related to python:
(base) sigalr@rack-bermano-g03:~$ ls .cache/
matplotlib  motd.legal-displayed  pip
I don't get the error any longer and the experiments get deleted as expected. So no complaints on my side...
TimelyPenguin76 , it is possible I tried to compare more than 10 experiments. The issue on the server is that it got very slow, and no longer showed the 'console' and 'scalars' results, even for a single experiment.
CostlyOstrich36 , I don't have the ClearML RAM estimate. My machine is running many processes in addition to ClearML.
The clearml dockers are down right now because I started a new ES migration (elastic_upgrade.py). I started it before you contacted me and I don't want to break it now. So I cannot look at the console right now.
It will probably finish 30 hours from now. If the same problems repeat, we will continue this chat then.