I see that some other people have had the same issue recently, but there doesn't seem to be a definitive answer/solution.
When this issue cropped up, it coincided with me doing some work on my vault/Shake and a failed PoE injector (data stopped passing through it). I've been chasing my tail on this for a couple of days.
I now have a new (and better) PoE injector and have reloaded Shake OS (I thought the issue I was having might have been due to a corrupt SD card), but the Shake is still reporting that it's "not connected."
I've looked through the log files, but I can't find anything definitive in them. The interesting thing is that I took the Shake to a different location to work on it, and after I got the PoE injector issue sorted, it connected right up at that location.
Back here at home however, “not connected.”
I've attached the log files in case they are helpful. You can ignore the IP warning in postboot.log.old: the location it was at temporarily uses IP ranges on the LAN that are technically routable but aren't actually routed (legacy thing, don't ask…).
I ran a packet capture for a bit, and I do see that the Shake is trying to communicate with raspberryshakedata.com (84.16.249.51) on port 55555, but not very often. I'm not sure what the internal process on the Shake is, but it seems like it tries to establish communication with the server, can't for some reason, and then stops trying.
17:57:03.819398 IP 10.10.0.42.58052 > 84.16.249.51.55555: tcp 0
17:57:04.003170 IP 84.16.249.51.55555 > 10.10.0.42.58052: tcp 0
17:57:04.003304 IP 10.10.0.42.58052 > 84.16.249.51.55555: tcp 0
17:57:04.003781 IP 10.10.0.42.58052 > 84.16.249.51.55555: tcp 0
17:57:04.015369 IP 10.10.0.42.49533 > 10.10.0.1.53: UDP, length 40
17:57:04.015374 IP 10.10.0.42.49533 > 10.10.0.1.53: UDP, length 40
17:57:04.015510 IP 10.10.0.1.53 > 10.10.0.42.49533: UDP, length 56
17:57:04.015512 IP 10.10.0.1.53 > 10.10.0.42.49533: UDP, length 98
17:57:04.015759 IP 10.10.0.42.33376 > 84.16.249.51.55556: tcp 0
17:57:04.174284 IP 84.16.249.51.55555 > 10.10.0.42.58052: tcp 0
17:57:04.174408 IP 10.10.0.42.58052 > 84.16.249.51.55555: tcp 0
The same thing happened to me a few days ago, also right after I did maintenance on my Shake: after reconnecting it and making sure there was no problem, it reported that it was not connected to the server. Six to twelve hours had to pass, and when I restarted it again it was able to reconnect. (By the way, I'm curious what your vault is like; I want to build one but I'm not sure how to implement it.) Greetings.
Yeah, I saw your post as well… It so happens that the PoE splitter/injector failed (stopped passing data) at the same time this issue cropped up, so it’s had me chasing my tail thinking the injector took out something else along with itself.
Looks like this may just be a wait-it-out issue. I am seeing this in the odf_SL_plugin.err log:
Unable to process configuration file '/opt/settings/user/UDP-data-streams.conf', format is invalid, cannot register UDP destinations.
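That "format is invalid" message suggests the plugin is choking while parsing the UDP destinations file, which may be a separate problem from the server connection. As a quick sanity check after a reimage, something like this sketch could flag suspicious lines. Note the one-destination-per-line `IP:PORT` format is my assumption from memory, not taken from the manual, so verify against the official docs before trusting it:

```python
import re

# ASSUMPTION: one "IP:PORT" destination per line, '#' comments allowed.
# Check the Raspberry Shake manual for the authoritative syntax.
DEST_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}:\d{1,5}$")

def bad_lines(conf_text: str) -> list:
    """Return (line_number, text) for lines that don't look like IP:PORT."""
    bad = []
    for i, line in enumerate(conf_text.splitlines(), start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if not DEST_RE.match(line):
            bad.append((i, line))
    return bad

# e.g. bad_lines(open("/opt/settings/user/UDP-data-streams.conf").read())
```

An empty or malformed line (stray whitespace, a missing port) seems like a plausible trigger for the error, so the file may be worth eyeballing either way.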
Searching here (and on Google) doesn't turn up anything definitive. For what it's worth, from the Shake I can connect to ports 55555 and 55556:
and it does seem like it is making periodic connection attempts:
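The same check can be scripted rather than done by hand. This is a minimal sketch of my own (a hypothetical helper, not anything shipped on Shake OS) that tests whether a TCP connection to the data server ports can be opened; the host and ports are the ones seen in the packet capture above:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds in time."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, unreachable, timeout, DNS failure
        return False

if __name__ == "__main__":
    # Ports observed in the capture; host is the server the Shake talks to.
    for port in (55555, 55556):
        print(port, can_connect("raspberryshakedata.com", port, timeout=3.0))
```

Run periodically (cron, or a simple loop), this would at least show whether the path to the server is open at the moments the Shake reports "not connected."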
I'm just about done with my vault; it's very loosely based on the USGS design. I'll have a video about it on my YouTube channel, and Raspberry Shake asked me to write a post about it for their blog. I also have a post here on the forums showing the "DIY" enclosure I made up.
I see. I think you are closer to solving the error than I am. I see this come up every time we power off the Shake; it also happened to a friend in my country with his Shake, and he chose to wait and see if the server connection would be restored.
I'll still keep an eye out for your video, because I'm interested in seeing how others have installed their Shakes, to get more ideas and improve my station's site.
apologies for the inconvenience. the problem you are experiencing is the result of a glitch on the server, not anything to do with your Shake. it has been resolved again; please reboot and your connection should come back.
we will be checking our side more deeply to identify how / why this temporary hang on connection requests is occurring, to make sure it gets permanently resolved.
and happy to report that all causes of the intermittent server hang have been identified and are now being gracefully handled. of course, it is difficult to be aware of unknown unknowns, so we remain vigilant in monitoring the services to keep them up and available at all times.