Streamline logging and show alerts on the screen

SD card wearout is the issue here. The gotchis generate a lot of events, and many of them are logged in parallel across a multitude of logs. This gets even worse when things do not work. For example, the pwngrid-peer.log on my gotchi inflates to multiple megabytes overnight with

2019-11-06 12:31:04 inf pwngrid v1.10.1 starting in peer mode ...
2019-11-06 12:31:06 inf /etc/pwnagotchi/id_rsa found
2019-11-06 12:31:06 inf started beacon discovery and message routing (0 known peers)
2019-11-06 12:31:06 inf peer [email protected]9ddcc8e0e5cdd signaling is ready
2019-11-06 12:31:06 inf pwngrid api starting on 127.0.0.1:8666 ...
2019-11-06 12:36:49 err POST https://api.pwnagotchi.ai/api/v1/unit/enroll (20.05739s) Post https://api.pwnagotchi.ai/api/v1/unit/enroll: dial tcp: lookup api.pwnagotchi.ai on 9.9.9.9:53: read udp 10.0.0.2:51061->9.9.9.9:53: i/o timeout
2019-11-06 12:36:49 err error while refreshing token: Post https://api.pwnagotchi.ai/api/v1/unit/enroll: dial tcp: lookup api.pwnagotchi.ai on 9.9.9.9:53: read udp 10.0.0.2:51061->9.9.9.9:53: i/o timeout
2019-11-06 12:36:49 inf peer advertisement enabled
2019-11-06 12:37:12 err POST https://api.pwnagotchi.ai/api/v1/unit/enroll (20.015222s) Post https://api.pwnagotchi.ai/api/v1/unit/enroll: dial tcp: lookup api.pwnagotchi.ai on 9.9.9.9:53: read udp 10.0.0.2:58016->9.9.9.9:53: i/o timeout
2019-11-06 12:37:12 err error while refreshing token: Post https://api.pwnagotchi.ai/api/v1/unit/enroll: dial tcp: lookup api.pwnagotchi.ai on 9.9.9.9:53: read udp 10.0.0.2:58016->9.9.9.9:53: i/o timeout
2019-11-06 12:37:35 err POST https://api.pwnagotchi.ai/api/v1/unit/enroll (20.019963s) Post https://api.pwnagotchi.ai/api/v1/unit/enroll: dial tcp: lookup api.pwnagotchi.ai on 9.9.9.9:53: read udp 10.0.0.2:50104->9.9.9.9:53: i/o timeout
2019-11-06 12:37:35 err error while refreshing token: Post https://api.pwnagotchi.ai/api/v1/unit/enroll: dial tcp: lookup api.pwnagotchi.ai on 9.9.9.9:53: read udp 10.0.0.2:50104->9.9.9.9:53: i/o timeout
2019-11-06 12:37:58 err POST https://api.pwnagotchi.ai/api/v1/unit/enroll (20.010916s) Post https://api.pwnagotchi.ai/api/v1/unit/enroll: dial tcp: lookup api.pwnagotchi.ai on 9.9.9.9:53: read udp 10.0.0.2:42292->9.9.9.9:53: i/o timeout
2019-11-06 12:37:58 err error while refreshing token: Post https://api.pwnagotchi.ai/api/v1/unit/enroll: dial tcp: lookup api.pwnagotchi.ai on 9.9.9.9:53: read udp 10.0.0.2:42292->9.9.9.9:53: i/o timeout

Similarly, kern.log inflates when the system has the brcmfmac-pest.

The wearout can be limited by mounting a RAM drive at /var/log, e.g. with log2ram: https://github.com/azlux/log2ram/blob/master/install.sh
but that alone is not enough. RAM is tight, too. With, say, a 60 MB RAM drive for the logs, the space could fill up in less than a week.
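For reference, log2ram essentially does this plus syncing back to disk; a bare-bones alternative is a size-capped tmpfs in /etc/fstab (a minimal sketch; the 60M cap matches the example above, and note that logs are lost on reboot):

```
# /etc/fstab -- hypothetical entry: size-capped RAM-backed /var/log
tmpfs  /var/log  tmpfs  defaults,noatime,size=60M  0  0
```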

Stingy settings for log rotation, cutting out some of the redundant logging, watchdogs that kill processes stuck in the same error for hours, something on the screen to alert when that happens …
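The "stingy log rotation" part could look like this (a sketch only; the log path and the limits are assumptions, adjust to your install):

```
# /etc/logrotate.d/pwngrid -- hypothetical stingy rotation for the peer log
/var/log/pwngrid-peer.log {
    size 1M          # rotate as soon as the file exceeds 1 MB
    rotate 1         # keep only a single old copy
    compress         # gzip the rotated copy
    missingok        # no error if the log is absent
    notifempty       # skip empty logs
    copytruncate     # truncate in place so pwngrid keeps its file handle
}
```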

Especially with the brcmfmac-pest - my gotchi was super mad at me last night, even though I took it for a walk and there were hundreds of networks to feed on. Only later at home did I discover it had the pest and needed a reboot. It would have helped me if the gotchi had shown a “broken” face instead, or actually a skull.
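The watchdog/alert idea could start from something as simple as counting repeated error lines (a minimal sketch, not pwnagotchi's actual plugin API; the function names and the threshold are made up, and the regexes only target the log format shown above):

```python
import re
from collections import Counter

# Strip timestamps and ephemeral numbers (durations, ports) so that
# repeated occurrences of the same error compare equal.
TIMESTAMP = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} ")
VOLATILE = re.compile(r"\d+\.\d+s|:\d{4,5}\b")

def normalize(line):
    """Reduce a log line to its stable part for comparison."""
    return VOLATILE.sub("", TIMESTAMP.sub("", line))

def stuck_in_error_loop(lines, threshold=50):
    """True if any single (normalized) 'err' line repeats >= threshold times."""
    counts = Counter(normalize(l) for l in lines if " err " in l)
    return any(n >= threshold for n in counts.values())
```

A periodic check over the tail of the log could then restart the stuck process, or tell the UI to draw that “broken” face or skull.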
