I have a Raspberry Pi 5 running the latest version of Bookworm and using Labwc as the GUI.
I have Grafana displaying the Internet speed, installed as described here:
https://dev.to/benji377/grafana-speed-monitor-setting-up-an-internet-monitor-with-raspberry-pi-50jk
I boot straight into Grafana in kiosk mode by editing the /etc/xdg/labwc/autostart file and adding this line at the end:
chromium = /usr/bin/chromium-browser --start-fullscreen \
    --start-maximized --kiosk --hide-scrollbars --noerrdialogs \
    --disable-default-apps --disable-single-click-autofill \
    --disable-translate-new-ux --disable-translate --disable-cache \
    --disk-cache-dir=/dev/null --disk-cache-size=1 \
    --reduce-security-for-testing --app=http://127.0.0.1:3030&kiosk
This works perfectly. However, when I first boot, Grafana takes a little while to start up, and for the first minute or so I get a "Site Not Found" page, which eventually clears and Grafana is shown.
No biggy, but it looks a little messy!
Is there any way to put a pause in the autostart sequence to allow
Grafana to load?
Brilliant.

#!/bin/sh
sleep 120
/usr/bin/chromium-browser --start-fullscreen \
--start-maximized --kiosk --hide-scrollbars --noerrdialogs \
--disable-default-apps --disable-single-click-autofill \
--disable-translate-new-ux --disable-translate --disable-cache \
--disk-cache-dir=/dev/null --disk-cache-size=1 \
--reduce-security-for-testing --app=http://127.0.0.1:3030&kiosk
> chromium = /usr/bin/chromium-browser --start-fullscreen \
>     --start-maximized --kiosk --hide-scrollbars --noerrdialogs \
>     --disable-default-apps --disable-single-click-autofill \
>     --disable-translate-new-ux --disable-translate --disable-cache \
>     --disk-cache-dir=/dev/null --disk-cache-size=1 \
>     --reduce-security-for-testing --app=http://127.0.0.1:3030&kiosk
> This works perfectly - however when I first boot Grafana takes a
> little while to start up and for the first minute or so I get a
> "Site Not Found" page which eventually clears and Grafana is shown.
Instead of waiting for some fixed interval, you could add a prior command using wget or something to repeatedly try accessing that URL, say at 5 second intervals or whatever, until it becomes accessible, before allowing the startup to proceed.
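That suggestion can be sketched as a small POSIX-sh helper. (A sketch only: the try count, interval, and the test URL in the comment are illustrative, not from the thread.)

```shell
#!/bin/sh
# Poll a URL with wget until it answers, instead of sleeping a fixed time.
# wait_for_url URL [MAX_TRIES] [INTERVAL]: returns 0 once the URL
# responds, 1 if it never does within MAX_TRIES attempts.
wait_for_url() {
    url=$1
    max_tries=${2:-60}
    interval=${3:-5}
    n=0
    # -T 2: time out each attempt after 2 seconds; -t 1: one try per call
    while ! wget -q -O /dev/null -T 2 -t 1 "$url"; do
        n=$((n + 1))
        [ "$n" -ge "$max_tries" ] && return 1
        sleep "$interval"
    done
    return 0
}

# Example use in a kiosk startup script (URL is the OP's):
# wait_for_url http://127.0.0.1:3030/ && /usr/bin/chromium-browser ...
```

Dropping this in before the chromium line means the browser only launches once the dashboard actually answers, and the cap keeps it from looping forever if Grafana never comes up.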
In article <101itmk$2mr8t$1@dont-email.me>,
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
> Instead of waiting for some fixed interval, you could add a prior command
> using wget or something to repeatedly try accessing that URL, say at 5
> second intervals or whatever, until it becomes accessible, before allowing
> the startup to proceed.
For inspiration: I made a script to 'etherwake' a device and wait for it
to get ready, using wget in combination with the 'timeout' command. I run
'timeout 1 wget <url>', which returns an error if wget does not respond
in 1 second, or if wget returns an error itself. I use this in a while
loop that repeats until it succeeds:
etherwake -D -i ${IFACE} ${MACADDR}
while ! timeout 1 curl --noproxy \* "${URL}" &> /dev/null
do
echo -n .
sleep 1
done
The OP could replace the 'sleep 120' in the other script with this loop.
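Putting that together for the OP's case (a sketch using the thread's own pieces; note that '&>' is a bashism, so under the script's '#!/bin/sh' it would background the command rather than redirect - it is spelled out as '> /dev/null 2>&1' here, and the chromium flags are abbreviated):

```shell
#!/bin/sh
# The OP's kiosk script with the fixed 'sleep 120' replaced by the
# polling loop above. The URL and port are the OP's.
URL="http://127.0.0.1:3030"
while ! timeout 1 curl --noproxy \* "$URL" > /dev/null 2>&1
do
    echo -n .
    sleep 1
done
# Flags abbreviated; use the full set from the original script.
exec /usr/bin/chromium-browser --start-fullscreen --kiosk --noerrdialogs \
    --app="$URL"
```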
On 03/06/2025 at 14:00, Oscar wrote:
> The OP could replace the 'sleep 120' in the other script with this loop.
Why waste a curl call when ping 8.8.8.8 would work with less overhead?
> Why waste a curl call when ping 8.8.8.8 would work with less overhead?
I was going to suggest something similar to that too. It is worth
checking what the webserver is giving you - some services give a generic
'please wait while I start up' web page, which may not be what you want.
Maybe you need to ask for a specific page and count a redirect (to the
'please wait' page) as a failure.
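One way to make "redirect counts as failure" concrete (a sketch; the /api/health path is Grafana's health-check endpoint, assumed to apply here, and the port is the OP's): without -L, curl does not follow redirects and reports the redirect's own 3xx status code, so anything other than a plain 200 is treated as not ready.

```shell
#!/bin/sh
# Require an actual 200 from a specific page. A 302 to a 'please wait'
# splash page shows up as "302" here and counts as a failure.
ready() {
    code=$(curl -s -o /dev/null --max-time 2 -w '%{http_code}' "$1")
    [ "$code" = "200" ]
}

# Poll until ready, with a cap so the loop cannot spin forever.
# wait_ready URL [MAX_TRIES] [INTERVAL]
wait_ready() {
    n=0
    until ready "$1"; do
        n=$((n + 1))
        [ "$n" -ge "${2:-60}" ] && return 1
        sleep "${3:-5}"
    done
    return 0
}

# Example: wait_ready http://127.0.0.1:3030/api/health && chromium ...
```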
Why waste a curl call when ping 8.8.8.8 would work with less overhead?The purpose is to test a specific service *on this machine* has started up, >not generic internet connectivity.
+1. Assuming that pings to the Wide World are not blocked by the network.
Yeah. And curl is not *that* expensive to run. Maybe even less expensive
than ping, as it does not have the setuid overhead. But who's counting
clock cycles anyway?
In article <101n27u$20jc$5@dont-email.me>,
The Natural Philosopher <tnp@invalid.invalid> wrote:
+1. Assuming that pings to the Wide World are not blocked by the network
-1 for assuming that internet connectivity is the only requirement.