• Delaying Autostart

    From DrStevenStrange@kubmw2ce@duck.com to comp.sys.raspberry-pi on Sun Jun 1 07:08:34 2025
    From Newsgroup: comp.sys.raspberry-pi

    I have a Raspberry Pi 5 running the latest version of Bookworm and using
    Labwc as the GUI.

    I have Grafana displaying the Internet Speed installed as here.

    https://dev.to/benji377/grafana-speed-monitor-setting-up-an-internet-monitor-with-raspberry-pi-50jk

    I boot straight into Grafana using Kiosk mode by editing the /etc/xdg/labwc/autostart file and adding this line to the end


    chromium = /usr/bin/chromium-browser --start-fullscreen
    --start-maximized --kiosk --hide-scrollbars --noerrdialogs --disable-default-apps --disable-single-click-autofill --disable-translate-new-ux --disable-translate --disable-cache --disk-cache-dir=/dev/null --disk-cache-size=1
    --reduce-security-for-testing --app=http:///127.0.0.1:3030&kiosk

    This works perfectly - however when I first boot Grafana takes a little
    while to start up and for the first minute or so I get a "Site Not Found
    Page" which eventually clears and Grafana is shown.

    No biggy but looks a little messy!

    Is there any way to put a pause in the autostart sequence to allow
    Grafana to load?

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Theo@theom+news@chiark.greenend.org.uk to comp.sys.raspberry-pi on Sun Jun 1 11:31:15 2025
    From Newsgroup: comp.sys.raspberry-pi

    DrStevenStrange <kubmw2ce@duck.com> wrote:
    I have a Raspberry Pi 5 running the latest version of Bookworm and using Labwc as the GUI.

    I have Grafana displaying the Internet Speed installed as here.

    https://dev.to/benji377/grafana-speed-monitor-setting-up-an-internet-monitor-with-raspberry-pi-50jk

    I boot straight into Grafana using Kiosk mode by editing the /etc/xdg/labwc/autostart file and adding this line to the end


    chromium = /usr/bin/chromium-browser --start-fullscreen
    --start-maximized --kiosk --hide-scrollbars --noerrdialogs --disable-default-apps --disable-single-click-autofill --disable-translate-new-ux --disable-translate --disable-cache --disk-cache-dir=/dev/null --disk-cache-size=1
    --reduce-security-for-testing --app=http:///127.0.0.1:3030&kiosk

    This works perfectly - however when I first boot Grafana takes a little while to start up and for the first minute or so I get a "Site Not Found Page" which eventually clears and Grafana is shown.

    No biggy but looks a little messy!

    Is there any way to put a pause in the autostart sequence to allow
    Grafana to load?

    You could make a script something like:

    #!/bin/sh

    sleep 120
    /usr/bin/chromium-browser --start-fullscreen \
    --start-maximized --kiosk --hide-scrollbars --noerrdialogs \
    --disable-default-apps --disable-single-click-autofill \
    --disable-translate-new-ux --disable-translate --disable-cache \
    --disk-cache-dir=/dev/null --disk-cache-size=1 \
    --reduce-security-for-testing --app='http://127.0.0.1:3030/?kiosk' &


    and then put the name of that script into your autostart in place of the Chromium line. This will delay starting Chromium by 120 seconds when xdg autostarts it.
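
    As a sketch, assuming the script is saved as /home/pi/kiosk.sh (a
    hypothetical path) and made executable with chmod +x, the autostart
    entry could then be reduced to:

    # hypothetical path; replaces the original chromium line
    # in /etc/xdg/labwc/autostart
    /home/pi/kiosk.sh &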

    Theo
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From DrStevenStrange@kubmw2ce@duck.com to comp.sys.raspberry-pi on Sun Jun 1 12:34:10 2025
    From Newsgroup: comp.sys.raspberry-pi

    Theo wrote:
    #!/bin/sh

    sleep 120
    /usr/bin/chromium-browser --start-fullscreen \
    --start-maximized --kiosk --hide-scrollbars --noerrdialogs \
    --disable-default-apps --disable-single-click-autofill \
    --disable-translate-new-ux --disable-translate --disable-cache \
    --disk-cache-dir=/dev/null --disk-cache-size=1 \
    --reduce-security-for-testing --app='http://127.0.0.1:3030/?kiosk' &
    Brilliant.

    Works perfectly

    Many thanks
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Lawrence D'Oliveiro@ldo@nz.invalid to comp.sys.raspberry-pi on Mon Jun 2 01:10:44 2025
    From Newsgroup: comp.sys.raspberry-pi

    On Sun, 1 Jun 2025 07:08:34 +0100, DrStevenStrange wrote:

    chromium = /usr/bin/chromium-browser --start-fullscreen
    --start-maximized --kiosk --hide-scrollbars --noerrdialogs --disable-default-apps --disable-single-click-autofill --disable-translate-new-ux --disable-translate --disable-cache --disk-cache-dir=/dev/null --disk-cache-size=1
    --reduce-security-for-testing --app=http:///127.0.0.1:3030&kiosk

    This works perfectly - however when I first boot Grafana takes a
    little while to start up and for the first minute or so I get a
    "Site Not Found Page" which eventually clears and Grafana is shown.

    Instead of waiting for some fixed interval, you could add a prior command using wget or something to repeatedly try accessing that URL, say at 5
    second intervals or whatever, until it becomes accessible, before allowing
    the startup to proceed.
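
    As a sketch of that idea, assuming the wrapper script from Theo's post
    and the Grafana address from the original config (Chromium's full flag
    list is trimmed here, and the wget options and 5-second interval are
    only illustrative):

    #!/bin/sh
    # Poll Grafana until it answers, then launch the browser.
    until wget -q -T 2 -t 1 -O /dev/null http://127.0.0.1:3030
    do
        sleep 5
    done
    /usr/bin/chromium-browser --kiosk --app='http://127.0.0.1:3030/?kiosk' &
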
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jornws200602@jornws200602@xs4all.nl (Oscar) to comp.sys.raspberry-pi on Tue Jun 3 13:00:54 2025
    From Newsgroup: comp.sys.raspberry-pi

    In article <101itmk$2mr8t$1@dont-email.me>,
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    Instead of waiting for some fixed interval, you could add a prior command
    using wget or something to repeatedly try accessing that URL, say at 5
    second intervals or whatever, until it becomes accessible, before allowing
    the startup to proceed.

    For inspiration, I made a script to 'etherwake' a device and wait for it
    to get ready using wget in combination with the 'timeout' command. I run
    'timeout 1 wget <url>', which returns an error if wget does not respond
    within 1 second or if wget itself returns an error. I use this in a while
    loop that repeats until the call succeeds (the loop below uses curl, but
    the idea is the same):

    etherwake -D -i ${IFACE} ${MACADDR}

    while ! timeout 1 curl --noproxy \* "${URL}" &> /dev/null
    do
        echo -n .
        sleep 1
    done

    The OP could replace the 'sleep 120' in the other script with this loop.
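
    Spliced into the wrapper script from Theo's post, that could look
    something like this (a sketch; the URL is the OP's Grafana address):

    #!/bin/sh
    # Wait until Grafana answers instead of sleeping for a fixed 120 seconds.
    while ! timeout 1 curl -s --noproxy '*' http://127.0.0.1:3030 > /dev/null 2>&1
    do
        sleep 1
    done
    # ...then launch chromium exactly as in Theo's script
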
    --
    [J|O|R] <- .signature.gz
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Chris Elvidge@chris@internal.net to comp.sys.raspberry-pi on Tue Jun 3 15:17:38 2025
    From Newsgroup: comp.sys.raspberry-pi

    On 03/06/2025 at 14:00, Oscar wrote:
    In article <101itmk$2mr8t$1@dont-email.me>,
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    Instead of waiting for some fixed interval, you could add a prior command
    using wget or something to repeatedly try accessing that URL, say at 5
    second intervals or whatever, until it becomes accessible, before allowing
    the startup to proceed.

    For inspiration, I made a script to 'etherwake' a device and wait for it
    to get ready using wget in combination with the 'timeout' command. I run 'timeout 1 wget <url>' which returns an error if wget does not respond
    in 1 second, or wget returns an error itself. I use this in a while
    loop that repeats this until the wget succeeds:

    etherwake -D -i ${IFACE} ${MACADDR}

    while ! timeout 1 curl --noproxy \* "${URL}" &> /dev/null
    do
    echo -n .
    sleep 1
    done

    The OP could replace the 'sleep 120' in the other script with this loop.


    Why waste a curl call when ping 8.8.8.8 would work with less overhead?
    --
    Chris Elvidge, England
    I WILL NOT MESS WITH THE OPENING CREDITS

    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From Theo@theom+news@chiark.greenend.org.uk to comp.sys.raspberry-pi on Tue Jun 3 15:32:19 2025
    From Newsgroup: comp.sys.raspberry-pi

    Chris Elvidge <chris@internal.net> wrote:
    On 03/06/2025 at 14:00, Oscar wrote:
    In article <101itmk$2mr8t$1@dont-email.me>,
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    Instead of waiting for some fixed interval, you could add a prior command
    using wget or something to repeatedly try accessing that URL, say at 5
    second intervals or whatever, until it becomes accessible, before allowing
    the startup to proceed.

    For inspiration, I made a script to 'etherwake' a device and wait for it
    to get ready using wget in combination with the 'timeout' command. I run 'timeout 1 wget <url>' which returns an error if wget does not respond
    in 1 second, or wget returns an error itself. I use this in a while
    loop that repeats this until the wget succeeds:

    etherwake -D -i ${IFACE} ${MACADDR}

    while ! timeout 1 curl --noproxy \* "${URL}" &> /dev/null
    do
    echo -n .
    sleep 1
    done

    The OP could replace the 'sleep 120' in the other script with this loop.

    I was going to suggest something like that too. It is worth checking
    what the webserver is giving you - some services give a generic 'please
    wait while I start up' page, which may not be what you want. Maybe you
    need to ask for a specific page and count a redirect (to the 'please
    wait' page) as a failure.
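
    A sketch of such a check, assuming a hypothetical dashboard path and
    treating anything other than a plain 200 response (including a
    redirect, since -L is not used) as "not ready yet":

    #!/bin/sh
    # Poll a specific page; a redirect or error status keeps the loop going.
    URL=http://127.0.0.1:3030/d/my-dashboard    # hypothetical dashboard path
    until [ "$(curl -s -o /dev/null -w '%{http_code}' "${URL}")" = "200" ]
    do
        sleep 5
    done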

    Why waste a curl call when ping 8.8.8.8 would work with less overhead?

    The purpose is to test a specific service *on this machine* has started up,
    not generic internet connectivity.

    Theo
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.sys.raspberry-pi on Tue Jun 3 15:52:46 2025
    From Newsgroup: comp.sys.raspberry-pi

    On 03/06/2025 15:17, Chris Elvidge wrote:
    On 03/06/2025 at 14:00, Oscar wrote:
    In article <101itmk$2mr8t$1@dont-email.me>,
    Lawrence D'Oliveiro  <ldo@nz.invalid> wrote:
    Instead of waiting for some fixed interval, you could add a prior command
    using wget or something to repeatedly try accessing that URL, say at 5
    second intervals or whatever, until it becomes accessible, before allowing
    the startup to proceed.

    For inspiration, I made a script to 'etherwake' a device and wait for it
    to get ready using wget in combination with the 'timeout' command. I run
    'timeout 1 wget <url>' which returns an error if wget does not respond
    in 1 second, or wget returns an error itself. I use this in a while
    loop that repeats this until the wget succeeds:

         etherwake -D -i ${IFACE} ${MACADDR}

         while ! timeout 1 curl --noproxy \* "${URL}" &> /dev/null
         do
             echo -n .
             sleep 1
         done

    The OP could replace the 'sleep 120' in the other script with this loop.


    Why waste a curl call when ping 8.8.8.8 would work with less overhead?


    +1. Assuming that pings to the Wide World are not blocked by the network
    --
    "Strange as it seems, no amount of learning can cure stupidity, and
    higher education positively fortifies it."

    - Stephen Vizinczey


    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jornws200602@jornws200602@xs4all.nl (Oscar) to comp.sys.raspberry-pi on Tue Jun 3 19:29:27 2025
    From Newsgroup: comp.sys.raspberry-pi

    In article <101n06i$264r$1@dont-email.me>,
    Chris Elvidge <chris@internal.net> wrote:
    while ! timeout 1 curl --noproxy \* "${URL}" &> /dev/null
    do
    echo -n .
    sleep 1
    done

    The OP could replace the 'sleep 120' in the other script with this loop.


    Why waste a curl call when ping 8.8.8.8 would work with less overhead?

    Will 'ping 8.8.8.8' tell me (within 2 seconds) that my device has woken
    up? No. My device does not answer at 8.8.8.8; you are just testing
    internet connectivity, not the wakeup of my device.

    And why waste a ping call (which is setuid) if a curl call will suffice
    AND tells you whether the port is responding to http(s) requests?

    In other words: why waste a post when you don't understand the problem?
    --
    [J|O|R] <- .signature.gz
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jornws200602@jornws200602@xs4all.nl (Oscar) to comp.sys.raspberry-pi on Tue Jun 3 19:33:41 2025
    From Newsgroup: comp.sys.raspberry-pi

    In article <lKm*Ch8dA@news.chiark.greenend.org.uk>,
    Theo <theom+news@chiark.greenend.org.uk> wrote:
    I was going to suggest something like that too. It is worth checking
    what the webserver is giving you - some services give a generic 'please
    wait while I start up' page, which may not be what you want. Maybe you
    need to ask for a specific page and count a redirect (to the 'please
    wait' page) as a failure.

    In my use case this was enough. You can also direct curl output to grep
    and use that as an indicator of your device's desired state.


    Why waste a curl call when ping 8.8.8.8 would work with less overhead?
    The purpose is to test a specific service *on this machine* has started up,
    not generic internet connectivity.

    Yeah. And curl is not *that* expensive to run. Maybe even less expensive
    than ping, as it does not have the setuid overhead. But who's counting
    clock cycles anyway?
    --
    [J|O|R] <- .signature.gz
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jornws200602@jornws200602@xs4all.nl (Oscar) to comp.sys.raspberry-pi on Tue Jun 3 19:34:43 2025
    From Newsgroup: comp.sys.raspberry-pi

    In article <101n27u$20jc$5@dont-email.me>,
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    +1. Assuming that pings to the Wide World are not blocked by the network

    -1 for assuming that internet connectivity is the only requirement.
    --
    [J|O|R] <- .signature.gz
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From jornws200602@jornws200602@xs4all.nl (Oscar) to comp.sys.raspberry-pi on Tue Jun 3 19:56:40 2025
    From Newsgroup: comp.sys.raspberry-pi

    In article <101niml$7k6m$2@dont-email.me>,
    Oscar <jornws200602@xs4all.nl> wrote:
    Yeah. And curl is not *that* expensive to run. Maybe even less expensive
    than ping, as it does not have the setuid overhead. But who's counting
    clock cycles anyway?

    Following up on myself: okay, ping loads a bit quicker, as its
    executable is smaller and it loads fewer libraries, but it is still
    a setuid binary.

    I must admit I still don't understand why you think the overhead of
    calling curl is a waste. Both are external programs, and we're in a wait
    loop anyway. Why hurry between two sleep calls?

    In this case curl beats ping, as the OP's question was about a slow-
    starting Grafana server. Ping can't tell you whether Grafana is running.
    With curl you can check whether a string is in the output.

    Say, for example, you're waiting for a specific dashboard to load, with
    the title "Hootenanny". You could run this slightly modified version:


    while ! timeout 1 curl -s "${URL}" | grep -qs Hootenanny
    do
        echo -n .
        sleep 1
    done

    This will loop until curl returns something containing the string
    'Hootenanny' within 1 second; the check fails, and the loop keeps going,
    if there's no network, no DNS, no route to the Grafana server, or while
    the Grafana server is still starting up.

    And *that's* what the OP asked.
    --
    [J|O|R] <- .signature.gz
    --- Synchronet 3.21a-Linux NewsLink 1.2
  • From The Natural Philosopher@tnp@invalid.invalid to comp.sys.raspberry-pi on Wed Jun 4 13:09:13 2025
    From Newsgroup: comp.sys.raspberry-pi

    On 03/06/2025 20:34, Oscar wrote:
    In article <101n27u$20jc$5@dont-email.me>,
    The Natural Philosopher <tnp@invalid.invalid> wrote:
    +1. Assuming that pings to the Wide World are not blocked by the network

    -1 for assuming that internet connectivity is the only requirement.

    Accepted.
    --
    It is the folly of too many to mistake the echo of a London coffee-house
    for the voice of the kingdom.

    Jonathan Swift


    --- Synchronet 3.21a-Linux NewsLink 1.2