So, it looks like ::http::geturl is operating asynchronously, despite my program NOT using -command.
Jonathan Kelly <jonkelly@fastmail.fm> wrote:
So, it looks like ::http::geturl is operating asynchronously, despite my
program NOT using -command.
It does. It is documented as such:
man n http:
Note: The event queue is even used without the -command option. As a
side effect, arbitrary commands may be processed while http::geturl is
running.
The code snippets below are from http-2.9.5.tm which was distributed
(at least) with 8.6.12:
Buried deep in http::geturl:
# geturl does EVERYTHING asynchronously, so if the user
# calls it synchronously, we just do a wait here.
http::wait $token
And the implementation of http::wait is:
proc http::wait {token} {
    variable $token
    upvar 0 $token state
    if {![info exists state(status)] || $state(status) eq ""} {
        # We must wait on the original variable name, not the upvar alias
        vwait ${token}(status)
    }
    return [status $token]
}
And the 'vwait' there reenters the event loop and allows other events
to be processed.
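You can see that for yourself with a tiny test. A minimal sketch (the URL is just a placeholder, and the 100 ms figure only matters in that it should be shorter than the request takes):
package require http

# Schedule a timer event *before* the "blocking" call.  If the request
# takes longer than 100 ms, the timer fires from inside geturl's own
# vwait, i.e. while geturl is still "blocked".
after 100 {puts "timer fired while geturl was still waiting"}

set tok [http::geturl http://example.com/]   ;# placeholder URL
puts "geturl returned, status = [http::status $tok]"
http::cleanup $tok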
On 24/6/25 14:21, Rich wrote:
... snip ....
OK. Is there a way to ACTUALLY get geturl to block, or equivalent? I need the geturl to finish before anything else happens.
On 6/24/2025 1:01 AM, Jonathan Kelly wrote:
OK. Is there a way to ACTUALLY get geturl to block, or equivalent? I need the geturl to finish before anything else happens.
I would think you can use this option on the geturl call:
-command callback
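Something along these lines (a rough sketch with a placeholder URL). Note it still services the event loop while it waits, just like the plain synchronous call; it only makes the wait point explicit:
package require http

set ::done ""
set tok [http::geturl http://example.com/ -command {set ::done}]
# ... other work can happen here while the request is in flight ...
if {$::done eq ""} { vwait ::done }   ;# wait only if it hasn't finished yet
puts "finished: [http::status $tok]"
http::cleanup $tok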
On 6/25/2025 12:03 AM, et99 wrote:
On 6/24/2025 5:19 PM, et99 wrote:
... snip ....
It has just now occurred to me that you are running your [test1] proc as
a fileevent script. Read the vwait manual under the section:
"NESTED VWAITS BY EXAMPLE"
I use geturl synchronously with no issues. But I do a single url request
and wait for it, in the main line code - NOT inside an event.
The code I presented in the prior posting is how you could use -command
and get a synchronous result. It is only really useful if you were going
to do something between the geturl and the wait for it to be done.
Otherwise, you could just call it synchronously - but NOT inside an
event, if another fileevent might trigger before the first one is done.
As you will see with the example in the manual, things have to unwind,
so if your fileevents occur fast enough, they may have triggered before
earlier geturl calls have had time to unwind. The event loop works like
a stack.
That's why the timestamps are output in reverse order of when the geturl
was called.
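The unwinding is easy to see with plain timers instead of geturl. A small self-contained demo (made-up names, nothing to do with http): each job starts a nested vwait, and the "done" messages come out in reverse order of the "start" messages:
# Each job enters a nested vwait; inner waits must finish before the
# outer ones can return, so output is: start A, start B, start C,
# done C, done B, done A.
proc job {name ms} {
    puts "start $name"
    after $ms [list set ::done($name) 1]
    vwait ::done($name)
    puts "done $name"
}
after 0   [list job A 300]
after 10  [list job B 200]
after 20  [list job C 100]
after 600 {set ::forever 1}
vwait ::forever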
I'm not sure exactly what you want to accomplish, but it sounds to me
like you need to do some queuing or coroutines. I have code I wrote
that does a single queue with one or more servers using threads. I
sometimes use it for just a single server to get my own queuing of
events.
Unfortunately, I can't use it with Tcl 9.0 because of a race condition
bug with respect to package requires inside threads that has been
ticketed but not yet looked into.
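That threaded code isn't reproduced here, but within a single interp a coroutine can give similar one-at-a-time queuing. A rough sketch with made-up names and no error handling (if geturl itself throws, the coroutine would need a catch):
package require http

set ::queue {}
set ::coro_idle 1

# Hand a url to the fetcher.  Wake the coroutine only when it is idle;
# otherwise it picks the url up after the current fetch finishes.
proc enqueue {url} {
    lappend ::queue $url
    if {$::coro_idle} { fetcher }
}

coroutine fetcher apply {{} {
    while 1 {
        while {![llength $::queue]} {
            set ::coro_idle 1
            yield                   ;# nothing queued: suspend until enqueue wakes us
        }
        set ::coro_idle 0
        set ::queue [lassign $::queue url]
        # -command resumes this coroutine with the token when the fetch is done
        http::geturl $url -command [info coroutine]
        set tok [yield]
        puts "$url -> [http::status $tok]"
        http::cleanup $tok
    }
}}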
(sorry for so many postings :)
-e
proc queue {} {
    set ::input [open "|cat test.txt" r]
    fconfigure $::input -blocking 0 -buffering line
    fileevent $::input readable [list check $::input]
}
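A typical readable handler for a non-blocking, line-buffered channel has roughly this shape (a generic sketch, not necessarily what check does here):
proc check {chan} {
    if {[gets $chan line] >= 0} {
        # got a complete line -- this is where the real code would act on it
        puts "read: $line"
    } elseif {[eof $chan]} {
        fileevent $chan readable {}   ;# remove the handler
        close $chan
    }
    # gets returning -1 without eof means only a partial line is buffered;
    # just return and wait for the next readable event
}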
Jonathan Kelly <jonkelly@fastmail.fm> wrote:
proc queue {} {
    set ::input [open "|cat test.txt" r]
    fconfigure $::input -blocking 0 -buffering line
    fileevent $::input readable [list check $::input]
}
Curious why you are opening a pipe to cat, having cat read and print
the contents, and then consuming that, when you can just open test.txt
directly:
set ::input [open test.txt r]
And achieve the same result.
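In other words, the same proc with the pipe removed (untested):
proc queue {} {
    set ::input [open test.txt r]
    fconfigure $::input -blocking 0 -buffering line
    fileevent $::input readable [list check $::input]
}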
On 6/25/2025 2:32 PM, Rich wrote:
Jonathan Kelly <jonkelly@fastmail.fm> wrote:
proc queue {} {
    set ::input [open "|cat test.txt" r]
    fconfigure $::input -blocking 0 -buffering line
    fileevent $::input readable [list check $::input]
}
Curious why you are opening a pipe to cat, having cat read and print
the contents, and then consuming that, when you can just open test.txt
directly:
set ::input [open test.txt r]
And achieve the same result.
I was also curious about this. But I'm also wondering why this is even
event driven at all? Why not simply, in pseudo code:
while 1 {
    read...a line
    if end of file, break
    geturl
    do something with the url results
}
If there's also a gui that the OP wants to keep alive, it should not be
starved, since the synchronous form of geturl is calling vwait, and that
would allow gui events to get processed while waiting for the url
request to complete.
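In concrete (if untested) Tcl that sketch might look like the following, with test.txt and the assumption that every line is a url standing in for whatever the real program does:
package require http

set f [open test.txt r]
while {[gets $f line] >= 0} {
    # each geturl completes before the next line is read
    set tok [http::geturl $line]      ;# assumes the line is a url
    puts "$line -> [http::status $tok]"
    http::cleanup $tok
}
close $f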
-e
et99 <et99@rocketship1.me> wrote:
... snip ....
I was also curious about this. But I'm also wondering why this is
even event driven at all?
My guess: the above was OP's "test case" code. The real code is
reading an Apache log file as Apache logs to the file, so 'event
driven' in that scenario does make some sense.
On 27/6/25 03:08, Rich wrote:
et99 <et99@rocketship1.me> wrote:
... snip ....
My guess: the above was OP's "test case" code. The real code is
reading an Apache log file as Apache logs to the file, so 'event
driven' in that scenario does make some sense.
What Rich said. Before I realised geturl is *always* asynchronous, I had
read the man page for geturl where it said geturl "blocked". I needed to
simplify my program into a test case to prove something was broken.
Turned out, the problem was my understanding, though I still think the
manual page is misleading. The relevant
"Note: The event queue is even used without the -command option. As a
side effect, arbitrary commands may be processed while http::geturl is
running."
is in the general description at the top, and I had just been reading
the geturl function description.
I wonder, if you are reading a file that is being written from another process, sort of like a "tail" program, doesn't tcl's [fileevent
<channel> readable <script>] trigger constantly? Isn't this in effect a tight polling loop?
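A quick throwaway experiment to check: count how often the handler fires on an ordinary file once everything has been read:
set ::hits 0
set f [open test.txt r]
fconfigure $f -blocking 0
read $f                               ;# drain the file so only EOF remains
fileevent $f readable {incr ::hits}
after 1000 {
    puts "readable handler fired $::hits times in one second"
    exit
}
vwait forever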