- @c -*-texinfo-*-
- @c This is part of the GNU Guile Reference Manual.
- @c Copyright (C) 1996, 1997, 2000, 2001, 2002, 2003, 2004, 2007, 2009, 2010
- @c Free Software Foundation, Inc.
- @c See the file guile.texi for copying conditions.
- @node Scheduling
- @section Threads, Mutexes, Asyncs and Dynamic Roots
- @menu
- * Arbiters:: Synchronization primitives.
- * Asyncs:: Asynchronous procedure invocation.
- * Threads:: Multiple threads of execution.
- * Mutexes and Condition Variables:: Synchronization primitives.
- * Blocking:: How to block properly in guile mode.
- * Critical Sections:: Avoiding concurrency and reentries.
- * Fluids and Dynamic States:: Thread-local variables, etc.
- * Futures:: Fine-grain parallelism.
- * Parallel Forms:: Parallel execution of forms.
- @end menu
- @node Arbiters
- @subsection Arbiters
- @cindex arbiters
- Arbiters are synchronization objects; they can be used by threads to
- control access to a shared resource. An arbiter can be locked to
- indicate a resource is in use, and unlocked when done.
- An arbiter is like a light-weight mutex (@pxref{Mutexes and Condition
- Variables}). It uses less memory and may be faster, but there is no
- way for a thread to block while waiting on an arbiter; a thread can
- only try to acquire it and check whether it succeeded.
- @deffn {Scheme Procedure} make-arbiter name
- @deffnx {C Function} scm_make_arbiter (name)
- Return an object of type arbiter and name @var{name}. Its
- state is initially unlocked. Arbiters are a way to achieve
- process synchronization.
- @end deffn
- @deffn {Scheme Procedure} try-arbiter arb
- @deffnx {C Function} scm_try_arbiter (arb)
- If @var{arb} is unlocked, then lock it and return @code{#t}.
- If @var{arb} is already locked, then do nothing and return
- @code{#f}.
- @end deffn
- @deffn {Scheme Procedure} release-arbiter arb
- @deffnx {C Function} scm_release_arbiter (arb)
- If @var{arb} is locked, then unlock it and return @code{#t}. If
- @var{arb} is already unlocked, then do nothing and return @code{#f}.
- Typical usage is for the thread which locked an arbiter to later
- release it, but that is not required; any thread can release it.
- @end deffn
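- For example, a thread might guard a shared resource with an arbiter as
- in the following sketch, where @code{use-resource} and
- @code{do-something-else} are hypothetical procedures standing in for
- application code:
- @lisp
- (define resource-arbiter (make-arbiter "resource"))
- (if (try-arbiter resource-arbiter)
-     (begin
-       ;; We acquired the arbiter: use the resource, then release it.
-       (use-resource)
-       (release-arbiter resource-arbiter))
-     ;; Another thread holds it: don't block, do something else instead.
-     (do-something-else))
- @end lisp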
- @node Asyncs
- @subsection Asyncs
- @cindex asyncs
- @cindex user asyncs
- @cindex system asyncs
- Asyncs are a means of deferring the execution of Scheme code until it is
- safe to do so.
- Guile provides two kinds of asyncs that share the basic concept but are
- otherwise quite different: system asyncs and user asyncs. System asyncs
- are integrated into the core of Guile and are executed automatically
- when the system is in a state to allow the execution of Scheme code.
- For example, it is not possible to execute Scheme code in a POSIX signal
- handler, but such a signal handler can queue a system async to be
- executed in the near future, when it is safe to do so.
- System asyncs can also be queued for threads other than the current one.
- This way, you can cause threads to asynchronously execute arbitrary
- code.
- User asyncs offer a convenient means of queuing procedures for future
- execution and triggering this execution. They will not be executed
- automatically.
- @menu
- * System asyncs::
- * User asyncs::
- @end menu
- @node System asyncs
- @subsubsection System asyncs
- To cause the future asynchronous execution of a procedure in a given
- thread, use @code{system-async-mark}.
- Automatic invocation of system asyncs can be temporarily disabled by
- calling @code{call-with-blocked-asyncs}. This function works by
- temporarily increasing the @emph{async blocking level} of the current
- thread while a given procedure is running. The blocking level starts
- out at zero, and whenever a safe point is reached, a blocking level
- greater than zero will prevent the execution of queued asyncs.
- Analogously, the procedure @code{call-with-unblocked-asyncs} will
- temporarily decrease the blocking level of the current thread. You
- can use it when you want to disable asyncs by default and only allow
- them temporarily.
- In addition to the C versions of @code{call-with-blocked-asyncs} and
- @code{call-with-unblocked-asyncs}, C code can use
- @code{scm_dynwind_block_asyncs} and @code{scm_dynwind_unblock_asyncs}
- inside a @dfn{dynamic context} (@pxref{Dynamic Wind}) to block or
- unblock system asyncs temporarily.
- @deffn {Scheme Procedure} system-async-mark proc [thread]
- @deffnx {C Function} scm_system_async_mark (proc)
- @deffnx {C Function} scm_system_async_mark_for_thread (proc, thread)
- Mark @var{proc} (a procedure with zero arguments) for future execution
- in @var{thread}. When @var{proc} has already been marked for
- @var{thread} but has not been executed yet, this call has no effect.
- When @var{thread} is omitted, the thread that called
- @code{system-async-mark} is used.
- It is not safe to call this procedure from signal handlers. Use
- @code{scm_sigaction} or @code{scm_sigaction_for_thread} to install
- signal handlers.
- @end deffn
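- For instance, assuming @code{worker} is a thread created elsewhere
- (for example with @code{call-with-new-thread}), the following sketch
- asks it to print a message at its next safe point:
- @lisp
- (system-async-mark
-  (lambda ()
-    (display "hello from an async\n"))
-  worker)
- @end lisp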
- @deffn {Scheme Procedure} call-with-blocked-asyncs proc
- @deffnx {C Function} scm_call_with_blocked_asyncs (proc)
- Call @var{proc} and block the execution of system asyncs by one level
- for the current thread while it is running. Return the value returned
- by @var{proc}. For the first two variants, call @var{proc} with no
- arguments; for the third, call it with @var{data}.
- @end deffn
- @deftypefn {C Function} {void *} scm_c_call_with_blocked_asyncs (void * (*proc) (void *data), void *data)
- The same but with a C function @var{proc} instead of a Scheme thunk.
- @end deftypefn
- @deffn {Scheme Procedure} call-with-unblocked-asyncs proc
- @deffnx {C Function} scm_call_with_unblocked_asyncs (proc)
- Call @var{proc} and unblock the execution of system asyncs by one
- level for the current thread while it is running. Return the value
- returned by @var{proc}. For the first two variants, call @var{proc}
- with no arguments; for the third, call it with @var{data}.
- @end deffn
- @deftypefn {C Function} {void *} scm_c_call_with_unblocked_asyncs (void *(*proc) (void *data), void *data)
- The same but with a C function @var{proc} instead of a Scheme thunk.
- @end deftypefn
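- As a sketch, a piece of code that must not be interrupted by system
- asyncs (here the hypothetical @code{update-shared-data!}) can be
- wrapped like this:
- @lisp
- (call-with-blocked-asyncs
-  (lambda ()
-    (update-shared-data!)))
- @end lisp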
- @deftypefn {C Function} void scm_dynwind_block_asyncs ()
- During the current dynwind context, increase the blocking of asyncs by
- one level. This function must be used inside a pair of calls to
- @code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
- Wind}).
- @end deftypefn
- @deftypefn {C Function} void scm_dynwind_unblock_asyncs ()
- During the current dynwind context, decrease the blocking of asyncs by
- one level. This function must be used inside a pair of calls to
- @code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
- Wind}).
- @end deftypefn
- @node User asyncs
- @subsubsection User asyncs
- A user async is a pair of a thunk (a parameterless procedure) and a
- mark. Setting the mark on a user async will cause the thunk to be
- executed when the user async is passed to @code{run-asyncs}. Setting
- the mark more than once is satisfied by one execution of the thunk.
- User asyncs are created with @code{async}. They are marked with
- @code{async-mark}.
- @deffn {Scheme Procedure} async thunk
- @deffnx {C Function} scm_async (thunk)
- Create a new user async for the procedure @var{thunk}.
- @end deffn
- @deffn {Scheme Procedure} async-mark a
- @deffnx {C Function} scm_async_mark (a)
- Mark the user async @var{a} for future execution.
- @end deffn
- @deffn {Scheme Procedure} run-asyncs list_of_a
- @deffnx {C Function} scm_run_asyncs (list_of_a)
- Execute all thunks from the marked asyncs of the list @var{list_of_a}.
- @end deffn
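- The following sketch shows the complete life cycle of a user async:
- @lisp
- (define a (async (lambda () (display "running the async\n"))))
- (async-mark a)
- (async-mark a)          ; marking twice still means one execution
- (run-asyncs (list a))   ; prints the message once
- (run-asyncs (list a))   ; no longer marked, so nothing happens
- @end lisp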
- @node Threads
- @subsection Threads
- @cindex threads
- @cindex Guile threads
- @cindex POSIX threads
- Guile supports POSIX threads, unless it was configured with
- @code{--without-threads} or the host lacks POSIX thread support. When
- thread support is available, the @code{threads} feature is provided
- (@pxref{Feature Manipulation, @code{provided?}}).
- The procedures below manipulate Guile threads, which are wrappers around
- the system's POSIX threads. For application-level parallelism, using
- higher-level constructs, such as futures, is recommended
- (@pxref{Futures}).
- @deffn {Scheme Procedure} all-threads
- @deffnx {C Function} scm_all_threads ()
- Return a list of all threads.
- @end deffn
- @deffn {Scheme Procedure} current-thread
- @deffnx {C Function} scm_current_thread ()
- Return the thread that called this function.
- @end deffn
- @c begin (texi-doc-string "guile" "call-with-new-thread")
- @deffn {Scheme Procedure} call-with-new-thread thunk [handler]
- Call @var{thunk} in a new thread and with a new dynamic state,
- returning the new thread. The procedure @var{thunk} is called via
- @code{with-continuation-barrier}.
- When @var{handler} is specified, then @var{thunk} is called from
- within a @code{catch} with tag @code{#t} that has @var{handler} as its
- handler. This catch is established inside the continuation barrier.
- Once @var{thunk} or @var{handler} returns, the return value is made
- the @emph{exit value} of the thread and the thread is terminated.
- @end deffn
- @deftypefn {C Function} SCM scm_spawn_thread (scm_t_catch_body body, void *body_data, scm_t_catch_handler handler, void *handler_data)
- Call @var{body} in a new thread, passing it @var{body_data}, returning
- the new thread. The function @var{body} is called via
- @code{scm_c_with_continuation_barrier}.
- When @var{handler} is non-@code{NULL}, @var{body} is called via
- @code{scm_internal_catch} with tag @code{SCM_BOOL_T} that has
- @var{handler} and @var{handler_data} as the handler and its data. This
- catch is established inside the continuation barrier.
- Once @var{body} or @var{handler} returns, the return value is made the
- @emph{exit value} of the thread and the thread is terminated.
- @end deftypefn
- @deffn {Scheme Procedure} thread? obj
- @deffnx {C Function} scm_thread_p (obj)
- Return @code{#t} iff @var{obj} is a thread; otherwise, return
- @code{#f}.
- @end deffn
- @c begin (texi-doc-string "guile" "join-thread")
- @deffn {Scheme Procedure} join-thread thread [timeout [timeoutval]]
- @deffnx {C Function} scm_join_thread (thread)
- @deffnx {C Function} scm_join_thread_timed (thread, timeout, timeoutval)
- Wait for @var{thread} to terminate and return its exit value. Threads
- that have not been created with @code{call-with-new-thread} or
- @code{scm_spawn_thread} have an exit value of @code{#f}. When
- @var{timeout} is given, it specifies a point in time where the waiting
- should be aborted. It can be either an integer as returned by
- @code{current-time} or a pair as returned by @code{gettimeofday}.
- When the waiting is aborted, @var{timeoutval} is returned (if it is
- specified; @code{#f} is returned otherwise).
- @end deffn
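- For example, a computation can be carried out in a separate thread and
- its result collected with @code{join-thread}:
- @lisp
- (define t
-   (call-with-new-thread
-    (lambda ()
-      (expt 2 100))))
- (join-thread t)
- @result{} 1267650600228229401496703205376
- @end lisp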
- @deffn {Scheme Procedure} thread-exited? thread
- @deffnx {C Function} scm_thread_exited_p (thread)
- Return @code{#t} iff @var{thread} has exited.
- @end deffn
- @c begin (texi-doc-string "guile" "yield")
- @deffn {Scheme Procedure} yield
- If one or more threads are waiting to execute, calling @code{yield}
- forces an immediate context switch to one of them. Otherwise,
- @code{yield} has no effect.
- @end deffn
- @deffn {Scheme Procedure} cancel-thread thread
- @deffnx {C Function} scm_cancel_thread (thread)
- Asynchronously notify @var{thread} to exit. Immediately after
- receiving this notification, @var{thread} will call its cleanup handler
- (if one has been set) and then terminate, aborting any evaluation that
- is in progress.
- Because Guile threads are isomorphic with POSIX threads, @var{thread}
- will not receive its cancellation signal until it reaches a cancellation
- point. See your operating system's POSIX threading documentation for
- more information on cancellation points; note that in Guile, unlike
- native POSIX threads, a thread can receive a cancellation notification
- while attempting to lock a mutex.
- @end deffn
- @deffn {Scheme Procedure} set-thread-cleanup! thread proc
- @deffnx {C Function} scm_set_thread_cleanup_x (thread, proc)
- Set @var{proc} as the cleanup handler for the thread @var{thread}.
- @var{proc}, which must be a thunk, will be called when @var{thread}
- exits, either normally or by being canceled. Thread cleanup handlers
- can be used to perform useful tasks like releasing resources, such as
- locked mutexes, when thread exit cannot be predicted.
- The return value of @var{proc} will be set as the @emph{exit value} of
- @var{thread}.
- To remove a cleanup handler, pass @code{#f} for @var{proc}.
- @end deffn
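- As a sketch, a cleanup handler installed by the thread itself runs even
- when the thread is cancelled, and its return value becomes the exit
- value; here @code{sleep} merely stands in for long-running work:
- @lisp
- (define t
-   (call-with-new-thread
-    (lambda ()
-      (set-thread-cleanup! (current-thread)
-                           (lambda () 'cleaned-up))
-      (sleep 100))))
- (cancel-thread t)
- (join-thread t)
- @result{} cleaned-up
- @end lisp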
- @deffn {Scheme Procedure} thread-cleanup thread
- @deffnx {C Function} scm_thread_cleanup (thread)
- Return the cleanup handler currently installed for the thread
- @var{thread}. If no cleanup handler is currently installed,
- thread-cleanup returns @code{#f}.
- @end deffn
- Higher level thread procedures are available by loading the
- @code{(ice-9 threads)} module. These provide standardized
- thread creation.
- @deffn macro make-thread proc [args@dots{}]
- Apply @var{proc} to @var{args} in a new thread formed by
- @code{call-with-new-thread} using a default error handler that displays
- the error to the current error port. The @var{args@dots{}}
- expressions are evaluated in the new thread.
- @end deffn
- @deffn macro begin-thread first [rest@dots{}]
- Evaluate forms @var{first} and @var{rest} in a new thread formed by
- @code{call-with-new-thread} using a default error handler that displays
- the error to the current error port.
- @end deffn
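- For instance, the following sketch starts two threads and collects the
- exit value of the second, which should be the value of its last body
- form:
- @lisp
- (define t1 (make-thread display "hello from make-thread\n"))
- (define t2 (begin-thread
-              (display "hello from begin-thread\n")
-              (+ 1 2)))
- (join-thread t2)
- @result{} 3
- @end lisp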
- @node Mutexes and Condition Variables
- @subsection Mutexes and Condition Variables
- @cindex mutex
- @cindex condition variable
- A mutex is a thread synchronization object; it can be used by threads
- to control access to a shared resource. A mutex can be locked to
- indicate a resource is in use, and other threads can then block on the
- mutex to wait for the resource (or can just test and do something else
- if not available). ``Mutex'' is short for ``mutual exclusion''.
- There are two types of mutexes in Guile, ``standard'' and
- ``recursive''. They're created by @code{make-mutex} and
- @code{make-recursive-mutex} respectively; the procedures that operate
- on them are common to both.
- Note that for both types of mutex there's no protection against a
- ``deadly embrace''. For instance if one thread has locked mutex A and
- is waiting on mutex B, but another thread owns B and is waiting on A,
- then an endless wait will occur (in the current implementation).
- Acquiring requisite mutexes in a fixed order (like always A before B)
- in all threads is one way to avoid such problems.
- @sp 1
- @deffn {Scheme Procedure} make-mutex . flags
- @deffnx {C Function} scm_make_mutex ()
- @deffnx {C Function} scm_make_mutex_with_flags (SCM flags)
- Return a new mutex. It is initially unlocked. If @var{flags} is
- specified, it must be a list of symbols specifying configuration flags
- for the newly-created mutex. The supported flags are:
- @table @code
- @item unchecked-unlock
- Unless this flag is present, a call to @code{unlock-mutex} on the
- returned mutex when it is already unlocked will cause an error to be
- signalled.
- @item allow-external-unlock
- Allow the returned mutex to be unlocked by the calling thread even if
- it was originally locked by a different thread.
- @item recursive
- The returned mutex will be recursive.
- @end table
- @end deffn
- @deffn {Scheme Procedure} mutex? obj
- @deffnx {C Function} scm_mutex_p (obj)
- Return @code{#t} iff @var{obj} is a mutex; otherwise, return
- @code{#f}.
- @end deffn
- @deffn {Scheme Procedure} make-recursive-mutex
- @deffnx {C Function} scm_make_recursive_mutex ()
- Create a new recursive mutex. It is initially unlocked. Calling this
- function is equivalent to calling @code{make-mutex} and specifying the
- @code{recursive} flag.
- @end deffn
- @deffn {Scheme Procedure} lock-mutex mutex [timeout [owner]]
- @deffnx {C Function} scm_lock_mutex (mutex)
- @deffnx {C Function} scm_lock_mutex_timed (mutex, timeout, owner)
- Lock @var{mutex}. If the mutex is already locked, then block and
- return only when @var{mutex} has been acquired.
- When @var{timeout} is given, it specifies a point in time where the
- waiting should be aborted. It can be either an integer as returned
- by @code{current-time} or a pair as returned by @code{gettimeofday}.
- When the waiting is aborted, @code{#f} is returned.
- When @var{owner} is given, it specifies an owner for @var{mutex} other
- than the calling thread. @var{owner} may also be @code{#f},
- indicating that the mutex should be locked but left unowned.
- For standard mutexes (@code{make-mutex}), an error is signalled if
- the thread has itself already locked @var{mutex}.
- For a recursive mutex (@code{make-recursive-mutex}), if the thread has
- itself already locked @var{mutex}, then a further @code{lock-mutex}
- call increments the lock count. An additional @code{unlock-mutex}
- will be required to finally release.
- If @var{mutex} was locked by a thread that exited before unlocking it,
- the next attempt to lock @var{mutex} will succeed, but
- @code{abandoned-mutex-error} will be signalled.
- When a system async (@pxref{System asyncs}) is activated for a thread
- blocked in @code{lock-mutex}, the wait is interrupted and the async is
- executed. When the async returns, the wait resumes.
- @end deffn
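- As a simple sketch, a shared counter can be protected by a standard
- mutex:
- @lisp
- (define counter 0)
- (define counter-mutex (make-mutex))
- (define (increment-counter!)
-   (lock-mutex counter-mutex)
-   (set! counter (+ counter 1))
-   (unlock-mutex counter-mutex))
- @end lisp
- In practice @code{with-mutex} (described below) is usually preferable,
- since it also unlocks the mutex on a non-local exit from the body.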
- @deftypefn {C Function} void scm_dynwind_lock_mutex (SCM mutex)
- Arrange for @var{mutex} to be locked whenever the current dynwind
- context is entered and to be unlocked when it is exited.
- @end deftypefn
- @deffn {Scheme Procedure} try-mutex mx
- @deffnx {C Function} scm_try_mutex (mx)
- Try to lock @var{mx} as per @code{lock-mutex}. If @var{mx} can
- be acquired immediately then this is done and the return is @code{#t}.
- If @var{mx} is locked by some other thread then nothing is done and
- the return is @code{#f}.
- @end deffn
- @deffn {Scheme Procedure} unlock-mutex mutex [condvar [timeout]]
- @deffnx {C Function} scm_unlock_mutex (mutex)
- @deffnx {C Function} scm_unlock_mutex_timed (mutex, condvar, timeout)
- Unlock @var{mutex}. An error is signalled if @var{mutex} is not locked
- and was not created with the @code{unchecked-unlock} flag set, or if
- @var{mutex} is locked by a thread other than the calling thread and was
- not created with the @code{allow-external-unlock} flag set.
- If @var{condvar} is given, it specifies a condition variable upon
- which the calling thread will wait to be signalled before returning.
- (This behavior is very similar to that of
- @code{wait-condition-variable}, except that the mutex is left in an
- unlocked state when the function returns.)
- When @var{timeout} is also given, it specifies a point in time where
- the waiting should be aborted. It can be either an integer as
- returned by @code{current-time} or a pair as returned by
- @code{gettimeofday}. When the waiting is aborted, @code{#f} is
- returned. Otherwise the function returns @code{#t}.
- @end deffn
- @deffn {Scheme Procedure} mutex-owner mutex
- @deffnx {C Function} scm_mutex_owner (mutex)
- Return the current owner of @var{mutex}, in the form of a thread or
- @code{#f} (indicating no owner). Note that a mutex may be unowned but
- still locked.
- @end deffn
- @deffn {Scheme Procedure} mutex-level mutex
- @deffnx {C Function} scm_mutex_level (mutex)
- Return the current lock level of @var{mutex}. If @var{mutex} is
- currently unlocked, this value will be 0; otherwise, it will be the
- number of times @var{mutex} has been recursively locked by its current
- owner.
- @end deffn
- @deffn {Scheme Procedure} mutex-locked? mutex
- @deffnx {C Function} scm_mutex_locked_p (mutex)
- Return @code{#t} if @var{mutex} is locked, regardless of ownership;
- otherwise, return @code{#f}.
- @end deffn
- @deffn {Scheme Procedure} make-condition-variable
- @deffnx {C Function} scm_make_condition_variable ()
- Return a new condition variable.
- @end deffn
- @deffn {Scheme Procedure} condition-variable? obj
- @deffnx {C Function} scm_condition_variable_p (obj)
- Return @code{#t} iff @var{obj} is a condition variable; otherwise,
- return @code{#f}.
- @end deffn
- @deffn {Scheme Procedure} wait-condition-variable condvar mutex [time]
- @deffnx {C Function} scm_wait_condition_variable (condvar, mutex, time)
- Wait until @var{condvar} has been signalled. While waiting,
- @var{mutex} is atomically unlocked (as with @code{unlock-mutex}) and
- is locked again when this function returns. When @var{time} is given,
- it specifies a point in time where the waiting should be aborted. It
- can be either an integer as returned by @code{current-time} or a pair
- as returned by @code{gettimeofday}. When the waiting is aborted,
- @code{#f} is returned. When the condition variable has in fact been
- signalled, @code{#t} is returned. The mutex is re-locked in any case
- before @code{wait-condition-variable} returns.
- When a system async is activated for a thread that is blocked in a
- call to @code{wait-condition-variable}, the waiting is interrupted,
- the mutex is locked, and the async is executed. When the async
- returns, the mutex is unlocked again and the waiting is resumed. While
- the thread blocks to re-acquire the mutex, execution of asyncs is
- blocked.
- @end deffn
- @deffn {Scheme Procedure} signal-condition-variable condvar
- @deffnx {C Function} scm_signal_condition_variable (condvar)
- Wake up one thread that is waiting for @var{condvar}.
- @end deffn
- @deffn {Scheme Procedure} broadcast-condition-variable condvar
- @deffnx {C Function} scm_broadcast_condition_variable (condvar)
- Wake up all threads that are waiting for @var{condvar}.
- @end deffn
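- The following sketch shows the usual pattern: a condition variable
- paired with a mutex, with the waiting side re-checking its condition in
- a loop:
- @lisp
- (define mailbox '())
- (define mailbox-mutex (make-mutex))
- (define mailbox-condvar (make-condition-variable))
- (define (put! item)
-   (lock-mutex mailbox-mutex)
-   (set! mailbox (cons item mailbox))
-   (signal-condition-variable mailbox-condvar)
-   (unlock-mutex mailbox-mutex))
- (define (get!)
-   (lock-mutex mailbox-mutex)
-   (let loop ()
-     (if (null? mailbox)
-         (begin
-           ;; Releases the mutex while waiting, re-locks it on return.
-           (wait-condition-variable mailbox-condvar mailbox-mutex)
-           (loop))
-         (let ((item (car mailbox)))
-           (set! mailbox (cdr mailbox))
-           (unlock-mutex mailbox-mutex)
-           item))))
- @end lisp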
- @sp 1
- The following are higher level operations on mutexes. These are
- available from
- @example
- (use-modules (ice-9 threads))
- @end example
- @deffn macro with-mutex mutex [body@dots{}]
- Lock @var{mutex}, evaluate the @var{body} forms, then unlock
- @var{mutex}. The return value is the return from the last @var{body}
- form.
- The lock, body and unlock form the branches of a @code{dynamic-wind}
- (@pxref{Dynamic Wind}), so @var{mutex} is automatically unlocked if an
- error or new continuation exits @var{body}, and is re-locked if
- @var{body} is re-entered by a captured continuation.
- @end deffn
- @deffn macro monitor body@dots{}
- Evaluate the @var{body} forms, with a mutex locked so only one thread
- can execute that code at any one time. The return value is the return
- from the last @var{body} form.
- Each @code{monitor} form has its own private mutex and the locking and
- evaluation is as per @code{with-mutex} above. A standard mutex
- (@code{make-mutex}) is used, which means @var{body} must not
- recursively re-enter the @code{monitor} form.
- The term ``monitor'' comes from operating system theory, where it
- means a particular bit of code managing access to some resource and
- which only ever executes on behalf of one process at any one time.
- @end deffn
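- For instance, the counter sketch from @code{lock-mutex} above becomes
- shorter with @code{with-mutex}, and @code{monitor} removes the explicit
- mutex object entirely:
- @lisp
- (define counter 0)
- (define counter-mutex (make-mutex))
- (define (increment-counter!)
-   (with-mutex counter-mutex
-     (set! counter (+ counter 1))))
- ;; Each monitor form has its own private mutex:
- (define events '())
- (define (log-event! event)
-   (monitor
-     (set! events (cons event events))))
- @end lisp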
- @node Blocking
- @subsection Blocking in Guile Mode
- Up to Guile version 1.8, a thread blocked in guile mode would prevent
- the garbage collector from running. Thus threads had to explicitly
- leave guile mode with @code{scm_without_guile ()} before making a
- potentially blocking call such as a mutex lock, a @code{select ()}
- system call, etc. The following functions could be used to temporarily
- leave guile mode or to perform some common blocking operations in a
- supported way.
- Starting from Guile 2.0, blocked threads no longer hinder garbage
- collection. Thus, the functions below are not needed anymore. They can
- still be used to inform the GC that a thread is about to block, giving
- it a (small) optimization opportunity for ``stop the world'' garbage
- collections, should they occur while the thread is blocked.
- @deftypefn {C Function} {void *} scm_without_guile (void *(*func) (void *), void *data)
- Leave guile mode, call @var{func} on @var{data}, enter guile mode and
- return the result of calling @var{func}.
- While a thread has left guile mode, it must not call any libguile
- functions except @code{scm_with_guile} or @code{scm_without_guile} and
- must not use any libguile macros. Also, local variables of type
- @code{SCM} that are allocated while not in guile mode are not
- protected from the garbage collector.
- When used from non-guile mode, calling @code{scm_without_guile} is
- still allowed: it simply calls @var{func}. In that way, you can leave
- guile mode without having to know whether the current thread is in
- guile mode or not.
- @end deftypefn
- @deftypefn {C Function} int scm_pthread_mutex_lock (pthread_mutex_t *mutex)
- Like @code{pthread_mutex_lock}, but leaves guile mode while waiting for
- the mutex.
- @end deftypefn
- @deftypefn {C Function} int scm_pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex)
- @deftypefnx {C Function} int scm_pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex, struct timespec *abstime)
- Like @code{pthread_cond_wait} and @code{pthread_cond_timedwait}, but
- leaves guile mode while waiting for the condition variable.
- @end deftypefn
- @deftypefn {C Function} int scm_std_select (int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout)
- Like @code{select} but leaves guile mode while waiting. Also, the
- delivery of a system async causes this function to be interrupted with
- error code @code{EINTR}.
- @end deftypefn
- @deftypefn {C Function} {unsigned int} scm_std_sleep ({unsigned int} seconds)
- Like @code{sleep}, but leaves guile mode while sleeping. Also, the
- delivery of a system async causes this function to be interrupted.
- @end deftypefn
- @deftypefn {C Function} {unsigned long} scm_std_usleep ({unsigned long} usecs)
- Like @code{usleep}, but leaves guile mode while sleeping. Also, the
- delivery of a system async causes this function to be interrupted.
- @end deftypefn
- @node Critical Sections
- @subsection Critical Sections
- @deffn {C Macro} SCM_CRITICAL_SECTION_START
- @deffnx {C Macro} SCM_CRITICAL_SECTION_END
- These two macros can be used to delimit a critical section.
- Syntactically, they are both statements and need to be followed
- immediately by a semicolon.
- Executing @code{SCM_CRITICAL_SECTION_START} will lock a recursive
- mutex and block the execution of system asyncs. Executing
- @code{SCM_CRITICAL_SECTION_END} will unblock the execution of system
- asyncs and unlock the mutex. Thus, the code that executes between
- these two macros can only be executed in one thread at any one time
- and no system asyncs will run. However, because the mutex is a
- recursive one, the code might still be reentered by the same thread.
- You must either allow for this or avoid it; both require careful coding.
- On the other hand, critical sections delimited with these macros can
- be nested since the mutex is recursive.
- You must make sure that for each @code{SCM_CRITICAL_SECTION_START},
- the corresponding @code{SCM_CRITICAL_SECTION_END} is always executed.
- This means, for example, that no non-local exit (such as a signalled
- error) may happen between them.
- @end deffn
- @deftypefn {C Function} void scm_dynwind_critical_section (SCM mutex)
- Call @code{scm_dynwind_lock_mutex} on @var{mutex} and call
- @code{scm_dynwind_block_asyncs}. When @var{mutex} is false, a recursive
- mutex provided by Guile is used instead.
- The effect of a call to @code{scm_dynwind_critical_section} is that
- the current dynwind context (@pxref{Dynamic Wind}) turns into a
- critical section. Because of the locked mutex, no second thread can
- enter it concurrently and because of the blocked asyncs, no system
- async can reenter it from the current thread.
- When the current thread reenters the critical section anyway, the kind
- of @var{mutex} determines what happens: when @var{mutex} is recursive,
- the reentry is allowed; when it is a normal mutex, an error is
- signalled.
- @end deftypefn
- @node Fluids and Dynamic States
- @subsection Fluids and Dynamic States
- @cindex fluids
- A @emph{fluid} is an object that can store one value per @emph{dynamic
- state}. Each thread has a current dynamic state, and when accessing a
- fluid, this current dynamic state is used to provide the actual value.
- In this way, fluids can be used for thread local storage, but they are
- in fact more flexible: dynamic states are objects of their own and can
- be made current for more than one thread at the same time, or only be
- made current temporarily, for example.
- Fluids can also be used to simulate the desirable effects of
- dynamically scoped variables. Dynamically scoped variables are useful
- when you want to set a variable to a value during some dynamic extent
- in the execution of your program and have them revert to their
- original value when the control flow is outside of this dynamic
- extent. See the description of @code{with-fluids} below for details.
- New fluids are created with @code{make-fluid} and @code{fluid?} is
- used for testing whether an object is actually a fluid. The values
- stored in a fluid can be accessed with @code{fluid-ref} and
- @code{fluid-set!}.
- @deffn {Scheme Procedure} make-fluid
- @deffnx {C Function} scm_make_fluid ()
- Return a newly created fluid.
- Fluids are objects that can hold one
- value per dynamic state. That is, modifications to this value are
- only visible to code that executes with the same dynamic state as
- the modifying code. When a new dynamic state is constructed, it
- inherits the values from its parent. Because each thread normally executes
- with its own dynamic state, you can use fluids for thread local storage.
- @end deffn
- @deffn {Scheme Procedure} make-unbound-fluid
- @deffnx {C Function} scm_make_unbound_fluid ()
- Return a new fluid that is initially unbound (instead of being
- implicitly bound to @code{#f}).
- @end deffn
- @deffn {Scheme Procedure} fluid? obj
- @deffnx {C Function} scm_fluid_p (obj)
- Return @code{#t} iff @var{obj} is a fluid; otherwise, return
- @code{#f}.
- @end deffn
- @deffn {Scheme Procedure} fluid-ref fluid
- @deffnx {C Function} scm_fluid_ref (fluid)
- Return the value associated with @var{fluid} in the current
- dynamic root. If @var{fluid} has not been set, then return
- @code{#f}. Calling @code{fluid-ref} on an unbound fluid produces a
- runtime error.
- @end deffn
- @deffn {Scheme Procedure} fluid-set! fluid value
- @deffnx {C Function} scm_fluid_set_x (fluid, value)
- Set the value associated with @var{fluid} in the current dynamic root.
- @end deffn
- @deffn {Scheme Procedure} fluid-unset! fluid
- @deffnx {C Function} scm_fluid_unset_x (fluid)
- Disassociate the given fluid from any value, making it unbound.
- @end deffn
- @deffn {Scheme Procedure} fluid-bound? fluid
- @deffnx {C Function} scm_fluid_bound_p (fluid)
- Return @code{#t} iff the given fluid is bound to a value; otherwise,
- return @code{#f}.
- @end deffn
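- A brief interaction illustrates these procedures:
- @lisp
- (define f (make-fluid))
- (fluid-ref f)
- @result{} #f
- (fluid-set! f 'hello)
- (fluid-ref f)
- @result{} hello
- (fluid-bound? f)
- @result{} #t
- (define g (make-unbound-fluid))
- (fluid-bound? g)
- @result{} #f
- @end lisp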
- @code{with-fluids*} temporarily changes the values of one or more fluids,
- so that the given procedure and each procedure called by it access the
- given values. After the procedure returns, the old values are restored.
- @deffn {Scheme Procedure} with-fluid* fluid value thunk
- @deffnx {C Function} scm_with_fluid (fluid, value, thunk)
- Set @var{fluid} to @var{value} temporarily, and call @var{thunk}.
- @var{thunk} must be a procedure with no argument.
- @end deffn
- @deffn {Scheme Procedure} with-fluids* fluids values thunk
- @deffnx {C Function} scm_with_fluids (fluids, values, thunk)
- Set @var{fluids} to @var{values} temporarily, and call @var{thunk}.
- @var{fluids} must be a list of fluids and @var{values} must be a list
- of the same number of values to apply to them. Each substitution is
- done in the order given. @var{thunk} must be a procedure with no
- arguments. It is called inside a @code{dynamic-wind} and the fluids
- are set/restored when control enters or leaves the established dynamic
- extent.
- @end deffn
- @deffn {Scheme Macro} with-fluids ((fluid value) ...) body...
- Execute @var{body...} while each @var{fluid} is set to the
- corresponding @var{value}. Both @var{fluid} and @var{value} are
- evaluated and @var{fluid} must yield a fluid. @var{body...} is
- executed inside a @code{dynamic-wind} and the fluids are set/restored
- when control enters or leaves the established dynamic extent.
- @end deffn
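- For example, the following sketch temporarily rebinds a fluid around a
- call and shows that the old value is restored afterwards (@code{emit}
- is a hypothetical helper):
- @lisp
- (define output-prefix (make-fluid))
- (fluid-set! output-prefix "main: ")
- (define (emit msg)
-   (display (string-append (fluid-ref output-prefix) msg "\n")))
- (emit "starting")                   ; prints ``main: starting''
- (with-fluids ((output-prefix "sub: "))
-   (emit "working"))                 ; prints ``sub: working''
- (emit "done")                       ; prints ``main: done'' again
- @end lisp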
- @deftypefn {C Function} SCM scm_c_with_fluids (SCM fluids, SCM vals, SCM (*cproc)(void *), void *data)
- @deftypefnx {C Function} SCM scm_c_with_fluid (SCM fluid, SCM val, SCM (*cproc)(void *), void *data)
- The function @code{scm_c_with_fluids} is like @code{scm_with_fluids}
- except that it takes a C function to call instead of a Scheme thunk.
- The function @code{scm_c_with_fluid} is similar but only allows one
- fluid to be set instead of a list.
- @end deftypefn
- @deftypefn {C Function} void scm_dynwind_fluid (SCM fluid, SCM val)
- This function must be used inside a pair of calls to
- @code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
- Wind}). During the dynwind context, the fluid @var{fluid} is set to
- @var{val}.
- More precisely, the value of the fluid is swapped with a `backup'
- value whenever the dynwind context is entered or left. The backup
- value is initialized with the @var{val} argument.
- @end deftypefn
- @deffn {Scheme Procedure} make-dynamic-state [parent]
- @deffnx {C Function} scm_make_dynamic_state (parent)
- Return a copy of the dynamic state object @var{parent}
- or of the current dynamic state when @var{parent} is omitted.
- @end deffn
- @deffn {Scheme Procedure} dynamic-state? obj
- @deffnx {C Function} scm_dynamic_state_p (obj)
- Return @code{#t} if @var{obj} is a dynamic state object;
- return @code{#f} otherwise.
- @end deffn
- @deftypefn {C Function} int scm_is_dynamic_state (SCM obj)
- Return non-zero if @var{obj} is a dynamic state object;
- return zero otherwise.
- @end deftypefn
- @deffn {Scheme Procedure} current-dynamic-state
- @deffnx {C Function} scm_current_dynamic_state ()
- Return the current dynamic state object.
- @end deffn
- @deffn {Scheme Procedure} set-current-dynamic-state state
- @deffnx {C Function} scm_set_current_dynamic_state (state)
- Set the current dynamic state object to @var{state}
- and return the previous current dynamic state object.
- @end deffn
- @deffn {Scheme Procedure} with-dynamic-state state proc
- @deffnx {C Function} scm_with_dynamic_state (state, proc)
- Call @var{proc} while @var{state} is the current dynamic
- state object.
- @end deffn
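- As a sketch of how fluids and dynamic states interact, a fluid set
- inside a copied dynamic state keeps its original value in the current
- state:
- @lisp
- (define f (make-fluid))
- (fluid-set! f 'original)
- (define state (make-dynamic-state)) ; copy of the current dynamic state
- (with-dynamic-state state
-                     (lambda ()
-                       (fluid-set! f 'copy)
-                       (fluid-ref f)))
- @result{} copy
- (fluid-ref f)
- @result{} original
- @end lisp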
- @deftypefn {C Function} void scm_dynwind_current_dynamic_state (SCM state)
- Set the current dynamic state to @var{state} for the current dynwind
- context.
- @end deftypefn
- @deftypefn {C Function} {void *} scm_c_with_dynamic_state (SCM state, void *(*func)(void *), void *data)
- Like @code{scm_with_dynamic_state}, but call @var{func} with
- @var{data}.
- @end deftypefn
- @node Futures
- @subsection Futures
- @cindex futures
- @cindex fine-grain parallelism
- @cindex parallelism
- The @code{(ice-9 futures)} module provides @dfn{futures}, a construct
- for fine-grain parallelism. A future is a wrapper around an expression
- whose computation may occur in parallel with the code of the calling
- thread, and possibly in parallel with other futures. Like promises,
- futures are essentially proxies that can be queried to obtain the value
- of the enclosed expression:
- @lisp
- (touch (future (+ 2 3)))
- @result{} 5
- @end lisp
- However, unlike promises, the expression associated with a future may be
- evaluated on another CPU core, should one be available. This supports
- @dfn{fine-grain parallelism}, because even relatively small computations
- can be embedded in futures. Consider this sequential code:
- @lisp
- (define (find-prime lst1 lst2)
- (or (find prime? lst1)
- (find prime? lst2)))
- @end lisp
- The two arms of @code{or} are potentially computation-intensive. They
- are independent of one another, yet, they are evaluated sequentially
- when the first one returns @code{#f}. Using futures, one could rewrite
- it like this:
- @lisp
- (define (find-prime lst1 lst2)
- (let ((f (future (find prime? lst2))))
- (or (find prime? lst1)
- (touch f))))
- @end lisp
- This preserves the semantics of @code{find-prime}. On a multi-core
- machine, though, the computation of @code{(find prime? lst2)} may be
- done in parallel with that of the other @code{find} call, which can
- reduce the execution time of @code{find-prime}.
- Note that futures are intended for the evaluation of purely functional
- expressions. Expressions that have side-effects or rely on I/O may
- require additional care, such as explicit synchronization
- (@pxref{Mutexes and Condition Variables}).
- Guile's futures are implemented on top of POSIX threads
- (@pxref{Threads}). Internally, a fixed-size pool of threads is used to
- evaluate futures, such that offloading the evaluation of an expression
- to another thread doesn't incur thread creation costs. By default, the
- pool contains one thread per available CPU core, minus one, to account
- for the main thread. The number of available CPU cores is determined
- using @code{current-processor-count} (@pxref{Processes}).
- @deffn {Scheme Syntax} future exp
- Return a future for expression @var{exp}. This is equivalent to:
- @lisp
- (make-future (lambda () exp))
- @end lisp
- @end deffn
- @deffn {Scheme Procedure} make-future thunk
- Return a future for @var{thunk}, a zero-argument procedure.
- This procedure returns immediately. Execution of @var{thunk} may begin
- in parallel with the calling thread's computations, if idle CPU cores
- are available, or it may start when @code{touch} is invoked on the
- returned future.
- If the execution of @var{thunk} throws an exception, that exception will
- be re-thrown when @code{touch} is invoked on the returned future.
- @end deffn
- @deffn {Scheme Procedure} future? obj
- Return @code{#t} if @var{obj} is a future.
- @end deffn
- @deffn {Scheme Procedure} touch f
- Return the result of the expression embedded in future @var{f}.
- If the result was already computed in parallel, @code{touch} returns
- instantaneously. Otherwise, it waits for the computation to complete,
- if it already started, or initiates it.
- @end deffn
- @node Parallel Forms
- @subsection Parallel forms
- @cindex parallel forms
- The functions described in this section are available from
- @example
- (use-modules (ice-9 threads))
- @end example
- They provide high-level parallel constructs. The following functions
- are implemented in terms of futures (@pxref{Futures}). Thus they are
- relatively cheap as they re-use existing threads, and portable, since
- they automatically use one thread per available CPU core.
- @deffn syntax parallel expr1 @dots{} exprN
- Evaluate each @var{expr} expression in parallel, each in its own thread.
- Return the results as a set of @var{N} multiple values
- (@pxref{Multiple Values}).
- @end deffn
- @deffn syntax letpar ((var1 expr1) @dots{} (varN exprN)) body@dots{}
- Evaluate each @var{expr} in parallel, each in its own thread, then bind
- the results to the corresponding @var{var} variables and evaluate
- @var{body}.
- @code{letpar} is like @code{let} (@pxref{Local Bindings}), but all the
- expressions for the bindings are evaluated in parallel.
- @end deffn
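- For example, with hypothetical @code{expensive-calculation-1},
- @code{expensive-calculation-2} and @code{combine} procedures standing
- in for real work:
- @lisp
- (letpar ((a (expensive-calculation-1))
-          (b (expensive-calculation-2)))
-   (combine a b))
- ;; `parallel' returns its results as multiple values:
- (call-with-values
-     (lambda () (parallel (+ 1 2) (* 3 4)))
-   list)
- @result{} (3 12)
- @end lisp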
- @deffn {Scheme Procedure} par-map proc lst1 @dots{} lstN
- @deffnx {Scheme Procedure} par-for-each proc lst1 @dots{} lstN
- Call @var{proc} on the elements of the given lists. @code{par-map}
- returns a list comprising the return values from @var{proc}.
- @code{par-for-each} returns an unspecified value, but waits for all
- calls to complete.
- The @var{proc} calls are @code{(@var{proc} @var{elem1} @dots{}
- @var{elemN})}, where each @var{elem} is from the corresponding
- @var{lst}. Each @var{lst} must be the same length. The calls are
- potentially made in parallel, depending on the number of CPU cores
- available.
- These functions are like @code{map} and @code{for-each} (@pxref{List
- Mapping}), but make their @var{proc} calls in parallel.
- @end deffn
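- For instance:
- @lisp
- (par-map (lambda (n) (* n n))
-          '(1 2 3 4 5))
- @result{} (1 4 9 16 25)
- @end lisp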
- Unlike those above, the functions described below take a number of
- threads as an argument. This makes them inherently non-portable since
- the specified number of threads may differ from the number of available
- CPU cores as returned by @code{current-processor-count}
- (@pxref{Processes}). In addition, these functions create the specified
- number of threads when they are called and terminate them upon
- completion, which makes them quite expensive.
- Therefore, they should be avoided.
- @deffn {Scheme Procedure} n-par-map n proc lst1 @dots{} lstN
- @deffnx {Scheme Procedure} n-par-for-each n proc lst1 @dots{} lstN
- Call @var{proc} on the elements of the given lists, in the same way as
- @code{par-map} and @code{par-for-each} above, but use no more than
- @var{n} threads at any one time. The order in which calls are
- initiated within that thread limit is unspecified.
- These functions are good for controlling resource consumption if
- @var{proc} calls might be costly, or if there are many to be made. On
- a dual-CPU system for instance @math{@var{n}=4} might be enough to
- keep the CPUs utilized, and not consume too much memory.
- @end deffn
- @deffn {Scheme Procedure} n-for-each-par-map n sproc pproc lst1 @dots{} lstN
- Apply @var{pproc} to the elements of the given lists, and apply
- @var{sproc} to each result returned by @var{pproc}. The final return
- value is unspecified, but all calls will have been completed before
- returning.
- The calls made are @code{(@var{sproc} (@var{pproc} @var{elem1} @dots{}
- @var{elemN}))}, where each @var{elem} is from the corresponding
- @var{lst}. Each @var{lst} must have the same number of elements.
- The @var{pproc} calls are made in parallel, in separate threads. No more
- than @var{n} threads are used at any one time. The order in which
- @var{pproc} calls are initiated within that limit is unspecified.
- The @var{sproc} calls are made serially, in list element order, one at
- a time. @var{pproc} calls on later elements may execute in parallel
- with the @var{sproc} calls. Exactly which thread makes each
- @var{sproc} call is unspecified.
- This function is designed for individual calculations that can be done
- in parallel, but with results needing to be handled serially, for
- instance to write them to a file. The @var{n} limit on threads
- controls system resource usage when there are many calculations or
- when they might be costly.
- It will be seen that @code{n-for-each-par-map} is like a combination
- of @code{n-par-map} and @code{for-each},
- @example
- (for-each sproc (n-par-map n pproc lst1 ... lstN))
- @end example
- @noindent
- But the actual implementation is more efficient: each @var{sproc}
- call, in turn, can be initiated as soon as the relevant @var{pproc}
- call has completed; it doesn't need to wait for all of them to finish.
- @end deffn
- @c Local Variables:
- @c TeX-master: "guile.texi"
- @c End:
|