@c -*-texinfo-*-
@c This is part of the GNU Guile Reference Manual.
@c Copyright (C) 1996, 1997, 2000, 2001, 2002, 2003, 2004, 2007, 2009, 2010
@c   Free Software Foundation, Inc.
@c See the file guile.texi for copying conditions.

@node Scheduling
@section Threads, Mutexes, Asyncs and Dynamic Roots

@menu
* Arbiters::                    Synchronization primitives.
* Asyncs::                      Asynchronous procedure invocation.
* Threads::                     Multiple threads of execution.
* Mutexes and Condition Variables:: Synchronization primitives.
* Blocking::                    How to block properly in guile mode.
* Critical Sections::           Avoiding concurrency and reentries.
* Fluids and Dynamic States::   Thread-local variables, etc.
* Futures::                     Fine-grain parallelism.
* Parallel Forms::              Parallel execution of forms.
@end menu

@node Arbiters
@subsection Arbiters
@cindex arbiters

Arbiters are synchronization objects: threads can use them to control
access to a shared resource. An arbiter can be locked to indicate a
resource is in use, and unlocked when done.

An arbiter is like a light-weight mutex (@pxref{Mutexes and Condition
Variables}). It uses less memory and may be faster, but there's no way
for a thread to block waiting on an arbiter; a thread can only test an
arbiter and is immediately returned its status.

@deffn {Scheme Procedure} make-arbiter name
@deffnx {C Function} scm_make_arbiter (name)
Return an object of type arbiter and name @var{name}. Its
state is initially unlocked. Arbiters are a way to achieve
process synchronization.
@end deffn

@deffn {Scheme Procedure} try-arbiter arb
@deffnx {C Function} scm_try_arbiter (arb)
If @var{arb} is unlocked, then lock it and return @code{#t}.
If @var{arb} is already locked, then do nothing and return
@code{#f}.
@end deffn

@deffn {Scheme Procedure} release-arbiter arb
@deffnx {C Function} scm_release_arbiter (arb)
If @var{arb} is locked, then unlock it and return @code{#t}. If
@var{arb} is already unlocked, then do nothing and return @code{#f}.

Typical usage is for the thread which locked an arbiter to later
release it, but that's not required; any thread can release it.
@end deffn
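
For example, a thread might claim an arbiter before using a shared
resource and skip the work if another thread already holds it (a
minimal sketch; @code{process-job!} is a hypothetical procedure
standing in for the guarded work):

@lisp
(define job-arbiter (make-arbiter 'job-lock))

(define (maybe-run-job!)
  ;; Try to claim the resource; do nothing if another thread holds it.
  (if (try-arbiter job-arbiter)
      (begin
        (process-job!)                  ; hypothetical guarded work
        (release-arbiter job-arbiter))
      #f))
@end lisp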

@node Asyncs
@subsection Asyncs

@cindex asyncs
@cindex user asyncs
@cindex system asyncs

Asyncs are a means of deferring the execution of Scheme code until it is
safe to do so.

Guile provides two kinds of asyncs that share the basic concept but are
otherwise quite different: system asyncs and user asyncs. System asyncs
are integrated into the core of Guile and are executed automatically
when the system is in a state to allow the execution of Scheme code.
For example, it is not possible to execute Scheme code in a POSIX signal
handler, but such a signal handler can queue a system async to be
executed in the near future, when it is safe to do so.

System asyncs can also be queued for threads other than the current one.
This way, you can cause threads to asynchronously execute arbitrary
code.

User asyncs offer a convenient means of queuing procedures for future
execution and triggering this execution. They will not be executed
automatically.

@menu
* System asyncs::
* User asyncs::
@end menu

@node System asyncs
@subsubsection System asyncs

To cause the future asynchronous execution of a procedure in a given
thread, use @code{system-async-mark}.

Automatic invocation of system asyncs can be temporarily disabled by
calling @code{call-with-blocked-asyncs}. This function works by
temporarily increasing the @emph{async blocking level} of the current
thread while a given procedure is running. The blocking level starts
out at zero, and whenever a safe point is reached, a blocking level
greater than zero will prevent the execution of queued asyncs.

Analogously, the procedure @code{call-with-unblocked-asyncs} will
temporarily decrease the blocking level of the current thread. You
can use it when you want to disable asyncs by default and only allow
them temporarily.

In addition to the C versions of @code{call-with-blocked-asyncs} and
@code{call-with-unblocked-asyncs}, C code can use
@code{scm_dynwind_block_asyncs} and @code{scm_dynwind_unblock_asyncs}
inside a @dfn{dynamic context} (@pxref{Dynamic Wind}) to block or
unblock system asyncs temporarily.

@deffn {Scheme Procedure} system-async-mark proc [thread]
@deffnx {C Function} scm_system_async_mark (proc)
@deffnx {C Function} scm_system_async_mark_for_thread (proc, thread)
Mark @var{proc} (a procedure with zero arguments) for future execution
in @var{thread}. When @var{proc} has already been marked for
@var{thread} but has not been executed yet, this call has no effect.
When @var{thread} is omitted, the thread that called
@code{system-async-mark} is used.

This procedure is not safe to call from signal handlers. Use
@code{scm_sigaction} or @code{scm_sigaction_for_thread} to install
signal handlers.
@end deffn
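
For instance, a thunk can be queued for the current thread; it then
runs at the next safe point (a minimal sketch):

@lisp
;; Queue a thunk for the current thread; it runs at the next safe point.
(system-async-mark
 (lambda () (display "executed asynchronously\n")))
@end lisp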

@deffn {Scheme Procedure} call-with-blocked-asyncs proc
@deffnx {C Function} scm_call_with_blocked_asyncs (proc)
Call @var{proc} and block the execution of system asyncs by one level
for the current thread while it is running. Return the value returned
by @var{proc}. For these two variants, @var{proc} is called with no
arguments; the C variant below calls it with @var{data}.
@end deffn

@deftypefn {C Function} {void *} scm_c_call_with_blocked_asyncs (void * (*proc) (void *data), void *data)
The same but with a C function @var{proc} instead of a Scheme thunk.
@end deftypefn

@deffn {Scheme Procedure} call-with-unblocked-asyncs proc
@deffnx {C Function} scm_call_with_unblocked_asyncs (proc)
Call @var{proc} and unblock the execution of system asyncs by one
level for the current thread while it is running. Return the value
returned by @var{proc}. For these two variants, @var{proc} is called
with no arguments; the C variant below calls it with @var{data}.
@end deffn

@deftypefn {C Function} {void *} scm_c_call_with_unblocked_asyncs (void *(*proc) (void *data), void *data)
The same but with a C function @var{proc} instead of a Scheme thunk.
@end deftypefn

@deftypefn {C Function} void scm_dynwind_block_asyncs ()
During the current dynwind context, increase the blocking of asyncs by
one level. This function must be used inside a pair of calls to
@code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
Wind}).
@end deftypefn

@deftypefn {C Function} void scm_dynwind_unblock_asyncs ()
During the current dynwind context, decrease the blocking of asyncs by
one level. This function must be used inside a pair of calls to
@code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
Wind}).
@end deftypefn

@node User asyncs
@subsubsection User asyncs

A user async is a pair of a thunk (a parameterless procedure) and a
mark. Setting the mark on a user async will cause the thunk to be
executed when the user async is passed to @code{run-asyncs}. Setting
the mark more than once is satisfied by one execution of the thunk.

User asyncs are created with @code{async}. They are marked with
@code{async-mark}.

@deffn {Scheme Procedure} async thunk
@deffnx {C Function} scm_async (thunk)
Create a new user async for the procedure @var{thunk}.
@end deffn

@deffn {Scheme Procedure} async-mark a
@deffnx {C Function} scm_async_mark (a)
Mark the user async @var{a} for future execution.
@end deffn

@deffn {Scheme Procedure} run-asyncs list_of_a
@deffnx {C Function} scm_run_asyncs (list_of_a)
Execute all thunks from the marked asyncs of the list @var{list_of_a}.
@end deffn
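
For example, user asyncs can accumulate work to be triggered later at
a point chosen by the program (a minimal sketch):

@lisp
(define a1 (async (lambda () (display "first\n"))))
(define a2 (async (lambda () (display "second\n"))))

(async-mark a1)
(async-mark a1)           ; marking twice still runs the thunk only once
(run-asyncs (list a1 a2)) ; prints "first" only; a2 was never marked
@end lisp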

@node Threads
@subsection Threads
@cindex threads
@cindex Guile threads
@cindex POSIX threads

Guile supports POSIX threads, unless it was configured with
@code{--without-threads} or the host lacks POSIX thread support. When
thread support is available, the @code{threads} feature is provided
(@pxref{Feature Manipulation, @code{provided?}}).

The procedures below manipulate Guile threads, which are wrappers around
the system's POSIX threads. For application-level parallelism, using
higher-level constructs, such as futures, is recommended
(@pxref{Futures}).

@deffn {Scheme Procedure} all-threads
@deffnx {C Function} scm_all_threads ()
Return a list of all threads.
@end deffn

@deffn {Scheme Procedure} current-thread
@deffnx {C Function} scm_current_thread ()
Return the thread that called this function.
@end deffn

@c begin (texi-doc-string "guile" "call-with-new-thread")
@deffn {Scheme Procedure} call-with-new-thread thunk [handler]
Call @var{thunk} in a new thread and with a new dynamic state,
returning the new thread. The procedure @var{thunk} is called via
@code{with-continuation-barrier}.

When @var{handler} is specified, then @var{thunk} is called from
within a @code{catch} with tag @code{#t} that has @var{handler} as its
handler. This catch is established inside the continuation barrier.

Once @var{thunk} or @var{handler} returns, the return value is made
the @emph{exit value} of the thread and the thread is terminated.
@end deffn

@deftypefn {C Function} SCM scm_spawn_thread (scm_t_catch_body body, void *body_data, scm_t_catch_handler handler, void *handler_data)
Call @var{body} in a new thread, passing it @var{body_data}, returning
the new thread. The function @var{body} is called via
@code{scm_c_with_continuation_barrier}.

When @var{handler} is non-@code{NULL}, @var{body} is called via
@code{scm_internal_catch} with tag @code{SCM_BOOL_T} that has
@var{handler} and @var{handler_data} as the handler and its data. This
catch is established inside the continuation barrier.

Once @var{body} or @var{handler} returns, the return value is made the
@emph{exit value} of the thread and the thread is terminated.
@end deftypefn

@deffn {Scheme Procedure} thread? obj
@deffnx {C Function} scm_thread_p (obj)
Return @code{#t} iff @var{obj} is a thread; otherwise, return
@code{#f}.
@end deffn

@c begin (texi-doc-string "guile" "join-thread")
@deffn {Scheme Procedure} join-thread thread [timeout [timeoutval]]
@deffnx {C Function} scm_join_thread (thread)
@deffnx {C Function} scm_join_thread_timed (thread, timeout, timeoutval)
Wait for @var{thread} to terminate and return its exit value. Threads
that have not been created with @code{call-with-new-thread} or
@code{scm_spawn_thread} have an exit value of @code{#f}. When
@var{timeout} is given, it specifies a point in time where the waiting
should be aborted. It can be either an integer as returned by
@code{current-time} or a pair as returned by @code{gettimeofday}.
When the waiting is aborted, @var{timeoutval} is returned (if it is
specified; @code{#f} is returned otherwise).
@end deffn
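
For example, spawning a thread and then waiting for its exit value
might look like this (a minimal sketch):

@lisp
(define t
  (call-with-new-thread
   (lambda () (* 6 7))                ; the thread's work
   (lambda (key . args) 'failed)))    ; handler for uncaught errors

(join-thread t)
@result{} 42
@end lisp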

@deffn {Scheme Procedure} thread-exited? thread
@deffnx {C Function} scm_thread_exited_p (thread)
Return @code{#t} iff @var{thread} has exited.
@end deffn

@c begin (texi-doc-string "guile" "yield")
@deffn {Scheme Procedure} yield
If one or more threads are waiting to execute, calling @code{yield}
forces an immediate context switch to one of them. Otherwise,
@code{yield} has no effect.
@end deffn

@deffn {Scheme Procedure} cancel-thread thread
@deffnx {C Function} scm_cancel_thread (thread)
Asynchronously notify @var{thread} to exit. Immediately after
receiving this notification, @var{thread} will call its cleanup handler
(if one has been set) and then terminate, aborting any evaluation that
is in progress.

Because Guile threads are isomorphic with POSIX threads, @var{thread}
will not receive its cancellation signal until it reaches a cancellation
point. See your operating system's POSIX threading documentation for
more information on cancellation points; note that in Guile, unlike
native POSIX threads, a thread can receive a cancellation notification
while attempting to lock a mutex.
@end deffn

@deffn {Scheme Procedure} set-thread-cleanup! thread proc
@deffnx {C Function} scm_set_thread_cleanup_x (thread, proc)
Set @var{proc} as the cleanup handler for the thread @var{thread}.
@var{proc}, which must be a thunk, will be called when @var{thread}
exits, either normally or by being canceled. Thread cleanup handlers
can be used to perform useful tasks like releasing resources, such as
locked mutexes, when thread exit cannot be predicted.

The return value of @var{proc} will be set as the @emph{exit value} of
@var{thread}.

To remove a cleanup handler, pass @code{#f} for @var{proc}.
@end deffn

@deffn {Scheme Procedure} thread-cleanup thread
@deffnx {C Function} scm_thread_cleanup (thread)
Return the cleanup handler currently installed for the thread
@var{thread}. If no cleanup handler is currently installed,
@code{thread-cleanup} returns @code{#f}.
@end deffn

Higher level thread procedures are available by loading the
@code{(ice-9 threads)} module. These provide standardized
thread creation.

@deffn macro make-thread proc [args@dots{}]
Apply @var{proc} to @var{args} in a new thread formed by
@code{call-with-new-thread} using a default error handler that
displays the error to the current error port. The @var{args@dots{}}
expressions are evaluated in the new thread.
@end deffn

@deffn macro begin-thread first [rest@dots{}]
Evaluate forms @var{first} and @var{rest} in a new thread formed by
@code{call-with-new-thread} using a default error handler that
displays the error to the current error port.
@end deffn
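
For example, using these macros from @code{(ice-9 threads)} (a minimal
sketch):

@lisp
(use-modules (ice-9 threads))

;; make-thread applies a procedure to arguments in a new thread.
(define t1 (make-thread expt 2 32))

;; begin-thread evaluates its body forms in a new thread.
(define t2 (begin-thread
             (display "working in another thread\n")
             'done))

(join-thread t1)
@result{} 4294967296
@end lisp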

@node Mutexes and Condition Variables
@subsection Mutexes and Condition Variables
@cindex mutex
@cindex condition variable

A mutex is a thread synchronization object: threads can use it to
control access to a shared resource. A mutex can be locked to
indicate a resource is in use, and other threads can then block on the
mutex to wait for the resource (or can just test and do something else
if not available). ``Mutex'' is short for ``mutual exclusion''.

There are two types of mutexes in Guile, ``standard'' and
``recursive''. They're created by @code{make-mutex} and
@code{make-recursive-mutex} respectively; the operation functions are
then common to both.

Note that for both types of mutex there's no protection against a
``deadly embrace''. For instance, if one thread has locked mutex A and
is waiting on mutex B, but another thread owns B and is waiting on A,
then an endless wait will occur (in the current implementation).
Acquiring requisite mutexes in a fixed order (like always A before B)
in all threads is one way to avoid such problems.

@sp 1
@deffn {Scheme Procedure} make-mutex . flags
@deffnx {C Function} scm_make_mutex ()
@deffnx {C Function} scm_make_mutex_with_flags (SCM flags)
Return a new mutex. It is initially unlocked. If @var{flags} is
specified, it must be a list of symbols specifying configuration flags
for the newly-created mutex. The supported flags are:

@table @code
@item unchecked-unlock
Unless this flag is present, a call to @code{unlock-mutex} on the
returned mutex when it is already unlocked will cause an error to be
signalled.

@item allow-external-unlock
Allow the returned mutex to be unlocked by the calling thread even if
it was originally locked by a different thread.

@item recursive
The returned mutex will be recursive.
@end table
@end deffn

@deffn {Scheme Procedure} mutex? obj
@deffnx {C Function} scm_mutex_p (obj)
Return @code{#t} iff @var{obj} is a mutex; otherwise, return
@code{#f}.
@end deffn

@deffn {Scheme Procedure} make-recursive-mutex
@deffnx {C Function} scm_make_recursive_mutex ()
Create a new recursive mutex. It is initially unlocked. Calling this
function is equivalent to calling @code{make-mutex} and specifying the
@code{recursive} flag.
@end deffn

@deffn {Scheme Procedure} lock-mutex mutex [timeout [owner]]
@deffnx {C Function} scm_lock_mutex (mutex)
@deffnx {C Function} scm_lock_mutex_timed (mutex, timeout, owner)
Lock @var{mutex}. If the mutex is already locked, then block and
return only when @var{mutex} has been acquired.

When @var{timeout} is given, it specifies a point in time where the
waiting should be aborted. It can be either an integer as returned
by @code{current-time} or a pair as returned by @code{gettimeofday}.
When the waiting is aborted, @code{#f} is returned.

When @var{owner} is given, it specifies an owner for @var{mutex} other
than the calling thread. @var{owner} may also be @code{#f},
indicating that the mutex should be locked but left unowned.

For standard mutexes (@code{make-mutex}), an error is signalled if
the thread has itself already locked @var{mutex}.

For a recursive mutex (@code{make-recursive-mutex}), if the thread has
itself already locked @var{mutex}, then a further @code{lock-mutex}
call increments the lock count. An additional @code{unlock-mutex}
will be required to finally release.

If @var{mutex} was locked by a thread that exited before unlocking it,
the next attempt to lock @var{mutex} will succeed, but
@code{abandoned-mutex-error} will be signalled.

When a system async (@pxref{System asyncs}) is activated for a thread
blocked in @code{lock-mutex}, the wait is interrupted and the async is
executed. When the async returns, the wait resumes.
@end deffn
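
A typical pattern is to bracket access to shared data with
@code{lock-mutex} and @code{unlock-mutex} (a minimal sketch;
@code{with-mutex}, described below, is usually preferable because it
also unlocks the mutex on a non-local exit):

@lisp
(define counter 0)
(define counter-mutex (make-mutex))

(define (increment-counter!)
  (lock-mutex counter-mutex)
  (set! counter (+ counter 1))
  (unlock-mutex counter-mutex))
@end lisp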

@deftypefn {C Function} void scm_dynwind_lock_mutex (SCM mutex)
Arrange for @var{mutex} to be locked whenever the current dynwind
context is entered and to be unlocked when it is exited.
@end deftypefn

@deffn {Scheme Procedure} try-mutex mutex
@deffnx {C Function} scm_try_mutex (mutex)
Try to lock @var{mutex} as per @code{lock-mutex}. If @var{mutex} can
be acquired immediately then this is done and the return is @code{#t}.
If @var{mutex} is locked by some other thread then nothing is done and
the return is @code{#f}.
@end deffn

@deffn {Scheme Procedure} unlock-mutex mutex [condvar [timeout]]
@deffnx {C Function} scm_unlock_mutex (mutex)
@deffnx {C Function} scm_unlock_mutex_timed (mutex, condvar, timeout)
Unlock @var{mutex}. An error is signalled if @var{mutex} is not locked
and was not created with the @code{unchecked-unlock} flag set, or if
@var{mutex} is locked by a thread other than the calling thread and was
not created with the @code{allow-external-unlock} flag set.

If @var{condvar} is given, it specifies a condition variable upon
which the calling thread will wait to be signalled before returning.
(This behavior is very similar to that of
@code{wait-condition-variable}, except that the mutex is left in an
unlocked state when the function returns.)

When @var{timeout} is also given, it specifies a point in time where
the waiting should be aborted. It can be either an integer as
returned by @code{current-time} or a pair as returned by
@code{gettimeofday}. When the waiting is aborted, @code{#f} is
returned. Otherwise the function returns @code{#t}.
@end deffn

@deffn {Scheme Procedure} mutex-owner mutex
@deffnx {C Function} scm_mutex_owner (mutex)
Return the current owner of @var{mutex}, in the form of a thread or
@code{#f} (indicating no owner). Note that a mutex may be unowned but
still locked.
@end deffn

@deffn {Scheme Procedure} mutex-level mutex
@deffnx {C Function} scm_mutex_level (mutex)
Return the current lock level of @var{mutex}. If @var{mutex} is
currently unlocked, this value will be 0; otherwise, it will be the
number of times @var{mutex} has been recursively locked by its current
owner.
@end deffn

@deffn {Scheme Procedure} mutex-locked? mutex
@deffnx {C Function} scm_mutex_locked_p (mutex)
Return @code{#t} if @var{mutex} is locked, regardless of ownership;
otherwise, return @code{#f}.
@end deffn

@deffn {Scheme Procedure} make-condition-variable
@deffnx {C Function} scm_make_condition_variable ()
Return a new condition variable.
@end deffn

@deffn {Scheme Procedure} condition-variable? obj
@deffnx {C Function} scm_condition_variable_p (obj)
Return @code{#t} iff @var{obj} is a condition variable; otherwise,
return @code{#f}.
@end deffn

@deffn {Scheme Procedure} wait-condition-variable condvar mutex [time]
@deffnx {C Function} scm_wait_condition_variable (condvar, mutex, time)
Wait until @var{condvar} has been signalled. While waiting,
@var{mutex} is atomically unlocked (as with @code{unlock-mutex}) and
is locked again when this function returns. When @var{time} is given,
it specifies a point in time where the waiting should be aborted. It
can be either an integer as returned by @code{current-time} or a pair
as returned by @code{gettimeofday}. When the waiting is aborted,
@code{#f} is returned. When the condition variable has in fact been
signalled, @code{#t} is returned. The mutex is re-locked in any case
before @code{wait-condition-variable} returns.

When a system async is activated for a thread that is blocked in a
call to @code{wait-condition-variable}, the waiting is interrupted,
the mutex is locked, and the async is executed. When the async
returns, the mutex is unlocked again and the waiting is resumed. When
the thread blocks while re-acquiring the mutex, execution of asyncs is
blocked.
@end deffn

@deffn {Scheme Procedure} signal-condition-variable condvar
@deffnx {C Function} scm_signal_condition_variable (condvar)
Wake up one thread that is waiting for @var{condvar}.
@end deffn

@deffn {Scheme Procedure} broadcast-condition-variable condvar
@deffnx {C Function} scm_broadcast_condition_variable (condvar)
Wake up all threads that are waiting for @var{condvar}.
@end deffn
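
For example, a consumer thread can wait on a condition variable until
a producer has made a value available (a minimal sketch; as usual with
condition variables, the wait is placed in a loop so the condition is
re-checked after each wakeup):

@lisp
(define item-mutex (make-mutex))
(define item-ready (make-condition-variable))
(define item #f)

(define (consume)
  (lock-mutex item-mutex)
  (let loop ()
    (if item
        (let ((result item))
          (set! item #f)
          (unlock-mutex item-mutex)
          result)
        (begin
          ;; Atomically releases item-mutex while waiting.
          (wait-condition-variable item-ready item-mutex)
          (loop)))))

(define (produce x)
  (lock-mutex item-mutex)
  (set! item x)
  (signal-condition-variable item-ready)
  (unlock-mutex item-mutex))
@end lisp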

@sp 1
The following are higher level operations on mutexes. These are
available from

@example
(use-modules (ice-9 threads))
@end example

@deffn macro with-mutex mutex [body@dots{}]
Lock @var{mutex}, evaluate the @var{body} forms, then unlock
@var{mutex}. The return value is the return from the last @var{body}
form.

The lock, body and unlock form the branches of a @code{dynamic-wind}
(@pxref{Dynamic Wind}), so @var{mutex} is automatically unlocked if an
error or new continuation exits @var{body}, and is re-locked if
@var{body} is re-entered by a captured continuation.
@end deffn

@deffn macro monitor body@dots{}
Evaluate the @var{body} forms, with a mutex locked so only one thread
can execute that code at any one time. The return value is the return
from the last @var{body} form.

Each @code{monitor} form has its own private mutex and the locking and
evaluation is as per @code{with-mutex} above. A standard mutex
(@code{make-mutex}) is used, which means @var{body} must not
recursively re-enter the @code{monitor} form.

The term ``monitor'' comes from operating system theory, where it
means a particular bit of code managing access to some resource and
which only ever executes on behalf of one process at any one time.
@end deffn
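
For example, the counter sketch shown earlier can be written more
robustly with @code{with-mutex}, or more concisely with @code{monitor}
(a minimal sketch, assuming @code{(ice-9 threads)} has been loaded):

@lisp
(define counter 0)
(define counter-mutex (make-mutex))

(define (increment-counter!)
  (with-mutex counter-mutex
    (set! counter (+ counter 1))))

(define (decrement-counter!)
  (monitor
    (set! counter (- counter 1))))
@end lisp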

@node Blocking
@subsection Blocking in Guile Mode

Up to Guile version 1.8, a thread blocked in guile mode would prevent
the garbage collector from running. Thus threads had to explicitly
leave guile mode with @code{scm_without_guile ()} before making a
potentially blocking call such as a mutex lock, a @code{select ()}
system call, etc. The following functions could be used to temporarily
leave guile mode or to perform some common blocking operations in a
supported way.

Starting from Guile 2.0, blocked threads no longer hinder garbage
collection. Thus, the functions below are not needed anymore. They can
still be used to inform the GC that a thread is about to block, giving
it a (small) optimization opportunity for ``stop the world'' garbage
collections, should they occur while the thread is blocked.

@deftypefn {C Function} {void *} scm_without_guile (void *(*func) (void *), void *data)
Leave guile mode, call @var{func} on @var{data}, enter guile mode and
return the result of calling @var{func}.

While a thread has left guile mode, it must not call any libguile
functions except @code{scm_with_guile} or @code{scm_without_guile} and
must not use any libguile macros. Also, local variables of type
@code{SCM} that are allocated while not in guile mode are not
protected from the garbage collector.

When used from non-guile mode, calling @code{scm_without_guile} is
still allowed: it simply calls @var{func}. In that way, you can leave
guile mode without having to know whether the current thread is in
guile mode or not.
@end deftypefn

@deftypefn {C Function} int scm_pthread_mutex_lock (pthread_mutex_t *mutex)
Like @code{pthread_mutex_lock}, but leaves guile mode while waiting for
the mutex.
@end deftypefn

@deftypefn {C Function} int scm_pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex)
@deftypefnx {C Function} int scm_pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex, struct timespec *abstime)
Like @code{pthread_cond_wait} and @code{pthread_cond_timedwait}, but
leaves guile mode while waiting for the condition variable.
@end deftypefn

@deftypefn {C Function} int scm_std_select (int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout)
Like @code{select} but leaves guile mode while waiting. Also, the
delivery of a system async causes this function to be interrupted with
error code @code{EINTR}.
@end deftypefn

@deftypefn {C Function} {unsigned int} scm_std_sleep ({unsigned int} seconds)
Like @code{sleep}, but leaves guile mode while sleeping. Also, the
delivery of a system async causes this function to be interrupted.
@end deftypefn

@deftypefn {C Function} {unsigned long} scm_std_usleep ({unsigned long} usecs)
Like @code{usleep}, but leaves guile mode while sleeping. Also, the
delivery of a system async causes this function to be interrupted.
@end deftypefn

@node Critical Sections
@subsection Critical Sections

@deffn {C Macro} SCM_CRITICAL_SECTION_START
@deffnx {C Macro} SCM_CRITICAL_SECTION_END
These two macros can be used to delimit a critical section.
Syntactically, they are both statements and need to be followed
immediately by a semicolon.

Executing @code{SCM_CRITICAL_SECTION_START} will lock a recursive
mutex and block the execution of system asyncs. Executing
@code{SCM_CRITICAL_SECTION_END} will unblock the execution of system
asyncs and unlock the mutex. Thus, the code that executes between
these two macros can only be executed in one thread at any one time
and no system asyncs will run. However, because the mutex is a
recursive one, the code might still be reentered by the same thread.
You must either allow for this reentry or avoid it by careful coding.
On the other hand, critical sections delimited with these macros can
be nested since the mutex is recursive.

You must make sure that for each @code{SCM_CRITICAL_SECTION_START},
the corresponding @code{SCM_CRITICAL_SECTION_END} is always executed.
This means, for example, that no non-local exit (such as a signalled
error) may happen between the two.
@end deffn

@deftypefn {C Function} void scm_dynwind_critical_section (SCM mutex)
Call @code{scm_dynwind_lock_mutex} on @var{mutex} and call
@code{scm_dynwind_block_asyncs}. When @var{mutex} is false, a recursive
mutex provided by Guile is used instead.

The effect of a call to @code{scm_dynwind_critical_section} is that
the current dynwind context (@pxref{Dynamic Wind}) turns into a
critical section. Because of the locked mutex, no second thread can
enter it concurrently and because of the blocked asyncs, no system
async can reenter it from the current thread.

When the current thread reenters the critical section anyway, the kind
of @var{mutex} determines what happens: When @var{mutex} is recursive,
the reentry is allowed. When it is a normal mutex, an error is
signalled.
@end deftypefn

@node Fluids and Dynamic States
@subsection Fluids and Dynamic States

@cindex fluids

A @emph{fluid} is an object that can store one value per @emph{dynamic
state}. Each thread has a current dynamic state, and when accessing a
fluid, this current dynamic state is used to provide the actual value.
In this way, fluids can be used for thread local storage, but they are
in fact more flexible: dynamic states are objects of their own and can
be made current for more than one thread at the same time, or only be
made current temporarily, for example.

Fluids can also be used to simulate the desirable effects of
dynamically scoped variables. Dynamically scoped variables are useful
when you want to set a variable to a value during some dynamic extent
in the execution of your program and have it revert to its original
value when the control flow is outside of this dynamic extent. See the
description of @code{with-fluids} below for details.

New fluids are created with @code{make-fluid} and @code{fluid?} is
used for testing whether an object is actually a fluid. The values
stored in a fluid can be accessed with @code{fluid-ref} and
@code{fluid-set!}.

@deffn {Scheme Procedure} make-fluid
@deffnx {C Function} scm_make_fluid ()
Return a newly created fluid.
Fluids are objects that can hold one
value per dynamic state. That is, modifications to this value are
only visible to code that executes with the same dynamic state as
the modifying code. When a new dynamic state is constructed, it
inherits the values from its parent. Because each thread normally executes
with its own dynamic state, you can use fluids for thread local storage.
@end deffn

@deffn {Scheme Procedure} make-unbound-fluid
@deffnx {C Function} scm_make_unbound_fluid ()
Return a new fluid that is initially unbound (instead of being
implicitly bound to @code{#f}).
@end deffn

@deffn {Scheme Procedure} fluid? obj
@deffnx {C Function} scm_fluid_p (obj)
Return @code{#t} iff @var{obj} is a fluid; otherwise, return
@code{#f}.
@end deffn

@deffn {Scheme Procedure} fluid-ref fluid
@deffnx {C Function} scm_fluid_ref (fluid)
Return the value associated with @var{fluid} in the current
dynamic state. If @var{fluid} has not been set, then return
@code{#f}. Calling @code{fluid-ref} on an unbound fluid produces a
runtime error.
@end deffn

@deffn {Scheme Procedure} fluid-set! fluid value
@deffnx {C Function} scm_fluid_set_x (fluid, value)
Set the value associated with @var{fluid} in the current dynamic state.
@end deffn

@deffn {Scheme Procedure} fluid-unset! fluid
@deffnx {C Function} scm_fluid_unset_x (fluid)
Disassociate the given fluid from any value, making it unbound.
@end deffn

@deffn {Scheme Procedure} fluid-bound? fluid
@deffnx {C Function} scm_fluid_bound_p (fluid)
Return @code{#t} iff the given fluid is bound to a value, otherwise
@code{#f}.
@end deffn

@code{with-fluids*} temporarily changes the values of one or more fluids,
so that the given procedure and each procedure called by it access the
given values. After the procedure returns, the old values are restored.

@deffn {Scheme Procedure} with-fluid* fluid value thunk
@deffnx {C Function} scm_with_fluid (fluid, value, thunk)
Set @var{fluid} to @var{value} temporarily, and call @var{thunk}.
@var{thunk} must be a procedure with no arguments.
@end deffn

@deffn {Scheme Procedure} with-fluids* fluids values thunk
@deffnx {C Function} scm_with_fluids (fluids, values, thunk)
Set @var{fluids} to @var{values} temporarily, and call @var{thunk}.
@var{fluids} must be a list of fluids and @var{values} must be a list
of the corresponding values. Each substitution is done in the order
given. @var{thunk} must be a procedure with no arguments. It is
called inside a @code{dynamic-wind} and the fluids are set/restored
when control enters or leaves the established dynamic extent.
@end deffn

@deffn {Scheme Macro} with-fluids ((fluid value) @dots{}) body@dots{}
Execute @var{body} while each @var{fluid} is set to the corresponding
@var{value}. Both @var{fluid} and @var{value} are evaluated and
@var{fluid} must yield a fluid. @var{body} is executed inside a
@code{dynamic-wind} and the fluids are set/restored when control
enters or leaves the established dynamic extent.
@end deffn
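
For example, a fluid can carry a per-dynamic-extent setting such as an
output width (a minimal sketch; @code{output-width} is a hypothetical
fluid introduced for illustration):

@lisp
(define output-width (make-fluid))
(fluid-set! output-width 80)

(fluid-ref output-width)
@result{} 80

(with-fluids ((output-width 132))
  (fluid-ref output-width))
@result{} 132

(fluid-ref output-width)     ; the old value is restored on exit
@result{} 80
@end lisp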

@deftypefn {C Function} SCM scm_c_with_fluids (SCM fluids, SCM vals, SCM (*cproc)(void *), void *data)
@deftypefnx {C Function} SCM scm_c_with_fluid (SCM fluid, SCM val, SCM (*cproc)(void *), void *data)
The function @code{scm_c_with_fluids} is like @code{scm_with_fluids}
except that it takes a C function to call instead of a Scheme thunk.
The function @code{scm_c_with_fluid} is similar but only allows one
fluid to be set instead of a list.
@end deftypefn

@deftypefn {C Function} void scm_dynwind_fluid (SCM fluid, SCM val)
This function must be used inside a pair of calls to
@code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
Wind}). During the dynwind context, the fluid @var{fluid} is set to
@var{val}.

More precisely, the value of the fluid is swapped with a `backup'
value whenever the dynwind context is entered or left. The backup
value is initialized with the @var{val} argument.
@end deftypefn

@deffn {Scheme Procedure} make-dynamic-state [parent]
@deffnx {C Function} scm_make_dynamic_state (parent)
Return a copy of the dynamic state object @var{parent}
or of the current dynamic state when @var{parent} is omitted.
@end deffn

@deffn {Scheme Procedure} dynamic-state? obj
@deffnx {C Function} scm_dynamic_state_p (obj)
Return @code{#t} if @var{obj} is a dynamic state object;
return @code{#f} otherwise.
@end deffn

@deftypefn {C Procedure} int scm_is_dynamic_state (SCM obj)
Return non-zero if @var{obj} is a dynamic state object;
return zero otherwise.
@end deftypefn

@deffn {Scheme Procedure} current-dynamic-state
@deffnx {C Function} scm_current_dynamic_state ()
Return the current dynamic state object.
@end deffn

@deffn {Scheme Procedure} set-current-dynamic-state state
@deffnx {C Function} scm_set_current_dynamic_state (state)
Set the current dynamic state object to @var{state}
and return the previous current dynamic state object.
@end deffn

@deffn {Scheme Procedure} with-dynamic-state state proc
@deffnx {C Function} scm_with_dynamic_state (state, proc)
Call @var{proc} while @var{state} is the current dynamic
state object.
@end deffn
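
Building on the fluid semantics described above, a copy of the current
dynamic state can be used to confine fluid modifications (a minimal
sketch based on the documented semantics; @code{f} is a hypothetical
fluid introduced for illustration):

@lisp
(define f (make-fluid))
(fluid-set! f 'original)

(define state (make-dynamic-state))   ; copy of the current dynamic state

(with-dynamic-state state
  (lambda ()
    (fluid-set! f 'changed)           ; affects only the copied state
    (fluid-ref f)))
@result{} changed

(fluid-ref f)
@result{} original
@end lisp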

@deftypefn {C Procedure} void scm_dynwind_current_dynamic_state (SCM state)
Set the current dynamic state to @var{state} for the current dynwind
context.
@end deftypefn

@deftypefn {C Procedure} {void *} scm_c_with_dynamic_state (SCM state, void *(*func)(void *), void *data)
Like @code{scm_with_dynamic_state}, but call @var{func} with
@var{data}.
@end deftypefn

@node Futures
@subsection Futures
@cindex futures
@cindex fine-grain parallelism
@cindex parallelism

The @code{(ice-9 futures)} module provides @dfn{futures}, a construct
for fine-grain parallelism. A future is a wrapper around an expression
whose computation may occur in parallel with the code of the calling
thread, and possibly in parallel with other futures. Like promises,
futures are essentially proxies that can be queried to obtain the value
of the enclosed expression:

@lisp
(touch (future (+ 2 3)))
@result{} 5
@end lisp

However, unlike promises, the expression associated with a future may be
evaluated on another CPU core, should one be available. This supports
@dfn{fine-grain parallelism}, because even relatively small computations
can be embedded in futures. Consider this sequential code:

@lisp
(define (find-prime lst1 lst2)
  (or (find prime? lst1)
      (find prime? lst2)))
@end lisp

The two arms of @code{or} are potentially computation-intensive. They
are independent of one another, yet they are evaluated sequentially
when the first one returns @code{#f}. Using futures, one could rewrite
it like this:

@lisp
(define (find-prime lst1 lst2)
  (let ((f (future (find prime? lst2))))
    (or (find prime? lst1)
        (touch f))))
@end lisp

This preserves the semantics of @code{find-prime}. On a multi-core
machine, though, the computation of @code{(find prime? lst2)} may be
done in parallel with that of the other @code{find} call, which can
reduce the execution time of @code{find-prime}.

Note that futures are intended for the evaluation of purely functional
expressions. Expressions that have side-effects or rely on I/O may
require additional care, such as explicit synchronization
(@pxref{Mutexes and Condition Variables}).

Guile's futures are implemented on top of POSIX threads
(@pxref{Threads}). Internally, a fixed-size pool of threads is used to
evaluate futures, such that offloading the evaluation of an expression
to another thread doesn't incur thread creation costs. By default, the
pool contains one thread per available CPU core, minus one, to account
for the main thread. The number of available CPU cores is determined
using @code{current-processor-count} (@pxref{Processes}).

@deffn {Scheme Syntax} future exp
Return a future for expression @var{exp}. This is equivalent to:

@lisp
(make-future (lambda () exp))
@end lisp
@end deffn

@deffn {Scheme Procedure} make-future thunk
Return a future for @var{thunk}, a zero-argument procedure.

This procedure returns immediately. Execution of @var{thunk} may begin
in parallel with the calling thread's computations, if idle CPU cores
are available, or it may start when @code{touch} is invoked on the
returned future.

If the execution of @var{thunk} throws an exception, that exception will
be re-thrown when @code{touch} is invoked on the returned future.
@end deffn

@deffn {Scheme Procedure} future? obj
Return @code{#t} if @var{obj} is a future.
@end deffn

@deffn {Scheme Procedure} touch f
Return the result of the expression embedded in future @var{f}.

If the result was already computed in parallel, @code{touch} returns
instantaneously. Otherwise, it waits for the computation to complete,
if it already started, or initiates it.
@end deffn

@node Parallel Forms
@subsection Parallel forms
@cindex parallel forms

The functions described in this section are available from

@example
(use-modules (ice-9 threads))
@end example

They provide high-level parallel constructs. The following functions
are implemented in terms of futures (@pxref{Futures}). Thus they are
relatively cheap as they re-use existing threads, and portable, since
they automatically use one thread per available CPU core.

@deffn syntax parallel expr1 @dots{} exprN
Evaluate each @var{expr} expression in parallel, each in its own thread.
Return the results as a set of @var{N} multiple values
(@pxref{Multiple Values}).
@end deffn

@deffn syntax letpar ((var1 expr1) @dots{} (varN exprN)) body@dots{}
Evaluate each @var{expr} in parallel, each in its own thread, then bind
the results to the corresponding @var{var} variables and evaluate
@var{body}.

@code{letpar} is like @code{let} (@pxref{Local Bindings}), but all the
expressions for the bindings are evaluated in parallel.
@end deffn

@deffn {Scheme Procedure} par-map proc lst1 @dots{} lstN
@deffnx {Scheme Procedure} par-for-each proc lst1 @dots{} lstN
Call @var{proc} on the elements of the given lists. @code{par-map}
returns a list comprising the return values from @var{proc}.
@code{par-for-each} returns an unspecified value, but waits for all
calls to complete.

The @var{proc} calls are @code{(@var{proc} @var{elem1} @dots{}
@var{elemN})}, where each @var{elem} is from the corresponding
@var{lst}. Each @var{lst} must be the same length. The calls are
potentially made in parallel, depending on the number of CPU cores
available.

These functions are like @code{map} and @code{for-each} (@pxref{List
Mapping}), but make their @var{proc} calls in parallel.
@end deffn
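
For example (a minimal sketch; @code{slow-square} stands in for any
costly computation):

@lisp
(define (slow-square x)
  (* x x))                      ; imagine something expensive here

(letpar ((a (slow-square 1000))
         (b (slow-square 2000)))
  (+ a b))
@result{} 5000000

(par-map slow-square '(1 2 3 4))
@result{} (1 4 9 16)
@end lisp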

Unlike those above, the functions described below take a number of
threads as an argument. This makes them inherently non-portable since
the specified number of threads may differ from the number of available
CPU cores as returned by @code{current-processor-count}
(@pxref{Processes}). In addition, these functions create the specified
number of threads when they are called and terminate them upon
completion, which makes them quite expensive.

Therefore, they should be avoided.

@deffn {Scheme Procedure} n-par-map n proc lst1 @dots{} lstN
@deffnx {Scheme Procedure} n-par-for-each n proc lst1 @dots{} lstN
Call @var{proc} on the elements of the given lists, in the same way as
@code{par-map} and @code{par-for-each} above, but use no more than
@var{n} threads at any one time. The order in which calls are
initiated within that thread limit is unspecified.

These functions are good for controlling resource consumption if
@var{proc} calls might be costly, or if there are many to be made. On
a dual-CPU system, for instance, @math{@var{n}=4} might be enough to
keep the CPUs utilized, and not consume too much memory.
@end deffn

@deffn {Scheme Procedure} n-for-each-par-map n sproc pproc lst1 @dots{} lstN
Apply @var{pproc} to the elements of the given lists, and apply
@var{sproc} to each result returned by @var{pproc}. The final return
value is unspecified, but all calls will have been completed before
returning.

The calls made are @code{(@var{sproc} (@var{pproc} @var{elem1} @dots{}
@var{elemN}))}, where each @var{elem} is from the corresponding
@var{lst}. Each @var{lst} must have the same number of elements.

The @var{pproc} calls are made in parallel, in separate threads. No more
than @var{n} threads are used at any one time. The order in which
@var{pproc} calls are initiated within that limit is unspecified.

The @var{sproc} calls are made serially, in list element order, one at
a time. @var{pproc} calls on later elements may execute in parallel
with the @var{sproc} calls. Exactly which thread makes each
@var{sproc} call is unspecified.

This function is designed for individual calculations that can be done
in parallel, but with results needing to be handled serially, for
instance to write them to a file. The @var{n} limit on threads
controls system resource usage when there are many calculations or
when they might be costly.

It will be seen that @code{n-for-each-par-map} is like a combination
of @code{n-par-map} and @code{for-each},

@example
(for-each sproc (n-par-map n pproc lst1 ... lstN))
@end example

@noindent
But the actual implementation is more efficient since each @var{sproc}
call, in turn, can be initiated once the relevant @var{pproc} call has
completed; it doesn't need to wait for all of them to finish.
@end deffn

@c Local Variables:
@c TeX-master: "guile.texi"
@c End: