@c -*-texinfo-*-
@c This is part of the GNU Guile Reference Manual.
@c Copyright (C) 1996, 1997, 2000, 2001, 2002, 2003, 2004, 2007, 2009, 2010, 2012, 2013
@c Free Software Foundation, Inc.
@c See the file guile.texi for copying conditions.

@node Scheduling
@section Threads, Mutexes, Asyncs and Dynamic Roots

@menu
* Arbiters::                    Synchronization primitives.
* Asyncs::                      Asynchronous procedure invocation.
* Threads::                     Multiple threads of execution.
* Mutexes and Condition Variables:: Synchronization primitives.
* Blocking::                    How to block properly in guile mode.
* Critical Sections::           Avoiding concurrency and reentries.
* Fluids and Dynamic States::   Thread-local variables, etc.
* Parameters::                  Dynamic scoping in Scheme.
* Futures::                     Fine-grain parallelism.
* Parallel Forms::              Parallel execution of forms.
@end menu

@node Arbiters
@subsection Arbiters
@cindex arbiters

Arbiters are synchronization objects; threads can use them to control
access to a shared resource. An arbiter can be locked to indicate a
resource is in use, and unlocked when done.

An arbiter is like a light-weight mutex (@pxref{Mutexes and Condition
Variables}). It uses less memory and may be faster, but there's no
way for a thread to block waiting on an arbiter; a thread can only
test an arbiter's state and act on the result.

@deffn {Scheme Procedure} make-arbiter name
@deffnx {C Function} scm_make_arbiter (name)
Return an object of type arbiter and name @var{name}. Its
state is initially unlocked. Arbiters are a way to achieve
process synchronization.
@end deffn

@deffn {Scheme Procedure} try-arbiter arb
@deffnx {C Function} scm_try_arbiter (arb)
If @var{arb} is unlocked, then lock it and return @code{#t}.
If @var{arb} is already locked, then do nothing and return
@code{#f}.
@end deffn

@deffn {Scheme Procedure} release-arbiter arb
@deffnx {C Function} scm_release_arbiter (arb)
If @var{arb} is locked, then unlock it and return @code{#t}. If
@var{arb} is already unlocked, then do nothing and return @code{#f}.

Typical usage is for the thread which locked an arbiter to later
release it, but that's not required; any thread can release it.
@end deffn
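
For illustration, here is a minimal sketch of an arbiter guarding a
shared device without blocking; @code{send-to-printer} and
@code{queue-job} are hypothetical procedures standing in for real work:

@example
(define printer-arbiter (make-arbiter 'printer))

(define (print-job job)
  (if (try-arbiter printer-arbiter)
      (begin
        (send-to-printer job)              ; hypothetical worker
        (release-arbiter printer-arbiter))
      ;; The resource is busy; do something else instead of blocking.
      (queue-job job)))                    ; hypothetical fallback
@end example
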
@node Asyncs
@subsection Asyncs

@cindex asyncs
@cindex user asyncs
@cindex system asyncs

Asyncs are a means of deferring the execution of Scheme code until it is
safe to do so.

Guile provides two kinds of asyncs that share the basic concept but are
otherwise quite different: system asyncs and user asyncs. System asyncs
are integrated into the core of Guile and are executed automatically
when the system is in a state to allow the execution of Scheme code.
For example, it is not possible to execute Scheme code in a POSIX signal
handler, but such a signal handler can queue a system async to be
executed in the near future, when it is safe to do so.

System asyncs can also be queued for threads other than the current one.
This way, you can cause threads to asynchronously execute arbitrary
code.

User asyncs offer a convenient means of queuing procedures for future
execution and triggering this execution. They will not be executed
automatically.

@menu
* System asyncs::
* User asyncs::
@end menu

@node System asyncs
@subsubsection System asyncs

To cause the future asynchronous execution of a procedure in a given
thread, use @code{system-async-mark}.

Automatic invocation of system asyncs can be temporarily disabled by
calling @code{call-with-blocked-asyncs}. This function works by
temporarily increasing the @emph{async blocking level} of the current
thread while a given procedure is running. The blocking level starts
out at zero, and whenever a safe point is reached, a blocking level
greater than zero will prevent the execution of queued asyncs.

Analogously, the procedure @code{call-with-unblocked-asyncs} will
temporarily decrease the blocking level of the current thread. You
can use it when you want to disable asyncs by default and only allow
them temporarily.

In addition to the C versions of @code{call-with-blocked-asyncs} and
@code{call-with-unblocked-asyncs}, C code can use
@code{scm_dynwind_block_asyncs} and @code{scm_dynwind_unblock_asyncs}
inside a @dfn{dynamic context} (@pxref{Dynamic Wind}) to block or
unblock system asyncs temporarily.

@deffn {Scheme Procedure} system-async-mark proc [thread]
@deffnx {C Function} scm_system_async_mark (proc)
@deffnx {C Function} scm_system_async_mark_for_thread (proc, thread)
Mark @var{proc} (a procedure with zero arguments) for future execution
in @var{thread}. When @var{proc} has already been marked for
@var{thread} but has not been executed yet, this call has no effect.
When @var{thread} is omitted, the thread that called
@code{system-async-mark} is used.

This procedure is not safe to be called from signal handlers. Use
@code{scm_sigaction} or @code{scm_sigaction_for_thread} to install
signal handlers.
@end deffn

@deffn {Scheme Procedure} call-with-blocked-asyncs proc
@deffnx {C Function} scm_call_with_blocked_asyncs (proc)
Call @var{proc} and block the execution of system asyncs by one level
for the current thread while it is running. Return the value returned
by @var{proc}. For the first two variants, call @var{proc} with no
arguments; for the third, call it with @var{data}.
@end deffn

@deftypefn {C Function} {void *} scm_c_call_with_blocked_asyncs (void * (*proc) (void *data), void *data)
The same but with a C function @var{proc} instead of a Scheme thunk.
@end deftypefn

@deffn {Scheme Procedure} call-with-unblocked-asyncs proc
@deffnx {C Function} scm_call_with_unblocked_asyncs (proc)
Call @var{proc} and unblock the execution of system asyncs by one
level for the current thread while it is running. Return the value
returned by @var{proc}. For the first two variants, call @var{proc}
with no arguments; for the third, call it with @var{data}.
@end deffn

@deftypefn {C Function} {void *} scm_c_call_with_unblocked_asyncs (void *(*proc) (void *data), void *data)
The same but with a C function @var{proc} instead of a Scheme thunk.
@end deftypefn

@deftypefn {C Function} void scm_dynwind_block_asyncs ()
During the current dynwind context, increase the blocking of asyncs by
one level. This function must be used inside a pair of calls to
@code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
Wind}).
@end deftypefn

@deftypefn {C Function} void scm_dynwind_unblock_asyncs ()
During the current dynwind context, decrease the blocking of asyncs by
one level. This function must be used inside a pair of calls to
@code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
Wind}).
@end deftypefn
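
As a rough sketch of how these pieces fit together, the following marks
an async for another thread and shows how that thread can defer async
execution around a sensitive update; @code{other-thread} and
@code{update-table!} are hypothetical names:

@example
;; Ask other-thread to run a thunk at its next safe point.
(system-async-mark
 (lambda () (display "hello from an async\n"))
 other-thread)

;; Meanwhile, in that thread: keep asyncs from running during a
;; critical update; queued asyncs run once the procedure returns.
(call-with-blocked-asyncs
 (lambda ()
   (update-table!)))
@end example
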
@node User asyncs
@subsubsection User asyncs

A user async is a pair of a thunk (a parameterless procedure) and a
mark. Setting the mark on a user async will cause the thunk to be
executed when the user async is passed to @code{run-asyncs}. Setting
the mark more than once is satisfied by one execution of the thunk.

User asyncs are created with @code{async}. They are marked with
@code{async-mark}.

@deffn {Scheme Procedure} async thunk
@deffnx {C Function} scm_async (thunk)
Create a new user async for the procedure @var{thunk}.
@end deffn

@deffn {Scheme Procedure} async-mark a
@deffnx {C Function} scm_async_mark (a)
Mark the user async @var{a} for future execution.
@end deffn

@deffn {Scheme Procedure} run-asyncs list_of_a
@deffnx {C Function} scm_run_asyncs (list_of_a)
Execute all thunks from the marked asyncs of the list @var{list_of_a}.
@end deffn
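
A minimal sketch of the user async life cycle:

@example
(define a (async (lambda () (display "ran\n"))))

(async-mark a)
(async-mark a)          ; marking twice still runs the thunk once

(run-asyncs (list a))
@print{} ran
@end example
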
@node Threads
@subsection Threads
@cindex threads
@cindex Guile threads
@cindex POSIX threads

Guile supports POSIX threads, unless it was configured with
@code{--without-threads} or the host lacks POSIX thread support. When
thread support is available, the @code{threads} feature is provided
(@pxref{Feature Manipulation, @code{provided?}}).

The procedures below manipulate Guile threads, which are wrappers around
the system's POSIX threads. For application-level parallelism, using
higher-level constructs, such as futures, is recommended
(@pxref{Futures}).

@deffn {Scheme Procedure} all-threads
@deffnx {C Function} scm_all_threads ()
Return a list of all threads.
@end deffn

@deffn {Scheme Procedure} current-thread
@deffnx {C Function} scm_current_thread ()
Return the thread that called this function.
@end deffn

@c begin (texi-doc-string "guile" "call-with-new-thread")
@deffn {Scheme Procedure} call-with-new-thread thunk [handler]
Call @var{thunk} in a new thread and with a new dynamic state,
returning the new thread. The procedure @var{thunk} is called via
@code{with-continuation-barrier}.

When @var{handler} is specified, then @var{thunk} is called from
within a @code{catch} with tag @code{#t} that has @var{handler} as its
handler. This catch is established inside the continuation barrier.

Once @var{thunk} or @var{handler} returns, the return value is made
the @emph{exit value} of the thread and the thread is terminated.
@end deffn

@deftypefn {C Function} SCM scm_spawn_thread (scm_t_catch_body body, void *body_data, scm_t_catch_handler handler, void *handler_data)
Call @var{body} in a new thread, passing it @var{body_data}, returning
the new thread. The function @var{body} is called via
@code{scm_c_with_continuation_barrier}.

When @var{handler} is non-@code{NULL}, @var{body} is called via
@code{scm_internal_catch} with tag @code{SCM_BOOL_T} that has
@var{handler} and @var{handler_data} as the handler and its data. This
catch is established inside the continuation barrier.

Once @var{body} or @var{handler} returns, the return value is made the
@emph{exit value} of the thread and the thread is terminated.
@end deftypefn

@deffn {Scheme Procedure} thread? obj
@deffnx {C Function} scm_thread_p (obj)
Return @code{#t} if @var{obj} is a thread; otherwise, return
@code{#f}.
@end deffn

@c begin (texi-doc-string "guile" "join-thread")
@deffn {Scheme Procedure} join-thread thread [timeout [timeoutval]]
@deffnx {C Function} scm_join_thread (thread)
@deffnx {C Function} scm_join_thread_timed (thread, timeout, timeoutval)
Wait for @var{thread} to terminate and return its exit value. Threads
that have not been created with @code{call-with-new-thread} or
@code{scm_spawn_thread} have an exit value of @code{#f}. When
@var{timeout} is given, it specifies a point in time where the waiting
should be aborted. It can be either an integer as returned by
@code{current-time} or a pair as returned by @code{gettimeofday}.
When the waiting is aborted, @var{timeoutval} is returned (if it is
specified; @code{#f} is returned otherwise).
@end deffn
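
A brief sketch of creating and joining a thread:

@example
(define t
  (call-with-new-thread
   (lambda () (* 6 7))))

;; Wait for the result, but give up after five seconds; if the thread
;; were still running at that point, 'timed-out would be returned.
(join-thread t (+ (current-time) 5) 'timed-out)
@result{} 42
@end example
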
@deffn {Scheme Procedure} thread-exited? thread
@deffnx {C Function} scm_thread_exited_p (thread)
Return @code{#t} if @var{thread} has exited, or @code{#f} otherwise.
@end deffn

@c begin (texi-doc-string "guile" "yield")
@deffn {Scheme Procedure} yield
If one or more threads are waiting to execute, calling @code{yield}
forces an immediate context switch to one of them. Otherwise,
@code{yield} has no effect.
@end deffn

@deffn {Scheme Procedure} cancel-thread thread
@deffnx {C Function} scm_cancel_thread (thread)
Asynchronously notify @var{thread} to exit. Immediately after
receiving this notification, @var{thread} will call its cleanup handler
(if one has been set) and then terminate, aborting any evaluation that
is in progress.

Because Guile threads are isomorphic with POSIX threads, @var{thread}
will not receive its cancellation signal until it reaches a cancellation
point. See your operating system's POSIX threading documentation for
more information on cancellation points; note that in Guile, unlike
native POSIX threads, a thread can receive a cancellation notification
while attempting to lock a mutex.
@end deffn

@deffn {Scheme Procedure} set-thread-cleanup! thread proc
@deffnx {C Function} scm_set_thread_cleanup_x (thread, proc)
Set @var{proc} as the cleanup handler for the thread @var{thread}.
@var{proc}, which must be a thunk, will be called when @var{thread}
exits, either normally or by being canceled. Thread cleanup handlers
can be used to perform useful tasks like releasing resources, such as
locked mutexes, when thread exit cannot be predicted.

The return value of @var{proc} will be set as the @emph{exit value} of
@var{thread}.

To remove a cleanup handler, pass @code{#f} for @var{proc}.
@end deffn

@deffn {Scheme Procedure} thread-cleanup thread
@deffnx {C Function} scm_thread_cleanup (thread)
Return the cleanup handler currently installed for the thread
@var{thread}. If no cleanup handler is currently installed,
@code{thread-cleanup} returns @code{#f}.
@end deffn

Higher level thread procedures are available by loading the
@code{(ice-9 threads)} module. These provide standardized
thread creation.

@deffn macro make-thread proc arg @dots{}
Apply @var{proc} to @var{arg} @dots{} in a new thread formed by
@code{call-with-new-thread} using a default error handler that displays
the error to the current error port. The @var{arg} @dots{}
expressions are evaluated in the new thread.
@end deffn

@deffn macro begin-thread expr1 expr2 @dots{}
Evaluate forms @var{expr1} @var{expr2} @dots{} in a new thread formed by
@code{call-with-new-thread} using a default error handler that displays
the error to the current error port.
@end deffn
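
For example, a sketch using @code{begin-thread}:

@example
(use-modules (ice-9 threads))

(define worker
  (begin-thread
    (display "computing...\n")
    (expt 2 32)))

(join-thread worker)
@result{} 4294967296
@end example
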
@node Mutexes and Condition Variables
@subsection Mutexes and Condition Variables
@cindex mutex
@cindex condition variable

A mutex is a thread synchronization object; it can be used by threads
to control access to a shared resource. A mutex can be locked to
indicate a resource is in use, and other threads can then block on the
mutex to wait for the resource (or can just test and do something else
if not available). ``Mutex'' is short for ``mutual exclusion''.

There are two types of mutexes in Guile, ``standard'' and
``recursive''. They're created by @code{make-mutex} and
@code{make-recursive-mutex} respectively; the operation functions are
then common to both.

Note that for both types of mutex there's no protection against a
``deadly embrace''. For instance if one thread has locked mutex A and
is waiting on mutex B, but another thread owns B and is waiting on A,
then an endless wait will occur (in the current implementation).
Acquiring requisite mutexes in a fixed order (like always A before B)
in all threads is one way to avoid such problems.

@sp 1
@deffn {Scheme Procedure} make-mutex flag @dots{}
@deffnx {C Function} scm_make_mutex ()
@deffnx {C Function} scm_make_mutex_with_flags (SCM flags)
Return a new mutex. It is initially unlocked. If @var{flag} @dots{} is
specified, it must be a list of symbols specifying configuration flags
for the newly-created mutex. The supported flags are:

@table @code
@item unchecked-unlock
Unless this flag is present, a call to @code{unlock-mutex} on the
returned mutex when it is already unlocked will cause an error to be
signalled.

@item allow-external-unlock
Allow the returned mutex to be unlocked by the calling thread even if
it was originally locked by a different thread.

@item recursive
The returned mutex will be recursive.
@end table
@end deffn

@deffn {Scheme Procedure} mutex? obj
@deffnx {C Function} scm_mutex_p (obj)
Return @code{#t} if @var{obj} is a mutex; otherwise, return
@code{#f}.
@end deffn

@deffn {Scheme Procedure} make-recursive-mutex
@deffnx {C Function} scm_make_recursive_mutex ()
Create a new recursive mutex. It is initially unlocked. Calling this
function is equivalent to calling @code{make-mutex} and specifying the
@code{recursive} flag.
@end deffn

@deffn {Scheme Procedure} lock-mutex mutex [timeout [owner]]
@deffnx {C Function} scm_lock_mutex (mutex)
@deffnx {C Function} scm_lock_mutex_timed (mutex, timeout, owner)
Lock @var{mutex}. If the mutex is already locked, then block and
return only when @var{mutex} has been acquired.

When @var{timeout} is given, it specifies a point in time where the
waiting should be aborted. It can be either an integer as returned
by @code{current-time} or a pair as returned by @code{gettimeofday}.
When the waiting is aborted, @code{#f} is returned.

When @var{owner} is given, it specifies an owner for @var{mutex} other
than the calling thread. @var{owner} may also be @code{#f},
indicating that the mutex should be locked but left unowned.

For standard mutexes (@code{make-mutex}), an error is signalled if
the thread has itself already locked @var{mutex}.

For a recursive mutex (@code{make-recursive-mutex}), if the thread has
itself already locked @var{mutex}, then a further @code{lock-mutex}
call increments the lock count. An additional @code{unlock-mutex}
will be required to finally release.

If @var{mutex} was locked by a thread that exited before unlocking it,
the next attempt to lock @var{mutex} will succeed, but
@code{abandoned-mutex-error} will be signalled.

When a system async (@pxref{System asyncs}) is activated for a thread
blocked in @code{lock-mutex}, the wait is interrupted and the async is
executed. When the async returns, the wait resumes.
@end deffn

@deftypefn {C Function} void scm_dynwind_lock_mutex (SCM mutex)
Arrange for @var{mutex} to be locked whenever the current dynwind
context is entered and to be unlocked when it is exited.
@end deftypefn

@deffn {Scheme Procedure} try-mutex mutex
@deffnx {C Function} scm_try_mutex (mutex)
Try to lock @var{mutex} as per @code{lock-mutex}. If @var{mutex} can
be acquired immediately then this is done and the return is @code{#t}.
If @var{mutex} is locked by some other thread then nothing is done and
the return is @code{#f}.
@end deffn

@deffn {Scheme Procedure} unlock-mutex mutex [condvar [timeout]]
@deffnx {C Function} scm_unlock_mutex (mutex)
@deffnx {C Function} scm_unlock_mutex_timed (mutex, condvar, timeout)
Unlock @var{mutex}. An error is signalled if @var{mutex} is not locked
and was not created with the @code{unchecked-unlock} flag set, or if
@var{mutex} is locked by a thread other than the calling thread and was
not created with the @code{allow-external-unlock} flag set.

If @var{condvar} is given, it specifies a condition variable upon
which the calling thread will wait to be signalled before returning.
(This behavior is very similar to that of
@code{wait-condition-variable}, except that the mutex is left in an
unlocked state when the function returns.)

When @var{timeout} is also given and not false, it specifies a point in
time where the waiting should be aborted. It can be either an integer
as returned by @code{current-time} or a pair as returned by
@code{gettimeofday}. When the waiting is aborted, @code{#f} is
returned. Otherwise the function returns @code{#t}.
@end deffn
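
As a rough sketch, a counter protected by a standard mutex (the
higher-level @code{with-mutex} form described below is usually
preferable, since it also unlocks on non-local exit):

@example
(define counter 0)
(define counter-mutex (make-mutex))

(define (increment-counter!)
  (lock-mutex counter-mutex)
  (set! counter (+ counter 1))
  (unlock-mutex counter-mutex))

;; Non-blocking variant: return #f if the mutex is busy.
(define (try-increment-counter!)
  (and (try-mutex counter-mutex)
       (begin
         (set! counter (+ counter 1))
         (unlock-mutex counter-mutex)
         #t)))
@end example
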
@deffn {Scheme Procedure} mutex-owner mutex
@deffnx {C Function} scm_mutex_owner (mutex)
Return the current owner of @var{mutex}, in the form of a thread or
@code{#f} (indicating no owner). Note that a mutex may be unowned but
still locked.
@end deffn

@deffn {Scheme Procedure} mutex-level mutex
@deffnx {C Function} scm_mutex_level (mutex)
Return the current lock level of @var{mutex}. If @var{mutex} is
currently unlocked, this value will be 0; otherwise, it will be the
number of times @var{mutex} has been recursively locked by its current
owner.
@end deffn

@deffn {Scheme Procedure} mutex-locked? mutex
@deffnx {C Function} scm_mutex_locked_p (mutex)
Return @code{#t} if @var{mutex} is locked, regardless of ownership;
otherwise, return @code{#f}.
@end deffn

@deffn {Scheme Procedure} make-condition-variable
@deffnx {C Function} scm_make_condition_variable ()
Return a new condition variable.
@end deffn

@deffn {Scheme Procedure} condition-variable? obj
@deffnx {C Function} scm_condition_variable_p (obj)
Return @code{#t} if @var{obj} is a condition variable; otherwise,
return @code{#f}.
@end deffn

@deffn {Scheme Procedure} wait-condition-variable condvar mutex [time]
@deffnx {C Function} scm_wait_condition_variable (condvar, mutex, time)
Wait until @var{condvar} has been signalled. While waiting,
@var{mutex} is atomically unlocked (as with @code{unlock-mutex}) and
is locked again when this function returns. When @var{time} is given,
it specifies a point in time where the waiting should be aborted. It
can be either an integer as returned by @code{current-time} or a pair
as returned by @code{gettimeofday}. When the waiting is aborted,
@code{#f} is returned. When the condition variable has in fact been
signalled, @code{#t} is returned. The mutex is re-locked in any case
before @code{wait-condition-variable} returns.

When a system async is activated for a thread that is blocked in a
call to @code{wait-condition-variable}, the waiting is interrupted,
the mutex is locked, and the async is executed. When the async
returns, the mutex is unlocked again and the waiting is resumed. When
the thread blocks while re-acquiring the mutex, execution of asyncs is
blocked.
@end deffn

@deffn {Scheme Procedure} signal-condition-variable condvar
@deffnx {C Function} scm_signal_condition_variable (condvar)
Wake up one thread that is waiting for @var{condvar}.
@end deffn

@deffn {Scheme Procedure} broadcast-condition-variable condvar
@deffnx {C Function} scm_broadcast_condition_variable (condvar)
Wake up all threads that are waiting for @var{condvar}.
@end deffn
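
The usual pattern is to re-test the condition in a loop while holding
the mutex, since a wait can be interrupted for reasons other than the
condition becoming true. A minimal producer/consumer sketch:

@example
(define queue '())
(define queue-mutex (make-mutex))
(define queue-nonempty (make-condition-variable))

(define (enqueue! item)
  (lock-mutex queue-mutex)
  (set! queue (append queue (list item)))
  (signal-condition-variable queue-nonempty)
  (unlock-mutex queue-mutex))

(define (dequeue!)
  (lock-mutex queue-mutex)
  (let loop ()
    (if (null? queue)
        (begin
          (wait-condition-variable queue-nonempty queue-mutex)
          (loop))
        (let ((item (car queue)))
          (set! queue (cdr queue))
          (unlock-mutex queue-mutex)
          item))))
@end example
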
@sp 1
The following are higher level operations on mutexes. These are
available from

@example
(use-modules (ice-9 threads))
@end example

@deffn macro with-mutex mutex body1 body2 @dots{}
Lock @var{mutex}, evaluate the body @var{body1} @var{body2} @dots{},
then unlock @var{mutex}. The return value is that returned by the last
body form.

The lock, body and unlock form the branches of a @code{dynamic-wind}
(@pxref{Dynamic Wind}), so @var{mutex} is automatically unlocked if an
error or new continuation exits the body, and is re-locked if
the body is re-entered by a captured continuation.
@end deffn

@deffn macro monitor body1 body2 @dots{}
Evaluate the body form @var{body1} @var{body2} @dots{} with a mutex
locked so only one thread can execute that code at any one time. The
return value is the return from the last body form.

Each @code{monitor} form has its own private mutex and the locking and
evaluation is as per @code{with-mutex} above. A standard mutex
(@code{make-mutex}) is used, which means the body must not
recursively re-enter the @code{monitor} form.

The term ``monitor'' comes from operating system theory, where it
means a particular bit of code managing access to some resource and
which only ever executes on behalf of one process at any one time.
@end deffn
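
For example, a rough sketch of both forms guarding a shared balance:

@example
(use-modules (ice-9 threads))

(define balance 0)
(define balance-mutex (make-mutex))

(define (deposit! amount)
  (with-mutex balance-mutex
    (set! balance (+ balance amount))))

(define (withdraw! amount)
  (monitor                        ; this form's own private mutex
    (set! balance (- balance amount))))
@end example
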
@node Blocking
@subsection Blocking in Guile Mode

Up to Guile version 1.8, a thread blocked in guile mode would prevent
the garbage collector from running. Thus threads had to explicitly
leave guile mode with @code{scm_without_guile ()} before making a
potentially blocking call such as a mutex lock, a @code{select ()}
system call, etc. The following functions could be used to temporarily
leave guile mode or to perform some common blocking operations in a
supported way.

Starting from Guile 2.0, blocked threads no longer hinder garbage
collection. Thus, the functions below are not needed anymore. They can
still be used to inform the GC that a thread is about to block, giving
it a (small) optimization opportunity for ``stop the world'' garbage
collections, should they occur while the thread is blocked.

@deftypefn {C Function} {void *} scm_without_guile (void *(*func) (void *), void *data)
Leave guile mode, call @var{func} on @var{data}, enter guile mode and
return the result of calling @var{func}.

While a thread has left guile mode, it must not call any libguile
functions except @code{scm_with_guile} or @code{scm_without_guile} and
must not use any libguile macros. Also, local variables of type
@code{SCM} that are allocated while not in guile mode are not
protected from the garbage collector.

When used from non-guile mode, calling @code{scm_without_guile} is
still allowed: it simply calls @var{func}. In that way, you can leave
guile mode without having to know whether the current thread is in
guile mode or not.
@end deftypefn

@deftypefn {C Function} int scm_pthread_mutex_lock (pthread_mutex_t *mutex)
Like @code{pthread_mutex_lock}, but leaves guile mode while waiting for
the mutex.
@end deftypefn

@deftypefn {C Function} int scm_pthread_cond_wait (pthread_cond_t *cond, pthread_mutex_t *mutex)
@deftypefnx {C Function} int scm_pthread_cond_timedwait (pthread_cond_t *cond, pthread_mutex_t *mutex, struct timespec *abstime)
Like @code{pthread_cond_wait} and @code{pthread_cond_timedwait}, but
leaves guile mode while waiting for the condition variable.
@end deftypefn

@deftypefn {C Function} int scm_std_select (int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout)
Like @code{select} but leaves guile mode while waiting. Also, the
delivery of a system async causes this function to be interrupted with
error code @code{EINTR}.
@end deftypefn

@deftypefn {C Function} {unsigned int} scm_std_sleep ({unsigned int} seconds)
Like @code{sleep}, but leaves guile mode while sleeping. Also, the
delivery of a system async causes this function to be interrupted.
@end deftypefn

@deftypefn {C Function} {unsigned long} scm_std_usleep ({unsigned long} usecs)
Like @code{usleep}, but leaves guile mode while sleeping. Also, the
delivery of a system async causes this function to be interrupted.
@end deftypefn

@node Critical Sections
@subsection Critical Sections

@deffn {C Macro} SCM_CRITICAL_SECTION_START
@deffnx {C Macro} SCM_CRITICAL_SECTION_END
These two macros can be used to delimit a critical section.
Syntactically, they are both statements and need to be followed
immediately by a semicolon.

Executing @code{SCM_CRITICAL_SECTION_START} will lock a recursive
mutex and block the execution of system asyncs. Executing
@code{SCM_CRITICAL_SECTION_END} will unblock the execution of system
asyncs and unlock the mutex. Thus, the code that executes between
these two macros can only be executed in one thread at any one time
and no system asyncs will run. However, because the mutex is a
recursive one, the code might still be reentered by the same thread.
You must either allow for this or avoid it by careful coding. On the
other hand, critical sections delimited with these macros can be
nested since the mutex is recursive.

You must make sure that for each @code{SCM_CRITICAL_SECTION_START},
the corresponding @code{SCM_CRITICAL_SECTION_END} is always executed.
This means, for example, that no non-local exit (such as a signalled
error) may happen in between.
@end deffn

@deftypefn {C Function} void scm_dynwind_critical_section (SCM mutex)
Call @code{scm_dynwind_lock_mutex} on @var{mutex} and call
@code{scm_dynwind_block_asyncs}. When @var{mutex} is false, a recursive
mutex provided by Guile is used instead.

The effect of a call to @code{scm_dynwind_critical_section} is that
the current dynwind context (@pxref{Dynamic Wind}) turns into a
critical section. Because of the locked mutex, no second thread can
enter it concurrently and because of the blocked asyncs, no system
async can reenter it from the current thread.

When the current thread reenters the critical section anyway, the kind
of @var{mutex} determines what happens: When @var{mutex} is recursive,
the reentry is allowed. When it is a normal mutex, an error is
signalled.
@end deftypefn

@node Fluids and Dynamic States
@subsection Fluids and Dynamic States
@cindex fluids

A @emph{fluid} is an object that can store one value per @emph{dynamic
state}. Each thread has a current dynamic state, and when accessing a
fluid, this current dynamic state is used to provide the actual value.
In this way, fluids can be used for thread local storage, but they are
in fact more flexible: dynamic states are objects of their own and can
be made current for more than one thread at the same time, or only be
made current temporarily, for example.

Fluids can also be used to simulate the desirable effects of
dynamically scoped variables. Dynamically scoped variables are useful
when you want to set a variable to a value during some dynamic extent
in the execution of your program and have it revert to its original
value when the control flow is outside of this dynamic extent. See the
description of @code{with-fluids} below for details.

New fluids are created with @code{make-fluid} and @code{fluid?} is
used for testing whether an object is actually a fluid. The values
stored in a fluid can be accessed with @code{fluid-ref} and
@code{fluid-set!}.

@deffn {Scheme Procedure} make-fluid [dflt]
@deffnx {C Function} scm_make_fluid ()
@deffnx {C Function} scm_make_fluid_with_default (dflt)
Return a newly created fluid, whose initial value is @var{dflt}, or
@code{#f} if @var{dflt} is not given.

Fluids are objects that can hold one value per dynamic state. That is,
modifications to this value are only visible to code that executes
with the same dynamic state as the modifying code. When a new dynamic
state is constructed, it inherits the values from its parent. Because
each thread normally executes with its own dynamic state, you can use
fluids for thread local storage.
@end deffn

@deffn {Scheme Procedure} make-unbound-fluid
@deffnx {C Function} scm_make_unbound_fluid ()
Return a new fluid that is initially unbound (instead of being
implicitly bound to some definite value).
@end deffn

@deffn {Scheme Procedure} fluid? obj
@deffnx {C Function} scm_fluid_p (obj)
Return @code{#t} if @var{obj} is a fluid; otherwise, return
@code{#f}.
@end deffn

@deffn {Scheme Procedure} fluid-ref fluid
@deffnx {C Function} scm_fluid_ref (fluid)
Return the value associated with @var{fluid} in the current
dynamic root. If @var{fluid} has not been set, then return
its default value. Calling @code{fluid-ref} on an unbound fluid produces
a runtime error.
@end deffn

@deffn {Scheme Procedure} fluid-set! fluid value
@deffnx {C Function} scm_fluid_set_x (fluid, value)
Set the value associated with @var{fluid} in the current dynamic root.
@end deffn

@deffn {Scheme Procedure} fluid-unset! fluid
@deffnx {C Function} scm_fluid_unset_x (fluid)
Disassociate the given fluid from any value, making it unbound.
@end deffn

@deffn {Scheme Procedure} fluid-bound? fluid
@deffnx {C Function} scm_fluid_bound_p (fluid)
Returns @code{#t} if the given fluid is bound to a value, otherwise
@code{#f}.
@end deffn
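
A brief sketch of basic fluid usage, including @code{with-fluid*},
which is described just below:

@example
(define f (make-fluid 10))

(fluid-ref f)    @result{} 10
(fluid-set! f 42)
(fluid-ref f)    @result{} 42

(with-fluid* f 7
  (lambda () (fluid-ref f)))
@result{} 7
(fluid-ref f)    @result{} 42
@end example
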
@code{with-fluids*} temporarily changes the values of one or more fluids,
so that the given procedure and each procedure called by it access the
given values. After the procedure returns, the old values are restored.

@deffn {Scheme Procedure} with-fluid* fluid value thunk
@deffnx {C Function} scm_with_fluid (fluid, value, thunk)
Set @var{fluid} to @var{value} temporarily, and call @var{thunk}.
@var{thunk} must be a procedure with no argument.
@end deffn

@deffn {Scheme Procedure} with-fluids* fluids values thunk
@deffnx {C Function} scm_with_fluids (fluids, values, thunk)
Set @var{fluids} to @var{values} temporarily, and call @var{thunk}.
@var{fluids} must be a list of fluids and @var{values} must be a list
of the same length giving the values to be applied. Each substitution
is done in the order given. @var{thunk} must be a procedure with no
argument. It is called inside a @code{dynamic-wind} and the fluids are
set/restored when control enters or leaves the established dynamic
extent.
@end deffn

@deffn {Scheme Macro} with-fluids ((fluid value) @dots{}) body1 body2 @dots{}
Execute body @var{body1} @var{body2} @dots{} while each @var{fluid} is
set to the corresponding @var{value}. Both @var{fluid} and @var{value}
are evaluated and @var{fluid} must yield a fluid. The body is executed
inside a @code{dynamic-wind} and the fluids are set/restored when
control enters or leaves the established dynamic extent.
@end deffn
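
For instance, a sketch using @code{with-fluids}; @code{log-line} is a
hypothetical helper:

@example
(define output-prefix (make-fluid "info: "))

(define (log-line msg)
  (display (fluid-ref output-prefix))
  (display msg)
  (newline))

(with-fluids ((output-prefix "error: "))
  (log-line "disk full"))
@print{} error: disk full

(log-line "all good")
@print{} info: all good
@end example
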
@deftypefn {C Function} SCM scm_c_with_fluids (SCM fluids, SCM vals, SCM (*cproc)(void *), void *data)
@deftypefnx {C Function} SCM scm_c_with_fluid (SCM fluid, SCM val, SCM (*cproc)(void *), void *data)
The function @code{scm_c_with_fluids} is like @code{scm_with_fluids}
except that it takes a C function to call instead of a Scheme thunk.

The function @code{scm_c_with_fluid} is similar but only allows one
fluid to be set instead of a list.
@end deftypefn

@deftypefn {C Function} void scm_dynwind_fluid (SCM fluid, SCM val)
This function must be used inside a pair of calls to
@code{scm_dynwind_begin} and @code{scm_dynwind_end} (@pxref{Dynamic
Wind}). During the dynwind context, the fluid @var{fluid} is set to
@var{val}.

More precisely, the value of the fluid is swapped with a `backup'
value whenever the dynwind context is entered or left. The backup
value is initialized with the @var{val} argument.
@end deftypefn

@deffn {Scheme Procedure} make-dynamic-state [parent]
@deffnx {C Function} scm_make_dynamic_state (parent)
Return a copy of the dynamic state object @var{parent}
or of the current dynamic state when @var{parent} is omitted.
@end deffn

@deffn {Scheme Procedure} dynamic-state? obj
@deffnx {C Function} scm_dynamic_state_p (obj)
Return @code{#t} if @var{obj} is a dynamic state object;
return @code{#f} otherwise.
@end deffn

@deftypefn {C Procedure} int scm_is_dynamic_state (SCM obj)
Return non-zero if @var{obj} is a dynamic state object;
return zero otherwise.
@end deftypefn

@deffn {Scheme Procedure} current-dynamic-state
@deffnx {C Function} scm_current_dynamic_state ()
Return the current dynamic state object.
@end deffn

@deffn {Scheme Procedure} set-current-dynamic-state state
@deffnx {C Function} scm_set_current_dynamic_state (state)
Set the current dynamic state object to @var{state}
and return the previous current dynamic state object.
@end deffn

@deffn {Scheme Procedure} with-dynamic-state state proc
@deffnx {C Function} scm_with_dynamic_state (state, proc)
Call @var{proc} while @var{state} is the current dynamic
state object.
@end deffn
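
A sketch, assuming the per-dynamic-state semantics described above: a
fluid set while a separate dynamic state is current keeps that value
only within that state:

@example
(define color (make-fluid 'black))
(define other-state (make-dynamic-state))

(with-dynamic-state other-state
  (lambda () (fluid-set! color 'red)))

(fluid-ref color)
@result{} black
(with-dynamic-state other-state
  (lambda () (fluid-ref color)))
@result{} red
@end example
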
@deftypefn {C Procedure} void scm_dynwind_current_dynamic_state (SCM state)
Set the current dynamic state to @var{state} for the current dynwind
context.
@end deftypefn

@deftypefn {C Procedure} {void *} scm_c_with_dynamic_state (SCM state, void *(*func)(void *), void *data)
Like @code{scm_with_dynamic_state}, but call @var{func} with
@var{data}.
@end deftypefn

@node Parameters
@subsection Parameters

@cindex SRFI-39
@cindex parameter object
@tindex Parameter

A parameter object is a procedure. Calling it with no arguments returns
its value. Calling it with one argument sets the value.

@example
(define my-param (make-parameter 123))

(my-param) @result{} 123
(my-param 456)
(my-param) @result{} 456
@end example

The @code{parameterize} special form establishes new locations for
parameters, those new locations having effect within the dynamic scope
of the @code{parameterize} body. Leaving restores the previous
locations. Re-entering (through a saved continuation) will again use
the new locations.

@example
(parameterize ((my-param 789))
  (my-param)) @result{} 789
(my-param) @result{} 456
@end example

Parameters are like dynamically bound variables in other Lisp dialects.
They allow an application to establish parameter settings (as the name
suggests) just for the execution of a particular bit of code, restoring
when done. Examples of such parameters might be case-sensitivity for a
search, or a prompt for user input.

Global variables are not as good as parameter objects for this sort of
thing. Changes to them are visible to all threads, but in Guile
parameter object locations are per-thread, thereby truly limiting the
effect of @code{parameterize} to just its dynamic execution.

Passing arguments to functions is thread-safe, but that soon becomes
tedious when there are more than a few of them or when they need to
pass down through several layers of calls before reaching the point
they should affect. And introducing a new setting to existing code is
often easier with a parameter object than adding arguments.

@deffn {Scheme Procedure} make-parameter init [converter]
Return a new parameter object, with initial value @var{init}.

If a @var{converter} is given, then a call @code{(@var{converter}
val)} is made for each value set, and its return value is the value
stored. Such a call is made for the @var{init} initial value too.

A @var{converter} allows values to be validated, or put into a
canonical form. For example,

@example
(define my-param (make-parameter 123
                   (lambda (val)
                     (if (not (number? val))
                         (error "must be a number"))
                     (inexact->exact val))))

(my-param 0.75)
(my-param) @result{} 3/4
@end example
@end deffn

@deffn {library syntax} parameterize ((param value) @dots{}) body1 body2 @dots{}
Establish a new dynamic scope with the given @var{param}s bound to new
locations and set to the given @var{value}s. @var{body1} @var{body2}
@dots{} is evaluated in that environment. The value returned is that of
the last body form.

Each @var{param} is an expression which is evaluated to get the
parameter object. Often this will just be the name of a variable
holding the object, but it can be anything that evaluates to a
parameter.

The @var{param} expressions and @var{value} expressions are all
evaluated before establishing the new dynamic bindings, and they're
evaluated in an unspecified order.

For example,

@example
(define prompt (make-parameter "Type something: "))
(define (get-input)
  (display (prompt))
  ...)

(parameterize ((prompt "Type a number: "))
  (get-input)
  ...)
@end example
@end deffn

Parameter objects are implemented using fluids (@pxref{Fluids and
Dynamic States}), so each dynamic state has its own parameter
locations. That includes the separate locations when outside any
@code{parameterize} form. When a parameter is created it gets a
separate initial location in each dynamic state, all initialized to the
given @var{init} value.

New code should probably just use parameters instead of fluids, because
the interface is better. But for migrating old code or otherwise
providing interoperability, Guile provides the @code{fluid->parameter}
procedure:

@deffn {Scheme Procedure} fluid->parameter fluid [conv]
Make a parameter that wraps a fluid.

The value of the parameter will be the same as the value of the fluid.
If the parameter is rebound in some dynamic extent, perhaps via
@code{parameterize}, the new value will be run through the optional
@var{conv} procedure, as with any parameter. Note that unlike
@code{make-parameter}, @var{conv} is not applied to the initial value.
@end deffn
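
A sketch of wrapping an existing fluid, following the semantics just
described (the fluid and the parameter share the same location):

@example
(define my-fluid (make-fluid 10))
(define my-param (fluid->parameter my-fluid))

(my-param) @result{} 10

(parameterize ((my-param 20))
  (list (my-param) (fluid-ref my-fluid)))
@result{} (20 20)
@end example
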
As alluded to above, because each thread usually has a separate dynamic
state, each thread has its own locations behind parameter objects, and
changes in one thread are not visible to any other. When a new dynamic
state or thread is created, the values of parameters in the originating
context are copied into new locations.

@cindex SRFI-39
Guile's parameters conform to SRFI-39 (@pxref{SRFI-39}).

@node Futures
@subsection Futures
@cindex futures
@cindex fine-grain parallelism
@cindex parallelism

The @code{(ice-9 futures)} module provides @dfn{futures}, a construct
for fine-grain parallelism. A future is a wrapper around an expression
whose computation may occur in parallel with the code of the calling
thread, and possibly in parallel with other futures. Like promises,
futures are essentially proxies that can be queried to obtain the value
of the enclosed expression:

@lisp
(touch (future (+ 2 3)))
@result{} 5
@end lisp

However, unlike promises, the expression associated with a future may be
evaluated on another CPU core, should one be available. This supports
@dfn{fine-grain parallelism}, because even relatively small computations
can be embedded in futures. Consider this sequential code:

@lisp
(define (find-prime lst1 lst2)
  (or (find prime? lst1)
      (find prime? lst2)))
@end lisp

The two arms of @code{or} are potentially computation-intensive. They
are independent of one another, yet, they are evaluated sequentially
when the first one returns @code{#f}. Using futures, one could rewrite
it like this:

@lisp
(define (find-prime lst1 lst2)
  (let ((f (future (find prime? lst2))))
    (or (find prime? lst1)
        (touch f))))
@end lisp

This preserves the semantics of @code{find-prime}. On a multi-core
machine, though, the computation of @code{(find prime? lst2)} may be
done in parallel with that of the other @code{find} call, which can
reduce the execution time of @code{find-prime}.

Futures may be nested: a future can itself spawn and then @code{touch}
other futures, leading to a directed acyclic graph of futures. Using
this facility, a parallel @code{map} procedure can be defined along
these lines:

@lisp
(use-modules (ice-9 futures) (ice-9 match))

(define (par-map proc lst)
  (match lst
    (()
     '())
    ((head tail ...)
     (let ((tail (future (par-map proc tail)))
           (head (proc head)))
       (cons head (touch tail))))))
@end lisp

Note that futures are intended for the evaluation of purely functional
expressions. Expressions that have side-effects or rely on I/O may
require additional care, such as explicit synchronization
(@pxref{Mutexes and Condition Variables}).

Guile's futures are implemented on top of POSIX threads
(@pxref{Threads}). Internally, a fixed-size pool of threads is used to
evaluate futures, such that offloading the evaluation of an expression
to another thread doesn't incur thread creation costs. By default, the
pool contains one thread per available CPU core, minus one, to account
for the main thread. The number of available CPU cores is determined
using @code{current-processor-count} (@pxref{Processes}).

When a thread touches a future that has not completed yet, it processes
any pending future while waiting for it to complete, or just waits if
there are no pending futures. When @code{touch} is called from within a
future, the execution of the calling future is suspended, allowing its
host thread to process other futures, and resumed when the touched
future has completed. This suspend/resume is achieved by capturing the
calling future's continuation, and later reinstating it (@pxref{Prompts,
delimited continuations}).

Note that @code{par-map} above is not tail-recursive. This could lead
to stack overflows when @var{lst} is large compared to
@code{(current-processor-count)}. To address that, @code{touch} uses
the suspend mechanism described above to limit the number of nested
futures executing on the same stack. Thus, the above code should never
run into stack overflows.

@deffn {Scheme Syntax} future exp
Return a future for expression @var{exp}. This is equivalent to:

@lisp
(make-future (lambda () exp))
@end lisp
@end deffn

@deffn {Scheme Procedure} make-future thunk
Return a future for @var{thunk}, a zero-argument procedure.

This procedure returns immediately. Execution of @var{thunk} may begin
in parallel with the calling thread's computations, if idle CPU cores
are available, or it may start when @code{touch} is invoked on the
returned future.

If the execution of @var{thunk} throws an exception, that exception will
be re-thrown when @code{touch} is invoked on the returned future.
@end deffn

@deffn {Scheme Procedure} future? obj
Return @code{#t} if @var{obj} is a future.
@end deffn

@deffn {Scheme Procedure} touch f
Return the result of the expression embedded in future @var{f}.

If the result was already computed in parallel, @code{touch} returns
instantaneously. Otherwise, it waits for the computation to complete,
if it already started, or initiates it. In the former case, the calling
thread may process other futures in the meantime.
@end deffn
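
For instance, a minimal sketch using the procedural interface:

@lisp
(use-modules (ice-9 futures))

(define f (make-future (lambda () (expt 2 100))))

;; ... do other work in the calling thread ...

(touch f)
@result{} 1267650600228229401496703205376
@end lisp
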
@node Parallel Forms
@subsection Parallel forms
@cindex parallel forms

The functions described in this section are available from

@example
(use-modules (ice-9 threads))
@end example

They provide high-level parallel constructs. The following functions
are implemented in terms of futures (@pxref{Futures}). Thus they are
relatively cheap as they re-use existing threads, and portable, since
they automatically use one thread per available CPU core.

@deffn syntax parallel expr @dots{}
Evaluate each @var{expr} expression in parallel, each in its own thread.
Return the results of @var{n} expressions as a set of @var{n} multiple
values (@pxref{Multiple Values}).
@end deffn

@deffn syntax letpar ((var expr) @dots{}) body1 body2 @dots{}
Evaluate each @var{expr} in parallel, each in its own thread, then bind
the results to the corresponding @var{var} variables, and then evaluate
@var{body1} @var{body2} @enddots{}

@code{letpar} is like @code{let} (@pxref{Local Bindings}), but all the
expressions for the bindings are evaluated in parallel.
@end deffn
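
For example, a brief sketch of both forms:

@example
(use-modules (ice-9 threads))

(call-with-values
    (lambda () (parallel (expt 2 10) (expt 3 10)))
  list)
@result{} (1024 59049)

(letpar ((a (apply + (iota 1000)))
         (b (apply * (iota 10 1))))
  (list a b))
@result{} (499500 3628800)
@end example
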
@deffn {Scheme Procedure} par-map proc lst1 lst2 @dots{}
@deffnx {Scheme Procedure} par-for-each proc lst1 lst2 @dots{}
Call @var{proc} on the elements of the given lists. @code{par-map}
returns a list comprising the return values from @var{proc}.
@code{par-for-each} returns an unspecified value, but waits for all
calls to complete.

The @var{proc} calls are @code{(@var{proc} @var{elem1} @var{elem2}
@dots{})}, where each @var{elem} is from the corresponding @var{lst}.
Each @var{lst} must be the same length. The calls are potentially made
in parallel, depending on the number of CPU cores available.

These functions are like @code{map} and @code{for-each} (@pxref{List
Mapping}), but make their @var{proc} calls in parallel.
@end deffn
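
For instance:

@example
(use-modules (ice-9 threads))

(par-map (lambda (x) (* x x)) '(1 2 3 4))
@result{} (1 4 9 16)

;; Output order is unspecified, since the calls may run in parallel.
(par-for-each display '(1 2 3 4))
@end example
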
Unlike those above, the functions described below take a number of
threads as an argument. This makes them inherently non-portable since
the specified number of threads may differ from the number of available
CPU cores as returned by @code{current-processor-count}
(@pxref{Processes}). In addition, these functions create the specified
number of threads when they are called and terminate them upon
completion, which makes them quite expensive.

Therefore, they should be avoided.

@deffn {Scheme Procedure} n-par-map n proc lst1 lst2 @dots{}
@deffnx {Scheme Procedure} n-par-for-each n proc lst1 lst2 @dots{}
Call @var{proc} on the elements of the given lists, in the same way as
@code{par-map} and @code{par-for-each} above, but use no more than
@var{n} threads at any one time. The order in which calls are
initiated within that thread limit is unspecified.

These functions are good for controlling resource consumption if
@var{proc} calls might be costly, or if there are many to be made. On
a dual-CPU system for instance @math{@var{n}=4} might be enough to
keep the CPUs utilized, and not consume too much memory.
@end deffn

@deffn {Scheme Procedure} n-for-each-par-map n sproc pproc lst1 lst2 @dots{}
Apply @var{pproc} to the elements of the given lists, and apply
@var{sproc} to each result returned by @var{pproc}. The final return
value is unspecified, but all calls will have been completed before
returning.

The calls made are @code{(@var{sproc} (@var{pproc} @var{elem1} @dots{}
@var{elemN}))}, where each @var{elem} is from the corresponding
@var{lst}. Each @var{lst} must have the same number of elements.

The @var{pproc} calls are made in parallel, in separate threads. No more
than @var{n} threads are used at any one time. The order in which
@var{pproc} calls are initiated within that limit is unspecified.

The @var{sproc} calls are made serially, in list element order, one at
a time. @var{pproc} calls on later elements may execute in parallel
with the @var{sproc} calls. Exactly which thread makes each
@var{sproc} call is unspecified.

This function is designed for individual calculations that can be done
in parallel, but with results needing to be handled serially, for
instance to write them to a file. The @var{n} limit on threads
controls system resource usage when there are many calculations or
when they might be costly.

It will be seen that @code{n-for-each-par-map} is like a combination
of @code{n-par-map} and @code{for-each},

@example
(for-each sproc (n-par-map n pproc lst1 ... lstN))
@end example

@noindent
But the actual implementation is more efficient since each @var{sproc}
call, in turn, can be initiated once the relevant @var{pproc} call has
completed; it doesn't need to wait for all of them to finish.
@end deffn

@c Local Variables:
@c TeX-master: "guile.texi"
@c End: