==========================================================
Parallel & Spawn
==========================================================

Nim has two flavors of parallelism:

1) `Structured`:idx: parallelism via the ``parallel`` statement.
2) `Unstructured`:idx: parallelism via the standalone ``spawn`` statement.

Both need the `threadpool <threadpool.html>`_ module to work.

Somewhat confusingly, ``spawn`` is also used in the ``parallel`` statement
with slightly different semantics. ``spawn`` always takes a call expression of
the form ``f(a, ...)``. Let ``T`` be ``f``'s return type. If ``T`` is ``void``,
then ``spawn``'s return type is also ``void``. Within a ``parallel`` section,
``spawn``'s return type is ``T``; otherwise it is ``FlowVar[T]``.

Within a ``parallel`` section, the compiler can ensure that the location in
``location = spawn f(...)`` is not read prematurely, so there is no need for
the overhead of an indirection via ``FlowVar[T]`` to ensure correctness.
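For illustration, a minimal sketch contrasting the two forms (``double`` is a
hypothetical helper; compile with ``--threads:on``):

.. code-block:: nim
  import threadpool

  proc double(x: int): int = 2 * x

  # Standalone spawn: the result arrives in a ``FlowVar[int]``
  # and must be read explicitly; the read blocks:
  let fv: FlowVar[int] = spawn double(21)
  echo ^fv

  # Inside a ``parallel`` section the target location has type
  # ``int`` directly; no ``FlowVar`` indirection is needed:
  var a = newSeq[int](4)
  parallel:
    for i in 0..a.high:
      a[i] = spawn double(i)
  echo a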

Spawn statement
===============

A standalone ``spawn`` statement is a simple construct. It executes
the passed expression on the thread pool and returns a `data flow variable`:idx:
``FlowVar[T]`` that can be read from. Reading with the ``^`` operator is
**blocking**. However, one can use ``awaitAny`` to wait on multiple flow
variables at the same time:

.. code-block:: nim
  import threadpool, ...

  # wait until 2 out of 3 servers received the update:
  proc main =
    var responses = newSeq[FlowVarBase](3)
    for i in 0..2:
      responses[i] = spawn tellServer(Update, "key", "value")
    var index = awaitAny(responses)
    assert index >= 0
    responses.del(index)
    discard awaitAny(responses)

Data flow variables ensure that no data races
are possible. Due to technical limitations, not every type ``T`` can be used in
a data flow variable: ``T`` has to be of the type ``ref``, ``string``, ``seq``,
or of a type that contains no garbage-collected types. This
restriction will be removed in the future.
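
For example, ``string`` satisfies this restriction, so a spawned proc may
return one (``greet`` is a hypothetical helper):

.. code-block:: nim
  import threadpool

  proc greet(name: string): string =
    "Hello, " & name

  # ``string`` is one of the allowed FlowVar types:
  let fv: FlowVar[string] = spawn greet("Nim")
  echo ^fv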

Parallel statement
==================

Example:

.. code-block:: nim
  # Compute PI in an inefficient way
  import strutils, math, threadpool

  proc term(k: float): float = 4 * math.pow(-1, k) / (2*k + 1)

  proc pi(n: int): float =
    var ch = newSeq[float](n+1)
    parallel:
      for k in 0..ch.high:
        ch[k] = spawn term(float(k))
    for k in 0..ch.high:
      result += ch[k]

  echo formatFloat(pi(5000))

The parallel statement is the preferred mechanism to introduce parallelism
in a Nim program. A subset of the Nim language is valid within a
``parallel`` section. This subset is checked at compile time to be free of
data races. A sophisticated `disjoint checker`:idx: ensures that no data
races are possible, even though shared memory is extensively supported!

The subset is in fact the full language with the following
restrictions / changes:

* ``spawn`` within a ``parallel`` section has special semantics.
* Every location of the form ``a[i]``, ``a[i..j]``, and ``dest`` where
  ``dest`` is part of the pattern ``dest = spawn f(...)`` has to be
  provably disjoint. This is called the *disjoint check*; see the sketch
  after this list.
* Every other complex location ``loc`` that is used in a spawned
  proc (``spawn f(loc)``) has to be immutable for the duration of
  the ``parallel`` section. This is called the *immutability check*. Currently
  it is not specified what exactly a "complex location" means. We need to make
  this an optimization!
* Every array access has to be provably within bounds. This is called
  the *bounds check*.
* Slices are optimized so that no copy is performed. This optimization is not
  yet performed for ordinary slices outside of a ``parallel`` section.
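
To illustrate the disjoint and bounds checks, here is a minimal sketch that
sums two non-overlapping halves of a sequence (``sum`` and ``twoHalves`` are
hypothetical helpers; whether the checker accepts a given slice expression
depends on what it can prove):

.. code-block:: nim
  import threadpool

  proc sum(xs: seq[int]): int =
    for x in xs: result += x

  proc twoHalves(a: seq[int]): int =
    var partial = newSeq[int](2)
    let mid = a.len div 2
    parallel:
      # ``partial[0]`` and ``partial[1]`` are provably disjoint
      # destinations; the two slices of ``a`` do not overlap, stay
      # within bounds, and ``a`` itself is not mutated, so the
      # immutability check is satisfied as well:
      partial[0] = spawn sum(a[0 .. mid-1])
      partial[1] = spawn sum(a[mid .. a.high])
    result = partial[0] + partial[1]

  echo twoHalves(@[1, 2, 3, 4, 5, 6])  # --> 21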