
\documentclass[a4paper,%
  11pt,%
  DIV14,
  headsepline,%
  headings=normal,
]{scrartcl}
%\usepackage{ngerman}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{textcomp}
% for matlab code
% bw = blackwhite - optimized for print, otherwise source is colored
%\usepackage[framed,numbered,bw]{mcode}
% for other code
%\usepackage{listings}
%\usepackage[ansinew]{inputenc}
\usepackage{color}
\usepackage{hyperref}
\usepackage{graphicx}
\usepackage{listings}
\usepackage[hang, font = footnotesize]{caption}
%\usepackage{german}
\usepackage{algorithm}
\usepackage{algpseudocode}
\usepackage{enumitem}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{mathrsfs}
\usepackage{mathalpha}
\usepackage{wrapfig}
\lstset{
  tabsize = 4,                  %% set tab space width
  showstringspaces = false,     %% prevent space marking in strings
  numbers = left,               %% display line numbers on the left
  commentstyle = \color{red},   %% set comment color
  keywordstyle = \color{blue},  %% set keyword color
  stringstyle = \color{red},    %% set string color
  rulecolor = \color{black},    %% set frame color to avoid being affected by text color
  basicstyle = \small\ttfamily, %% set listing font and size
  breaklines = true,            %% enable line breaking
  numberstyle = \tiny,
}
\renewcommand{\thesubsection}{Ex~\arabic{section}.\arabic{subsection}}
\lstset{basicstyle=\footnotesize\ttfamily,breaklines=true}
\begin{document}
\hrule height 1px
\vspace*{1ex}
\begin{minipage}[t]{.45\linewidth}
  \strut\vspace*{-\baselineskip}\newline
  \includegraphics[height=.9cm]{res/Inf-Logo_black_en.pdf}
  \includegraphics[height=.9cm]{res/par-logo.pdf}
\end{minipage}
\hfill
\begin{minipage}[t]{.5\linewidth}
  \flushright{
  Research Group for Parallel Computing\\%
  Faculty of Informatics\\%
  TU Wien}
\end{minipage}
\vspace*{1ex}
\hrule
\vspace*{2ex}
\begin{center}
  {\LARGE\textbf{Advanced Multiprocessor Programming}}\\
  {\large{}%
  Summer term 2023 \\
  Theory exercise 1 of 2 \\
  \vspace*{2ex}
  Norbert Tremurici (11907086)
  %\semestershort\\
  %\assignmentname{0}}
  }
\end{center}
\hfill
\begin{minipage}[b]{1\linewidth}
  \hfill
  \begin{tabular}{@{}ll@{}}
    Issue date: & 2023-03-27\\
    Due date: & \textcolor{red}{2023-04-24 (23:59)}
  \end{tabular}
\end{minipage}
\vspace*{3ex}
\hrule
\vspace*{1ex}
\begin{flushright}
  \textbf{\underline{Total points: 38 (+ 2 bonus)}}
\end{flushright}
%===================================
\section{Amdahl's Law} \label{sec:amdahl}
\subsection{(2 points)} \label{ex:6_1}
Suppose $98\%$ of a program's execution time is perfectly parallelizable.
\begin{enumerate}[label = \alph*)]
\item What is the overall speedup that can be achieved by running said program on a machine with 96 processing cores?

\textbf{Solution} Amdahl's Law gives the achievable speedup when we plug in the parallelizable fraction $p = 0.98$ and the core count $n = 96$ (see also the numerical check after this list):
$$S = \frac{1}{1-p+p/n} = \frac{1}{0.02 + 0.98/96} = \frac{9600}{96 \cdot 2 + 98} = \frac{9600}{290} \approx 33.103$$
\item Provide a tight upper limit for the overall speedup that can be achieved for this program (on any machine).

\textbf{Solution} Since the speedup comes from running the parallelizable workload on as many cores as possible, a tight upper limit is obtained by letting the core count $n$ go to infinity:
$$\hat S = \lim\limits_{n \to \infty} \frac{1}{1-p+p/n} = \frac{1}{1-p} = \frac{1}{0.02} = 50$$
Conceptually, even if the parallel part is infinitely sped up, the execution time remains limited by the sequential fraction.
\end{enumerate}
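The numbers above can be double-checked with a small sketch (plain Java, not part of the required solution; the helper names are ours):
\begin{lstlisting}[language = Java]
// Numerical check of Ex 1.1.
public class AmdahlCheck {
    // Amdahl's Law: speedup with parallelizable fraction p on n cores
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        System.out.println(speedup(0.98, 96));   // ~33.103
        System.out.println(1.0 / (1.0 - 0.98));  // limit for n -> infinity: 50.0
    }
}
\end{lstlisting}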
\subsection{(2 points)} \label{ex:6_2}
The company you work for is about to upgrade all of its servers to slower but more energy efficient processors with much higher core counts.
Suppose the sequential part of the main program running on these servers accounts for $20\%$ of the program's computation time.
In order to compensate for the slower processing speeds and to better utilize the higher core count, management asks you to optimize the sequential part of said program --- make it run $k$ times faster --- s.t. the revised program scales 3 times better with a given number of processing cores, meaning the \emph{relative (!)} speedup of the revised program is 3 times higher than that of the original program.
What value of $k$ should you require for a given number of processor cores $n$?

\textbf{Solution} We can solve this problem by considering the two speedups $S_1$ and $S_2$ with a parallelizable fraction $p = 0.8$, additionally requiring $S_2$ to be 3 times as high as $S_1$ and the sequential fraction of the revised program to carry a factor of $1/k$:
$$
\begin{aligned}
S_1 &= \frac{1}{0.2 + 0.8/n} \\
S_2 &= \frac{0.2/k + 0.8}{0.2/k + 0.8/n} = 3 S_1
\end{aligned}
$$
We can then solve for $k$:
$$
\begin{aligned}
\frac{3}{0.2 + 0.8/n} &= \frac{0.2/k + 0.8}{0.2/k + 0.8/n} \\
\frac{30n}{2n + 8} &= \frac{2n + 8kn}{2n + 8k} \\
\frac{30}{2n + 8} &= \frac{2 + 8k}{2n + 8k} \\
30(2n + 8k) &= (2 + 8k)(2n + 8) \\
60(n + 4k) &= 4n + 16 + 16kn + 64k \\
15n + 60k &= n + 4 + k(4n + 16) \\
k(60 - 4n - 16) &= n + 4 - 15n \\
k(44 - 4n) &= 4 - 14n \\
k &= \frac{4 - 14n}{44 - 4n} = \frac{1 - 3.5n}{11 - n} = \frac{3.5n - 1}{n - 11}
\end{aligned}
$$
Note that for $n \leq 11$ the required factor does not exist: $k$ is undefined for $n = 11$ and negative for $n < 11$.
For $n > 11$ we obtain a positive $k = \frac{3.5n - 1}{n - 11}$.
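A quick numerical sanity check of the formula (plain Java; the names are ours):
\begin{lstlisting}[language = Java]
// Checks that speeding up the sequential part by k(n) triples the relative speedup.
public class SeqSpeedupCheck {
    static double k(int n) { return (3.5 * n - 1.0) / (n - 11.0); }

    public static void main(String[] args) {
        int n = 32;
        double factor = k(n);
        double s1 = 1.0 / (0.2 + 0.8 / n);                           // original relative speedup
        double s2 = (0.2 / factor + 0.8) / (0.2 / factor + 0.8 / n); // revised relative speedup
        System.out.println(s2 / s1);                                 // ~3.0
    }
}
\end{lstlisting}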
\subsection{(2 points + 2 bonus points)} \label{ex:6_3}
Let $p = 0.7$ denote the perfectly parallelizable fraction of a given program.
Suppose there are two \emph{mutually exclusive} optimizations to be done on this program, meaning if one optimization is done, the other is no longer available:
\begin{enumerate}[label={(\Roman*)}]
\item Improve the execution time of the parallelizable part $p$ by a factor of 7. \label{ex:6_3_it1}
\item Improve the execution time of the sequential part $1-p$ by a factor of 3. \label{ex:6_3_it2}
\end{enumerate}
\begin{enumerate}[label = \alph*)]
\item Provide the relative speedup $s(n)$ of the original program, as well as of the program after optimization \ref{ex:6_3_it1} $s^{\ref{ex:6_3_it1}}(n)$ and the one after \ref{ex:6_3_it2} $s^{\ref{ex:6_3_it2}}(n)$ for general processor core count $n$ and for $n = 10$.
Calculate the execution times $T^{\ref{ex:6_3_it1}}_n$ and $T^{\ref{ex:6_3_it2}}_n$ (of the two optimized programs) for $n=10$ in terms of the original workload normalized to 1.
(\emph{Hint: $T^{\ref{ex:6_3_it1}}_1 = 0.4$ and $T^{\ref{ex:6_3_it2}}_1 = 0.8$})

\textbf{Solution} With the original single-core execution time normalized to 1, the relative speedup $s(n)$ is given by
$$s(n) = \frac{T_1}{T_n} = \frac{0.3 + 0.7}{0.3 + 0.7/n} = \frac{10n}{3n + 7}$$
The two optimizations resolve to
$$s^{(I)}(n) = \frac{0.3 + 0.7/7}{0.3 + 0.7/(7n)} = \frac{0.4}{0.3 + 0.1/n} = \frac{4n}{3n + 1}$$
$$s^{(II)}(n) = \frac{0.3/3 + 0.7}{0.3/3 + 0.7/n} = \frac{0.8}{0.1 + 0.7/n} = \frac{8n}{n + 7}$$
For $n = 10$, we get $s(10) = \frac{100}{37} \approx 2.703$, $s^{(I)}(10) = \frac{40}{31} \approx 1.290$ and $s^{(II)}(10) = \frac{80}{17} \approx 4.706$.
With the original workload normalized to 1, the hint gives the single-core times of the optimized programs, so their execution times on 10 cores are $T_{10}^{(I)} = \frac{T_1^{(I)}}{s^{(I)}(10)} = 0.4 \cdot \frac{31}{40} = 0.31$ and $T_{10}^{(II)} = \frac{T_1^{(II)}}{s^{(II)}(10)} = 0.8 \cdot \frac{17}{80} = 0.17$ (these values are verified in the sketch after this list).
\item Find a tight bound $n'$ for the processing core count s.t. for $n \ge n'$ optimization \ref{ex:6_3_it2} results in \emph{lower execution times ($\ne$ speedup)} than optimization \ref{ex:6_3_it1}.
Provide your starting inequality!

\textbf{Solution} We start out with the inequality expressing the desired situation:
$$
\begin{aligned}
T_{n}^{(I)} &\geq T_{n}^{(II)} \\
0.3 + \frac{0.7}{7n} &\geq \frac{0.3}{3} + \frac{0.7}{n} \\
3 + \frac{1}{n} &\geq 1 + \frac{7}{n} \\
2 &\geq \frac{6}{n} \\
n &\geq 3
\end{aligned}
$$
We get that for $n \geq 3$ optimization $(II)$ results in an execution time at least as low as that of optimization $(I)$ (the two are equal at $n = 3$ and $(II)$ is strictly faster for $n > 3$), so $n' = 3$.
\item \emph{[2 bonus points]} Explain why optimization \ref{ex:6_3_it1} results in a lower relative speedup than even the original (unoptimized) program.
What is the intuition and potential pitfall behind relative speedup?

\textbf{Solution} Relative speedup is measured against the optimized program's \emph{own} single-core execution time, and adding cores only shrinks the parallelizable fraction of that time.
Optimization $(I)$ makes the parallel part 7 times faster, which is an absolute improvement, but it also shrinks the program's own parallelizable fraction from $0.7$ to $0.1/0.4 = 0.25$, so the revised program scales worse with the core count $n$ than before.
There is simply less parallelizable work left to be sped up, so the relative speedup appears worse than even that of the original program --- the pitfall being that a lower relative speedup does not mean a slower program.
\end{enumerate}
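The speedups and execution times from part a) and the crossover from part b) can be verified with a short sketch (plain Java; the names are ours):
\begin{lstlisting}[language = Java]
// Numerical check of Ex 1.3 a) and b).
public class TwoOptimizationsCheck {
    // execution time with sequential part seq and parallelizable part par on n cores
    static double time(double seq, double par, int n) { return seq + par / n; }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(1.0 / time(0.3, 0.7, n)); // s(10)    ~ 2.703
        System.out.println(0.4 / time(0.3, 0.1, n)); // s^I(10)  ~ 1.290
        System.out.println(0.8 / time(0.1, 0.7, n)); // s^II(10) ~ 4.706
        System.out.println(time(0.3, 0.1, n));       // T^I_10  = 0.31
        System.out.println(time(0.1, 0.7, n));       // T^II_10 = 0.17
        // crossover of part b): (II) is at least as fast as (I) from n = 3 on
        for (int m = 2; m <= 4; m++)
            System.out.println(m + ": " + time(0.3, 0.1, m) + " vs " + time(0.1, 0.7, m));
    }
}
\end{lstlisting}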
% This (type of) exercise is a total pain to correct and doesn't really contribute much to students' understanding.
% Hence I cut it
%\subsection{(1 point)} \label{ex:7}
%Running your application on two processors yields a speedup of $S_2$.
%Use Amdahl's law to derive a formula for $S_n$, the speedup on $n$ processors, in terms of $n$ and $S_2$.
\subsection{(2 points)} \label{ex:8}
You have a choice between buying one uniprocessor that executes ten billion instructions per second, or a twenty-processor multiprocessor where each processor executes one billion instructions per second.
Explain how you would decide which to buy for a particular application.

\textbf{Solution} $CPU_U$ has $n = 1$ and processes $10^{10}$ instructions/s/core, while $CPU_M$ has $n = 20$ and processes $10^9$ instructions/s/core.
Normalizing the workload to one second of work on a single $10^9$ instructions/s core, the execution times are $T^U = 0.1$ and $T^M = 1 - p + \frac{p}{20}$, where $p$ is the parallelizable fraction of the application.
To decide which machine to buy, we determine at which parallelizable fraction $p$ (if any) the multiprocessor becomes the faster machine; setting the two execution times equal gives the crossover point:
$$
\begin{aligned}
T^U &= T^M \\
0.1 &= 1 - p + \frac{p}{20} \\
2 &= 20 - 20p + p \\
19p &= 18 \\
p &= \frac{18}{19} \approx 0.947
\end{aligned}
$$
So we would buy the twenty-processor machine only if at least $\frac{18}{19} \approx 94.7\%$ of the application is parallelizable; for any less parallelizable application the fast uniprocessor finishes first.
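A small sketch comparing the two machines for a few parallelizable fractions (plain Java; the names are ours):
\begin{lstlisting}[language = Java]
// Execution times of Ex 1.4 for a workload of 1e9 instructions.
public class MachineChoice {
    public static void main(String[] args) {
        for (double p : new double[] {0.90, 18.0 / 19.0, 0.99}) {
            double tUni   = 0.1;                 // 1e9 instructions at 1e10 instructions/s
            double tMulti = (1 - p) + p / 20.0;  // sequential part plus parallel part on 20 cores
            System.out.println(p + ": uniprocessor " + tUni + " s, multiprocessor " + tMulti + " s");
        }
    }
}
\end{lstlisting}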
\newpage
\section{Locks} \label{sec:locks}
%\subsection{(2 points)} \label{ex:9}
%Define $r$-bounded waiting for a given mutual exclusion algorithm to mean that if $D_A^j \rightarrow D_B^k$ then $CS_A^j \rightarrow CS_B^{k+r}$.
%Is there a way to define a doorway for the Peterson algorithm such that it provides $r$-bounded waiting for some value of $r$?
%If yes, define a doorway (a code interval in Figure \ref{fig:peterson_lock}) and prove the statement for a specific $r$.
%Otherwise sketch an impossibility proof.
%
%\begin{figure}[H]
% \centering \includegraphics[width=9cm]{res/Peterson Lock.jpg}
% \caption{Peterson Lock} \label{fig:peterson_lock}
%\end{figure}
\subsection{(2 points)} \label{ex:10}
Why do we need to define a doorway section, and why can we not define FCFS in a mutual exclusion algorithm based on the order in which the first instruction in the $lock()$ method was executed?
Argue your answer in a case-by-case manner based on the nature of the first instruction executed by $lock()$: a read or write, to separate or the same location.

\textbf{Solution} We can consider two threads A and B and exhaust all possible cases:
\begin{itemize}
\item A \textit{read} instruction by A and B is performed on the \textit{same memory location}. Reads leave no trace in memory, so afterwards neither thread (nor anyone else) can tell whose read was performed first; an order that no one can observe cannot be used to define FCFS.
\item A \textit{read} instruction by A and B is performed on \textit{different memory locations}. This case suffers from the same problem as above.
\item A \textit{write} instruction by A and B is performed on the \textit{same memory location}. The final value identifies the \emph{last} writer, but the thread that wrote last cannot tell whether another thread came before it, since its write may simply have overwritten an earlier one.
\item A \textit{write} instruction by A and B is performed on \textit{different memory locations}. Neither write leaves any information about the other, so again it is impossible to tell who went first.
\end{itemize}
In all cases the threads lack the information needed to establish a fair order from a single first instruction, so FCFS cannot be defined this way.
This is why we need a doorway section: a bounded code interval whose completion order is well defined, and with respect to which FCFS can then be stated.
\subsection{(4 points)} \label{ex:11}
Programmers at the Flaky Computer Corporation designed the protocol shown in Figure \ref{fig:flaky_lock} to achieve $n$-thread mutual exclusion.
For each question either sketch a proof or display an execution where it fails.
\\
\begin{minipage}{\textwidth}
\begin{minipage}{0.5\textwidth}
\begin{enumerate}[label=\alph*)]
\item Does this protocol satisfy mutual exclusion?
\item Is this protocol starvation-free?
\item Is this protocol deadlock-free?
\end{enumerate}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{figure}[H]
\includegraphics[width=0.7\textwidth]{res/Flaky Lock.jpg}
\caption{Flaky Lock} \label{fig:flaky_lock}
\end{figure}
\end{minipage}
\end{minipage}
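For reference, a sketch of the protocol as we read it from Figure \ref{fig:flaky_lock} (our reconstruction; the line numbers used below refer to this layout):
\begin{lstlisting}[language = Java]
class Flaky implements Lock {
  private int turn;
  private boolean busy = false;
  public void lock() {
    int me = ThreadID.get();
    do {
      do {
        turn = me;            // line 8
      } while (busy);
      busy = true;
    } while (turn != me);     // line 11
  }
  public void unlock() {
    busy = false;
  }
}
\end{lstlisting}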
\textbf{a) Mutual exclusion: yes.}
We can prove that this protocol satisfies mutual exclusion by sketching a proof by contradiction.

\textbf{Proof} Assume two threads are in the critical section at the same time, that is to say, two threads A and B have both read \texttt{turn} with their own value and exited the while loop on line 11.
The sequence of events must (without loss of generality) have been:
$$
\begin{aligned}
write_B(turn = B) &\to write_A(turn = A) \Rightarrow \\
read_B(busy == false) &\to read_A(busy == false) \Rightarrow \\
write_B(busy = true) &\to write_A(busy = true) \Rightarrow \\
read_B(turn == B) &\to read_A(turn == A)
\end{aligned}
$$
Here the events separated by the precedence operator $\to$ may be swapped, while the events separated by the strict precedence operator $\Rightarrow$ may not.
None of these sequences is possible: both A and B must read \texttt{turn} holding their own value, but between writing that value and reading it back each thread must also read and write \texttt{busy}.
Only one thread can successfully set and keep \texttt{turn} and pass the \texttt{busy} check.
\textbf{b) Not starvation-free.}
With two threads, one thread can enter the critical section infinitely many times while the other starves.
We construct such an execution with two threads A and B:
$$
\begin{aligned}
write_B(turn = B) &\to write_A(turn = A) \to read_A(busy == false) \to \\
write_A(busy = true) &\to read_A(turn == A)
\end{aligned}
$$
At this point A exits \texttt{lock()} and enters the critical section, while B keeps trying:
$$
read_B(busy == true) \to write_B(turn = B) \to write_A(busy = false)
$$
Now A has completed \texttt{unlock()} and enters \texttt{lock()} again:
$$
\begin{aligned}
write_A(turn = A) &\to read_A(busy == false) \to \\
write_A(busy = true) &\to read_A(turn == A)
\end{aligned}
$$
A has successfully acquired the lock another time and exits \texttt{lock()}, while B is still spinning.
This execution can be repeated indefinitely, so B starves.
\textbf{c) Not deadlock-free.}
We can construct an execution in which two threads A and B prevent each other from ever acquiring the lock:
$$
\begin{aligned}
write_A(turn = A) &\to write_B(turn = B) \to read_A(busy == false) \to write_A(busy = true) \to \\
read_B(busy == true) &\to read_A(turn == B)
\end{aligned}
$$
At this point A has failed the check on line 11 and B has failed the check on line 9, so both threads go back to line 8.
\texttt{busy} is now stuck at \texttt{true} even though no thread is in the critical section, and since \texttt{busy} is only ever reset in \texttt{unlock()}, both threads spin in the inner loop forever --- a deadlock.
%\subsection{(2 points)} \label{ex:12}
%Show that the Filter lock allows some threads to overtake others (who are at least attempting level 1, otherwise the statement is trivial) an unbounded number of times.
\subsection{(10 points)} \label{ex:13}
Another way to generalize the two-thread Peterson lock seen in Figure \ref{fig:peterson_lock} is to arrange a number of 2-thread Peterson locks in a binary tree.
Suppose $n$ is a power of two.
Each thread is assigned a leaf lock which it shares with one other thread.
Each lock treats one thread as thread 0 and the other as thread 1.
In the tree-lock's acquire method, the thread acquires every two-thread Peterson lock from that thread's leaf to the root.
The tree-lock's release method unlocks each of the 2-thread Peterson locks that thread has acquired, \emph{starting from the leaf up to the root}.
At any time, a thread can be delayed for a finite duration.
(In other words, threads can take naps, or even vacations, but they do not drop dead.)
For each of the first three listed properties below, either sketch a formal induction proof that it holds, or describe a (possibly infinite) execution where it is violated.
In case of a violation: can you find a (simple) fix to make it work?
If so, provide the necessary changes and sketch an induction proof that your fix really works.
\begin{figure}[H]
\centering \includegraphics[width=0.47\textwidth]{res/Peterson Lock.jpg}
\caption{Peterson Lock} \label{fig:peterson_lock}
\end{figure}
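For concreteness, a minimal sketch of the construction described above (our own code, not part of the exercise; it assumes a two-thread Peterson lock whose \texttt{lock()}/\texttt{unlock()} take the caller's slot, 0 or 1, as an argument):
\begin{lstlisting}[language = Java]
// Sketch of the tree-lock: 2-thread Peterson locks arranged in a binary tree.
class TreeLock implements Lock {
    private final Peterson[] node; // node[1] is the root; children of k are 2k and 2k+1
    private final int leaves;      // n/2 leaf locks; thread t starts at node[leaves + t/2]

    public TreeLock(int n) {       // n is a power of two
        leaves = n / 2;
        node = new Peterson[2 * leaves];
        for (int i = 1; i < node.length; i++)
            node[i] = new Peterson();
    }

    public void lock() {
        int me = ThreadID.get();
        int k = leaves + me / 2;   // this thread's leaf lock
        int slot = me % 2;         // its slot (0 or 1) at that leaf
        while (k >= 1) {           // acquire every lock from the leaf up to the root
            node[k].lock(slot);
            slot = k % 2;          // at the parent we arrive as the left (0) or right (1) child
            k = k / 2;
        }
    }

    public void unlock() {         // release from the leaf up to the root, as stated above
        int me = ThreadID.get();
        int k = leaves + me / 2;
        int slot = me % 2;
        while (k >= 1) {
            node[k].unlock(slot);
            slot = k % 2;
            k = k / 2;
        }
    }
}
\end{lstlisting}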
\begin{enumerate}[label=\alph*)]
\item Mutual exclusion

\textbf{Solution} Mutual exclusion does not hold, because of the way the tree is unlocked.
We construct an example starting from a state where one thread has locked all locks on its path, while threads from the left and right subtrees are attempting to acquire their locks.
Assume there are three threads A, B and C and three lock nodes N, W and E (north, west and east), where N is the root node, W is the left leaf lock and E is the right leaf lock.
Threads A and B share the leaf lock W, thread C uses the leaf lock E.
Without loss of generality, assume A has locked N (and thus has also locked W).
The states of the nodes are $flag_W = [true, true]$, $victim_W = 1$, $flag_E = [true, false]$, $victim_E = 0$, $flag_N = [true, false]$, $victim_N = 0$.
That is to say, B is waiting for W, C has locked E and is about to start acquiring N, and A has locked W and N.
We can now construct a sequence of events that violates mutual exclusion as A unlocks from the leaf up:
$$
\begin{aligned}
write_A(flag_W[0] = false) &\to read_B(flag_W[0] == false) \to \\
write_B(flag_N[0] = true) &\to write_B(victim_N = 0) \to \\
write_C(flag_N[1] = true) &\to write_C(victim_N = 1) \to \\
write_A(flag_N[0] = false) &\to read_B(victim_N == 1) \to \\
read_C(flag_N[0] == false)
\end{aligned}
$$
We can now see that B has read $victim_N$ to be 1 (thus entering the critical section) and C has read $flag_N[0]$ to be $false$ (thus entering the critical section as well).
Mutual exclusion is violated.
The problem stems from the fact that in the part of the tree consisting of the nodes N, W and E there can be three threads simultaneously:
in our example B and C are waiting for N while A occupies and unlocks it.
While A is busy going up the tree unlocking its locks, another thread from the left subtree can move up behind it and eventually contest the root lock with a thread from the right subtree, even though A has not yet released the root.

A \textbf{simple fix} is to change the way the tree-lock is unlocked:
the thread unlocks all of its locks starting from the root node down to its leaf.
Now the situation where one thread still holds the root node while two other threads contest it cannot happen: A releases the root first, and while it continues down the tree it still holds the locks below, so a thread from A's own subtree cannot reach the root before A has released every lock on its path.
We can sketch an inductive proof that mutual exclusion is satisfied after this fix.

\textbf{Induction hypothesis} The tree-lock satisfies mutual exclusion for $n$ threads.

\textbf{Base case} For $n = 2$, there is only a single Peterson lock, which we know to satisfy mutual exclusion.

\textbf{Induction step} For the induction step, we grow the tree by another layer, $n' = 2n$ (each subtree handles $n$ threads).
By the induction hypothesis, each subtree guarantees mutual exclusion on its own root, so at most one thread per subtree can be at the new root node at any time.
We now consider both cases: the new root node is either unlocked or locked.
If it is unlocked, the contest is between at most two threads, one from each subtree, and the situation is that of a regular two-thread Peterson lock.
If it is locked, then one subtree has its root locked by the same thread that occupies the new root node, so only one thread, from the other subtree, can be attempting to acquire the new root.
The problematic scenario described previously cannot happen, because the holder A now unlocks the new root node first:
A has to release at least two locks before a thread from its own subtree can even reach the new root, and by then either the thread from the other subtree has already acquired the new root's lock, or the new root is free and at most two threads (one per subtree) contest it, which is again the situation of a regular Peterson lock.
Thus the induction hypothesis also holds for $n' = 2n$ threads.
\item Deadlock freedom

\textbf{Solution} The tree-lock is deadlock-free.

\textbf{Induction hypothesis} The tree-lock is deadlock-free for $n$ threads.

\textbf{Base case} For $n = 2$, the tree-lock is a single regular Peterson lock, which we know to be deadlock-free.

\textbf{Induction step} Growing the tree by another layer, $n' = 2n$, we need to consider whether all threads can get stuck.
By the induction hypothesis, neither subtree can deadlock on its own, so we only need to consider the new root node.
If the new root node is currently locked, it will be unlocked after a finite amount of time (its holder does not drop dead), and a thread waiting at the root will then succeed.
If the new root node is not currently locked, then there is a contest between at most two threads, one coming from the left and one from the right subtree.
This situation is that of a regular Peterson lock, which we know to be deadlock-free.
Thus the induction hypothesis also holds for $n' = 2n$ threads.
\item Starvation freedom

\textbf{Solution} The tree-lock is starvation-free; the induction argument is analogous to the one for deadlock freedom.
The key point is that each node is a starvation-free two-thread Peterson lock, so a thread waiting at any node on its path eventually acquires it, and since a path from leaf to root has only finitely many nodes, every thread eventually acquires the root and enters the critical section.
\item Is there an upper bound on the number of times the tree-lock can be acquired and released between the time a thread starts acquiring the tree-lock and when it succeeds?
If so, sketch a proof, otherwise construct an unbounded execution.

\textbf{Solution} Yes, we can place an upper bound on the number of times the tree-lock can be acquired and released between the time a thread starts acquiring it and the time it succeeds.
This is precisely because the tree-lock is not unlocked starting from the root, but from the leaf.
To get an execution in which a thread is overtaken an unbounded number of times, we would need that this thread (let us call it A) waits at some layer $l$ for a lock to be released while at the root layer $h$, $h > l$, threads from the other subtree acquire and release the tree-lock an unbounded number of times.
This cannot happen: the thread blocking A at layer $l$ has acquired the tree-lock, and since it unlocks starting from the leaf, it releases the lock at layer $l$ strictly before it releases the lock at the root layer $h$.
So before the root can even become available to the threads of the other subtree, the lock at layer $l$ is released and A is unblocked.
It could also be that A is blocked at layer $l$ by another thread of its own subtree, but as we have seen the tree-lock is starvation-free, so A cannot be blocked there indefinitely.
If the unlocking thread naps or takes a vacation at some stage before having released the root node, then no one at all can acquire the tree-lock in the meantime, which adds no further acquisitions.
We have also seen that this lock does not guarantee mutual exclusion, so in principle two threads can overtake A at layer $l$ (if $l$ is not the lowermost layer);
the worst that can happen when mutual exclusion is violated, however, is that A reaches the root lock faster than it otherwise would.
In the worst case A is only slowly shifted up the tree, so a conservative bound that grows with the height of the tree, such as $10^h$ where $h$ is the height of the tree, is safe.
\subsection{(4 points)} \label{ex:14}
The $\mathfrak{L}$-exclusion problem is a variant of the starvation-free mutual exclusion problem.
We make two changes: as many as $\mathfrak{L}$ threads may be in the critical section at the same time, and fewer than $\mathfrak{L}$ threads might fail (by halting) in the critical section.
An implementation must satisfy the following conditions:
\begin{itemize}
\item $\mathfrak{L}$\textbf{-exclusion:} at any time at most $\mathfrak{L}$ threads are in the critical section.
\item $\mathfrak{L}$\textbf{-starvation-freedom:} as long as fewer than $\mathfrak{L}$ threads are in the critical section, some thread that wants to enter the critical section will eventually succeed (even if some threads in the critical section have halted).
\end{itemize}
Modify the $n$-process Filter mutual exclusion algorithm from Figure \ref{fig:filter_lock} to turn it into an $\mathfrak{L}$-exclusion algorithm.
Provide the whole source code!
\begin{figure}[H]
\centering \includegraphics[width=10cm]{res/Filter Lock.jpg}
\caption{Filter Lock} \label{fig:filter_lock}
\end{figure}

\textbf{Solution} The Filter lock works by filtering out one thread at each level, in combination with there being $n - 1$ levels to get through (in the following we write $l$ for $\mathfrak{L}$).
As $n - 1$ threads are filtered out, only one thread gets into the critical section.
The first change is therefore simple: because we want to let $l$ threads pass, we cut off $l$ levels, so that $l$ threads instead of one can get past the remaining filters.
The new number of levels is $n - l - 1$.
A second change is necessary for the while loop that spins on the condition that there is a conflicting thread on a higher level:
if there were a thread in the critical section, all other threads would block in this while loop, even though we require that up to $l$ threads can proceed.
The fix is to change this check into a different kind of check:
from the perspective of our thread, we count all threads that are on a level smaller than or equal to our own.
If this count is greater than $n - l$, then at most $l - 1$ threads can be past our level (and hence in the critical section), so we spin while this count is smaller than or equal to $n - l$ and we are still the victim.
The full code can be seen in the listing below.
\begin{lstlisting}[language = Java]
class Filter implements Lock {
  int[] level;
  int[] victim;
  int n, l; // total number of threads, threads allowed into the critical section

  public Filter(int n, int l) {
    this.n = n;
    this.l = l;
    level = new int[n];
    victim = new int[n - l];
    for (int i = 0; i < n; i++) {
      level[i] = 0;
    }
  }

  public void lock() {
    int me = ThreadID.get();
    // CHANGE: l filter levels are cut off
    for (int i = 1; i < n - l; i++) { // attempt level i
      level[me] = i;
      victim[i] = me;
      // CHANGE: count whether more than n-l threads are on levels 0 to i
      int hasntPassedMe;
      do {
        hasntPassedMe = 1; // me is not past me
        for (int k = 0; k < n; k++)
          if (k != me && level[k] <= i)
            hasntPassedMe++;
      } while (hasntPassedMe <= n - l && victim[i] == me);
      // spin until the victim changes or the count exceeds n - l, which implies
      // that fewer than l threads are past this level (and in the critical section)
    }
  }

  public void unlock() {
    int me = ThreadID.get();
    level[me] = 0; // through the filter
  }
}
\end{lstlisting}
\subsection{(2 points)} \label{ex:15}
In practice, almost all lock acquisitions are uncontended, so the most practical measure of a lock's performance is the number of steps needed for a thread to acquire a lock when no other thread is concurrently trying to acquire the lock.
Scientists at Cantaloupe-Melon University have devised the following `wrapper' for an arbitrary lock, shown in Figure \ref{fig:fastpath_lock}.
They claim that if the base $Lock$ class provides mutual exclusion and is starvation-free, so does the $FastPath$ lock, but it can be acquired in a constant number of steps in the absence of contention.
Sketch an argument why they are right, or give a counterexample.
\begin{figure}[H]
\centering \includegraphics[width=10cm]{res/FastPath Lock.jpg}
\caption{FastPath Lock} \label{fig:fastpath_lock}
\end{figure}
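For reference, a sketch of the wrapper as we read it from Figure \ref{fig:fastpath_lock} (our reconstruction, so the line numbers mentioned below refer to the figure and may be shifted here; \texttt{ThreadID.get()} is assumed to return 0 for A and 1 for B):
\begin{lstlisting}[language = Java]
class FastPath implements Lock {
  private Lock lock;        // base lock: mutually exclusive and starvation-free
  private int x, y = -1;
  public void lock() {
    int i = ThreadID.get();
    x = i;                  // announce myself
    while (y != -1) {}      // wait while a fast-path holder is registered
    y = i;                  // register as the fast-path holder
    if (x != i)             // did someone announce themselves after me?
      lock.lock();          //   then take the slow path via the base lock
  }
  public void unlock() {
    y = -1;
    lock.unlock();
  }
}
\end{lstlisting}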
\textbf{Solution} The scientists at Cantaloupe-Melon University have failed their objective: the claim does not hold, because the $FastPath$ lock does not even satisfy mutual exclusion, as the following counterexample shows.
Consider an execution of threads A (index 0) and B (index 1), starting with $y = -1$ (no thread currently holds the lock):
$$
\begin{aligned}
write_A(x = 0) &\to write_B(x = 1) \to \\
read_A(y == -1) &\to read_B(y == -1) \to \\
write_A(y = 0) &\to write_B(y = 1) \to \\
read_A(x == 1) &\to read_B(x == 1)
\end{aligned}
$$
At this point, both A and B have reached the if condition in line 10.
A sees that $x \not= 0$ and enters the slow path, acquiring the internal base lock in the traditional way.
B sees that $x = 1$, its own index, and forsakes the slow path, thereby successfully acquiring the lock via the fast path.
Both threads end up in the critical section: mutual exclusion is violated, and it becomes apparent once more that there are no shortcuts in life!
\section{Register construction} \label{sec:register_con}
\subsection{(2 points)} \label{ex:38_new}
Consider the safe Boolean MRSW construction shown in Figure \ref{fig:safe_bool_mrsw}.
\\
True or false: if we replace the safe Boolean SRSW register array with an array of atomic SRSW registers, then the construction yields an atomic Boolean MRSW register.
Justify your answer by sketching a proof or providing a counterexample.
\begin{figure}[H]
\centering \includegraphics[width=10cm]{res/Safe Boolean MRSW.jpg}
\caption{Safe Boolean MRSW construction} \label{fig:safe_bool_mrsw}
\end{figure}
\textbf{Solution} False.
Consider three reads $R_1$, $R_2$, $R_3$ and one write $W$.
We construct a counterexample with the following values and relations:
$R_1 = read(false)$, $R_2 = read(true)$, $R_3 = read(false)$, $W = write(true)$ with $R_1 \to W$, $R_2 \to R_3$, and $R_2$, $R_3$ each concurrent with $W$ (neither precedes the other).
That is to say, the previous value was $false$ and the new value $true$ is being written;
read $R_3$ happens strictly after read $R_2$, and both $R_2$ and $R_3$ are concurrent with the write $W$.
\begin{figure}[H]
\centering \includegraphics[width=12cm]{res/ex-3-1.png}
\caption{Ex 3.1 counterexample} \label{fig:ex-3-1}
\end{figure}
This is illustrated in Figure \ref{fig:ex-3-1}.
An atomic MRSW register must be linearizable with respect to the write and the reads.
But since $R_2$ returns the new value and the strictly later $R_3$ returns the old value, the linearization point of $W$ would have to lie both before that of $R_2$ and after that of $R_3$ --- impossible, because $R_2 \to R_3$.
For such an outcome to be admissible, $R_2$ and $R_3$ would have to overlap.
This execution can actually occur with the construction: the writer updates the per-reader atomic SRSW registers one after another, so a reader whose register has already been written (here $R_2$, by a reader with a lower thread ID) returns the new value, while a reader whose register has not yet been written (here $R_3$, by a reader with a higher thread ID) still returns the old value.
Hence the construction does not yield an atomic Boolean MRSW register.
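For reference, the relevant part of the construction (a sketch of what Figure \ref{fig:safe_bool_mrsw} shows, with the array entries taken to be atomic SRSW registers as in the question; names follow the textbook style and are not verbatim):
\begin{lstlisting}[language = Java]
// Boolean MRSW register built from one SRSW register per reader (sketch).
public class BooleanMRSWRegister implements Register<Boolean> {
  boolean[] s_table; // assumed: each entry behaves like an atomic SRSW register

  public BooleanMRSWRegister(int readers) {
    s_table = new boolean[readers];
  }

  public void write(Boolean x) {
    // The per-reader registers are written one after another: a reader with a low
    // ID may already see the new value while a reader with a higher ID still sees
    // the old one -- exactly the situation exploited in the counterexample above.
    for (int i = 0; i < s_table.length; i++)
      s_table[i] = x;
  }

  public Boolean read() {
    return s_table[ThreadID.get()]; // each reader reads only its own register
  }
}
\end{lstlisting}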
%\subsection{(2 points)} \label{ex:35}
%Consider the safe Boolean MRSW construction shown in Figure \ref{fig:safe_bool_mrsw}.
%\\
%True of false: if we replace the safe Boolean SRSW register array with an array of regular Boolean SRSW registers, then the construction yields a regular Boolean MRSW register.
%Justify your answer by sketching a proof or providing a counterexample.
\subsection{(2 points)} \label{ex:36}
Consider the atomic MRSW register construction shown in Figure \ref{fig:atomic_mval_mrsw}.
\\
True or false: if we replace the atomic SRSW registers with regular SRSW registers, then the construction still yields an atomic MRSW register.
Justify your answer by sketching a proof or providing a counterexample.
\begin{figure}[H]
\centering \includegraphics[width=12cm]{res/Atomic M-val MRSW.jpg}
\caption{Atomic MRSW construction} \label{fig:atomic_mval_mrsw}
\end{figure}
\textbf{Solution} False.
Consider three events $R_1$, $R_2$ and $W$, where both reads are performed by the same reader (say thread 1): $R_1 = read(\langle 1, A\rangle)$, $R_2 = read(\langle 0, B\rangle)$ and $W = write(\langle 1, A\rangle)$.
In this example, the previously committed value was $\langle 0, B\rangle$.
We have the relations $R_1 \to R_2$ as well as $R_1 \not\to W$, $R_2 \not\to W$ (and vice versa).
That is to say, $R_2$ happens strictly after $R_1$, and $W$ is concurrent with both $R_1$ and $R_2$.
This is illustrated in Figure \ref{fig:ex-3-2}.
\begin{figure}[H]
\centering \includegraphics[width=12cm]{res/ex-3-2.png}
\caption{Ex 3.2 counterexample} \label{fig:ex-3-2}
\end{figure}
The problem here is that it is now valid for the later read $R_2$ to return the previous value $\langle 0, B\rangle$ even though $R_1$ already returned the new value $\langle 1, A\rangle$.
The construction is supposed to be an atomic MRSW register, but its internal building blocks are now only regular SRSW registers.
In particular, the register on the diagonal belonging to thread 1 can have the write pending for the whole duration of both reads.
Because both reads are concurrent with this pending write, it is perfectly acceptable for a \emph{regular} register that the earlier read $R_1$ returns the new value while the later read $R_2$ returns the old one.
Usually the construction combats exactly this problem by having readers assist the propagation of values, but a reader never writes its own diagonal register: in the code, the diagonal is skipped, as can be seen in line 23 of Figure \ref{fig:atomic_mval_mrsw}.
As long as no other reader has read the new value in the meantime, thread 1's column therefore contains no helped copy with a newer timestamp, and $R_2$ returns the old value.
As in Ex 3.1, such an execution is not linearizable, so the construction is no longer a valid atomic MRSW register.
%\subsection{(2 points)} \label{ex:34}
%Consider the safe Boolean MRSW construction shown in Figure \ref{fig:safe_bool_mrsw}.
%\\
%True or false: if we replace the safe Boolean SRSW register array with an array of safe $M$-valued registers, then the construction yields a safe $M$-valued MRSW register.
%Justify your answer by sketching a proof or providing a counterexample.
%\subsection{(2 points)} \label{ex:37_new}
%Give an example of a sequentially-consistent execution that is not safe.
%\subsection{(2 points)} \label{ex:39}
%Consider the regular Boolean MRSW construction shown in Figure \ref{fig:regular_bool_mrsw}.
%\\
%True of false: if we replace the safe Boolean MRSW register with a safe $M$-valued MRSW register, then the construction yields a regular $M$-valued MRSW register.
%Justify your answer by sketching a proof or counterexample.
%
%\begin{figure}[H]
% \centering \includegraphics[width=12cm]{res/Regular Boolean MRSW.jpg}
% \caption{Regular Boolean MRSW construction} \label{fig:regular_bool_mrsw}
%\end{figure}
\subsection{(4 points)} \label{ex:40_new}
Does Peterson's two-thread mutual exclusion algorithm from Figure \ref{fig:peterson_lock} still work if the shared atomic \texttt{flag} registers are replaced by safe registers?
Argue by either providing a proof sketch or counterexample.

\textbf{Solution} Yes, it does.
To show this, we consider what actually changes.
Only a read that overlaps a write to the same register can behave differently (a safe register read that does not overlap a write returns the last value written), so we restrict ourselves to these cases.
For such an overlapping read, the safe register may return an arbitrary value.
If it is the value that is currently being written, the execution is indistinguishable from one with an atomic register.
If it is the other value, then it must be the previous value, because the registers are Boolean and each write of a flag flips its value.
We can now look at all relevant cases more precisely.
A conflict can only arise when thread $i$ reads $flag[j]$ while thread $j$ is writing it, because each flag is written only by its owner and read only by the other thread.
The only read of $flag[j]$ happens in the while loop, where thread $i$ spins.
The only writes to $flag[j]$ (by thread $j$) happen either in \texttt{lock()}, during the doorway section, or in \texttt{unlock()}.
If the flag is being written in \texttt{lock()} and read at the same time, then it must be the case that the flag was previously $false$ and is being written to $true$, because thread $j$ is not in the critical section and is about to try to enter it.
This case is fine: the reading thread (let us call it A) has already finished its doorway, and mutual exclusion is preserved because the writing thread (let us call it B) will spin.
Thread A has set its flag to true and completed its write to victim;
thread B is still in the process of writing its flag, so it will write to victim next and become blocked, spinning in the while loop.
If the flag is being written in \texttt{unlock()} (by thread B) and read at the same time (by thread A), then it must be the case that the flag was previously $true$ and is being written to $false$, because B entered the critical section and is now exiting it.
If A reads the old value $true$, it simply continues spinning in the while loop, so in this case only a delay is caused, which is certain to be overcome once a read in the loop returns the new value; if A reads the new value $false$, it enters the critical section while B is already leaving it, which is also fine.
As we can see, there are no problematic cases, and thus Peterson's two-thread mutual exclusion algorithm still works even if the shared atomic \texttt{flag} registers are replaced by safe registers.
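For reference, the two-thread Peterson lock whose read and write sites are discussed above (a sketch following Figure \ref{fig:peterson_lock}; only the \texttt{flag} array is assumed to be replaced by safe registers, \texttt{victim} stays as it is):
\begin{lstlisting}[language = Java]
// Two-thread Peterson lock (sketch); flag[] are the registers replaced by safe ones.
class Peterson implements Lock {
  private boolean[] flag = new boolean[2]; // each entry written only by its owner
  private int victim;

  public void lock() {
    int i = ThreadID.get();               // i is 0 or 1
    int j = 1 - i;
    flag[i] = true;                       // doorway: announce interest
    victim = i;                           // doorway: let the other thread go first
    while (flag[j] && victim == i) {}     // spin: the only read of the other flag
  }

  public void unlock() {
    int i = ThreadID.get();
    flag[i] = false;                      // the only other write to flag[i]
  }
}
\end{lstlisting}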
%\subsection{(6 points)} \label{ex:41}
%Consider the following implementation of a register in a distributed, message-passing system.
%There are $n$ single-threaded processors $P_0, \ldots, P_{n-1}$ arranged in a ring, where $P_i$ can send messages only to $P_{i+1\ mod\ n}$.
%Messages are delivered in FIFO order along each link.
%Each processor keeps a copy of the shared register.
%\begin{itemize}
% \item To read a register, the processor reads the copy in its local memory.
% \item A processor $P_i$ starts a $write()$ call of value $v$ to register $x$, by sending the message `$P_i:\ write\ v\ to\ x$' to $P_{i+1\ mod\ n}$.
% \item If $P_i$ receives a message `$P_j:\ write\ v\ to\ x$' for $i \ne j$, then it writes $v$ to its local copy of $x$ and forwards the message to $P_{i+1\ mod\ n}$.
% \item If $P_i$ receives a message `$P_i:\ write\ v\ to\ x$' then it writes $v$ to its local copy of $x$ and discards the message.
% The $write()$ call is now complete.
%\end{itemize}
%
%For the following questions either provide a short proof sketch or counterexample.
%If write operations do not overlap, ...
%\begin{enumerate}[label=\alph*)]
% \item is this register implementation regular?
% \item is it atomic?
%\end{enumerate}
%If multiple processors are allowed to call $write()$ simultaneously, ...
%\begin{enumerate}[label=\alph*)]
% \setcounter{enumi}{2}
% \item is this register implementation safe?
%\end{enumerate}
\end{document}