/* -*- Mode: C++; tab-width: 8; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

#ifndef mozilla_FastBernoulliTrial_h
#define mozilla_FastBernoulliTrial_h

#include "mozilla/Assertions.h"
#include "mozilla/XorShift128PlusRNG.h"

#include <cmath>
#include <stdint.h>

namespace mozilla {
/**
 * class FastBernoulliTrial: Efficient sampling with uniform probability
 *
 * When gathering statistics about a program's behavior, we may be observing
 * events that occur very frequently (e.g., function calls or memory
 * allocations) and we may be gathering information that is somewhat expensive
 * to produce (e.g., call stacks). Sampling all the events could have a
 * significant impact on the program's performance.
 *
 * Why not just sample every N'th event? This technique is called "systematic
 * sampling"; it's simple and efficient, and it's fine if we imagine a
 * patternless stream of events. But what if we're sampling allocations, and the
 * program happens to have a loop where each iteration does exactly N
 * allocations? You would end up sampling the same allocation every time through
 * the loop; the entire rest of the loop becomes invisible to your measurements!
 * More generally, if each iteration does M allocations, and M and N have any
 * common divisor at all, most allocation sites will never be sampled. If
 * they're both even, say, the odd-numbered allocations disappear from your
 * results.
 *
 * Ideally, we'd like each event to have some probability P of being sampled,
 * independent of its neighbors and of its position in the sequence. This is
 * called "Bernoulli sampling", and it doesn't suffer from any of the problems
 * mentioned above.
 *
 * One disadvantage of Bernoulli sampling is that you can't be sure exactly how
 * many samples you'll get: technically, it's possible that you might sample
 * none of them, or all of them. But if the number of events N is large, these
 * aren't likely outcomes; you can generally expect somewhere around P * N
 * events to be sampled.
 *
 * The other disadvantage of Bernoulli sampling is that you have to generate a
 * random number for every event, which can be slow.
 *
 * [significant pause]
 *
 * BUT NOT WITH THIS CLASS! FastBernoulliTrial lets you do true Bernoulli
 * sampling, while generating a fresh random number only when we do decide to
 * sample an event, not on every trial. When it decides not to sample, a call to
 * |FastBernoulliTrial::trial| is nothing but decrementing a counter and
 * comparing it to zero. So the lower your sampling probability is, the less
 * overhead FastBernoulliTrial imposes.
 *
 * Probabilities of 0 and 1 are handled efficiently. (In neither case need we
 * ever generate a random number at all.)
 *
 * The essential API:
 *
 * - FastBernoulliTrial(double P, uint64_t state0, uint64_t state1)
 *   Construct an instance that selects events with probability P. The two
 *   state values seed the random number generator.
 *
 * - FastBernoulliTrial::trial()
 *   Return true with probability P. Call this each time an event occurs, to
 *   decide whether to sample it or not.
 *
 * - FastBernoulliTrial::trial(size_t n)
 *   Equivalent to calling trial() |n| times, and returning true if any of those
 *   calls do. However, like trial, this runs in fast constant time.
 *
 * What is this good for? In some applications, some events are "bigger" than
 * others. For example, large allocations are more significant than small
 * allocations. Perhaps we'd like to imagine that we're drawing allocations
 * from a stream of bytes, and performing a separate Bernoulli trial on every
 * byte from the stream. We can accomplish this by calling |t.trial(S)| for
 * the number of bytes S, and sampling the event if that returns true.
 *
 * Of course, this style of sampling needs to be paired with analysis and
 * presentation that makes the size of the event apparent, lest trials with
 * large values for |n| appear to be indistinguishable from those with small
 * values for |n|.
 *
 * (A usage sketch follows this comment.)
 */
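
/*
 * Usage sketch (added for illustration; not part of the original header). The
 * seed constants are arbitrary nonzero values, and |onAllocation| and
 * |recordCallStack| are hypothetical names standing in for application code:
 *
 *   #include "mozilla/FastBernoulliTrial.h"
 *
 *   // Expect roughly one sample per megabyte allocated, by treating each
 *   // byte of an allocation as its own Bernoulli trial.
 *   static mozilla::FastBernoulliTrial sampler(1.0 / (1024 * 1024),
 *                                              0x853c49e6748fea9bULL,
 *                                              0xda3e39cb94b95bdbULL);
 *
 *   void onAllocation(size_t bytes) {
 *     if (sampler.trial(bytes)) {  // larger allocations are proportionally
 *       recordCallStack();         // more likely to be sampled
 *     }
 *   }
 *
 *   // For unweighted sampling of identical events, call trial() instead:
 *   //   if (sampler.trial()) { recordCallStack(); }
 */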

class FastBernoulliTrial {
  /*
   * This comment should just read, "Generate skip counts with a geometric
   * distribution", and leave everyone to go look that up and see why it's the
   * right thing to do, if they don't know already.
   *
   * BUT IF YOU'RE CURIOUS, COMMENTS ARE FREE...
   *
   * Instead of generating a fresh random number for every trial, we can
   * randomly generate a count of how many times we should return false before
   * the next time we return true. We call this a "skip count". Once we've
   * returned true, we generate a fresh skip count, and begin counting down
   * again.
   *
   * Here's an awesome fact: by exercising a little care in the way we generate
   * skip counts, we can produce results indistinguishable from those we would
   * get "rolling the dice" afresh for every trial.
   *
   * In short, skip counts in Bernoulli trials of probability P obey a geometric
   * distribution. If a random variable X is uniformly distributed over [0..1),
   * then std::floor(std::log(X) / std::log(1-P)) has the appropriate geometric
   * distribution for the skip counts.
   *
   * Why that formula?
   *
   * Suppose we're to return |true| with some probability P, say, 0.3. Spread
   * all possible futures along a line segment of length 1. In portion P of
   * those cases, we'll return true on the next call to |trial|; the skip count
   * is 0. For the remaining portion 1-P of cases, the skip count is 1 or more.
   *
   *      skip:          0                          1 or more
   *            |------------------^-----------------------------------------|
   *   portion:         0.3                            0.7
   *                     P                             1-P
   *
   * But the "1 or more" section of the line is subdivided the same way: *within
   * that section*, in portion P the second call to |trial()| returns true, and in
   * portion 1-P it returns false a second time; the skip count is two or more.
   * So we return true on the second call in proportion 0.7 * 0.3, and skip at
   * least the first two in proportion 0.7 * 0.7.
   *
   *      skip:          0              1                2 or more
   *            |------------------^------------^----------------------------|
   *   portion:         0.3          0.7 * 0.3           0.7 * 0.7
   *                     P            (1-P)*P             (1-P)^2
   *
   * We can continue to subdivide:
   *
   *   skip >= 0:  |------------------------------------------------- (1-P)^0 --|
   *   skip >= 1:  |                  ------------------------------- (1-P)^1 --|
   *   skip >= 2:  |                               ------------------ (1-P)^2 --|
   *   skip >= 3:  |                                  ^     ---------- (1-P)^3 --|
   *   skip >= 4:  |                                  .            --- (1-P)^4 --|
   *                                                  .
   *                                                  ^X, see below
   *
   * In other words, the likelihood of the next n calls to |trial| returning
   * false is (1-P)^n. The longer a run we require, the more the likelihood
   * drops. Further calls may return false too, but this is the probability
   * we'll skip at least n.
   *
   * This is interesting, because we can pick a point along this line segment
   * and see which skip count's range it falls within; the point X above, for
   * example, is within the ">= 2" range, but not within the ">= 3" range, so it
   * designates a skip count of 2. So if we pick points on the line at random
   * and use the skip counts they fall under, that will be indistinguishable
   * from generating a fresh random number between 0 and 1 for each trial and
   * comparing it to P.
   *
   * So to find the skip count for a point X, we must ask: To what whole power
   * must we raise 1-P such that we include X, but the next power would exclude
   * it? This is exactly std::floor(std::log(X) / std::log(1-P)).
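   *
   * (A concrete instance, added here for illustration: with P = 0.3 and a
   * random draw X = 0.1, std::log(0.1) / std::log(0.7) is roughly
   * -2.303 / -0.357, about 6.46, so std::floor gives a skip count of 6:
   * return false six times, then true on the seventh call. Checking against
   * the picture: 0.7^6 ~= 0.118 still includes 0.1, but 0.7^7 ~= 0.082
   * excludes it.)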
   *
   * Our algorithm is then, simply: When constructed, compute an initial skip
   * count. Return false from |trial| that many times, and then compute a new skip
   * count.
   *
   * For a call to |trial(n)|, if the skip count is greater than n, return false
   * and subtract n from the skip count. If the skip count is n or less,
   * return true and compute a new skip count. Since each trial is independent,
   * it doesn't matter by how much n overshoots the skip count; we can actually
   * compute a new skip count at *any* time without affecting the distribution.
   * This is really beautiful.
   */
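
  /*
   * A quick way to convince yourself of that claim (added for illustration;
   * this sketch is not part of the original header) is to simulate both
   * schemes over many events and compare the fraction sampled:
   *
   *   #include <cmath>
   *   #include <cstdio>
   *   #include <random>
   *
   *   int main() {
   *     const double P = 0.3;
   *     const long N = 10000000;
   *     std::mt19937_64 rng(42);
   *     std::uniform_real_distribution<double> uniform(0.0, 1.0);
   *     // Map a uniform draw in [0, 1) to a draw in (0, 1], so log() never
   *     // sees zero, then apply the formula above.
   *     auto nextSkip = [&] {
   *       return (long) std::floor(std::log(1.0 - uniform(rng)) /
   *                                std::log(1 - P));
   *     };
   *
   *     long perTrialHits = 0;           // fresh random number every event
   *     for (long i = 0; i < N; i++)
   *       perTrialHits += uniform(rng) < P;
   *
   *     long skipHits = 0;               // one random number per sample
   *     for (long i = 0, skip = nextSkip(); i < N; i++) {
   *       if (skip == 0) { skipHits++; skip = nextSkip(); }
   *       else skip--;
   *     }
   *
   *     std::printf("per-trial %.4f   skip-count %.4f   expected %.4f\n",
   *                 (double) perTrialHits / N, (double) skipHits / N, P);
   *   }
   *
   * Both fractions come out near P; matching the full distribution (not just
   * the mean) is what the argument above establishes.
   */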

 public:
  /**
   * Construct a fast Bernoulli trial generator. Calls to |trial()| return true
   * with probability |aProbability|. Use |aState0| and |aState1| to seed the
   * random number generator; they must not both be zero.
   */
  FastBernoulliTrial(double aProbability, uint64_t aState0, uint64_t aState1)
    : mProbability(0)
    , mInvLogNotProbability(0)
    , mGenerator(aState0, aState1)
    , mSkipCount(0)
  {
    setProbability(aProbability);
  }

  /**
   * Return true with probability |mProbability|. Call this each time an event
   * occurs, to decide whether to sample it or not. The lower |mProbability| is,
   * the faster this function runs.
   */
  bool trial() {
    if (mSkipCount) {
      mSkipCount--;
      return false;
    }

    return chooseSkipCount();
  }

  /**
   * Equivalent to calling trial() |n| times, and returning true if any of those
   * calls do. However, like trial, this runs in fast constant time.
   *
   * What is this good for? In some applications, some events are "bigger" than
   * others. For example, large allocations are more significant than small
   * allocations. Perhaps we'd like to imagine that we're drawing allocations
   * from a stream of bytes, and performing a separate Bernoulli trial on every
   * byte from the stream. We can accomplish this by calling |t.trial(S)| for
   * the number of bytes S, and sampling the event if that returns true.
   *
   * Of course, this style of sampling needs to be paired with analysis and
   * presentation that makes the "size" of the event apparent, lest trials with
   * large values for |n| appear to be indistinguishable from those with small
   * values for |n|, despite being potentially much more likely to be sampled.
   */
  bool trial(size_t aCount) {
    if (mSkipCount > aCount) {
      mSkipCount -= aCount;
      return false;
    }

    return chooseSkipCount();
  }

  void setRandomState(uint64_t aState0, uint64_t aState1) {
    mGenerator.setState(aState0, aState1);
  }

  void setProbability(double aProbability) {
    MOZ_ASSERT(0 <= aProbability && aProbability <= 1);
    mProbability = aProbability;
    if (0 < mProbability && mProbability < 1) {
      /*
       * Let's look carefully at how this calculation plays out in floating-
       * point arithmetic. We'll assume IEEE, but the final C++ code we arrive
       * at would still be fine if our numbers were mathematically perfect. So,
       * while we've considered IEEE's edge cases, we haven't done anything that
       * should be actively bad when using other representations.
       *
       * (In the below, read comparisons as exact mathematical comparisons: when
       * we say something "equals 1", that means it's exactly equal to 1. We
       * treat approximation using intervals with open boundaries: saying a
       * value is in (0,1) doesn't specify how close to 0 or 1 the value gets.
       * When we use closed boundaries like [2**-53, 1], we're careful to ensure
       * the boundary values are actually representable.)
       *
       * - After the comparison above, we know mProbability is in (0,1).
       *
       * - The gaps below 1 are 2**-53, so that interval is (0, 1-2**-53].
       *
       * - Because the floating-point gaps near 1 are wider than those near
       *   zero, there are many small positive doubles ε such that 1-ε rounds to
       *   exactly 1. However, 2**-53 can be represented exactly. So
       *   1-mProbability is in [2**-53, 1].
       *
       * - log(1 - mProbability) is thus in (-37, 0].
       *
       *   That range includes zero, but when we use mInvLogNotProbability, it
       *   would be helpful if we could trust that it's negative. So when log(1
       *   - mProbability) is 0, we'll just set mProbability to 0, so that
       *   mInvLogNotProbability is not used in chooseSkipCount.
       *
       * - How much of the range of mProbability does this cause us to ignore?
       *   The only value for which log returns 0 is exactly 1; the slope of log
       *   at 1 is 1, so for small ε such that 1 - ε != 1, log(1 - ε) is about
       *   -ε, never 0. The gaps near one are larger than the gaps near zero, so
       *   if 1 - ε wasn't 1, then -ε is representable. So if log(1 -
       *   mProbability) isn't 0, then 1 - mProbability isn't 1, which means
       *   that mProbability is at least 2**-53, as discussed earlier. This is a
       *   sampling likelihood of roughly one in nine quadrillion, which is
       *   unlikely to be distinguishable from zero in practice.
       *
       *   So by forbidding zero, we've tightened our range to (-37, -2**-53].
       *
       * - Finally, 1 / log(1 - mProbability) is in [-2**53, -1/37). This all
       *   falls readily within the range of an IEEE double.
       *
       * ALL THAT HAVING BEEN SAID: here are the five lines of actual code:
       */
      double logNotProbability = std::log(1 - mProbability);
      if (logNotProbability == 0.0)
        mProbability = 0.0;
      else
        mInvLogNotProbability = 1 / logNotProbability;
    }
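    // (Concrete illustration, not from the original header: a probability
    // like 1e-17 is below 2**-53, so 1 - 1e-17 rounds to exactly 1.0,
    // logNotProbability is 0, and the branch above clamps mProbability to 0,
    // just as the analysis anticipates.)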
    chooseSkipCount();
  }

 private:
  /* The likelihood that any given call to |trial| should return true. */
  double mProbability;

  /*
   * The value of 1/std::log(1 - mProbability), cached for repeated use.
   *
   * If mProbability is exactly 0 or exactly 1, we don't use this value.
   * Otherwise, we guarantee this value is in the range [-2**53, -1/37), i.e.
   * definitely negative, as required by chooseSkipCount. See setProbability for
   * the details.
   */
  double mInvLogNotProbability;

  /* Our random number generator. */
  non_crypto::XorShift128PlusRNG mGenerator;

  /* The number of times |trial| should return false before next returning true. */
  size_t mSkipCount;

  /*
   * Choose the next skip count. This also returns the value that |trial| should
   * return, since we have to check for the extreme values for mProbability
   * anyway, and |trial| should never return true at all when mProbability is 0.
   */
  bool chooseSkipCount() {
    /*
     * If the probability is 1.0, every call to |trial| returns true. Make sure
     * mSkipCount is 0.
     */
    if (mProbability == 1.0) {
      mSkipCount = 0;
      return true;
    }

    /*
     * If the probability is zero, |trial| never returns true. Don't bother us
     * for a while.
     */
    if (mProbability == 0.0) {
      mSkipCount = SIZE_MAX;
      return false;
    }

    /*
     * What sorts of values can this call to std::floor produce?
     *
     * Since mGenerator.nextDouble returns a value in [0, 1-2**-53], std::log
     * returns a value in the range [-infinity, -2**-53], all negative. Since
     * mInvLogNotProbability is negative (see its comments), the product is
     * positive and possibly infinite. std::floor returns +infinity unchanged.
     * So the result will always be positive.
     *
     * Converting a double to an integer that is out of range for that integer
     * is undefined behavior, so we must clamp our result to SIZE_MAX, to ensure
     * we get an acceptable value for mSkipCount.
     *
     * The clamp is written carefully. Note that if we had said:
     *
     *    if (skipCount > SIZE_MAX)
     *      skipCount = SIZE_MAX;
     *
     * that leads to undefined behavior on 64-bit machines: SIZE_MAX coerced to
     * double is 2^64, not 2^64-1, so this doesn't actually set skipCount to a
     * value that can be safely assigned to mSkipCount.
     *
     * Jakub Oleson cleverly suggested flipping the sense of the comparison: if
     * we require that skipCount < SIZE_MAX, then because of the gaps (2048)
     * between doubles at that magnitude, the highest double less than 2^64 is
     * 2^64 - 2048, which is fine to store in a size_t.
     *
     * (On 32-bit machines, all size_t values can be represented exactly in
     * double, so all is well.)
     */
    double skipCount = std::floor(std::log(mGenerator.nextDouble())
                                  * mInvLogNotProbability);
    if (skipCount < SIZE_MAX)
      mSkipCount = skipCount;
    else
      mSkipCount = SIZE_MAX;

    return true;
  }
};

} /* namespace mozilla */

#endif /* mozilla_FastBernoulliTrial_h */