
Reference-count design for elements of lists/arrays protected by RCU.

Reference counting on elements of lists which are protected by traditional
reader/writer spinlocks or semaphores is straightforward:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            read_lock(&list_lock);
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 atomic_inc(&el->rc);
    write_lock(&list_lock);                 ...
    add_element                             read_unlock(&list_lock);
    ...                                     ...
    write_unlock(&list_lock);           }
}

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     write_lock(&list_lock);
    atomic_dec(&el->rc, relfunc)            ...
    ...                                     delete_element
}                                           write_unlock(&list_lock);
                                            ...
                                            if (atomic_dec_and_test(&el->rc))
                                                kfree(el);
                                            ...
                                        }
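
Fleshed out as real code, the four routines above might look roughly as
follows.  This is only an illustrative sketch: the element type
("struct el"), its "key" field, and the helper names (el_add(), el_get(),
el_put(), el_del()) are invented here, and the pseudocode's
atomic_dec(&el->rc, relfunc) is rendered as atomic_dec_and_test()
followed by kfree().

#include <linux/atomic.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct el {
	struct list_head list;
	int key;
	atomic_t rc;		/* reference count */
};

static LIST_HEAD(el_list);
static DEFINE_RWLOCK(list_lock);

/* 1. add(): initialize the count to 1 and insert under the write lock. */
static int el_add(int key)
{
	struct el *p = kmalloc(sizeof(*p), GFP_KERNEL);

	if (!p)
		return -ENOMEM;
	p->key = key;
	atomic_set(&p->rc, 1);
	write_lock(&list_lock);
	list_add(&p->list, &el_list);
	write_unlock(&list_lock);
	return 0;
}

/* 2. search_and_reference(): take the reference while still read-locked. */
static struct el *el_get(int key)
{
	struct el *p;

	read_lock(&list_lock);
	list_for_each_entry(p, &el_list, list) {
		if (p->key == key) {
			atomic_inc(&p->rc);
			read_unlock(&list_lock);
			return p;
		}
	}
	read_unlock(&list_lock);
	return NULL;
}

/* 3. release_referenced(): drop the reference, free on the last put. */
static void el_put(struct el *p)
{
	if (atomic_dec_and_test(&p->rc))
		kfree(p);
}

/* 4. delete(): unlink under the write lock, then drop the list's reference. */
static void el_del(struct el *p)
{
	write_lock(&list_lock);
	list_del(&p->list);
	write_unlock(&list_lock);
	if (atomic_dec_and_test(&p->rc))
		kfree(p);
}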

If this list/array is made lock-free using RCU, by changing the
write_lock() in add() and delete() to spin_lock() and changing the
read_lock() in search_and_reference() to rcu_read_lock(), the
atomic_inc() in search_and_reference() could potentially take a
reference to an element that has already been deleted from the
list/array.  Use atomic_inc_not_zero() in this scenario as follows:

1.                                      2.
add()                                   search_and_reference()
{                                       {
    alloc_object                            rcu_read_lock();
    ...                                     search_for_element
    atomic_set(&el->rc, 1);                 if (!atomic_inc_not_zero(&el->rc)) {
    spin_lock(&list_lock);                      rcu_read_unlock();
                                                return FAIL;
    add_element                             }
    ...                                     ...
    spin_unlock(&list_lock);                rcu_read_unlock();
}                                       }

3.                                      4.
release_referenced()                    delete()
{                                       {
    ...                                     spin_lock(&list_lock);
    if (atomic_dec_and_test(&el->rc))       ...
        call_rcu(&el->head, el_free);       delete_element
    ...                                     spin_unlock(&list_lock);
}                                           ...
                                            if (atomic_dec_and_test(&el->rc))
                                                call_rcu(&el->head, el_free);
                                            ...
                                        }
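
A corresponding sketch of the RCU-protected variant is shown below, again
using the invented "struct el" and helper names from the earlier sketch.
The element now carries an rcu_head so that the free can be deferred
through call_rcu(); add() changes only in using spin_lock() and
list_add_rcu(), so it is not repeated.

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct el {
	struct list_head list;
	struct rcu_head head;
	int key;
	atomic_t rc;
};

static LIST_HEAD(el_list);
static DEFINE_SPINLOCK(list_lock);

/* call_rcu() callback: runs after all pre-existing readers are done. */
static void el_free(struct rcu_head *rh)
{
	kfree(container_of(rh, struct el, head));
}

/* 2. search_and_reference(): fail if the element is already being freed. */
static struct el *el_get(int key)
{
	struct el *p;

	rcu_read_lock();
	list_for_each_entry_rcu(p, &el_list, list) {
		if (p->key == key) {
			/* rc may already be zero if delete() won the race. */
			if (!atomic_inc_not_zero(&p->rc))
				break;
			rcu_read_unlock();
			return p;
		}
	}
	rcu_read_unlock();
	return NULL;
}

/* 3. release_referenced(): last put defers the free past a grace period. */
static void el_put(struct el *p)
{
	if (atomic_dec_and_test(&p->rc))
		call_rcu(&p->head, el_free);
}

/* 4. delete(): unlink under the update-side lock, then drop the
 *    list's reference. */
static void el_del(struct el *p)
{
	spin_lock(&list_lock);
	list_del_rcu(&p->list);
	spin_unlock(&list_lock);
	if (atomic_dec_and_test(&p->rc))
		call_rcu(&p->head, el_free);
}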

Sometimes, a reference to the element needs to be obtained in the
update (write) stream.  In such cases, atomic_inc_not_zero() might be
overkill, since we hold the update-side spinlock.  One might instead
use atomic_inc() in such cases.
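
For example, a lookup performed in the update stream, with list_lock
already held, can take its reference with a plain atomic_inc(): delete()
needs the same lock, so it cannot unlink the element or drop the list's
reference concurrently, and the count therefore cannot reach zero under
us.  The helper below is again purely illustrative and reuses the names
from the sketches above.

/* Caller holds list_lock, which excludes delete(). */
static struct el *el_get_locked(int key)
{
	struct el *p;

	lockdep_assert_held(&list_lock);
	list_for_each_entry(p, &el_list, list) {
		if (p->key == key) {
			atomic_inc(&p->rc);	/* cannot race with delete() */
			return p;
		}
	}
	return NULL;
}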