Publications
On the cost of concurrency in transactional memory
Abstract
Traditional techniques for synchronization are based on \emph{locking}, which provides threads with exclusive access to shared data. \emph{Coarse-grained} locking typically forces threads to access large amounts of data sequentially and, thus, does not fully exploit hardware concurrency. Program-specific \emph{fine-grained} locking or \emph{non-blocking} (\emph{i.e.}, not using locks) synchronization, on the other hand, is a dark art to most programmers and is trusted to the wisdom of a few computing experts. Thus, it is appealing to seek a middle ground between these two extremes: a synchronization mechanism that relieves the programmer of the burden of reasoning about data conflicts that may arise from concurrent operations, without severely limiting the program's performance. The \emph{Transactional Memory (TM)} abstraction is proposed as such a mechanism: it is intended to combine an easy-to-use programming interface with efficient utilization of the concurrent-computing abilities provided by multicore architectures. TM allows the programmer to \emph{speculatively} execute sequences of shared-memory operations as \emph{atomic transactions} with \emph{all-or-nothing} semantics: the transaction can either \emph{commit}, in which case it appears to have executed sequentially, or \emph{abort}, in which case its update operations do not take effect. The programmer can thus design software with only sequential semantics in mind and let TM take care, at run time, of resolving conflicts in concurrent executions. Intuitively, we want TMs to allow for as much \emph{concurrency} as possible: in the absence of severe data conflicts …
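As a rough illustration of the all-or-nothing semantics described in the abstract, the following is a minimal sketch of a TM-style programming interface in Java. The names `TxRef`, `Transaction`, and the `atomic` retry helper, as well as the single global commit lock and version-based validation, are hypothetical simplifications for exposition; they are not the constructions studied in the paper.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical, deliberately simplified TM sketch: reads and writes are buffered
// speculatively inside a transaction; commit validates that no location read has
// changed and then publishes all buffered writes, otherwise the transaction aborts
// and none of its updates take effect (all-or-nothing semantics).
final class TxRef<T> {
    volatile T value;
    volatile long version = 0;           // bumped on every committed update
    TxRef(T initial) { value = initial; }
}

final class AbortException extends RuntimeException {}

final class Transaction {
    private static final Object COMMIT_LOCK = new Object();      // single global commit lock (sketch only)
    private final Map<TxRef<?>, Long> readSet = new HashMap<>();  // ref -> version observed
    private final Map<TxRef<?>, Object> writeSet = new HashMap<>(); // ref -> buffered new value

    @SuppressWarnings("unchecked")
    <T> T read(TxRef<T> ref) {
        if (writeSet.containsKey(ref)) return (T) writeSet.get(ref); // read own buffered write
        readSet.putIfAbsent(ref, ref.version);                       // record version before reading value
        return ref.value;
    }

    <T> void write(TxRef<T> ref, T v) { writeSet.put(ref, v); }      // defer the update until commit

    @SuppressWarnings("unchecked")
    void commit() {
        synchronized (COMMIT_LOCK) {
            // Validate: abort if any location we read has been modified since.
            for (Map.Entry<TxRef<?>, Long> e : readSet.entrySet())
                if (e.getKey().version != e.getValue()) throw new AbortException();
            // Publish buffered writes; only now do the updates take effect.
            for (Map.Entry<TxRef<?>, Object> e : writeSet.entrySet()) {
                TxRef<Object> r = (TxRef<Object>) e.getKey();
                r.value = e.getValue();
                r.version++;
            }
        }
    }
}

public class TMDemo {
    // Retry loop: the programmer writes sequential-looking code; conflicts abort and re-run it.
    static void atomic(java.util.function.Consumer<Transaction> body) {
        while (true) {
            Transaction tx = new Transaction();
            try { body.accept(tx); tx.commit(); return; }
            catch (AbortException e) { /* data conflict detected: retry the transaction */ }
        }
    }

    public static void main(String[] args) {
        TxRef<Integer> a = new TxRef<>(100), b = new TxRef<>(0);
        atomic(tx -> {                       // transfer 10 from a to b, atomically
            int x = tx.read(a);
            tx.write(a, x - 10);
            tx.write(b, tx.read(b) + 10);
        });
        System.out.println(a.value + " " + b.value); // prints "90 10"
    }
}
```

The sketch uses deferred updates: writes are buffered in a write set and only become visible at commit, so an aborted transaction leaves shared state untouched. The single global commit lock keeps the example short but serializes commits; real TM designs aim to avoid exactly this kind of bottleneck so that non-conflicting transactions can proceed concurrently.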
- Date: November 5, 2015
- Authors: Srivatsan Ravi
- Journal: arXiv preprint arXiv:1511.01779