Abstract:
The process of designing and implementing correct concurrent data structures is non-trivial and often error-prone. The recent commercial availability of Non-Volatile Memory (NVM) has prompted many researchers to also consider designing concurrent data structures that persist shared state, allowing the data structure to be recovered following a power failure. These persistent concurrent data structures further complicate the process of achieving correct and efficient implementations.
Much of the existing research on persistent concurrent data structures has focused on hand-crafted approaches, which often involve persisting an existing volatile concurrent data structure. Unfortunately, designing efficient and correct hand-crafted persistent data structures is difficult, time-consuming, and error-prone. Much of the challenge stems from the fact that it is already difficult to achieve correct volatile concurrent data structures.
Alternatively, concurrency can be achieved through the use of transactional memory (TM) and universal constructions (UCs), which produce a concurrent object given a sequential object.
TMs are synchronization mechanisms that allow users to execute sequences of memory accesses as atomic transactions. Similarly, a persistent TM (PTM) provides the same functionality but also ensures that the effects of transactions are written back to NVM.
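As a rough illustration (not the algorithms presented in the talk), the sketch below models a transaction as a critical section over a sequence of memory accesses, together with the explicit write-back step a PTM adds so the transaction's effects reach NVM before it is considered committed. The lock-based synchronization, the persist helper, and the x86 flush intrinsics are all illustrative assumptions.

    #include <cstdint>
    #include <mutex>
    #include <immintrin.h>   // _mm_clflush, _mm_sfence (x86-specific)

    std::mutex tx_mutex;     // stands in for the TM's concurrency control

    // Write the cache line containing addr back toward memory and order the flush.
    void persist(const void* addr) {
        _mm_clflush(addr);
        _mm_sfence();
    }

    // Transfer between two accounts as one atomic transaction; a PTM would also
    // make the effects durable before the transaction is considered committed.
    void tx_transfer(int64_t& from, int64_t& to, int64_t amount) {
        std::lock_guard<std::mutex> tx(tx_mutex);  // begin transaction
        from -= amount;                            // sequence of memory accesses
        to   += amount;
        persist(&from);                            // write-back to NVM (PTM step)
        persist(&to);
    }                                              // commit on unlock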
Analogously, persistent universal constructions (PUCs), beyond producing a concurrent object from a sequential object, guarantee that the object can be recovered following a crash.
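In the same illustrative spirit, the following sketch wraps an arbitrary sequential object behind a single lock to obtain a concurrent object, capturing the interface of a universal construction (though not the wait-free constructions the talk considers). A persistent construction would additionally log or flush each applied operation to NVM so the object can be rebuilt after a crash; that step is indicated only by a comment. The Concurrent wrapper and its apply method are hypothetical names, not an API from the talk.

    #include <mutex>
    #include <utility>

    template <typename Seq>
    class Concurrent {
        Seq seq_;              // the underlying sequential object
        std::mutex lock_;      // serializes every operation applied to it
    public:
        template <typename Op>
        auto apply(Op&& op) {  // op is any callable taking Seq&
            std::lock_guard<std::mutex> g(lock_);
            // A persistent construction would log this operation (or flush the
            // resulting state) to NVM here so the object survives a crash.
            return std::forward<Op>(op)(seq_);
        }
    };

    // Example: turn a sequential counter into a concurrent one.
    // Concurrent<long> c;  c.apply([](long& x) { return ++x; });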
This talk will present algorithms and complexity bounds for PTMs and PUCs that support the programmability of persistent concurrent applications. Our results provide insights into the trade-offs among correctness and durability properties, the cost of recovering the application state following a crash, and the actual performance on multi-socket CPU architectures with persistent memory.
Bio:
Srivatsan Ravi is a faculty member in the Department of Computer Science and a research lead at the Information Sciences Institute at the University of Southern California. His research interests are centered around the theory and practice of concurrent and distributed programming, as well as the application of distributed techniques to networking and privacy-preserving federated computations. He received his bachelor's degree from Anna University (India), his master's degree from Cornell University (New York), and his doctoral degree from the Technical University of Berlin (Germany), where he was a Marie Curie Ph.D. Fellow. He has served as Principal Investigator and Key Personnel on several DARPA and NSF programs.