Parallel committing

If you were to use transactional memory in a project, you would likely also need some kind of backing, persistent storage for the data held by it. An STM can be used in memory only, coordinating access to some temporary shared data, but adding the ability to synchronize it with a backing storage makes it much more useful. This is what the new version of Shielded is about.

The new development was actually inspired by the search for the simplest way to implement data distribution in Shielded – connecting multiple servers into a cluster and running distributed in-memory transactions, something I have been asked about a couple of times. From there, it is only natural to also think about connecting to an external database, whether SQL, NoSQL, or something like Redis. It seemed best to add a general mechanism allowing different kinds of distribution implementations to be plugged in easily, and this distilled further down to one crucial feature – plugging arbitrary code into the commit process.

The Shield static class now has new methods, called WhenCommitting, which enable just that. You can subscribe a method to be called from within the commit process. The method gets called after a commit is checked and allowed to proceed, but before any write actually occurs. During this time, the individual shielded fields are locked. Whatever your method does, you are guaranteed that other transactions which depend on the outcome of yours are waiting for you to complete. You can safely make changes in an external database, or publish changes to other servers – whatever you wish. You may also roll the current transaction back, causing a retry, and you may make further changes to the involved fields, but only to those which were already changed in the main transaction (since only they were checked and are allowed to commit).
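To make this concrete, here is a minimal sketch of the idea. WhenCommitting is the real method name, but the exact delegate signature is an assumption here (check the library's documentation), and SaveToDatabase is a hypothetical helper standing in for whatever external write you need:

```csharp
using System;
using Shielded;

static class Persistence
{
    static readonly Shielded<int> Balance = new Shielded<int>();

    static void Subscribe()
    {
        // Assumed signature: the subscription receives the fields involved
        // in the commit. Called after the commit check passes, while the
        // fields are locked, but before the in-memory writes become
        // visible to other threads.
        Shield.WhenCommitting(fields =>
        {
            foreach (var field in fields)
                SaveToDatabase(field);   // hypothetical persistence helper
            // Throwing here would abort the commit; per the post, rolling
            // the transaction back from this point causes a retry.
        });
    }

    static void SaveToDatabase(object field)
    {
        // Write the new value to SQL/NoSQL/Redis here. If this fails,
        // let the exception propagate to roll the transaction back.
    }
}
```

Because the fields stay locked until the subscription returns, a slow external write directly delays every conflicting transaction – which is exactly why the waiting strategy described below had to change.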

Previously, one would have to use Conditional subscriptions to achieve something like this. The conditional could then use the SideEffect method to execute external changes. But between the transaction, the conditional it triggers, and the side effect, other threads can jump in, see the new changes, and even make further ones. Although a lot can still be done this way, it lacks the simplicity and the strong ordering guarantees provided to WhenCommitting subscriptions. As part of the commit process, you are guaranteed that your external changes execute in exactly the same order as the commits to the shielded fields themselves. No additional synchronization is needed.
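For contrast, the older pattern looked roughly like this. Conditional and SideEffect are real Shielded methods, but the overloads shown are assumptions, and SaveToDatabase is again a hypothetical helper:

```csharp
using System;
using Shielded;

static class OldStylePersistence
{
    static readonly Shielded<int> Balance = new Shielded<int>();

    static void Subscribe()
    {
        // The conditional re-runs in its own transaction whenever a field
        // it read changes (assumed overload shown)...
        Shield.Conditional(() => true, () =>
        {
            var snapshot = Balance.Value;
            // ...and the side effect runs only after the conditional's own
            // transaction commits. In the gap between the original commit
            // and this point, other threads may already have seen the new
            // value, or changed it again.
            Shield.SideEffect(() => SaveToDatabase(snapshot));
        });
    }

    static void SaveToDatabase(int value) { /* external write */ }
}
```

The gap between the commit and the side effect is precisely what WhenCommitting closes: the external write happens while the fields are still locked, so nothing can interleave.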

Of course, executing arbitrary code during a commit is a dangerous game. These methods could easily take a long time to complete, keeping some shielded fields locked. Up to this point, Shielded by default spin-waited while fields were locked in a commit. That is not viable any more. The behavior could be changed with a compiler symbol, SERVER, to use Monitor.Wait and PulseAll to wait for the fields to unlock, but this is too expensive when not needed. So an important change was made under the hood: the new StampLocker class implements an adaptive locker, which spin-waits first and, after a couple of yields, switches to Monitor.Wait/PulseAll. With quick, in-memory transactions it will mostly spin-wait, giving better performance, but a longer wait will cause it to switch to Monitor waiting, which avoids wasting CPU time. In tests which involve thread sleeps during a commit, Monitor waiting also increases throughput.
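The adaptive-waiting idea can be sketched in plain C#. This is only an illustration of the pattern, not Shielded's actual StampLocker: spin and yield for a bounded number of rounds, then fall back to blocking on a monitor, with the releaser pulsing any blocked waiters:

```csharp
using System;
using System.Threading;

// Illustration of adaptive waiting (not Shielded's actual StampLocker):
// spin briefly first, then fall back to Monitor.Wait/PulseAll.
class AdaptiveLocker
{
    readonly object _sync = new object();
    int _locked;   // 0 = free, 1 = held

    public void Enter()
    {
        var spin = new SpinWait();
        // Phase 1: optimistic spinning – cheap when commits are short
        // and purely in-memory.
        for (int i = 0; i < 20; i++)
        {
            if (Interlocked.CompareExchange(ref _locked, 1, 0) == 0)
                return;
            spin.SpinOnce();
        }
        // Phase 2: blocking wait – saves CPU when the holder is slow,
        // e.g. talking to an external database during its commit.
        lock (_sync)
        {
            while (Interlocked.CompareExchange(ref _locked, 1, 0) != 0)
                Monitor.Wait(_sync);
        }
    }

    public void Exit()
    {
        Interlocked.Exchange(ref _locked, 0);
        lock (_sync)
            Monitor.PulseAll(_sync);   // wake any blocked waiters
    }
}
```

Note that the waiter's final check and its Monitor.Wait happen under the same monitor that the releaser pulses under, so a wakeup cannot be lost. A production version would also avoid taking the monitor on release when no one is blocked; this sketch favors clarity over that optimization.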

I believe this could be the most significant feature in Shielded. The library can now interoperate with arbitrary other systems, greatly expanding its possible uses. It could do a great job as a wrapping layer around a database – a kind of active caching – allowing faster execution of changes to the data, and particularly faster reads. But with a simple, general mechanism like this, there are many possibilities: for example, combining Shielded with a distributed consensus implementation to achieve data distribution, or even combining both approaches.

I hope you will find the library more useful now. If anyone does anything interesting with it, and is willing to share, please do!

2 thoughts on “Parallel committing”

  1. Commenter

    What is the advantage of using an externally backed STM implementation over just a normal external transactional (and possibly in memory) database?

    1. Josip Bakić Post author

      Good question. I think there are a couple of points to consider:
      – Pure read operations can, if everything they need is already in-memory (and in the same process…), be faster. Shielded is very non-obstructive to readers, thanks to MVCC.
– Shielded basically does some of the things you would have to do yourself if you wrote your own cache. You would at least have to make safety copies. And ensuring that readers get a consistent view of the whole cache (if that is needed) could be pretty hard without resorting to locking and all the well-known problems it entails.
      – You do not need to implement backing storage for all your shielded field types, yet you will still have the ability to do safe transactions over both kinds of fields seamlessly. (E.g. the state of some communication channel to a client can be ephemeral, while the data you publish over it will be persistent. And some operations may easily require a transaction spanning both.)
      – Writer conflicts may get detected in memory, before communicating with an external system. Generally, communication load with the external system can be reduced.
      – Various DBs have various consistency guarantees. You can use an STM to supplement them when needed. Lower consistency DBs typically scale better, so not dumping the full burden of consistency checking on the DB could be helpful.

      It will all depend, of course, on how exactly you implement the synchronizing with external storage. When the external DB is strongly consistent, then it will be safeguarding against conflicts anyway, that is true. But STM provides a conceptual simplicity and safety which make it very easy to work with, particularly as the complexity of the system increases. Even if it were used just as a cache.

And, of course, the mechanism is general, so it can support many different use cases, or a combination of multiple approaches.
