Benchmarking an STM

While recently discussing a project we’re working on with a colleague, I mentioned how it might be a good fit for an STM. I actually think it is a perfect fit. Working on that project is what spurred my interest in the STM concept itself and inspired me to write Shielded. However, the colleague disagreed, and made a remark which struck me as significant. He’s a talented guy, very pragmatic, with a keen gift for smelling problems way before most programmers do. His Titanic would see that fatal iceberg even before leaving Southampton. So, the remark?

“If we use it, every access to a field has to go through it.”

Now, granted, this is not big news. The fact that STM slows things down is well known and is probably what makes the concept unappealing to a lot of programmers. But what particularly struck me is the phrase “every access”. A complex system is sure to make a lot of reads and writes within one transaction, many of them hitting the same fields. The cost of the first access in an STM should compare well with the cost of taking an ordinary lock, because an STM is roughly equivalent to perfectly granular locking. (And consider how difficult it is to do perfectly granular locking…) But in locking solutions, any additional access to a field costs practically nothing.

So, I decided to measure how much it costs to access a field for the first time, and how much each further access costs, within the same transaction. The test I wrote can be seen here. (Please forgive the mess – the ConsoleTests project is a quick test bed, and I don’t bother much with keeping it tidy.) Here are the cost calculation results:

cost of empty transaction = 0.890 us
cost of the closure in InTransaction<T> = 0.270 us
cost of the first read = 0.920 us
cost of an additional read = 0.100 us
cost of the first Modify = 5.470 us
cost of an additional Modify = 0.084 us
cost of a Read after Modify = 0.068 us
cost of the first Assign = 8.370 us
cost of an additional Assign = 0.648 us
cost of the first degenerated Assign = 5.280 us
cost of an additional degenerated Assign = 0.563 us

All times are in microseconds. Due to variation between test runs, only the first significant digit should be considered reliable. The test was executed on my laptop – an i5-2430M processor ticking at 2.4 GHz, with 4 GB RAM, running Ubuntu 13.10 (64 bit) and Mono 2.10.8.1.

The empty transaction is a simple measure – just a NOP delegate passed into the InTransaction method. The “cost of the closure in InTransaction<T>” refers to a handy overload of the same method which takes a Func<T> instead of an Action, runs it transactionally and returns its final result. To do this, it has to create a closure, which it passes to the non-generic InTransaction, and the measure indicates the additional cost of creating that closure. I’m not very happy with that score, but there’s nothing I can do about it – it is simply how closures are implemented in Mono. On the plus side, it means some of the key Shielded operations are of the same order of magnitude as allocating a simple closure.
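To make the closure cost concrete, here is a minimal sketch of how such a Func<T> overload can wrap the non-generic, Action-based method. This is not the actual Shielded source, and the ShieldSketch name is made up, but the captured result variable is exactly the kind of allocation being measured:

using System;

// A sketch only – not the actual Shielded source. It shows how a Func<T>
// overload can delegate to a non-generic, Action-based InTransaction by
// capturing the result in a closure.
static class ShieldSketch
{
    public static void InTransaction(Action act)
    {
        // Placeholder: the real method opens a transaction, runs the
        // delegate and retries it on conflict.
        act();
    }

    public static T InTransaction<T>(Func<T> act)
    {
        T result = default(T);
        // This lambda closes over 'result' (and 'act'), so a closure
        // object must be allocated on every call.
        InTransaction(() => { result = act(); });
        return result;
    }
}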

Further results indicate how much it costs to perform certain operations for the first time, and for every additional time, within a transaction. Note that this is not just the running time of the particular method call – adding a field to a transaction’s footprint means that some pre-commit checking will now also be done, depending on the operation.

The cost of an additional access is roughly an order of magnitude smaller than the cost of the first access. After a Modify call it is even better. This might surprise you, but bear in mind that a write prepares a field’s thread-local storage. This is probably the most expensive part of it, but any future read or write is then executed directly against that local storage, making it faster.
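For reference, measuring the read costs could look roughly like the sketch below. This is not the ConsoleTests code; it assumes Shield.InTransaction, the Shielded<int> type and its Read property as this post names them, and the timing loop is deliberately simplistic:

using System;
using System.Diagnostics;
using Shielded;

static class ReadCostSketch
{
    static readonly Shielded<int> _x = new Shielded<int>();

    static void Main()
    {
        const int transactions = 100000;
        const int readsPerTransaction = 100;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < transactions; i++)
            Shield.InTransaction(() =>
            {
                int sum = 0;
                // The first read pays for registering the field in the
                // transaction; every additional read hits state that is
                // already prepared.
                for (int j = 0; j < readsPerTransaction; j++)
                    sum += _x.Read;
            });
        sw.Stop();

        // Subtracting the empty-transaction time and dividing by the number
        // of reads gives a rough per-read estimate.
        Console.WriteLine("avg per transaction: {0:0.000} us",
            sw.Elapsed.TotalMilliseconds * 1000.0 / transactions);
    }
}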

Overall, the results seem OK. One thing, however, completely sticks out, and that’s the performance of Assign. As a commutable operation, introducing it into a transaction engages the commute sub-transaction mechanism, and the hit this introduces is big. Note also that, since commutes involve delegates and the creation of closures (specifically, at least two closures get allocated), additional Assign calls remain slow compared to additional Modify calls, whose delegates in the test were not closing over any local variables – an Assign cannot go below the cost of allocating a closure. When degenerated (when the field was read in the transaction, which disables commuting over it), it executes directly, and its first-call performance is roughly equal to Modify, as would be expected. But additional calls again remain too slow – again, most likely, due to the closures.

To address this, Assign will be made non-commutable, an ordinary op, which should make it faster than Modify. The current method will remain, only bearing a different name, for use when commutability is actually needed. Commutables are very useful when they don’t degenerate, since the price of a needless conflict is far bigger than these microseconds here.

Let me know in the comments what you think about these results, how you feel about the costs, and if you see something here that I have missed. Would you consider using Shielded?

Why I still like obstruction freedom

Recently I tried changing Shielded to use encounter-time locking. As opposed to the current implementation, which employs commit-time locking, this new version would lock an object on every write attempt. This would certainly speed it up, I figured. No IShielded would have to keep thread-local storage for its temporary data – if a thread locks a field, it can write directly into some private members*. Commit-time checking gets simpler too, knowing that every changed field must already be locked. Sounds great.

The first things to break were some tests, which would create and wait for a thread deliberately in conflict with them. After the change to encounter-time locking, this produced a deadlock. This is, of course, not a problem, since no thread should do that 😉

But, one thing would not fit into this concept well – the commutes. For as long as a thread keeps a field locked, no other thread can do anything with it, not even a commute. This seriously undermines the concept. The results of the tests might have indicated this, with performance becoming less smooth, the speed varying more during a longer test. (Shielded is full of commutable ops, most notably on Count fields of the included collections.)

Locking a field to make sure a longer-running transaction succeeds is easy to do. A Changed event handler which throws (or, I don’t know, Monitor.Wait-s) for all but one thread would do the trick just fine. The encounter-time version was, basically, doing this to EVERY field, while producing very little gain otherwise.

Ennals argues that obstruction freedom is an unnecessary requirement for an STM, and that an STM could be made faster by dropping it. Although much of the argument is true, and an encounter-time solution would be faster, the approach the paper advocates creates a problem of its own. It requires that the value of a field be available without any indirection, in order to reduce cache misses. This makes MVCC impossible, which in turn means that reader progress cannot be guaranteed. A thread which only reads some data will, in the commit-time locking version, proceed without even trying to enter the global lock. With encounter-time locking and the no-indirection requirement, it must block on conflict with a writer, and then be restarted!

Another claim in the paper seems almost ridiculous. It says that obstruction freedom means we are unable to control the number of concurrently executing transactions, and must allow all N of them to execute in parallel. But in any practical setting, nothing about being obstruction-free prevents you from counting and limiting the number of parallel transactions – you are free to start and run them at whatever pace you please. (See Queue.cs for a simple example.)
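For illustration – and this is just a generic .NET sketch, not anything Shielded provides – bounding the number of parallel transactions is as simple as putting an ordinary SemaphoreSlim around them:

using System;
using System.Threading;

// A hypothetical throttle: obstruction freedom says nothing about how many
// transactions you choose to start, so an ordinary semaphore can cap them.
class TransactionThrottle
{
    private readonly SemaphoreSlim _slots;

    public TransactionThrottle(int maxParallel)
    {
        _slots = new SemaphoreSlim(maxParallel, maxParallel);
    }

    public void Run(Action transactionalWork)
    {
        _slots.Wait();
        try
        {
            // e.g. Shield.InTransaction(transactionalWork);
            transactionalWork();
        }
        finally
        {
            _slots.Release();
        }
    }
}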

All in all, commutes and reader progress are the reasons why Shielded still employs commit-time locking, and is obstruction-free during a transaction run. If needed, locking and parallelism control can easily be added to ensure proper prioritization.

* Something similar was implemented in LocalStorage recently. If there’s only one thread changing a field, it will use a private member for storage.

The Importance of Commutables

While developing the first more complicated tests for Shielded, at a time when the library had just the basics of transactional protection covered, I realized the importance of commutable operations for concurrency, even when not using an STM.

One of the most basic things a test needs – a counter of processed items – was automatically a horrible bottleneck. All the parallel transactions would read the current value, increment it, and store the result. But only one can succeed and write into the counter! All of them were in conflict with each other, fighting over that one field. The end result was that transactions would effectively execute serially. For an STM this meant a lot of useless repetitions, transactions working, then trying to make their commit, and then all but one of them failing and starting over. Not cool.
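In Shielded terms the naive version looks roughly like this (a sketch, assuming Shield.InTransaction and a Shielded<int> with the Read and Assign members named in these posts). Every transaction both reads and writes the same field, so they all collide:

using System;
using Shielded;

static class NaiveCounter
{
    static readonly Shielded<int> _processed = new Shielded<int>();

    public static void ProcessOne(Action payload)
    {
        Shield.InTransaction(() =>
        {
            payload();
            // Read-increment-write on a shared field: only one of the
            // concurrent transactions can commit, the rest must retry.
            _processed.Assign(_processed.Read + 1);
        });
    }
}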

When using locks in a situation like this, the simplest approach is to wrap the counter increment in the same lock block as the rest of your work – but a global counter needs a global lock, so you again end up with purely serial code. Nobody (?) does that. Instead, you typically keep a more granular lock for the “payload” part of the job, and take a global counter lock only while incrementing. Or, if you know what you’re doing, you do this:

lock (itemLock)
{
    // ... payload here ...
    Interlocked.Increment(ref _counter);
}

This works great. You leave your mark on the counter, regardless of how much competition you have. Perhaps you never noticed, but this works only because the individual increments are commutable – they can be freely reordered in time and still have the same net effect. With the introduction of a commutable, your transactions can now run in parallel (provided that they use different item locks). Congratulations, you’ve got pretty good concurrent code.

Naturally, I wanted to support this in Shielded. To be able to do something like this:

x.Commute((ref int n) => n++);

…and have the increment perform in parallel with other transactions, without causing your transaction to conflict with them. The main advantage of an STM is its simplicity of use, but an STM that lacks something like this, and treats even a simple, commutable increment as a conflict, is almost useless.
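Applied to the counter from the earlier test, the only change from the naive sketch above is the last line. Again a sketch, using Commute exactly as shown in the example and Shield.InTransaction as named in these posts:

using System;
using Shielded;

static class CommutedCounter
{
    static readonly Shielded<int> _processed = new Shielded<int>();

    public static void ProcessOne(Action payload)
    {
        Shield.InTransaction(() =>
        {
            payload();
            // Same counter as before, but incremented as a commute: as long
            // as nothing in the transaction reads _processed, transactions
            // no longer conflict over it.
            _processed.Commute((ref int n) => n++);
        });
    }
}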

I had already encountered commutables in Clojure’s built-in STM. Clojure executes a commute under the global commit lock, by taking the value encountered in the field at that time and performing the op. This is perfectly safe – no one else is in that lock with you, no conflict possible, works great.

But, the Clojure implementation has two drawbacks. The first is, of course, running arbitrary code under the global lock. From my experience in experimenting with Shielded, small changes in time spent under that lock have noticeable effects on performance. The second drawback is more subtle, but maybe even more serious – if you read from the field after you have defined a commute op on it, Clojure throws an exception. The main advantage of STM, composability, is compromised – if you compose two operations into one, you better not compose one that commutes a field with another that reads from it. They work just fine by themselves, but together, boom.

In Shielded, I wanted to do this differently. For one thing, the commutes are executed just before entering the global lock! They will get a fresh reading stamp, so they can read the latest data, which reduces their chance of conflict. If they still conflict, only they get retried.

The other difference – if after defining a commute you read from that field, Shield detects this (as an STM, the library has to be notified of every field access anyway) and automatically executes the commute sooner, right then and there, and you see its result. This guarantees safe composability. However, the commute is then not a commute any more. It’s not a commutable increment if you know the result is 4. If another transaction jumps in and writes 4 before you commit, your increment would be lost unless we repeat it. And since you read and used the number in your transaction, the entire transaction is compromised. The key is not to look – we can repeat just the commute, but only if it was not read.
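In code, the degeneration described above would look something like this (a sketch; Shield.InTransaction, Read and Commute are used as named in these posts):

using System;
using Shielded;

static class DegenerateCommuteSketch
{
    static readonly Shielded<int> _counter = new Shielded<int>();

    static void Demo()
    {
        Shield.InTransaction(() =>
        {
            _counter.Commute((ref int n) => n++);   // still a commute here

            // Reading the field forces the commute to execute right now, so
            // the value observed below already includes the increment...
            Console.WriteLine(_counter.Read);

            // ...and from this point on the field is an ordinary part of the
            // transaction: a conflicting writer restarts the whole
            // transaction, not just the commute.
        });
    }
}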

Using Shielded, any kind of commutable operation can be defined, and the system takes care of consistency. If the rest of the transaction does not interfere with the operation, it will behave as a commute, minimizing conflicts. If the rest of the transaction does interfere, a commute is gracefully reduced into an ordinary part of your transaction, ensuring consistency.

For examples of various commutes, and how it all behaves in Shielded, check out the method ComplexCommute() in BasicTests.cs. You may notice how methods Append() and Clear() of a ShieldedSeq are both defined as commutables – they won’t conflict if you don’t read the sequence. That means no conflicts when adding things to queues.
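As a toy illustration of that last point – using only the members mentioned here (Append, Clear and the commutable Count) and otherwise assuming nothing about the real ShieldedSeq API – producers can append concurrently without conflicting, while only the draining transaction touches the sequence itself:

using System;
using Shielded;

static class QueueSketch
{
    static readonly ShieldedSeq<string> _log = new ShieldedSeq<string>();

    public static void Produce(string message)
    {
        // Append is commutable: concurrent producers do not conflict as long
        // as their transactions never read the sequence.
        Shield.InTransaction(() => _log.Append(message));
    }

    public static void Drain()
    {
        Shield.InTransaction(() =>
        {
            // Reading Count makes this transaction an ordinary reader/writer
            // of the sequence, so it sees a consistent snapshot and will
            // conflict with concurrent Appends – which is what a drain wants.
            Console.WriteLine("draining {0} items", _log.Count);
            _log.Clear();
        });
    }
}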

Choosing the optimal granularity for locks, not forgetting to take a lock wherever it is needed, avoiding deadlocks – all of these problems disappear when using an STM. It just works, safely and consistently. And, with commutable operations support added in the mix, your code is automatically as concurrent as logically possible!