Cloud computing has become ubiquitous. A storage subsystem is a key component of many cloud computing infrastructures, and many systems have adopted so-called “NoSQL” databases, where data is often organized in a key-value structure. Adoption of these databases is driven by the need to store unstructured data, such as pictures, videos, or documents.
Similar to traditional single-machine relational databases, a transactional storage model remains the default in NoSQL databases. However, for availability, performance, and fault tolerance, storage may now be replicated on many servers distributed around the world, and in certain situations each replica is allowed to process transactions independently, without first synchronizing with the other replicas. This replicated model leads to weak consistency models - analogous to Arm’s weak memory consistency model [1, 2, 3] - such as parallel snapshot isolation and causal consistency.
Weak consistency permits observable behaviors forbidden by serializability, the usual behavior of a transactional database operating on a single machine. Despite extensive documentation detailing these weak behaviors being available to programmers, working with these consistency models can be a difficult task, since most programmers are typically only familiar with transactions under serializability. Moreover, many well-known weak consistency models were originally defined by reference implementations, and implementations over different configurations may refer to the same model. This hinders understanding, makes comparing all the different behavioral models more difficult, and creates a risk of inadvertent data loss.
In this early-stage research work with Andrea Cerone, Azalea Raad and Philippa Gardner, I propose an operational semantics - a precise, mathematical description of how a storage system should behave - that is parametric in the choice of the consistency model.
We use a multi-versioning abstract key-value store (kv-store) to model the overall state of a database. An example kv-store is given in Figure 1, which contains one key, k, with two versions carrying values 0 and 1, respectively. Each version is associated with metadata, namely the transaction identifier writing the version, for example, t0, and a set of transaction identifiers reading the version, for example {t1}. This metadata is used when defining consistency models later. Formally, a kv-store is a mapping from keys to lists of versions, for example, K(k) = [ (0, t0, {t1}), (1, t1, Ø) ].
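The formal definition above can be sketched in code. The following is a minimal illustration of the kv-store of Figure 1, not the authors' artifact; the class and field names are invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Version:
    """One version of a key: its value plus the metadata described above."""
    value: int
    writer: str                                 # transaction that wrote it, e.g. "t0"
    readers: set = field(default_factory=set)   # transactions that have read it

# The kv-store K(k) = [ (0, t0, {t1}), (1, t1, Ø) ] from Figure 1:
# a mapping from keys to lists of versions.
K = {"k": [Version(0, "t0", {"t1"}), Version(1, "t1", set())]}
```

The list order matters: later entries are newer versions of the same key.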
Figure 1: An example kv-store
In our semantics, we focus exclusively on client observations, that is, what somebody interacting with a storage system can observe.
Figure 2: An example of two clients having different views on a kv-store.
Each client has a partial view, v, on this abstract kv-store. For example, in Figure 2, client B is aware of both values of the kv-store, while client A is only able to observe the leftmost. Formally, a view is a mapping from keys to sets of indices in the kv-store, for example, vB(k) = { 0, 1 }.
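Views can be sketched in the same style: a mapping from keys to sets of version indices, as in Figure 2. This is an illustrative encoding (the variable names are invented); `snapshot` shows the latest value each client can observe through its view.

```python
# kv-store from Figure 2, encoded as (value, writer, readers) tuples.
K = {"k": [(0, "t0", {"t1"}), (1, "t1", set())]}

v_A = {"k": {0}}      # client A only observes the leftmost version
v_B = {"k": {0, 1}}   # client B is aware of both versions, vB(k) = {0, 1}

def snapshot(K, view):
    """The latest value of each key visible through the given view."""
    return {key: K[key][max(view[key])][0] for key in view}
```

Under this encoding, `snapshot(K, v_A)` yields `{"k": 0}` while `snapshot(K, v_B)` yields `{"k": 1}`, matching the two clients' observations in Figure 2.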
Our kv-store abstraction hides details of any particular storage implementation, making it applicable to a wide range of different storage technologies: for example, the two versions carrying values 0 and 1 presented in Figure 2 may in fact be hosted on different shards, or even duplicated among several replicas. In the latter case, clients A and B may connect to distinct replicas that experience a temporary network partition, or similar; this can be represented by different views on the abstract kv-store in our semantics.
Henceforth, we use an increment transaction on this kv-store, inc(k), as a running example to explain our semantics in intuitive terms. This is defined by:
transaction { x = read(k); write(k, x+1); }
Suppose, now, that both clients, A and B, invoke inc(k).
As a transaction is, by definition, atomic, we assume that client B executes inc(k) to completion and collects its effects. Specifically, client B reads value 1 of key k (the latest value in client B’s view) and writes back 2; this can be represented as a set, FB = { read(k,1), write(k,2) }, which we call a fingerprint.
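The fingerprint of inc(k) can be computed mechanically from a view, as a small sketch (with invented names, continuing the encoding of the earlier figures):

```python
# kv-store from the running example: key k with versions 0 and 1.
K = {"k": [(0, "t0", {"t1"}), (1, "t1", set())]}

def inc_fingerprint(K, view, key):
    """Effects of inc(key) executed against a view: one read, one write."""
    x = K[key][max(view[key])][0]   # x = read(k): latest value in the view
    return {("read", key, x), ("write", key, x + 1)}

# Client B's view includes both versions, so B reads 1 and writes back 2:
F_B = inc_fingerprint(K, {"k": {0, 1}}, "k")
```

This reproduces FB = { read(k,1), write(k,2) }; running the same function with client A's view {0} would instead yield { read(k,0), write(k,1) }.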
The particular consistency model in use now determines whether client B, with view vB, is allowed to commit this fingerprint to the kv-store, via what we call the execution test. In this particular case, where view vB includes all existing versions of k, the execution test passes, yielding a new kv-store and a new view for client B. This is presented in Figure 3.
Figure 3: Client B commits increment transaction.
However, Figure 3 captures the fact that this update is not yet observable by client A, who still observes only the leftmost value.
Suppose now that client A also executes inc(k) given their view, and as a result, client A wants to commit the fingerprint, FA = { read(k,0), write(k,1) }, to the database. What happens now varies according to the consistency model in use. In causal consistency, client A’s fingerprint will pass the check and yield a new kv-store shown in Figure 4. Of note here is that the increment from client B is lost, as the final value of k is 1 instead of 3 after two increments. This is known as a lost update anomaly and may be undesirable, depending on the application.
Figure 4: Lost update anomaly
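The lost update anomaly can be replayed concretely. The sketch below (invented names; reader sets omitted for brevity) commits both increments against their respective views and shows that the final value is 1, not 3:

```python
# kv-store before either increment: versions 0 and 1 of key k.
K = {"k": [(0, "t0"), (1, "t1")]}   # (value, writer) pairs

def commit_inc(K, view, key, txid):
    """Execute inc(key) against a view and append the resulting version."""
    x = K[key][max(view[key])][0]   # read the latest visible value
    K[key].append((x + 1, txid))    # write back x + 1 as a new version

commit_inc(K, {"k": {0, 1}}, "k", "tB")  # client B reads 1, writes 2
commit_inc(K, {"k": {0}},    "k", "tA")  # client A reads 0, writes 1

latest = K["k"][-1][0]   # 1, not 3: client B's increment is lost
```

Because client A's stale view never observed client B's write, A's version overwrites it in the version order, which is exactly the anomaly of Figure 4.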
In contrast, under parallel snapshot isolation, where the lost update anomaly is forbidden, the execution test fails, and client A cannot commit their fingerprint to the database. Instead, client A must advance their view to include all values prior to executing inc(k). As a result, client A reads 2 and writes back 3.
What does an execution test look like? An execution test states that, prior to updating, the initial view, v, must be closed with respect to a relation, R, defined over the initial kv-store, K, and the fingerprint, F. Specifically, the relation R is a combination of other, more primitive relations, which we call WW, WR, RW, and SO. Here:

- WW(k) (write-write) relates transaction t to t' if the version of k written by t precedes the version written by t' in the version list;
- WR(k) (write-read) relates t to t' if t' reads a version of k written by t;
- RW(k) (read-write) relates t to t' if t reads a version of k that t' later overwrites;
- SO (session order) relates t to t' if the same client commits t and then t'.
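The per-key relations can be derived directly from the version metadata. The following is a simplified sketch following the standard dependency definitions, not the paper's exact formalization; the store below is an invented three-version example so that all three relations are non-empty.

```python
# Key k with three versions, as (value, writer, readers) tuples:
# t1 read version 0 and wrote version 1; t2 then wrote version 2.
K = {"k": [(0, "t0", {"t1"}), (1, "t1", set()), (2, "t2", set())]}

def WW(K, key):
    """Write-write: earlier writer precedes later writer in version order."""
    vs = K[key]
    return {(vs[i][1], vs[j][1])
            for i in range(len(vs)) for j in range(i + 1, len(vs))}

def WR(K, key):
    """Write-read: the writer of a version is related to each of its readers."""
    return {(writer, r) for (_, writer, readers) in K[key] for r in readers}

def RW(K, key):
    """Read-write (anti-dependency): a reader of a version is related to
    the writer of any later version (excluding itself)."""
    vs = K[key]
    return {(r, vs[j][1])
            for i in range(len(vs)) for r in vs[i][2]
            for j in range(i + 1, len(vs)) if r != vs[j][1]}
```

For this store, WW relates every earlier writer to every later one, WR contains (t0, t1) because t1 read t0's version, and RW contains (t1, t2) because t2 overwrote a version that t1 had read.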
Figure 5 gives a graphical intuition of the WW, WR, RW relations on a kv-store:
Figure 5: WW, WR, and RW relations (values are omitted)
In some consistency models, the fingerprint is used to project the general relation on a kv-store onto specific keys. We now explain the notion of closed with respect to a relation R. Let us take Figure 5 and R = WW as an example. A view, v, is closed with respect to WW if, whenever the view includes a version written by a transaction t, and (t', t) ∈ WW, then the view also includes every version written by t'.
In the case of Figure 5, this means that if a view includes the second version of key k, then it must also include the first version of k, under the relation R = WW. Note that a transaction may update several keys. This means that an edge on one key, for example (t0, t2) ∈ WW(k), may affect the view on a different key.
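The closure condition, including its cross-key effect, can be sketched as a one-step check (a hypothetical encoding with invented names; the full definition in the paper closes under the relation transitively). Here t2 wrote versions of both keys k and j, so a WW edge on k forces the view on j as well:

```python
# Two keys; t2 wrote a version of each (reader sets omitted).
K = {"k": [(0, "t0"), (1, "t2")],
     "j": [(0, "t0"), (5, "t2")]}   # (value, writer) pairs

def closed(K, view, R):
    """One-step closure check: every R-predecessor of a visible
    transaction must have all of its versions visible too."""
    visible = {K[key][i][1] for key in view for i in view[key]}
    preds = {t1 for (t1, t2) in R if t2 in visible}
    return all(K[key][i][1] not in preds or i in view.get(key, set())
               for key in K for i in range(len(K[key])))

R = {("t0", "t2")}   # e.g. the edge (t0, t2) in WW(k)
```

A view containing only t2's versions is not closed, because t0's versions on both keys are missing; including all four versions makes the check pass.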
Given this, we can now define the execution tests for consistency models, for example, causal consistency. Causal consistency (CC) can be captured by the relation RCC = WR ∪ SO. Client A’s view, vA, in Figure 3 is closed with respect to RCC: the view contains (indicated by the arrow) only one version, written by t0, and there is no other transaction, t, such that (t, t0) ∈ RCC. Hence, this view is immediately closed with respect to RCC = WR ∪ SO.
Given these formal semantics, programmers can derive the behaviors of their code executing under weak consistency models, and database implementers can check whether their implementations are correct. We have demonstrated both tasks on a few examples via handwritten proofs. In future work, we plan to develop tools in order to work with real-world examples.
A conference paper describing this work will be presented at ECOOP 2020, and the pre-print can be found here (distributed under Creative Commons License CC-BY 3.0, ECOOP 2020). Full details of our semantics and execution tests for different consistency models, including parallel snapshot isolation, consistent prefix, snapshot isolation, and serializability, are provided there. If you have any questions about the work, please reach out to me.
Contact Shale Xiong
Read the Full Paper
If you are interested in hearing more from our security team at Arm Research, look at our other blogs, ranging from securing home IoT devices to hardware side-channel mitigations.
[1] Memory Model Tool
[2] A working example of how to use the herd7 Memory Model Tool
[3] How to generate litmus tests automatically with the diy7 tool