## Introduction

This article examines the most widespread myths about the CAP theorem. One of its goals is to analyze the recent Spanner, TrueTime & The CAP Theorem article and to clarify the terms involved in the theorem, which are discussed heavily in many different contexts.

We return to that article closer to the end, armed with the necessary concepts. Before that, we analyze the most common myths associated with the CAP theorem.

## Myth 1: A Means Availability

**A**, of course, is derived from the word "Availability". But what exactly does it mean? What kind of availability?

It is not a simple question. Availability means different things in different contexts, and here we must distinguish at least two of them:

- The availability of a real service. This availability is expressed as a percentage: measured from the total downtime per year, it gives the probability that the service is available over a relatively long period.
- Availability within the model of CAP theorem.
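The first kind of availability can be illustrated with a quick calculation that converts annual downtime into the familiar "nines" (the helper functions below are illustrative, not from the original article):

```python
import math

def availability(downtime_seconds_per_year: float) -> float:
    """Fraction of the year the service was up."""
    year = 365 * 24 * 3600
    return 1.0 - downtime_seconds_per_year / year

def nines(avail: float) -> int:
    """Number of leading nines in the availability figure (avail < 1)."""
    return int(-math.log10(1.0 - avail))

# Roughly 5 minutes of downtime per year corresponds to "five nines".
a = availability(5 * 60)
print(f"{a:.7f}", nines(a))  # -> 0.9999905 5
```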

The CAP theorem uses the closest concept, *total availability*:

> For a distributed system to be continuously available, every request received by a non-failing node in the system must result in a response.

In this definition, there are a few points I would like to emphasize:

- *Non-failing node*. It is clear that a failing node cannot respond. The catch is that if all nodes are failing, such a service is still available from the definition's point of view. In principle, you could fix the definition by requiring that at least one node is non-failing.
- *Must result*. The theorem does not say exactly when the response should happen; it knows nothing about timings at all. Obviously, a node cannot respond instantaneously; it is sufficient to reply at some point in time.

From a user perspective, if we had 100 nodes, 99 of them have failed, and the remaining one responds at a rate of one request per hour, such a service is hardly available (context 1). From the perspective of the CAP theorem, however, everything is fine and the system is available (context 2).

Therefore, **A** is not availability in the conventional sense but so-called *total availability*, and that kind of availability can be insufficient and unsatisfactory for the user.

## Myth 2: P Means Partition Tolerance

This definition can be found in almost all articles. To understand what is wrong with it, we have to look at the problem from a different angle.

Let us take any system that exchanges messages and consider how those messages are transmitted between the actors, i.e., the system's objects. A message can either be delivered to another actor or be dropped. There are two cases:

- There is no possibility of losing the messages in the system.
- There is a possibility of losing the messages in the system.

It is easy to see that this list is exhaustive. Note that each item describes a property of the system itself; we have not even started describing the algorithm. This fact has far-reaching consequences.

Consider the first case, where messages are never lost. In this situation a *network split* is simply impossible: since every message from every actor is guaranteed to be delivered, there is no sense in talking about a *network split*.

In the second case, the opposite is true: because of losses, one segment of actors may become separated from another, i.e., connectivity between groups of actors may be lost. In this case, we say that a *network split* has happened.

Note that the possibility of groups of actors becoming isolated from each other is a direct consequence of the second case.

If we consider a real network, it clearly falls under the second case. Again, we have not even started thinking about the algorithm, and we already have the possibility of losing connectivity between groups of actors. **P** is about the simple fact that a *network split may happen*. It is not a property of the algorithm; it is a property of the physical layer of our system on which the algorithm operates.
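This consequence can be demonstrated with a toy simulation (all names here are illustrative): once every link may drop, there is a chance that the surviving links split the actors into disconnected groups.

```python
import random

def connected_components(nodes, links):
    """Group nodes by reachability over the surviving links (union-find)."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in links:
        parent[find(a)] = find(b)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

random.seed(42)
nodes = list(range(6))
all_links = [(a, b) for a in nodes for b in nodes if a < b]
# Each link independently survives with only 20% probability.
alive = [l for l in all_links if random.random() > 0.8]
print(connected_components(nodes, alive))
```

With enough runs, some draws leave the actors in more than one group: a network split, before any algorithm is even written.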

Why is the network split so important? Because, compared to other issues, it causes by far the most trouble, significantly increasing the complexity of distributed algorithms.

To conclude the discussion of this myth, consider a quote from Aphyr's Jepsen: Percona XtraDB Cluster:

> Partition tolerance does not require every node still be available to handle requests. It just means that partitions may occur. If you deploy on a typical IP network, partitions will occur; partition tolerance in these environments is not optional.

Thus, if we consider a system that works over an unreliable network, a loss of network connectivity is not an exceptional situation. **P** in this context means that a **network split may happen**.

## Myth 3: AC Systems Do Not Exist

From the preceding discussion, it should be obvious that you cannot build an AC system, because there are no completely reliable networks capable of transmitting data without any loss. You could immediately propose a scheme with redundant lines. However, if the probability of packet loss on a line is greater than zero, additional lines cannot reduce the overall probability to zero, by a simple mathematical consideration. And if so, then, as described above, a network split may occur.
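The "simple mathematical consideration" is just independence of failures: with n redundant lines, each losing a given packet with probability p > 0, the probability that all of them lose it is p^n, which shrinks quickly but never reaches zero. A quick check:

```python
def all_lines_fail(p: float, n: int) -> float:
    """Probability that a packet is lost on every one of n independent lines."""
    return p ** n

for n in (1, 2, 3):
    print(n, all_lines_fail(0.01, n))

# Shrinks exponentially with n, but stays strictly positive.
assert all(all_lines_fail(0.01, n) > 0 for n in range(1, 100))
```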

However, who said that the CAP theorem describes distributed systems only? CAP is a theoretical model that can be applied to a wide class of problems. For example, take a multi-core processor:

- Each core behaves as an actor.
- Actors (cores) exchange messages (information).

This is enough to apply the CAP theorem.

Consider **A**. Are the cores available? Of course: at any time you can go to any core and obtain any information from memory.

What about **P**? The processor ensures that data is transferred to the other core without any issues. If for some reason this does not happen, the processor is considered defective. Thus, the letter **P** is absent.

The consistency question is resolved as follows. The memory model specifies *sequential consistency*, the highest level of consistency in such a system, while the processor implements cache coherence protocols such as MESI or MOESI, thereby providing a predetermined level of consistency.

Thus, a modern processor is an **AC** system with guaranteed message delivery between the cores.

## Myth 4: C Means Consistency

**C** without a doubt stands for consistency. But what kind of consistency should we consider? Eventual consistency, for example, is one form of consistency. So which one should we have in mind?

There are many consistency models; see the picture taken from Consistency in Non-Transactional Distributed Storage Systems:

And those models apply to distributed non-transactional systems only! If you add the consistency models of distributed transactional systems on top, you can just bury the idea of surveying them all.

The original article on the CAP theorem uses the consistency model known as linearizability. Briefly, linearizability means the following: whatever the action (read, write, or any mix of them), its result is visible immediately after the reply is received.
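As a sketch (not from the original article), a single register guarded by a lock behaves linearizably: each operation takes effect atomically at some instant between its invocation and its response, so once a write call returns, every later read observes it.

```python
import threading

class LinearizableRegister:
    """A register whose operations appear to take effect atomically
    at some point between invocation and response."""

    def __init__(self, value=None):
        self._value = value
        self._lock = threading.Lock()

    def write(self, value):
        with self._lock:
            self._value = value

    def read(self):
        with self._lock:
            return self._value

r = LinearizableRegister()
r.write(1)
# The write has returned, so this read is guaranteed to see it.
assert r.read() == 1
```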

The question arises instantly: do other forms of consistency fall under the CAP theorem?

To answer this question, consider the picture taken from the article Highly Available Transactions:

Red circles denote the *unavailable models*. These fall under the CAP theorem, meaning that it is impossible to simultaneously achieve both **A** and **P** with those forms of consistency. However, there are other models whose consistency level is sufficient for a wide range of tasks and which can nevertheless be combined with **AP**, yielding a CAP system without any obstacles. Typical examples are *Read Committed* (RC) and *Monotonic Atomic View* (MAV): they allow achieving all three letters of CAP, and no one can say that those are weak consistency models. Consistency models that sidestep the CAP theorem in this way are called *highly available models*.

Thus, when speaking of consistency, we mean a broad group of consistency models called *unavailable models*.

## Myth 5: CP Systems are not Highly Available

After the preceding section this myth seems quite logical, but it is fundamentally wrong. Recall that **A** stands for *total availability* rather than availability measured in nines. So, is it possible to make a **CP** system highly available?

Here it is necessary to separate the model from the hardware, that is, theory from practice.

First, let us think in terms of the model. Availability under the CAP theorem means *total availability*, i.e., any alive node must respond. But why do you need that? After all, we could rewrite the client logic completely. Instead of talking to a single node, the client could connect to a consensus group of nodes and choose the most recent value from the responses of several different nodes. Thus, from the client's perspective the system is highly available for both reads and writes, and it is consistent thanks to the consensus algorithm.
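The client-side idea above can be sketched as a quorum read: query all replicas, wait for a majority of answers, and take the value with the highest version. This is a minimal illustration with hypothetical names, not any particular system's API:

```python
def quorum_read(replies, cluster_size):
    """Pick the freshest value once a majority of replicas has answered.
    Each reply is a (version, value) pair; None marks an unreachable node."""
    answers = [r for r in replies if r is not None]
    if len(answers) <= cluster_size // 2:
        raise RuntimeError("no quorum: only a minority of nodes responded")
    return max(answers)[1]  # highest version wins

# Three of five nodes answered; the newest version wins.
replies = [(3, "new"), (2, "old"), None, (3, "new"), None]
print(quorum_read(replies, 5))  # -> new
```

As long as a majority of nodes is reachable, the client gets an answer, even though some individual nodes fail to respond.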

In practice, there is always a non-zero probability that only a minority of nodes is available. This is easy to see: if one node can fail with non-zero probability, then so can the others. And that is not the worst that may happen. In addition to hardware failures, various pieces of network equipment may fail, and all those failures, needless to say, have non-zero probability. These probabilities accumulate, yielding a certain, sometimes very small, number of nines of availability. Clearly, the more redundant hardware we have, the more nines we obtain. And I have not even taken into account the application software itself, which has a non-zero probability of bugs...
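The accumulation of probabilities can be made concrete: if each of n nodes is independently up with probability p, the chance that a majority is up follows the binomial distribution. A minimal sketch:

```python
from math import comb

def majority_up(n: int, p: float) -> float:
    """Probability that more than half of n independent nodes are alive."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Even with 99.9%-available nodes, a 5-node cluster loses its majority
# with a small but strictly nonzero probability.
print(1 - majority_up(5, 0.999))
```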

Thus, in practice we always have availability below 100%. The whole science is in achieving the greatest possible number of nines, and in this respect the CAP theorem is useless, because it is about completely different notions and models.

So a highly available system does not have to provide **A**, and therefore a **CP** system can be highly available.

## Myth 6: CP Systems Have Low Performance, High Latency and Are not Scalable

Obviously, the higher the level of consistency, the less performant the system. Nevertheless, it turns out that even *strict consistency*, or *Strong-1SR* (the highest level of consistency), with exactly-once semantics can be used in real-time systems. I have an experimental proof of this fact, but here I would like to give some practical considerations in its favor.

The idea is to use a set of independent fault-tolerant entities. You can run them anywhere, they work in parallel, and their number is limited only by the size of the cluster. On top of these entities we can build a transactional layer that connects the different parts together, allowing us to operate on them transparently. This is how Spanner and other distributed scalable systems work.

Thus, we might achieve scalability and performance for **CP** systems.

## Myth 7: AP Systems are Easy to Use due to Scalability

**AP** systems allow simple scaling schemes, but only in theory. In practice, you have to solve the issues caused by weak consistency.

Real systems show that correctly implementing a client on top of such a system is a nontrivial task, and sometimes it is outright impossible. The reason is that if the system does not provide basic guarantees to preserve data consistency, subsequent processing turns into a fascinating charade: has the operation been applied? Do you know what others may see? Is it possible to obtain a consistent data snapshot? Do different clients see the same data set? And so on.

Despite the relative simplicity of such systems, the complexity of using them from the client's perspective increases dramatically.
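The "charade" can be shown with a toy pair of eventually consistent replicas (illustrative only, with a deliberately naive sync): a write acknowledged by one replica is invisible on the other until anti-entropy runs, so a client may fail to read its own write.

```python
class Replica:
    def __init__(self):
        self.data = {}

    def write(self, key, value):
        self.data[key] = value  # acknowledged locally, synced later

    def read(self, key):
        return self.data.get(key)

def anti_entropy(a, b):
    """Naive background sync: just merge keys; real systems need
    conflict resolution (e.g. last-writer-wins), omitted for brevity."""
    merged = {**a.data, **b.data}
    a.data, b.data = dict(merged), dict(merged)

r1, r2 = Replica(), Replica()
r1.write("x", 42)
print(r2.read("x"))   # None: the client cannot read its own write yet
anti_entropy(r1, r2)
print(r2.read("x"))   # 42: visible only after the replicas converge
```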

## Article Analysis

And now let's proceed to Spanner, TrueTime & The CAP Theorem, starting from the beginning:

> The CAP theorem [Bre12] says that you can only have two of the three desirable properties of:
>
> - C: Consistency, which we can think of as serializability for this discussion;
> - A: 100% availability, for both reads and updates;
> - P: tolerance to network partitions.

The first thing to notice is the reference [Bre12], CAP Twelve Years Later: How the "Rules" Have Changed, dated May 2012. It contains some thoughts related to the theorem, but not the CAP theorem itself.

In addition, we have already discussed all the letters, and at least myth #2 is at work in the quote above.

> Once you believe that partitions are inevitable, any distributed system must be prepared to forfeit either consistency (AP) or availability (CP), which is not a choice anyone wants to make.

The first part sounds quite reasonable in light of our discussion, but the last part looks strange and invokes myths #5, #6 and #7.

Then some reasonable words follow:

> The actual theorem is about 100% availability, while the interesting discussion here is about the tradeoffs involved for realistic high availability

It seems the author would like to say that Spanner is a highly available **CP** system, avoiding myth #6. Unfortunately, this is not the case, and the author continues:

> Spanner claims to be consistent and highly available, which implies there are no partitions and thus many are skeptical.

Of course we are skeptical, because it does not imply that. According to the considerations in myth #5, a so-called *highly available* system does not mean **A**, and thus does not mean the absence of **P**.

> Based on a large number of internal users of Spanner, we know that they assume Spanner is highly available.

The phrase itself is remarkable. It turns out that if internal users say "we assume it is highly available", it immediately follows that it actually is so in practice, assumptions notwithstanding.

To be more precise, the author adds:

> If the primary causes of Spanner outages are not partitions, then CA is in some sense more accurate.

As I understand it, the article's logic is the following: if a failure is not associated with a *network split*, then we may treat such a system as **CA** in some sense (!). In other words, if the probability of other failures exceeds that of network failures, we may drop **P**.

In that light, the statements of the myths look more reasonable.

Later on, the author defines the notion of "effectively CA":

> ... to claim "effectively CA" a system must be in this state of relative probabilities: 1. At a minimum it must have very high availability in practice (so that users can ignore exceptions), and 2. as this is about partitions it should also have a low fraction of those outages due to partitions. Spanner meets both.

The questions appear immediately:

- What level of high availability is sufficient "in practice"? 5 nines? 6 nines? Maybe 9 nines? This arbitrariness does not allow us to decide correctly whether a system fits the definition, and "so that users can ignore exceptions" completes the ambiguity.
- Where is **P**? **P** means that *network splits* may happen regardless of their probability (see myth #2). Should we redefine **P** as well?

Finally:

> Spanner reasonably claims to be an "effectively CA" system despite operating over a wide area, as it is always consistent and achieves greater than 5 9s availability... Even then outages will occur, in which case Spanner chooses consistency over availability.

This obviously contradicts a **CA** system and common sense: in such a system there is no choice to make, because both properties hold, as we saw in the example in myth #3. The very presence of such a statement says that the system is not fully **CA**.

I did not expect to see conflicting paragraphs in this article.

## The Last Myth: CAP Theorem is Outdated

The popularity of this topic has led many people to lose track of the meaning of the terms; the terms have become blurred, degraded into a rather vulgar understanding. Speculation on the terms, redefinition, misunderstanding: this is only a partial list of the woes of this long-suffering theorem.

Then the pendulum swung the other way, and people started to forget the theorem. Articles tried to conclude that the CAP theorem is outdated and asked to stop using it. Even the author of the theorem has begun to substitute concepts and distort the original intent.

Yet these attacks on the theorem only underscore its relevance again and again, exposing new facets unknown so far.

## Conclusion

In its time, the CAP theorem introduced interesting concepts and new understanding. The theoretical impossibility of building a certain kind of system lets us concentrate on tasks of a particular class and avoid attempting the unsolvable. In the context of distributed systems, it makes sense to consider either **AP** or **CP** systems.

Such theorems do not become obsolete, just as classical mechanics has not become obsolete despite relativistic effects and quantum mechanics. The theorem just needs to find its rightful place. We must keep it in mind and move on.

And the key point is that this theorem is a special case of a more general fundamental property:

> The CAP Theorem, in this light, is simply one example of the fundamental fact that you cannot achieve both safety and liveness in an unreliable distributed system.

- C: **Safety**
- A: **Liveness**
- P: **Unreliable distributed system**

*Grigory Demchenko, YT Software Engineer*

## References

- Spanner, TrueTime & The CAP Theorem
- Brewer's Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web Services
- Jepsen: Percona XtraDB Cluster
- Consistency in Non-Transactional Distributed Storage Systems
- Survey on consistency conditions
- Highly Available Transactions: Virtues and Limitations (Extended Version)
- Spanner: Google's Globally-Distributed Database
- CAP Twelve Years Later: How the "Rules" Have Changed
- Please stop calling databases CP or AP
- Perspectives on the CAP Theorem
- YT: Why Yandex Has Its Own MapReduce System and How It Works (in Russian)
