CBDC – How Dangerous is Programmability?

Among central bankers, the issuance of a Central Bank Digital Currency (CBDC) is a topic of great interest. The Bank for International Settlements (BIS) has recently published a report showing that many central banks are conducting research and experiments and a small number are already deploying pilot projects. Research projects have been set up by the European Central Bank (ECB), the US Federal Reserve, and the Bank of England. Experimental projects have been launched by the Sveriges (Swedish) Riksbank and the Bank of Israel, while pilot projects were initiated most notably by the People’s Bank of China (PBOC) and the Bahamas, which announced that its digital “Sand Dollar” was already being used by the public. Other central banks, such as those of Australia and Norway, are comfortable with “monitoring developments” from the sidelines.

What is a CBDC?

There is a general agreement among central bankers that a CBDC is, as the ECB describes it, “a central bank liability offered in digital form for use by citizens and businesses for their retail payments. It would complement the current offering of cash and wholesale central bank deposits.” The ECB report also calls it, more colloquially, a “digital banknote.”

There is also agreement that a CBDC is neither a “cryptocurrency” nor a so-called “stablecoin.” However, the agreement ends there, because when it comes to CBDC, the “devil really is in the detail.” Thus, there exists a range of opinions as to what a CBDC should really look like and how one should be implemented. This is not surprising, as the introduction of a digital currency will naturally have long-term consequences across any economy.

Why should a central bank issue a CBDC?

There is also little unanimity among central bankers as to what benefits (and risks) would come from issuing a CBDC. Again, this is not surprising, as different benefits and risks will arise depending on the economic and technical architectures adopted.

For many central banks the stated rationale includes the belief that if they don’t issue a digital currency, a private company, such as Facebook with its Libra (now Diem) currency, will undermine their central role in the economy. Other reasons include: reducing the cost of printing cash, especially as use of cash is already falling in many jurisdictions; increasing financial inclusion and reaching the “unbanked”; improving the stability and resilience of the payments system; and more immediate and efficient transmission of monetary policy.

Implementing a CBDC—by changing the relationship between the central bank and the commercial banking sector—creates risks too. These risks include the increased possibility of “bank runs,” where depositors flee from commercial banks to the central bank in times of stress, and increased operational risk when central banks not only issue CBDCs but also operate the CBDC payment system—a big change from their current role.

These issues are being actively debated among central bankers but no agreed—or even preferred—operational and technology models have emerged.

My recent paper explores another risk of CBDC, “programmability” or “programmable money.” This might seem like a minor technical detail, compared to issues such as banking system disintermediation, but programmability has the potential to derail implementation and approval for CBDCs.

What is Programmability?

As Alexander Lee of the Federal Reserve recently wrote, the term “programmable money” remains ill-defined. Lee differentiates between “programmable money” and “programmability.” He notes that there are two natural components of the definition: 1) a “digital form of money” and 2) “programmability,” which is a “mechanism for specifying the automated behavior of that money through a computer program.” There is little new about programmability per se, as Lee notes, “given that various combinations of similar technology for payments automation have existed for decades.”

Before considering how “programmability” applies to CBDC, it is worth considering one of the many examples of how the concept is used today. For example, provided that such a procedure was arranged beforehand, when a payment exceeding the available account balance is made, a bank may make the payment but place the difference in an “overdraft account,” then create a charge for exceeding the balance, and initiate a process to charge interest for the “loan” as long as the overdraft persists. All of this is done automatically by a computer program specifically developed for this purpose by the bank.

It is important to note that each part of this process is covered by a formal legal contract signed (sometimes electronically) between the customer and the bank; by formal codes of conduct adopted by the banking industry; by national consumer legislation; and importantly, by laws relating to dispute resolution, where—if the bank is at fault—the actions will be reversed and—where necessary—restitution is made. In its original definition, this has been labeled a “smart contract” or a combination of automated and legal processes to achieve a particular (in this case financial) objective.
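The overdraft process described above can be sketched in a few lines of code. This is a hypothetical illustration of the kind of program a bank might run, not any bank's actual system; the fee and interest figures are invented for the example.

```python
from dataclasses import dataclass

OVERDRAFT_FEE = 15.00     # hypothetical flat charge for exceeding the balance
DAILY_INTEREST = 0.0005   # hypothetical daily interest while the overdraft persists

@dataclass
class Account:
    balance: float = 0.0   # available funds
    overdraft: float = 0.0 # amount "loaned" via the overdraft account

def make_payment(acct: Account, amount: float) -> None:
    """Make the payment; any shortfall is placed in the overdraft
    account together with the charge for exceeding the balance."""
    if amount <= acct.balance:
        acct.balance -= amount
    else:
        shortfall = amount - acct.balance
        acct.balance = 0.0
        acct.overdraft += shortfall + OVERDRAFT_FEE

def accrue_daily_interest(acct: Account) -> None:
    """Charge interest on the 'loan' for as long as the overdraft persists."""
    if acct.overdraft > 0:
        acct.overdraft *= 1 + DAILY_INTEREST
```

For example, paying $150 from an account holding $100 would leave a $50 shortfall plus the $15 fee in the overdraft account, which then accrues interest daily. The crucial point from the text is that every step of this automation is backed by legal contracts and dispute-resolution law sitting outside the code.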

So, if programmability has been around for decades, what exactly is new? To address that, we have to move into the world of cryptocurrencies, in particular the second-largest cryptocurrency, Ether, which is managed through the Ethereum blockchain.

The key innovation of Ethereum is that computer programming code is stored on the same network as the electronic money, not separately. The code is written in a special programming language called Solidity, in the form of “smart contracts.” And—in keeping with the philosophy of blockchain—programming code can be viewed by anyone and cannot be changed (so-called “immutability”), unless the smart contract has a “self-destruct” clause.

Lee calls this combination of tightly coupled electronic money and accompanying programming code “coherent.” The purpose of this combination of code and data is that anyone can see the code and interested parties can agree that the smart contract is valid. An obvious problem is that one cannot guarantee that all mistakes or bugs in a particular set of code have been detected and removed. Any bug remaining in the code may unexpectedly blow up with unintended, and potentially catastrophic, consequences. Before discussing those unintended consequences however, it is worth considering what use programmability could have in CBDC’s implementation.
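This "coherent" coupling of code and money can be illustrated with a toy model. The sketch below is a deliberately simplified analogy, not how Ethereum actually works internally: contract code is stored on the same ledger as the balances it governs, is visible to anyone, and has no update path once deployed.

```python
import hashlib

class Ledger:
    """Toy model of 'coherent' money: contract code lives on the
    same ledger as the balances it governs, visible to everyone
    and immutable once deployed (hypothetical, simplified)."""

    def __init__(self) -> None:
        self.balances: dict[str, float] = {}
        self.contracts: dict[str, str] = {}  # address -> source code

    def deploy(self, code: str) -> str:
        """Deploy contract code; its address is derived from the code itself."""
        address = hashlib.sha256(code.encode()).hexdigest()[:8]
        # No update or delete method exists: once stored, code is immutable,
        # so any undetected bug in it is also permanent.
        self.contracts.setdefault(address, code)
        return address

    def read_contract(self, address: str) -> str:
        """Anyone can inspect the code and agree the contract is valid."""
        return self.contracts[address]
```

The design choice the article highlights cuts both ways: transparency lets interested parties verify the contract, but immutability means a bug that survives review can never be patched in place.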

Benefits of Programmability

There have been many claims, for example in a report by the Deutsche Bundesbank, that adding programmability to a CBDC could bring a plethora of economic benefits, including automated payments, such as paying for toll road usage; automated checking of money laundering; automated collection of taxes; and distribution of consumer support in emergencies.

However, many of the claimed benefits either already exist or could be developed within existing systems. Most notably, these benefits could be achieved by utilizing Instant Payments Systems (IPS), such as the FedNow system currently being implemented by the US Federal Reserve.

Nonetheless, rather than discuss the merits of particular smart contracts, let’s assume that there will indeed be some smart contracts and, for this purpose, those contracts would be tightly coupled to the underlying digital money as in the Ethereum model. Such contracts need not necessarily be implemented through Ethereum but could use other technologies, such as conventional databases or Distributed Ledger Technologies (DLT). This discussion, however, does not distinguish between these technologies but is “technology agnostic.”

What could go wrong?

In the latest Basel III banking regulations, released by the Basel Committee on Banking Supervision, Operational Risk is defined as:

“the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events. This definition includes legal risk, but excludes strategic and reputational risk.”

Here, we are considering losses resulting from “inadequate or failed” technology (defined as “systems” in BCBS’s banking regulation) related to the technology used in operating CBDCs. This relates to Systemic Operational Risk Events (SOREs), or losses that occur across the financial system rather than in a single firm or a small number of firms. A SORE could occur, for example, when a payment system, such as a CBDC, encounters a problem that impacts multiple users. In the past year, there have been several events (i.e., SOREs) that have impacted users of Ethereum-based networks. It should be noted that the Ethereum network is not regulated under Basel, but the examples illustrate the problems that could occur in smart contracts within a regulated CBDC.

In June 2021, a serious problem was identified in the Polygon Network, which is described as “a protocol and a framework for building and connecting Ethereum-compatible blockchain networks.” The problem related to transactions in a so-called “stablecoin” called IRON, which is defined as a token that is partially collateralized by the US Dollar. In one of the smart contracts on the Polygon network, $262 million of collateral was locked up and could no longer be accessed, and holders of the programmable money based on that collateral could not access their investments. Since the smart contract was “immutable,” nothing could be done to rectify the situation. However, this situation was not truly a bug but a design problem in the logic of one of the smart contracts. In other words, it was an accident waiting to happen.
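The pattern behind such an incident can be sketched abstractly. The code below is a hypothetical design flaw of the same general shape, not the actual IRON contract logic: a withdrawal guard that silently assumes the token's peg always holds, so once the price deviates the funds can never be released, and immutability rules out deploying a fix.

```python
class CollateralVault:
    """Hypothetical sketch of a collateral contract with a design flaw
    (not the actual IRON/Polygon contract): withdrawals require an
    exact $1.00 peg, a condition that may never hold again."""

    def __init__(self, collateral: float) -> None:
        self.collateral = collateral
        self.token_price = 1.00  # the design assumes the peg always holds

    def withdraw(self, amount: float) -> float:
        # Flawed guard: after a price shock, token_price might be 0.90
        # and never return to exactly 1.00, so this branch always fails.
        if self.token_price != 1.00:
            raise RuntimeError("withdrawal conditions not met; funds locked")
        self.collateral -= amount
        return amount
```

Nothing in this logic is a coding "bug" in the narrow sense; each line does what it says. The loss arises from an unexamined assumption in the design, and because the deployed code cannot be changed, the locked state is permanent.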

Another dire problem occurred in August 2021 in the Poly Network, in which $600 million-worth of cryptocurrency was stolen, supposedly by a hacker. As it turned out, this loss of money was not caused by a hacker but by a bona-fide user who detected flaws in the highly complex, convoluted smart contracts used on Poly. The flaws that the user detected made it possible to take control of part of the network and to siphon off the funds. But like the robber who recently stole a Van Gogh painting from an Amsterdam museum, the user found that the value of the funds was so considerable that it was impossible to dispose of them without being detected. Thus, the user chose to return the loot to the network.

The outcome would have been very different had the user planned the heist better. Again, this problem was an accident waiting to happen as the flaws were present but undetected until someone stumbled across them.

In August 2021, a serious problem occurred on the Ethereum network causing the blockchain to “fork,” that is, to split in two, with one set of users working on one fork and the remainder on the other. As a result, there were essentially two distinct currencies in operation. Again, this problem was a (different) accident waiting to happen because it was caused by some miners running outdated software—one group of miners was using one set of rules and another group was using different rules. In short, the integrity of the currency was dependent not on the properties of the currency, but on independent operators behaving diligently. Failure to do so causes problems.
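The fork scenario can be shown with a toy simulation (the rule sets below are invented for illustration, not Ethereum's actual consensus rules): two groups of nodes process the same transactions under different validity rules and end up with diverging ledgers.

```python
def apply_tx(chain: list, tx: int, max_tx_value: int) -> None:
    """Append a transaction to this node's chain only if it passes
    the node's own validity rule (which differs across software versions)."""
    if tx <= max_tx_value:
        chain.append(tx)

# One group of miners runs outdated software with a stricter rule;
# the other group runs upgraded software with a looser rule.
outdated_chain: list = []
upgraded_chain: list = []

for tx in [5, 20, 7]:
    apply_tx(outdated_chain, tx, max_tx_value=10)  # outdated rules
    apply_tx(upgraded_chain, tx, max_tx_value=50)  # upgraded rules

# After tx=20 the ledgers diverge: in effect, two distinct currencies.
```

The point of the sketch is the article's own: the single currency exists only as long as every independent operator applies identical rules, which is an operational discipline, not a property of the money itself.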

Unfortunately, these three recent examples are not unique, as illustrated by a recent academic report on “Vulnerabilities, Attacks, and Defenses” of Ethereum smart contracts. The report detected serious deficiencies, in summary:

  1. “Authentication and authorization failures in Ethereum smart contracts are a major problem…;
  2. External dependence makes it hard, if not impossible, to assure the security of Ethereum smart contracts …;
  3.  Incompetent Ethereum smart contract programming introduces many new kinds of vulnerabilities …;
  4. The unreliability of Solidity makes Ethereum smart contracts vulnerability-prone, highlighting the importance of reliable programming languages…;
  5.  Arbitrary choices of parameters in Ethereum specification and implementation cause many vulnerabilities in Ethereum …; and
  6.  Vulnerabilities in Ethereum are harder to cope with than vulnerabilities in other systems, hinting that Ethereum blockchain is inherently more complex.”

It should be noted that while Ethereum is slammed in this report, the problems are not related merely to Ethereum but to similar architectures that rely on transparent, tightly-coupled smart contracts. 

What If?

The systemic operational risk events and losses described above did not have a widespread impact but were restricted to investors in the cryptocurrency markets—a notoriously risky environment. But what if similar problems could occur in the environment of a national CBDC, such as those proposed for the USA, European Union, China, etc.? What if such problems could be caused by “bad actors”—individuals, companies, or national governments?

Because the smart contracts used in such architectures to process programmable money are transparent (even if stored in intermediate compiled bytecode), it would be possible for a knowledgeable expert to copy those readily available contracts and to build a working copy of a country’s CBDC in the laboratory. Such code could then be inspected by an expert team of forensic technologists looking for flaws in the logic, and any flaws found could be used to develop potential attacks that could be switched on in an instant, possibly wrecking or at least seriously disrupting the economy.

Such attacks could potentially “freeze” a country’s programmable money (as in the first example above), steal money (as in the second example), or spread multiple versions of code, thus splitting the currency. A coordinated combination of simultaneous attacks could seriously disrupt commerce in any country dependent on a CBDC.

Thus, any programmable money scheme that relies on smart contracts not only creates economic difficulties but, more importantly, poses risks to national security. For regulators, this should mean discouraging the use of such architectures, and even banning them, until any unintended consequences can be understood. This does not mean that CBDCs should be discouraged but that any economic or technical architecture that opens a currency to attack should be ruled out as a candidate for implementation.

Patrick McConnell has taught Strategic, Operational and Technology Risk Management at Macquarie University, Sydney and Trinity College, Dublin.

This post is adapted from his paper, “Strategic and Technology Risks: The Case of Co-operative Bank” available on SSRN.
