
Merging formal methods and agile development to build Cardano

IOHK formal methods director Philipp Kant lays out our methodology for building software with flexibility and precision

9 April 2020 Philipp Kant 7 mins read


Form and function

IOHK is building Cardano into a global financial and social operating system. This enormous task requires both quick iteration and absolute precision, which is why IOHK has chosen to combine the speed of agile development with high-assurance code and formal methods. Fusing flexibility with formality has led our engineers to pioneer this modern development philosophy.

IOHK believes firmly in research, formal methods, functional programming, and building in a rigorous manner. As a competitor in the blockchain development industry, we also have to consistently demonstrate progress and create value for our global community of stakeholders. This means we can’t compromise on robustness or on development speed and flexibility. In an ever-changing marketplace this is a challenge, so our developers have to strike a balance.

Agility versus formality

The start-up standard for developing technology has been to build a minimum viable product quickly and then continually iterate until it is ready for the mass market. This is known as an agile process. It is a great way of showing that a project is advancing while eventually building a fully functional product. However, an agile methodology assumes there will be bugs and weaknesses in each step of development that can be ironed out later. This is fine if there is no value at risk – but, with virtual currencies, there is an enormous amount of money and stakeholder trust on the line.

Building a digital asset on a blockchain poses several challenges for organizing a development process. As a proof-of-stake cryptocurrency, Cardano is a distributed system operating in an adversarial environment where consistent performance is critical. The protocol has to maintain security in the face of malicious actors attempting sabotage, so no one can afford to build quickly and deal with problems later.

Trust is essential for a currency to be accepted, and correctness proofs are an important way to increase the trustworthiness of a system. The code should not only be correct; there should also be evidence of its correctness, such as extensive, meaningful tests and mathematical proofs. In a young industry like cryptocurrencies, IOHK engineers have to anticipate the addition of new features while maintaining the correctness guarantees established in the initial version. The platform can only scale globally if it is able to grow while maintaining security and utility for everyone. This is why Cardano developers streamlined their methodology, combining tools ranging from property-based testing all the way to machine-verifiable proofs, to create high-assurance software even in the presence of changing requirements.

Research to code

The methodology begins with scientific research. To date, IOHK has released more than 60 research papers that have contributed to creating the platform. Each paper examines a critical aspect of blockchain technology from first principles. How do we reach consensus in a decentralized way? How should a smart contract be designed? What is the right reward structure to incentivize good behavior? IOHK researchers examine these questions and submit their answers to scientific journals and conferences. These papers contain proofs that must pass rigorous peer review. Then, to ensure that the quality of our software does justice to the science, it is developed using formal methods.

In essence, this means that IOHK engineers specify mathematically what the code should do. That way, they can ensure that when the code runs, it exhibits the properties designed into it. The code is written in Haskell, a high-assurance functional programming language with a strong type system. While Haskell is a great tool for implementing reliable software, it is not foolproof, so the code still needs to be tested. A great way to write tests is with QuickCheck, which lets developers state properties that should always hold in a program. QuickCheck then generates random test cases and, when a property fails, shrinks the failing input to a minimal counterexample.
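To make this concrete, here is a minimal, self-contained example of the kind of property one can state with QuickCheck. The function and property are invented for this post, not taken from the Cardano codebase:

```haskell
import Data.List (sort)
import Test.QuickCheck

-- Insert an element into an already-sorted list, keeping it sorted.
insertSorted :: Int -> [Int] -> [Int]
insertSorted x [] = [x]
insertSorted x (y:ys)
  | x <= y    = x : y : ys
  | otherwise = y : insertSorted x ys

-- Property: inserting into a sorted list gives the same result as
-- sorting from scratch. QuickCheck generates the random inputs.
prop_insertKeepsSorted :: Int -> [Int] -> Bool
prop_insertKeepsSorted x xs = insertSorted x (sort xs) == sort (x : xs)

main :: IO ()
main = quickCheck prop_insertKeepsSorted
```

If `insertSorted` contained a bug, QuickCheck would not merely report some large random failing input; it would shrink it to a minimal counterexample, which makes the bug far easier to diagnose.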

In code that interacts with the external world, in particular network applications, it can be hard to find minimal counterexamples. This is because the order of execution is not deterministic: it can change every time the software is run. The same code can be run hundreds of times, and only fail once. We can get around this by using simulations with deterministic execution order. Running tests in simulation allows us to reliably find and fix a class of bugs in testing, which would otherwise only occur randomly in production.
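In the Cardano code this is the job of IOHK's open-source io-sim library. The toy sketch below is not io-sim, but it conveys the core idea in a self-contained way: make the schedule explicit data, then check an invariant under every delivery order. The `Msg` type and the invariant are hypothetical:

```haskell
import Data.List (permutations)

-- Hypothetical messages a node may receive from its peers.
data Msg = Credit Int | Debit Int

-- Apply one message to the node's state: a balance that is clamped
-- at zero rather than ever allowed to go negative.
apply :: Int -> Msg -> Int
apply n (Credit k) = n + k
apply n (Debit  k) = max 0 (n - k)

-- All intermediate states under one fixed, deterministic schedule.
statesUnder :: [Msg] -> [Int]
statesUnder = scanl apply 0

-- Check the invariant under every delivery order. A bug that a real
-- network would expose only on rare interleavings is found on every
-- test run. (Brute-force enumeration only scales to small message
-- sets; a real simulator controls the schedule instead.)
prop_neverNegative :: [Msg] -> Bool
prop_neverNegative msgs =
  all (all (>= 0) . statesUnder) (permutations msgs)

main :: IO ()
main = print (prop_neverNegative [Credit 5, Debit 3, Debit 4])
```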

Bridging the gap

To get a picture of the development methodology employed for Cardano, consider the metaphor of bridge building. When civil engineers build a bridge, a large portion of their time is spent behind a desk. They plan the design, perform the structural calculations, and order site surveys. During that time, nothing happens at the building site. An observer would be unable to see any progress being made. For building bridges, this is the correct approach. If the planning is not accurate, it is difficult and expensive to correct problems at a later stage. Ultimately, the result would be a delayed, more expensive bridge, or one that fails completely. Lack of visible progress is a price worth paying for a functional and safe bridge.

When building software, making changes at a late stage is much easier than in construction. That is what enables the common agile development approach. If an agile developer were building a bridge, they would construct one pillar in a rapid sprint and the next in a second sprint. The gap between the pillars would be spanned in a final sprint and, if things didn't hold up, the developers would add one more sprint to fix any issues. While progress would be demonstrable at the building site, the final product would likely have a great many problems built into it. This creates clean-up work at the end of the project that could have been avoided by better planning at an earlier stage. Furthermore, the minimum viable product would likely be given to a small group of people for a test drive, with the expectation that it would fail in order to alert developers to bugs. Needless to say, it is best that bridges aren't designed this way.

When hearing the words 'formal methods', a lot of people in software development think of the civil engineering approach, which is dubbed 'waterfall' and generally shunned. This is a common but unfortunate misunderstanding. In fact, using appropriate formal techniques allows us to have our cake and eat it too: to have an overall design (a deliberate design, not an accidental one that emerges from fitting together pieces developed in sprints), to show progress continuously, and to retain the ability to react to changing requirements.

A key technique employed by IOHK developers is executable specifications. These are high-level designs, abstracting over low-level details, written in a language that the computer can understand and execute. Executable specifications can be used as prototypes to show progress, get feedback from users, and test assumptions. On top of that, lower-level details can be added via successive refinements. Returning to the metaphor: our developers build the bridge to solve the biggest problems first, then add pillars to reinforce it later. In a software system, the pillars are features like saving data to disk or using performant algorithms, which are needed for a final product but are not essential to demonstrate the overall functionality.
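As a hedged illustration of this refinement style (again invented for this post, not drawn from the Cardano repositories), an executable specification can be an obviously correct but slow function, with the production implementation property-tested against it:

```haskell
import Data.List (nub)
import qualified Data.Set as Set
import Test.QuickCheck

-- Executable specification: obviously correct, but quadratic.
-- Hypothetical task: deduplicate a list, keeping first occurrences.
specDedup :: [Int] -> [Int]
specDedup = nub

-- Refined implementation: the same behavior, using a Set accumulator.
fastDedup :: [Int] -> [Int]
fastDedup = go Set.empty
  where
    go _    []     = []
    go seen (x:xs)
      | x `Set.member` seen = go seen xs
      | otherwise           = x : go (Set.insert x seen) xs

-- Refinement property: the implementation agrees with the spec.
prop_refines :: [Int] -> Bool
prop_refines xs = fastDedup xs == specDedup xs

main :: IO ()
main = quickCheck prop_refines
```

The property pins the fast implementation to the specification, so later optimizations cannot silently change behavior: that is the sense in which lower-level detail is added via successive refinements.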

Using executable specifications, we get the benefits of proper planning without sacrificing flexibility. IOHK developers can fix what the system should look like on a large scale, and then implement suitable components as needed. Continuous testing guarantees that each component fits the overall design. This helps prevent problems that are common in a late integration approach. With this methodology, we get the best of both worlds: we can use a top-down design (avoiding late integration troubles, having a good handle on the overall design at all times), and have working code early (demonstrating progress, and allowing for tests and feedback through the whole process).

Future proof

Ultimately, the method of construction should be determined by what is being built. IOHK is building a global social and financial operating system which requires rigor and speed. Formal versus agile is a false dichotomy. Instead, we’re continuing to develop our methodology which fuses the best of both approaches: formal techniques within an agile delivery framework, with robust, higher assurance code upon which we can build for all our futures.

Architecting Shelley: an interview with Duncan Coutts

A fireside chat with Duncan Coutts, Cardano's chief technical architect, about Haskell and delivering Shelley

7 April 2020 Eric Czuleger 6 mins read


Duncan Coutts has been an important guide on the road to the Cardano Shelley mainnet. Long-time supporters of IOHK are likely familiar with his signature long hair, beard, and penchant for drinking tea while discussing decentralization in front of a whiteboard. He recently sat down for an interview to discuss the upcoming Byron reboot, the Haskell Shelley testnet, and the conclusion of the pre-Shelley development cycle. Coutts, who has been working with IOHK since 2016, brings a wealth of knowledge from working with the Haskell programming language for nearly 20 years and helping to found the Well-Typed consultancy.

What’s your role at IOHK?

I’m the chief technical architect for the Cardano project and I’m primarily responsible for the design and implementation of the node. This means that I collaborate with the teams that work on consensus, ledger, networking, and other things. Ultimately, I work to bring everyone together around the same design after a discussion with the team leaders. The design of Cardano is the product of joint work by many individuals working together.

What does the Haskell programming language bring to Cardano?

Haskell is an enabler. It makes it easier for us to follow the approach that we believe is right, which is driven by computer science. We know how to do things properly; computer science tells us how. We just need to pick the appropriate techniques to do that. Haskell makes that easier.

It’s a good fit for Cardano because it suits the high-assurance, specification-driven software that is vital for a blockchain. Haskell helps us find systematic ways of avoiding mistakes. In essence, it’s a better mousetrap.

You’ve been working with Haskell for a long time. How have you seen the landscape of functional programming change?

People take it seriously now. When I started as an undergraduate in 1999, I thought that Haskell was amazing. Other students thought, ‘Wow that’s totally impractical. How will you ever get a job?’

At the time, functional programming was an academic curiosity. There was hardly any prebuilt code to draw on, which meant that Haskell wasn't usable by a wide range of people. There wasn't the tooling, range of libraries, or experience. That has changed over the years: the tooling got better, the libraries got better. IOHK has helped develop the infrastructure for building and distributing open-source Haskell code, and the number of libraries has exploded. That, combined with more teaching and a gradual change of attitude in the industry, means that people take it more seriously now. Haskell hasn't changed as much as the industry around us has.

What’s the biggest change from an industry point of view?

There are two things. The first is that attitudes are changing, albeit slowly. People are changing their opinions about what they consider a sensible language choice. Previously, everything had to be in C or Java or maybe Python, but eventually good ideas make progress, even if it takes a long time. You can make a lot of progress by just recognizing that a good idea is a good idea. The mainstream does pick up on important developments, even if it does take 10 or 15 years. The industry has not embraced functional programming wholesale yet, but individual programmers have taken up various ideas. That makes Haskell look less radical.

If you look at a language like Rust, it has some of the clever type-system ideas of Haskell, although it is not a functional programming language. Even Java and C++ have some functional programming ideas in them these days, so Haskell is not as far from the mainstream as it used to be.

The second major change has been performance, which is getting much better. We’ve recently become competitive with Java in terms of performance. It makes people say, ‘Wow, Haskell is so fast,’ but that’s because they’re comparing it to Python and PHP rather than C. So that’s another way of saying that Haskell has improved slightly, but the industry environment around it has changed as well.

You have been heavily involved in the Byron reboot which was kicked off last week. Why was this work important?

The Byron reboot is the culmination of over 18 months of hard work across multiple IOHK development teams, and constitutes a complete overhaul of the node infrastructure with 100% fresh code. The reboot introduces an extensible, modular design within the node itself, separating out the ledger, consensus, and networking components, as well as improvements and new functionality in the wallet backend and the Cardano explorer.

For Daedalus users, the Byron reboot will see us moving to a regular update cadence [see our recent piece on Daedalus Flight for more on that], after which they should find that Daedalus is faster, more reliable, and uses less memory. A lot of the issues users have experienced with Daedalus in the past were due to the underlying node, rather than Daedalus itself. The Byron reboot will go a long way to improving things, and users should see Daedalus syncing and restoring wallets within minutes, even when downloading the entire Cardano blockchain.

As the chief architect, your job is to lay the foundation for Cardano’s future. What have you focused on to achieve this?

The most important aspect in terms of flexibility for the future is keeping different functions separate. One of the big improvements of the Byron reboot is that the ledger rules will be totally independent of the consensus implementation; this modularity means that the ledger rules are pure mathematical functions, which is a core idea of functional programming.

As a result, everything is easier to test, tweak, and change, both now and in the future. The consensus algorithm isn't entangled with the details of the ledger rules, so we can alter the ledger rules without changing the consensus implementation. This makes integrating Plutus and smart contract functionality much easier and will also help in the future when we add treasury and governance features.
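A deliberately minimal sketch of what 'ledger rules as pure functions' means in practice. This is a toy account-style ledger, not Cardano's actual UTxO ledger or its API; the point is only that applying a transaction is a total function from state to either an error or a new state, with no IO and no reference to consensus:

```haskell
import qualified Data.Map.Strict as Map

type Address = String
type Coin    = Integer

newtype LedgerState = LedgerState (Map.Map Address Coin)
  deriving Show

data Tx = Tx { txFrom :: Address, txTo :: Address, txAmount :: Coin }

data LedgerError = NonPositiveAmount | InsufficientFunds
  deriving Show

-- A pure ledger rule: same inputs, same outputs, every time.
applyTx :: LedgerState -> Tx -> Either LedgerError LedgerState
applyTx (LedgerState balances) (Tx from to amt)
  | amt <= 0        = Left NonPositiveAmount
  | available < amt = Left InsufficientFunds
  | otherwise       = Right . LedgerState
      $ Map.insertWith (+) to amt
      $ Map.insert from (available - amt) balances
  where
    available = Map.findWithDefault 0 from balances

main :: IO ()
main = print (applyTx genesis (Tx "alice" "bob" 3))
  where genesis = LedgerState (Map.fromList [("alice", 10)])
```

Because `applyTx` is pure, it can be tested exhaustively in isolation and swapped out underneath an unchanged consensus layer, which is exactly the decoupling described above.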

The consensus implementation itself has also been parameterized so that we can transition from the Ouroboros Classic consensus protocol to BFT and then Praos, which also provides flexibility for future versions of the protocol that haven't been developed yet.
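A hedged sketch of what 'parameterizing the consensus implementation' can look like in Haskell. The class and its methods are invented for illustration; the real interface lives in the ouroboros-consensus code and is considerably richer:

```haskell
{-# LANGUAGE TypeFamilies #-}

newtype SlotNo = SlotNo Word deriving (Eq, Ord, Show)

-- Node code is written against this abstract interface, so Classic,
-- BFT, and Praos become interchangeable instances.
class ConsensusProtocol p where
  type ChainState p
  type Block p
  -- May this node forge a block in the given slot?
  isLeader    :: p -> ChainState p -> SlotNo -> Bool
  -- Which of two candidate chains should the node prefer?
  preferChain :: p -> [Block p] -> [Block p] -> Ordering

-- A toy BFT instance: leadership simply round-robins over node ids.
data Bft = Bft { bftNodeId :: Word, bftNumNodes :: Word }

instance ConsensusProtocol Bft where
  type ChainState Bft = ()
  type Block Bft      = ()
  isLeader (Bft me n) _ (SlotNo s) = s `mod` n == me
  preferChain _ c1 c2 = compare (length c1) (length c2)
```

A Praos instance would replace the round-robin rule with a stake-based slot lottery, leaving the rest of the node untouched.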

Shelley is a big step towards the future of Cardano, but what is the significance of compiling Haskell into JavaScript and WebAssembly?

We’re interested in compiling to JavaScript or WebAssembly because of Plutus. We want to have Plutus contracts or Plutus applications that can be distributed to users, which would include custom interfaces and custom logic with the user rather than in a server. Compiling to JavaScript allows us to do that; you can compile the Plutus code once and distribute it to users on different platforms.


Thanks to Duncan Coutts for his time. As chief technical architect, he’s a cornerstone of the Cardano project and has been fundamental to the ongoing success of the platform. For more interviews with the team, stay tuned to our social channels and the IOHK blog.

We need you for a Daedalus testing program!

A call for the Cardano community to help shape IOHK’s ada wallet for the Shelley era

1 April 2020 Anthony Quinn 7 mins read


The Daedalus team is opening up IOHK’s wallet testing program to ada owners. The aim is to seek the help of a broad range of people who can test – on a rolling basis – the latest interface features being readied for the next release version of Daedalus. This will be a fully-functioning version of the wallet, called Daedalus Flight, so you will be able to spend and receive ada as usual – and give the team valuable feedback to improve the experience with each release.

We put some questions to Daedalus product manager Darko Mijić to tease out the details for people who want to take part.

Darko, can you expand on the thinking behind the program?

I’ve been working at IOHK for almost four years. It’s been amazing to see the way the Incentivized Testnet has galvanized the Cardano community in the past few months. The community helped us develop the stake pools for Shelley, and, of course, to test and develop a version of the Daedalus wallet for the ITN. So, building on that strategy, we came up with the idea of Daedalus Flight. This is a pre-release version of the Daedalus wallet that users can test with real ada transactions on the mainnet. People can join the program by downloading and using the Flight wallet alongside their usual Daedalus – or Yoroi – wallet to find and report issues so we can fix them before the actual release. We’ve seen that people want to help – and they will gain early access to the new features of the next production Daedalus release for the mainnet.

It’s similar to beta testing, or the ‘canary’ approach Google uses for its Chrome browser. Adding this tactic to our release process creates another valuable source of testing data. We will be able to improve the quality assurance of our new software versions and move more quickly into full production by first trialing these changes in Daedalus Flight with a subset of participating users.

But is it for everyone?

This program is especially important because it will give us a bigger sample of users who are using Daedalus in various ways on a wide variety of software and hardware configurations. We want to bring confident ada owners on board to help us develop the wallet by testing features and suggesting their own ideas. People who volunteer for this must already be familiar with using the Daedalus wallet. People need to be skilled in using computers – copying and backing up files, moving things around between folders – and prepared to deal with any issues that arise. We’re not expecting IT experts, but you do need to be confident about computers and making ada transactions.

How will the process work?

We complete a Daedalus development sprint every two weeks, and quality assurance is a continuous effort during the development of every feature. At the end of each sprint, we will now create a new Daedalus Flight release and, starting with the first flight candidate, test it internally. But testing in a lab is not a real test for a mass-market desktop product. Under the new flight process, we will release a series of flight candidates and work on finding and fixing issues with members of the Cardano community who have joined the program to help us. Flight candidate releases will be delivered through the Daedalus newsfeed, after an initial download from a new Flight option on the official Daedalus website, which is the only source of the Flight wallet.

Once we reach our final candidate, and we are confident that the release ‘can fly’, we will be releasing it to production for all Daedalus users.

This process will repeat every two weeks. There will be instances when we don’t release the production version after completing a flight release. This will happen when we are building big, multi-sprint features that cannot be delivered partially.

When you download Daedalus Flight, you can compare balances and transaction histories and see that all your wallet data has moved across. If there’s a problem, you just report it and go back to using your usual wallet while we fix things and release another candidate.

Why is this on the mainnet? Won’t it put our ada at risk?

Your ada will not be at risk. The Flight wallet is a real wallet and can do everything your usual wallet can. We’re adding features to make things easier for users. We could have done this on a testnet but the real test is where the real wallets live, on the mainnet. ‘Real’ user testing only happens after a wallet update is released to the mainnet. This flight process is secure because it is a completely separate wallet installation. We import your wallets from your production version of Daedalus. The production version of Daedalus stays untouched and fully functional.

So, we keep our present wallets?

Yes, you keep all your present wallets. You can use them alongside each Daedalus Flight candidate.

How will we give feedback?

There is an option to open a support request directly from the Daedalus interface. These requests will be handled separately by the support desk, and you can attach a wallet log so we can investigate your problem. If you just want to suggest an idea, you can do that too by clicking ‘Support request’ in the help menu.

Can you give us a sneak preview of the features?

Well, first of all, there will be Yoroi support in the new Flight wallet. Yoroi users will be able to use the same wallet both in Flight and Yoroi. Transactions will be in sync. Then, there will be a transaction filtering function in your transaction history so you can filter transactions by type, date, and ada amount. There will also be a warning in the transaction confirmation window making sure users understand that Daedalus Flight transactions are real ada transactions on the mainnet. Other neat touches will include parallel wallet restoration and a resync wallet function. Later on, we will be adding cool features like hardware wallet support.

How long will the Daedalus Flight program last?

The program started with the Byron reboot on March 31, with the first flight release and its candidate #1 release. If we discover and fix issues, we will issue a second candidate. When we get to a fully stable candidate, we will release it to all Daedalus users. April 6 is the planned date for the first production-ready Daedalus, but this depends on the results and user data from Daedalus Flight.

The next release and its series of candidates can be expected on April 14, and then every two weeks. Because this is a new product, we will try this process out for a month or two. If successful, we plan to make it a permanent practice. It is important to note that the special Daedalus version for the Incentivized Testnet will still exist – that is a completely separate product.

Any final message?

People should be clear that the Daedalus Flight wallet is used with your mainnet ada. It’s a real wallet with all the core functions of Daedalus. It’s just the new features, which are not to do with transactions, that we’re testing. Going back to the Chrome comparison, think of it as buying something from Amazon using a beta test release of a web browser; you are doing real things with test software.

We hope you’ll be as excited as we are by this innovation – and can help our Daedalus development really fly! And remember, for the latest information and news, visit the official Daedalus website.

What the Byron reboot means for Cardano

New code delivers big benefits for the network and Daedalus users as we prepare for Shelley

30 March 2020 Tim Harrison 5 mins read


The Byron reboot is a series of updates to multiple components of the Cardano network: chiefly the Cardano node, but also the Cardano explorer, the wallet backend, and the Daedalus wallet itself. The first part of the Byron reboot – a totally new node implementation – has already been deployed to some relay nodes on the network, and the next few weeks will see core network nodes and more relays incrementally upgraded to the new system. Soon, users will be able to experience the node improvements directly via a new version of the Daedalus wallet.

Why do we need a reboot?

To prepare for the future, and for Shelley. The node that launched at the start of the Byron era was only ever going to take us so far. The new node implementation has been designed from the ground up to support not only imminent Shelley features, such as delegation and decentralization, but anything else that the future has in store. The improved design is modular, separating the ledger, consensus, and network components of the node, allowing any one of them to be changed, tweaked, and upgraded without affecting the others.

The reboot has also been an opportunity to apply evidence-based formal methods and testing to every single aspect of the node. Rather than trying to make these substantial improvements to the existing code, it was more effective to start from scratch. All critical elements of the new node have been formally specified, and the final implementation tested against those specifications. Code quality and performance are now significantly better and more robust across the board, and the node is easier to test and verify going forward.

What’s involved?

Improvements to the Cardano node, of course, but work has also been underway to improve the Cardano explorer, the wallet backend, and Daedalus itself. The new explorer has been redesigned to make it easier to use, as well as having an updated visual design and more information available. The improved wallet backend and associated services, collectively known as Adrestia, will allow exchanges and third-party developers to engage with the Cardano network using a collection of independent, self-contained libraries. The newly extended APIs have been explicitly designed to meet the needs of larger exchanges, enabling ada to be supported on even more platforms. And finally, a new and improved Daedalus release will see Yoroi wallet support, transaction filtering, parallel wallet restoration – not to mention some significant performance improvements.

How will it impact the Cardano network?

At the most fundamental level, the Byron reboot will improve the performance of the entire Cardano network. Transaction throughput capacity will increase, so the network will be able to handle much higher demand. The improvements also make the node more efficient and reliable in terms of memory usage, increasing the viability of running a Cardano node on a low-spec machine or in poor network conditions, which, in turn, enables more users across the world to participate in the Cardano network.

What does it mean for Daedalus users?

First, there’ll be a new rolling release of Daedalus that is designed to allow users to test the new node functionality and wallet backend, and to provide us with feedback. We’ll be sharing more about how that will work in our monthly product update on Tuesday, March 31. Once the wallet has been user tested, feedback will be implemented in a new production version of Daedalus. Many of the issues users have experienced with Daedalus in the past were due to the underlying node, rather than Daedalus itself. The new node will go a long way to improving performance, and users should see Daedalus syncing and restoring wallets at significantly improved speeds.

What’s next?

The reboot commences with the deployment of a CLI and associated APIs to exchanges and wallet partners. The new node will then be rolled out to more core and relay nodes on the Cardano mainnet over the next few weeks, followed by the other reboot components and continuing Daedalus improvements in the months ahead. The goal is to gradually and sustainably migrate the entire Cardano network to the new node implementation, without any disruption or loss of service. After that comes the Haskell Shelley testnet, which will include onboarding stake pool operators from the Incentivized Testnet to help them set up and prepare to run their pools on the Shelley mainnet.

The Byron reboot is the culmination of over 18 months of work by several IOHK development teams and represents a significant investment in the network-critical infrastructure required to support the Shelley era of Cardano. For more details on the reboot and how it will all roll out, tune in to March’s product update on the IOHK Crowdcast stream on March 31. Cardano product director Aparna Jue and members of her team will join me to discuss the latest on the reboot and the next steps for the project.

Enter the Hydra: scaling distributed ledgers, the evidence-based way

Learn about Hydra: the multi-headed ledger protocol

26 March 2020 Prof Aggelos Kiayias 10 mins read


Scalability is the greatest challenge to blockchain adoption. By applying a principled, evidence-based approach, we have arrived at a solution for Cardano and networks similar to it: Hydra. Hydra is the culmination of extensive research, and a decisive step in enabling decentralized networks to securely scale to global requirements.

What is scalability and how do we measure it?

Scaling a distributed ledger system refers to its capability to provide high transaction throughput, low latency, and minimal storage per node. These properties have been repeatedly touted as critical for the successful deployment of blockchain protocols as part of real-world systems. In terms of throughput, the VISA network reportedly handles an average of 1,736 payment transactions per second (TPS), with the capability of handling up to 24,000 TPS, and is frequently used as a baseline for comparison. Transaction latency should clearly be as low as possible, with the ultimate goal of appearing instantaneous to the end user. Other applications of distributed ledgers have a wide range of different requirements for these metrics. When designing a general-purpose distributed ledger, it is natural to strive to excel on all three counts.

Deploying a system that provides satisfactory scaling for a certain use case requires an appropriate combination of two independent aspects: adopting a proper algorithmic design and deploying it over a suitable underlying hardware and network infrastructure.

When evaluating a particular algorithmic design, considering absolute numbers in terms of specific metrics can be misleading. The reason is that such absolute quantities must refer to a particular underlying hardware and network configuration which can blur the advantages and disadvantages of particular algorithms. Indeed, a poorly designed protocol may still perform well enough when deployed over superior hardware and networking.

For this reason, it is more insightful to evaluate the ability of a protocol to reach the physical limits of the underlying network and hardware. This can be achieved by comparing the protocol with simple strawman protocols, in which all the design elements have been stripped away. For instance, if we want to evaluate the overhead of an encryption algorithm, we can compare the communication performance of two endpoints using encryption against their performance when they simply exchange unencrypted messages. In such an experiment, the absolute message-per-second rate is unimportant. The important conclusion is the relative overhead added by the encryption algorithm. Moreover, if the overhead approaches 0 for some configuration of the experimental setup, we can conclude that the algorithm approaches the physical limits of the underlying network's message-passing ability for that configuration, and is hence optimal in this sense.
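As a worked illustration of this methodology, with entirely made-up numbers, the quantity of interest is the relative overhead rather than any absolute rate:

```haskell
-- Relative overhead of a protocol against its stripped-down baseline:
-- 0.0 means the protocol reaches the physical limit of the setup.
relativeOverhead :: Double -> Double -> Double
relativeOverhead baselineRate protocolRate =
  (baselineRate - protocolRate) / baselineRate

-- Hypothetical measurement: plaintext peers exchange 10,000 msg/s,
-- encrypting peers 9,800 msg/s, so encryption costs about 2% here.
main :: IO ()
main = print (relativeOverhead 10000 9800)  -- 0.02
```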

Hydra – a 30,000-foot view

Hydra is an off-chain scalability architecture for distributed ledgers, which addresses all three of the scalability challenges mentioned above: high transaction throughput, low latency, and minimal storage per node. While Hydra is being designed in conjunction with the Ouroboros protocol and the Cardano ledger, it may be employed over other systems as well, provided they share the necessary salient characteristics with Cardano.

Despite being an integrated system aimed at solving one problem – scalability – Hydra consists of several subprotocols. This is necessary because the Cardano ecosystem itself is heterogeneous, consisting of multiple entities with differing technical capabilities: the system supports block producers with associated stake pools and the high-throughput wallets used by exchanges, but also end users with a wide variety of computational performance and availability characteristics. It is unrealistic to expect that a one-size-fits-all, single-protocol approach would suffice to provide overall scalability for such a diverse set of network participants.

The Hydra scalability architecture can be divided into four components: the head protocol, the tail protocol, the cross-head-and-tail communication protocol, and a set of supporting protocols for routing, reconfiguration, and virtualization. The centerpiece is the 'head' protocol, which enables a set of high-performance and high-availability participants (such as stake pools) to very quickly process large numbers of transactions with minimal storage requirements by way of a multiparty state channel – a concept that generalizes two-party payment channels as implemented in the context of the Lightning network. It is complemented by the 'tail' protocol, which enables those high-performance participants to provide scalability for large numbers of end users who may use the system from low-power devices, such as mobile phones, and who may be offline for extended periods of time. While heads and tails can already communicate via the Cardano mainchain, the cross-head-and-tail communication protocol provides an efficient off-chain variant of this functionality. All this is tied together by routing and configuration management, while virtualization facilitates faster communication by generalizing head and tail communication.

The Hydra head protocol

The Hydra head protocol is the first component of the Hydra architecture to be publicly released. It allows a set of participants to create an off-chain state channel (called a head) wherein they can run smart contracts (or process simpler transactions) among each other without interaction with the underlying blockchain in the optimistic case where all head participants adhere to the protocol. The state channel offers very fast settlement and high transaction throughput; furthermore, it requires very little storage, as the off-chain transaction history can be deleted as soon as its resulting state has been secured via an off-chain 'snapshot' operation.

Even in the pessimistic case where any number of participants misbehave, full safety is rigorously guaranteed. At any time, any participant can initiate the head's 'closure' with the effect that the head's state is transferred back to the (less efficient) blockchain. We emphasize that the execution of any smart contracts can be seamlessly continued on-chain. No funds can be generated off-chain, nor can any single, responsive head participant lose any funds.
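To fix intuition, here is a hedged sketch of the head lifecycle as a Haskell data type. The constructor names are illustrative simplifications of the protocol in the Hydra paper, not a real API:

```haskell
type Participant = String
type Snapshot    = String  -- stand-in for a certified off-chain state

-- The states a head passes through, from the mainchain's viewpoint.
data HeadState
  = Initialising [Participant]  -- participants commit funds on-chain
  | Open                        -- all activity now happens off-chain
  | Closed Snapshot             -- a participant posted the latest
                                --   certified snapshot, closing the head
  | Finalised                   -- that state is settled back on the
                                --   (slower) mainchain
  deriving Show
```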

The state channels implemented by Hydra are isomorphic in the sense that they make use of the same transaction format and contract code as the underlying blockchain: contracts can be directly moved back and forth between channels and the blockchain. Thus, state channels effectively yield parallel, off-chain ledger siblings. In other words, the ledger becomes multi-headed.

Transaction confirmation in the head is achieved in full concurrency by an asynchronous off-chain certification process using multi-signatures. This high level of parallelism is enabled by use of the extended UTxO model (EUTxO). Transaction dependencies in the EUTxO model are explicit, which allows for state updates without unnecessary sequentialization of transactions that are independent of each other.
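A toy illustration of why explicit dependencies enable this parallelism (these are not Cardano's actual transaction types): in a UTxO-style ledger, two transactions conflict only if they try to consume the same outputs, so disjoint transactions need no ordering between them:

```haskell
import qualified Data.Set as Set

type TxIn = (String, Int)  -- (id of the producing tx, output index)

data Tx = Tx { txId :: String, txInputs :: Set.Set TxIn }

-- Transactions with disjoint input sets are independent and can be
-- validated and confirmed concurrently.
independent :: Tx -> Tx -> Bool
independent a b = Set.null (txInputs a `Set.intersection` txInputs b)

main :: IO ()
main = print $ independent
  (Tx "tx1" (Set.fromList [("genesis", 0)]))
  (Tx "tx2" (Set.fromList [("genesis", 1)]))
-- True: tx1 and tx2 spend different outputs, so neither must wait
-- for the other. In an account-based model this information is
-- implicit, forcing conservative sequentialization.
```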

Experimental validation of the Hydra head protocol

As a first step towards experimentally validating the performance of the Hydra head protocol, we implemented a simulation. The simulation is parameterized by the time required by individual actions (validating transactions, verifying signatures, etc.), and carries out a realistic and timing-correct simulation of a cluster of distributed nodes forming a head. This results in realistic transaction confirmation time and throughput calculations.

We see that a single Hydra head achieves up to roughly 1,000 TPS, so by running 1,000 heads in parallel (for example, one for each stake pool of the Shelley release), we should achieve a million TPS. That’s impressive and puts us miles ahead of the competition, but why should we stop there? 2,000 heads will give us 2 million TPS – and if someone demands a billion TPS, then we can tell them to just run a million heads. Furthermore, various performance improvements in the implementation can improve the 1,000 TPS single head measurement, further adding to the protocol’s hypothetical performance.

So, can we just reach any TPS number that we want? In theory the answer is a solid yes, and that points to a problem with the dominant usage of TPS as a metric to compare systems. While it is tempting to reduce the complexity of assessing protocol performance to a single number, in practice this leads to an oversimplification. Without further context, a TPS number is close to meaningless. In order to properly interpret it, and make comparisons, you should at least ask for the size of the cluster (which influences the communication overhead); its geographic distribution (which determines how much time it takes for information to transit through the system); how the quality of service (transaction confirmation times, providing data to end users) is impacted by a high rate of transactions; how large and complicated the transactions are (which has an impact on transaction validation times, message propagation time, requirements on the local storage system, and composition of the head participants); and what kind of hardware and network connections were used in the experiments. Changing the complexity of transactions alone can change the TPS by a factor of three, as can be seen in the figures in the paper (refer to Section 7 – Simulations).

Clearly, we need a better standard. Is the Hydra head protocol a good protocol design? What we need to ask is whether it reaches the physical limits of the network, not what single TPS number it can post. Thus, for this first iteration of the evaluation of the Hydra head protocol, we used the following approach to ensure that the data we provide is genuinely meaningful:

  1. We clearly list all the parameters that influence the simulation: transaction size, time to validate a single transaction, time needed for cryptographic operations, allocated bandwidth per node, cluster size and geographical distribution, and limits on the parallelism in which transactions can be issued. Without this controlled environment, it would be impossible to reproduce our numbers.
  2. We compare the protocol’s performance to baselines that provide precise and absolute limits of the underlying network and hardware infrastructure. How well we approach those limits tells us how much room there would be for further improvements. This follows the methodology explained above using the example of an encryption algorithm.

We use two baselines for Hydra. The first, Full Trust, is universal: it applies to any protocol that distributes transactions amongst nodes and insists that each node validate transactions one after the other – without even ensuring consensus. This yields a limit on TPS by simply adding the message delivery and validation times. How well we approach this limit tells us what price we are paying for consensus, without relying on comparison with other protocols. The second baseline, Hydra Unlimited, yields a TPS limit specifically for the head protocol and also provides the ideal latency and storage for any protocol. We achieve that by assuming that we can send enough transactions in parallel to completely amortize network round-trip times and that all actions can be carried out when needed, without resource contention. The baseline helps us answer the question of what can be achieved under ideal circumstances with the general design of Hydra (for a given set of values of the input parameters) as well as evaluate confirmation latency and storage overhead against any possible protocol. More details and graphs for those interested can be found in our paper (again, Section 7 – Simulations).
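To make the Full Trust baseline concrete, here is a sketch with purely hypothetical timings (the real limits depend on the parameters listed in point 1 above): if every node must receive and then validate each transaction in sequence, throughput is bounded by the reciprocal of the summed per-transaction times:

```haskell
-- Full Trust baseline: transactions are delivered and validated one
-- after the other, with no consensus at all. TPS is bounded by the
-- reciprocal of per-transaction delivery + validation time.
fullTrustTps :: Double -> Double -> Double
fullTrustTps deliverySecs validateSecs = 1 / (deliverySecs + validateSecs)

main :: IO ()
main = print (fullTrustTps 0.0005 0.0005)
-- 1000.0: with (made-up) half-millisecond delivery and validation
-- times, no protocol in this setting can exceed 1,000 TPS, however
-- clever it is. The question is how closely a protocol approaches it.
```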

What comes next?

Solving the scalability question is the holy grail for the whole blockchain space. The time has come to apply a principled, evidence-based approach in designing and engineering blockchain scalability solutions. Comparing scalability proposals against well-defined baselines can be a significant aid in the design of such protocols. It provides solid evidence for the appropriateness of the design choices, and ultimately leads to the engineering of effective and performant distributed ledger protocols that will provide the best possible absolute metrics for the use cases of interest. As the Hydra head protocol is implemented and tested, we will, in time, release the rest of the Hydra components, following the same principled approach.

As a last note, Hydra is the joint effort of a number of researchers, whom I'd like to thank. These include Manuel Chakravarty, Sandro Coretti, Matthias Fitzi, Peter Gaži, Philipp Kant, and Alexander Russell. The research was also supported, in part, by EU Project No. 780477, PRIVILEDGE, which we gratefully acknowledge.