Cardano reaches development milestone

Engineers conclude Byron phase as Shelley work progresses

22 March 2019 Duncan Coutts 6 mins read

The release of Cardano 1.5 marks the start of the shift from the Cardano Byron era to the Shelley era and is an excellent opportunity to describe the ongoing work for Shelley and how the transition will happen. Roughly six months ago, we switched almost all our development efforts to the Shelley code base, and work has been progressing quickly ever since. The last major work in the Byron code base was completed for Cardano 1.4, and for 1.5 we limited work to only those changes required for a smooth transition to Shelley.

The Shelley code base is not just an extension to the Byron code base, but an entirely new foundation. For the Shelley era, we have taken the opportunity to strip back and rebuild the system, as well as including the new staking and delegation functionality. As a result, we have been able to remedy a number of architectural limitations in the Byron code, as well as engage in the semi-formal software development approach that I keep discussing in my videos.

In particular, we now have formal mathematical specifications of the validation rules for the Byron and Shelley eras of the blockchain, and will present these specifications at the IOHK summit in April. When development is complete, we will be able to provide evidence that the code correctly implements our specifications. This is an exciting step-change in system quality and will be a first for our industry.

A seamless transition

We must, of course, manage the transition from Byron to Shelley very carefully. It is not just a significant change in the rules, but also a migration from one code base to another. We have gone to great lengths to ensure that the transition process will be as smooth as possible.

While you might expect the Shelley transition to involve a single hard fork, it will actually comprise two. It is worth emphasizing that while these are technically hard forks, they will not be disruptive in the way hard forks often are. The changes have been designed to use our existing update system and be minimally disruptive. For Daedalus users, it will be very much like any other update.

For both hard forks, we will deploy an update which includes the rules of the new era in an unactivated state, to be activated several weeks later. This is key to avoiding disruption at the hard fork: no software is updated at the moment of the hard fork itself. The software update happens earlier, and once everyone is ready we can smoothly activate the change.
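
To make the activation mechanics concrete, here is a minimal sketch of the idea. The names and the activation value are hypothetical; the real node agrees the activation point through its on-chain update system rather than a hard-coded constant.

```haskell
-- Minimal sketch (hypothetical names and values): the new era's rules ship
-- in the software ahead of time, in an inactive state, and only switch on
-- at an agreed activation point, so nothing changes at the moment of the
-- hard fork itself.
newtype EpochNo = EpochNo Int
  deriving (Eq, Ord, Show)

data Era = CurrentEra | NewEra
  deriving (Eq, Show)

-- In reality this is agreed weeks in advance via the update system; the
-- value here is purely illustrative.
activationEpoch :: EpochNo
activationEpoch = EpochNo 200

eraAt :: EpochNo -> Era
eraAt epoch
  | epoch >= activationEpoch = NewEra
  | otherwise                = CurrentEra
```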

The only difference between a hard fork and a regular update is that updating is compulsory between the software release and the hard fork activation. For Daedalus users, this happens via the standard software update system. Exchanges will have to upgrade manually, but they have several weeks to do so.

Why two hard forks?

For technical reasons, the transition from Byron to Shelley is more straightforward if we go via an intermediate transitional era. There is one hard fork to enter the transitional era and then a second one to begin the Shelley era proper. The Byron era uses Ouroboros Classic, and the Shelley era uses Ouroboros Genesis (which is an extension of Ouroboros Praos). Both of these are complex protocols. For a single implementation of a full node to manage a hard fork smoothly it is necessary for it to implement the rules both before and after the hard fork. A direct hard fork from Byron to Shelley would require a single implementation to understand Ouroboros Classic, Ouroboros Genesis, and all of the other validity rules – which is a very complicated prospect indeed.

Not only that, but the Byron version of Ouroboros Classic has some additional complexity that would need to be replicated in a new implementation to preserve perfect consensus. Instead, we are using Ouroboros BFT, a simple variant of Ouroboros, for the transitional era. This means that the Byron code base only has to understand Ouroboros Classic and Ouroboros BFT, while the Shelley code base only has to understand Ouroboros BFT and Ouroboros Genesis. Neither one has to understand both Ouroboros Classic and Ouroboros Genesis. In particular, this means that the new Shelley code base does not need to replicate every detail of the Byron implementation of Ouroboros Classic, achieving a genuine reduction in complexity – and in software development, complexity is the enemy.
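
One way to picture this split is with era-tagged block types: each code base only has to handle the two eras it spans, so neither ever needs to carry Ouroboros Classic and Ouroboros Genesis at the same time. The sketch below uses hypothetical types and is not the real node code.

```haskell
-- Stand-in payload types, for illustration only.
data ClassicBlock = ClassicBlock   -- a block under Ouroboros Classic rules
data BftBlock     = BftBlock       -- a block under Ouroboros BFT rules
data GenesisBlock = GenesisBlock   -- a block under Ouroboros Genesis rules

-- The Byron code base only needs to understand Classic and BFT blocks...
data ByronEraBlock
  = ByronClassic ClassicBlock
  | ByronBft     BftBlock

-- ...while the Shelley code base only needs to understand BFT and Genesis
-- blocks. The transitional BFT era is the common ground between the two.
data ShelleyEraBlock
  = ShelleyBft     BftBlock
  | ShelleyGenesis GenesisBlock
```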

A transitional era

So this explains what the Cardano 1.5 release is really for: it is the release in which the Byron code base begins to understand Ouroboros BFT, allowing us to complete the first managed hard fork in a few weeks' time. After the hard fork, we will be in the transitional era using Ouroboros BFT and will be able to start deploying the new code base over time as it is developed. This is the new code base that will be used for the Shelley releases later, but it will initially still use Ouroboros BFT for perfect compatibility during the transition.

During this transitional period, we will also run a testnet for delegation and staking. Initially, this testnet will use a subset of the Shelley rules, but we will update it over time until the full Shelley rules are implemented and any issues uncovered by the testnet are resolved.

Once we are satisfied with the full implementation of the Shelley rules, we will deploy an update of the new code base on mainnet. A few weeks later we will activate the hard fork, and then we will finally be in the Shelley era on mainnet!

In summary, the Cardano 1.5 release is exciting not because of any major features, or the numerous incremental improvements in Daedalus, but because it is the milestone that marks the beginning of the end for Cardano Byron and the start of the transition process into Cardano Shelley.

Cardano 1.1.0 software update

A major package of improvements and fixes is ready for users

7 March 2018 Duncan Coutts 4 mins read

The software update today is the first major release for Cardano since the mainnet was launched at the end of September, and it represents a great deal of work from the development team. The release contains a few new features aimed at improving the user experience, and it also contains a set of important fixes for many of the bugs that were identified since the last release, Cardano SL 1.0.3. Here is Charles Hoskinson, CEO of IOHK, with a video update about this release, and below we outline the most significant changes delivered. Users will see the changes take effect tomorrow.

The team has been working hard to address the issues some users have experienced with Daedalus and this update contains fixes for some of the problems.

With this release, Daedalus will detect when the time on a user's machine is out of sync with the global time and will display an error message asking the user to fix the issue. Before this feature was added, a time difference of 20 seconds or more meant that the Cardano node was unable to connect to the network and validate the blockchain, and Daedalus would be held on the loading screen with the "Connecting to network" message. This feature will eliminate the problem of users being held on the loading screen because of this time difference issue.
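
As a rough illustration of the behaviour described above, here is a minimal sketch of the drift check. The names are hypothetical and only the 20-second threshold is taken from this post; the real Daedalus and Cardano code is structured differently.

```haskell
-- Minimal sketch (hypothetical names): flag the wallet as out of sync when
-- the local clock drifts from a reference time source by more than the
-- threshold mentioned above.
import Data.Time.Clock (NominalDiffTime, UTCTime, diffUTCTime)

-- Threshold taken from the behaviour described in this post.
maxAllowedDrift :: NominalDiffTime
maxAllowedDrift = 20  -- seconds

isClockOutOfSync :: UTCTime   -- local machine time
                 -> UTCTime   -- reference (global) time
                 -> Bool
isClockOutOfSync localTime referenceTime =
  abs (diffUTCTime localTime referenceTime) > maxAllowedDrift
```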

Several other causes of users being stuck on the "Connecting to network" screen have been partially or completely fixed. Problem areas include node shutdown, networking and the block retrieval mechanisms.

A new "Support request" feature enables users to report a problem directly from Daedalus. This will automatically include log files along with the problem report. By always including log files, this feature will help the development team to investigate and solve the problems that users are experiencing. This feature is accessible from the main user interface and from the loading screen when there is a delay while connecting to the network or when blockchain syncing stops.

Blockchain retrieval performance and reliability have been gradually improved; in particular, bugs have been fixed that caused a significant slowdown in syncing after reaching 99.9% and that caused occasional network disconnections. Handling of whether Daedalus is connected or disconnected has also been improved, and a lost internet connection is now detected, bringing the user to the loading screen to indicate that the wallet is not currently operational.

In addition to individual fixes, this major release is, importantly, the first time-based release containing significant new code, and it represents an improvement in our development process. All previous releases of Cardano were scope-based, i.e. the goal was to deliver a particular scope, and the release was often repeatedly postponed because of inaccurate estimates of when that scope would be ready for release.

There is much debate among software developers about which release process is preferable. As was outlined in our previous blog post, the Cardano team has chosen time-based releases. We had a significant backlog of work to resolve before we could release our development branch to the mainnet, and a substantial amount of testing had to be performed because of the large amount of new code. But with the release of 1.1.0, we have made a major step forward.

There will be two more time-based releases in the next couple of months containing more improvements, fixes and new features for Cardano. New features for the Shelley phase of development will begin to be released in Q2 and continue through Q3. For more information see the Cardano Roadmap.

How we test Cardano

The importance of testing our cryptocurrency systems

30 August 2017 Duncan Coutts 8 mins read

Testing is of course critically important to a cryptocurrency because the correctness and robustness of the system are what you rely on to keep your money safe, and ensure that you can spend it when you need to. So as you would expect, as our development team is getting ready for the Cardano mainnet release, testing is one of the main things that is on our minds.

There are many different ways in which we test Cardano and in this post we will talk about several.

Testing can be divided into two main kinds: functional and non-functional:

  • Functional testing is about checking that all the system's components meet their specifications.

    Functional testing is done with components on their own, in which case we call it unit testing or component testing. It is also done with all the components together, in which case we call it integration testing or system testing.

    These kinds of tests are typically of the form: given some scenario, and certain inputs, the component or system produces the correct output or takes the correct next action.

  • Non-functional testing reveals "how" the system behaves, including the performance of the system, the resources it uses and how the system behaves when under great load or attack.

We have a few major parts of the Cardano system: the core, the wallet backend and the Daedalus frontend. Different parts of the system are appropriate to test in different ways.

Public testnets


The most visible form of testing is of course the public testnets, where we ask users to try the system out. This is a kind of beta testing. It is just the tip of the iceberg compared with all the testing we do internally, but it is still very useful because it covers a different set of problems from our internal tests.

Users have a huge variety of desktop computers, both in hardware and configuration. It is impossible for us to test all the combinations that our users have. So having lots of real users try out the system really helps to find those strange combinations where something does not work well, and gives us the confidence that we will not bump into similar problems for mainnet.

A testnet release helps us test the usability of the system: our websites, the installers, the Daedalus interface and how many cryptocurrency concepts people need to know to use the system.

There is no escaping the fact that a public testnet release is in some ways more "real" than any test situations we can construct artificially. Though we can certainly push the system to breaking point using our internal stress tests, there are complexities of a real world deployment that are hard to replicate in an artificial test.

Finally, it also gives our team practice in making public releases, which helps us work out the kinks in our processes so that we can avoid problems during the mainnet release or later updates. And it's not just our developers and technical operations teams: a successful launch also depends on our communications and support teams. The very process of getting questions, feedback and problem reports from users during the testnet phases helps us to make sure that our support teams have the right procedures in place, so that we can be confident they can help everyone effectively during and after the mainnet launch.

Automatic unit and component testing


We have an increasing collection of fully automatic functional tests that cover various important parts of the logic in the core and wallet backend. These are functional tests in that they check that each component meets its specification.

These tests are run automatically by our continuous integration system, which means they are run before any change to the code is accepted into our master branch. This helps to protect us against introducing regressions.

Wherever possible we make use of property-based testing, rather than simple individual unit tests. Classic unit tests for a component tend to use a specific set of inputs and check that a specific output is produced. To comprehensively test a component in this style often requires a large number of specific pairs of input and expected output. This is laborious and tends to miss corner cases that programmers do not think of.

By contrast, property-based testing involves taking the component's high-level specification and reformulating it as an executable property: for any specific inputs, the property can actually be executed to check that it holds for those inputs. These properties are expressed in the same programming language as the code being tested. The property is then checked against hundreds or thousands of test inputs, produced by systematic random generation. This means that programmers do not have to think of lots of test inputs, and it avoids human bias, so it tends to give much better test coverage with less effort.

Specifically, we use the QuickCheck system for property-based testing. Perhaps the greatest advantage is that it makes developers think in terms of the specification and properties of their code, rather than individual inputs and outputs. This is a much higher-level way of thinking about code and helps to produce simpler, more reliable code.
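
To give a flavour of what such a property looks like, here is a small, self-contained QuickCheck example. It is purely illustrative: the Amount type, the encode and decode functions and the round-trip property are invented for this post and are not taken from the Cardano code base.

```haskell
import Test.QuickCheck
import Text.Read (readMaybe)

-- A stand-in type, invented for this illustration.
newtype Amount = Amount Integer
  deriving (Eq, Show)

instance Arbitrary Amount where
  arbitrary = Amount <$> arbitrary

-- A toy serialisation pair whose specification is simply that decoding
-- undoes encoding.
encode :: Amount -> String
encode (Amount n) = show n

decode :: String -> Maybe Amount
decode s = Amount <$> readMaybe s

-- The executable property: for every generated input, decoding the encoded
-- value gives back the original.
prop_roundTrip :: Amount -> Bool
prop_roundTrip x = decode (encode x) == Just x

main :: IO ()
main = quickCheck prop_roundTrip  -- checks the property on 100 random inputs
```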

System level tests and performance tests


While all the unit and component testing gives us confidence that each part of the system works correctly on its own, system-level tests check that all the parts work together as a whole.

For this we have to set up a cluster of machines and configure them to run the blockchain protocol together. The main functional test that we use works like this: we have a special transaction generator program that constructs tens of thousands of transactions and submits them to nodes in the cluster. The code is instrumented to record certain key events in a log file, such as when each transaction reaches each node. We let this run for around an hour. At the end of the run we have a tool that analyses the blockchain and the log files from all the nodes. This checks that all the transactions that were submitted did make it into the blockchain. It also checks if there were any unexpected forks in the blockchain or missing blocks. In normal conditions there will be no forks or missing blocks.
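
The sketch below shows the general shape of the check the analysis tool performs. The types and names are hypothetical; the real tool works on the actual log and block formats.

```haskell
-- Minimal sketch (hypothetical types): every transaction submitted by the
-- generator should appear in the final blockchain; anything missing is
-- reported.
import qualified Data.Set as Set

newtype TxId = TxId String
  deriving (Eq, Ord, Show)

allTxsIncluded :: Set.Set TxId  -- transactions submitted by the generator
               -> Set.Set TxId  -- transactions found in the final chain
               -> Bool
allTxsIncluded submitted onChain = submitted `Set.isSubsetOf` onChain

missingTxs :: Set.Set TxId -> Set.Set TxId -> Set.Set TxId
missingTxs submitted onChain = submitted `Set.difference` onChain
```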

We can use the same basic approach to test the system when we deliberately attack it, such as taking out nodes, or preventing nodes from talking to each other for a while. In this case we expect temporary forks or missing blocks, but we can check that the system recovers properly.

We use the same basic approach for non-functional performance tests. We adjust the transaction generator to submit transactions at a higher rate to stress the system and see how high we can push the throughput before it hits a bottleneck. We can also check that even though the system has hit its maximum capacity it continues to function in a stable way.

Throughput, meaning transactions per second, is important but so is latency. By latency we mean how long it takes for a transaction to get into the blockchain. Our analysis tool can also determine the distribution of latency. A low latency with little variance shows us that transactions are flowing smoothly to the nodes that create blocks and that those nodes are creating blocks on time.
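
For example, given the submit and inclusion timestamps recorded in the logs, per-transaction latency can be derived along the following lines. Again, this is a sketch with hypothetical types, not the actual analysis tool.

```haskell
-- Sketch (hypothetical types): derive each transaction's latency from when
-- it was submitted and when it first appeared in a block.
import qualified Data.Map.Strict as Map
import Data.Time.Clock (NominalDiffTime, UTCTime, diffUTCTime)

newtype TxId = TxId String
  deriving (Eq, Ord, Show)

latencies :: Map.Map TxId UTCTime   -- when each transaction was submitted
          -> Map.Map TxId UTCTime   -- when each transaction entered a block
          -> Map.Map TxId NominalDiffTime
latencies submitted included =
  Map.intersectionWith (\submitAt includeAt -> diffUTCTime includeAt submitAt)
                       submitted included

worstCaseLatency :: Map.Map TxId NominalDiffTime -> NominalDiffTime
worstCaseLatency = Map.foldr max 0
```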

Frontend testing


Our Daedalus frontend team have a fully automated set of tests that cover every function of the user interface. In turn this also tests every interaction between the wallet frontend and wallet backend. So this also gives us an automatic integration test for the combination of the wallet frontend and backend.

Frontend testing is a bit different from most other testing. Most testing works by a test program directly using a program interface, whereas frontend testing requires interacting with an actual human interface. User interface testing frameworks simulate what a real user does: clicking buttons and typing in web form boxes.

The result is actually rather fascinating to watch: it's as if an invisible robot is sitting at the computer typing and clicking very quickly to set up accounts, send transactions and all the other things.

Daedalus acceptance tests – a fully automated set of tests that cover every function of the user interface.

Security auditing


Counterintuitively, when it comes to cryptography and security -- which cryptocurrencies of course rely on completely -- testing is in fact not very effective. Testing usually shows us that the expected things do work, but it's hard to use normal testing to show that unexpected things cannot happen. And showing that some hacker cannot subvert the system is just the kind of thing that is hard to test for.

The solution is not testing but auditing by experts in cryptography and security. This means experts carefully reviewing the designs to check that the arguments for why the system should be safe are sound, and also reviewing the code to make sure the code matches up with the design.

Of course, the basic design for the proof of stake blockchain used in Cardano has already been peer reviewed by academic cryptographers. There are other parts of the system that we have had to develop in the last year -- beyond just the blockchain -- and the most security critical parts of those have been reviewed by our research team, and also by an external security audit team. Additionally, the security audit team have reviewed many of the most important parts of the code to check that the code matches the design.

Conclusion


A cryptocurrency system is a surprisingly complex piece of software and it has to work correctly, be robust to deliberate attacks and have good performance. Of course Cardano is a new from-scratch cryptocurrency, not based on any existing system, so all of it has to be carefully tested or reviewed.

Hopefully this post has given you some insight into how much is involved in testing Cardano, and how serious we are about security, robustness and performance.

Cryptocurrencies need a safeguard to prevent another DAO disaster

High assurance brings mission-critical security to digital funds

12 May 2017 Duncan Coutts 13 mins read

TL;DR You want to avoid the next DAO-like disaster: so you want confidence that the system underpinning your cryptocurrency doesn’t have a hidden flaw that could be triggered at any time and render your assets worthless. To get that confidence you need a high assurance implementation of the system operating your cryptocurrency. Formal methods (mathematical specifications and proofs) are the best way to build high assurance software systems, and that is what we are aiming to do with the software behind the cryptocurrencies we build.

How can you sleep at night?

A gold bar or a wodge of cash stashed in a safe has the rather nice property that it doesn’t just evaporate overnight. Money managed by computer software is not inherently so durable. Software flaws can be revealed without warning and can destroy the trust in whole systems.

We only have to look around us to see the prevalence of software flaws. The IT trade press is full of news of data breaches, critical security patches, zero-day exploits etc. At root these are almost all down to software flaws. Standard software development practices inevitably lead to this state of affairs.

With the DAO in particular, the flaw was in the implementation of the smart contract that defined the fund, not directly in Ethereum itself. So the implementation of contracts and the design of smart contract languages is certainly an important issue, but the next flaw could be somewhere else. It’s hard to know.

So how are we to sleep soundly at night? How can we be confident that our cryptocurrency coins are not just going to evaporate overnight? What we need is assurance. Not to be confused with insurance. Assurance is evidence and rational arguments that a system correctly does what it is supposed to do.

Systems with high assurance are used in cases where safety or a lot of money is at stake. For example we rightly demand high assurance that aircraft flight control systems work correctly so we can all trust in safely getting from A to B.

If as a community we truly believe that cryptocurrencies are not a toy and can and should be used when there are billions at stake then it behoves us to aim for high assurance implementations. If we do not have that aim, are we really serious or credible? And then in the long run we must actually achieve high assurance implementations.

In this post we’ll focus on the software aspects of systems and how formal methods help with designing high assurance software. Formal methods can be very useful in aspects of high assurance system design other than software, but that’ll have to wait for some other blog post.

What does assurance look like?

While we might imagine that assurance is either “yes” or “no” – you have it or you don’t – it actually makes sense to talk about degrees of assurance. See for example the summaries of the assurance levels, EAL1 to EAL7, in the Common Criteria (CC) security evaluation standard. The degree of assurance is about risk: how much risk of system failure are you prepared to tolerate? Higher assurance means a lower risk of failures. Of course all else being equal you would want higher assurance, but there is inevitably a trade-off. Achieving higher levels of assurance requires different approaches to system development, more specialised skills and extra up-front work. So the trade-off is that higher assurance is perceived to come with greater cost, longer development time and fewer features in a system. This is why almost all normal commercial software development is not high assurance.

There are two basic approaches to higher assurance software: the traditional approach focused on process and the modern approach focused on evidence, especially formal mathematical evidence.

Historically, going back to the 1980s and before, the best we could do was essentially to think hard and to be very careful. So the assurance standards were all about rigorously documenting everything, especially the process by which the software was designed, built and tested. The evidence at the end is in the form of a big stack of documents that essentially say “we’ve been very methodical and careful”.

Another approach comes from academic computer science – starting in the 1980s and becoming more practical and mature ever since. It starts from the premise that computer programs are – in principle – mathematical objects and can be reasoned about mathematically. When we say “reason about” we mean mathematical proofs of properties like “this program satisfies this specification”, or “this program always computes the same result as that program”. The approach is that as part of the development process we produce mathematical evidence of the correctness of the software. The evidence is (typically) in the form of a mathematical specification along with proofs about some useful properties of the specification (e.g. security properties); and proofs that the final code (or critical parts thereof) satisfy the specification. If this sounds like magic then bear with me for a moment. We will look at a concrete example in the next section.

One advantage of this approach compared to the traditional approach is that it produces evidence about the final software artefacts that stands by itself and can be checked by anyone. Indeed someone assessing the evidence does not need to know or care about the development process (which also makes it more compatible with open-source development). The evidence does not have to rely on document sign-offs saying essentially “we did careful code review and all our tests pass”. That kind of evidence is great, but it is indirect evidence and it is not precise or rigorous.

In principle this kind of mathematical approach can give us an extremely high level of assurance. One can use a piece of software called a proof assistant (such as Coq or Isabelle) which provides a machine-readable logical language for writing specifications and proofs – and it can automatically check that the proofs are correct. This is not the kind of proof where a human mathematician checking the proof has to fill in the details in their head, but the logician’s kind of proof that is ultra pernickety with no room left for human error.

While this is perhaps the pinnacle of high assurance it is important to note that cryptocurrencies are not going to get there any time soon. It’s mostly down to time and cost, but also due to some annoying gaps between the languages of formal proof tools and the programming languages we use to implement systems.

But realistically, we can expect to get much better evidence and assurance than we have today. Another benefit of taking an approach based on mathematical specification is that we very often end up with better designs: simpler, easier to test, easier to reason about later.

Programming from specifications

In practice we do not first write a specification then write a program to implement the spec and then try to prove that the program satisfies the specification. There is typically too big a gap between the specification and implementation to make that tractable. But it also turns out that having a formal specification is a really useful aid during the process of designing and implementing the program.

The idea is that we start with a specification and iteratively refine it until it is more or less equivalent to an implementation that we would be happy with. Each refinement step produces another specification that is – in a particular formal sense – equivalent to the previous specification, but more detailed. This approach gives us an implementation that is correct by construction: since we transform the specification into an implementation, and provided that each refinement step is correct, we have a very straightforward argument that the implementation is correct. These refinement steps are not just mechanical. They often involve creativity. This is where we get to make design decisions.

To get a sense of what all this means, let’s look at the example of Ouroboros. Ouroboros is a blockchain consensus protocol. Its key innovation is that it does not rely on Proof of Work, instead relying on Proof of Stake. It has been developed by a team of academic cryptography researchers, led by IOHK Chief Scientist Aggelos Kiayias. They have an academic paper, Ouroboros, describing the protocol, together with mathematical proofs of security properties similar to those that Bitcoin achieves. This is a very high-level mathematical description aimed at peer review by other academic cryptographers.

This is a great starting point. It is a relatively precise mathematical description of the protocol and we can rely on the proofs of the security properties. So in principle, if we could prove an implementation is equivalent (in the appropriate way) to the description in the paper, then the security proofs would apply to our implementation, which is a great place to be.

So how do we go from this specification to an implementation following the “correct by construction” approach? First we have to make the protocol specification from the paper more precise. It may seem surprising that we have to make a specification more precise than the one the cryptographers wrote, but this is because it was written for other human cryptographers and not for machines. For the refinement process we need to be more like the pernickety logicians. So we have to take the protocol specification written in terms of English and mathematical symbols and redefine it in some suitable logical formalism that doesn’t leave any room for ambiguity.

We then have to embark on the process of refinement. The initial specification is the most abstract and least detailed. It says what must be done but has very little detail about how. If I have a more detailed specification that is a refinement of the initial specification then what that means intuitively is: if you are happy with the initial specification then you would be happy with the new specification. You can have different refinements of the same specification: they differ in details that are not covered in the original specification. Refinement also has a quite specific formal meaning, though it depends on exactly what formalism you’re using. In process calculi, refinement is described in terms of possible observed behaviours: one specification is a refinement of another if its set of possible observed behaviours is equivalent to the other's. Formally, the kind of equivalence we need is what is known as a bisimulation.
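
For reference, the standard textbook shape of that definition (stated here for labelled transition systems; the exact formulation depends on the process calculus in use) is the following:

$$
R \text{ is a bisimulation} \iff \forall (s, t) \in R:\quad
\begin{aligned}
&\forall a, s'.\; s \xrightarrow{a} s' \implies \exists t'.\; t \xrightarrow{a} t' \,\wedge\, (s', t') \in R,\\
&\forall a, t'.\; t \xrightarrow{a} t' \implies \exists s'.\; s \xrightarrow{a} s' \,\wedge\, (s', t') \in R.
\end{aligned}
$$

Two specifications are then bisimilar if some bisimulation relates their initial states.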

In the case of Ouroboros we start with a very abstract specification. In particular it says very little about how the network protocol works: it describes things in terms of a reliable network broadcast operation. Of course real networks work in terms of unreliable unicast operations. There are many ways to implement broadcast. The initial specification doesn’t say. And it rightly doesn’t care. Any suitable choice will do. This is an example where we get to make a design choice.

The original specification also describes the protocol in terms of broadcasting entire blockchains. That is the whole chain back to the genesis block. This is not intended to be realistic. It is described this way because it makes the proofs in the paper easier. Obviously a real implementation needs to work in terms of sending blocks. So this is another case for refinement. We have to come up with a scheme where protocol participants broadcast and receive blocks and show how this is equivalent to the version that broadcasts chains. This is an interesting example because we are changing the observed behaviour of protocol participants: in one version we observe them broadcasting chains and in the other broadcasting blocks. The two do not match up in a trivial way but we should still be able to prove a bisimulation.
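
A toy QuickCheck sketch can make this concrete. It is invented for this post (real block diffusion has to handle forks, ordering and adversarial peers, none of which appear here): the abstract version sends a whole chain, the refined version sends only the blocks the peer is missing, and the property checks that the peer ends up with the same chain either way.

```haskell
import Test.QuickCheck

type Block = Int      -- stand-in for a real block
type Chain = [Block]  -- newest block first, back to genesis

-- Abstract specification: a node simply broadcasts its entire chain.
broadcastChain :: Chain -> Chain
broadcastChain = id

-- Refined version: send only the blocks the peer does not yet have,
-- assuming the peer's chain is a suffix of ours.
broadcastBlocks :: Chain -> Chain -> [Block]
broadcastBlocks peerChain myChain =
  take (length myChain - length peerChain) myChain

-- The peer prepends the received blocks to its existing chain.
receiveBlocks :: Chain -> [Block] -> Chain
receiveBlocks peerChain newBlocks = newBlocks ++ peerChain

-- Refinement property: for any extension of the peer's chain, sending just
-- the new blocks leaves the peer with exactly the chain that would have
-- been broadcast in the abstract version.
prop_refinement :: Chain -> [Block] -> Bool
prop_refinement peerChain ext =
  let myChain = ext ++ peerChain
  in  receiveBlocks peerChain (broadcastBlocks peerChain myChain)
        == broadcastChain myChain

main :: IO ()
main = quickCheck prop_refinement
```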

There are numerous other examples like this: cases where the specification is silent on details or suggests unrealistic things. These all need to be refined to get closer to something we can realistically implement. When do we move from specification to implementation? That line is very fuzzy. It is a continuum, which comes back to the point that both specifications and programs are mathematical objects. With Ouroboros the form of specification is such that at each refinement step we can directly implement the specification – at least as a simulation. In a simulation it’s perfectly OK to broadcast whole chains or to omit details of the broadcast algorithm since we can simulate reliable broadcast directly. Being able to run simulations lets us combine the refinement based approach with a test or prototype based approach. We can check we’re going in the right direction, or establish some kinds of simulated behaviour and evaluate different design decisions.

There are also appropriate intermediate points in the refinement when it makes sense to think about performance and resource use. We cannot think about resource use with the original high-level Ouroboros specification. Its description in terms of chain broadcast makes a nonsense of any assessment of resource use. On the other hand, by the time we have fully working code it is too late in the design process. There is a natural point during the refinement where we have a specification that is not too detailed but concrete enough to talk about resource use. At this point we can make some formal arguments about resource use. This is also an appropriate point to design policies for dealing with overload, fairness and quality of service. This is critical for avoiding denial of service attacks, and is not something that the high-level specification covers.

Of course any normal careful design process will cover all these issues. The point is simply that these things can integrate with a formal refinement approach that builds an argument, step by step, as to why the resulting design and implementation do actually meet the specification.

Finally it’s worth looking at how much flexibility this kind of development process gives us with the trade-off between assurance and time and effort. At the low end we could take this approach and not actually formally prove anything, but just try to convince ourselves that we could if we needed to. This would mean that the final assurance argument looks like the following. We have cryptographers check that the protocol description in their paper is equivalent to our description in our logical formalism. This isn’t a proof, just mathematicians saying they believe the two descriptions are equivalent. Then we have all the intermediate specifications in the sequence of refinements. Again, there are no formal proofs of refinement here, but the steps are relatively small and anyone could review them along with prose descriptions of why we believe them to be proper refinements. Finally we would have an implementation of the most refined specification, which should match up in a 1:1 way. Again, computer scientists would need to review these side by side to convince themselves that they are indeed equivalent.

So this gives us some intermediate level of assurance but the development time isn’t too exorbitant and there is a relatively clear path to higher assurance. To get higher assurance we would reformulate the original protocol description using a proof assistant. Then instead of getting a sign-off from mathematicians about two descriptions being equivalent, we could prove the security properties directly with the new description using the proof assistant. For each refinement step the task is clear: prove using the proof assistant that each one really is a refinement. The final jump between the most detailed refined specification and equivalent executable code is still tricky because we have to step outside the domain of the proof assistant.

With the current state of proof tools and programming language tools we don’t have a great solution for producing a fully watertight proof that a program described in a proof assistant and in a similar programming language are really equivalent. There are a number of promising approaches that may become practical in the next few years, but they’re not quite there yet. So for the moment this would still require some manual checking. Really high assurance still has some practical constraints: for example we would need a verified compiler and runtime system. This illustrates the point that assurance is only as good as the weakest link and we should focus our efforts on the links where the risks are greatest.

Direction of travel

As a company, IOHK believes that cryptocurrencies are not a toy, and therefore believes that users are entitled to expect proper assurance.

As a development team we have the ambition, skills and resources to make an implementation with higher assurance. We are embarking on the first steps of this formal development process now and over time we will see useful results. Our approach means the first tangible results will offer a degree of assurance and we will be able to improve this over time.