Symphony of Blockchains

29 January 2018 Rouska Lundin 4 mins read

The Symphony of Blockchains is an interactive, visual and auditory exploration of Bitcoin, cryptocurrency and the blockchain. It is an ongoing research initiative with a singular aim: to help bring about greater understanding of both blockchain technology and the ever expanding (and contracting) cryptocurrency marketplace. The term ‘blockchain’ is increasingly found in everyday language, with little explanation or understanding of the technology and its implications for the future. This work aims to explain both blockchain technology and its most visible application — cryptocurrencies. Through this visualisation we explain the concept underpinning the blockchain, the detail of individual transactions, and ultimately the health of any cryptocurrency.

As the technology becomes more pervasive and its impact on everyday life grows, it’s important that we attempt to explain it in a meaningful way.

At Kuva, a design studio of artists, designers and technologists, we help define new metaphors through which to understand these technologies.

Symphony of Blockchains

In Symphony we explore the Bitcoin blockchain as a physical structure. We examine its inherent underlying qualities by encapsulating data as crystalline forms connected in space that are immutable and persistent. Using this as a metaphor provides a means to understand the Bitcoin blockchain. Blocks take on the properties of their data: size, colour and orientation each represent different qualities. Blocks are arranged in a spiral tracing back through time, each periodic rotation representing a day in the life of the blockchain. Their size and colour represent the total value of transactions made.

Symphony of Blockchains - Visualisation

Symphony also explores the blockchain as an auditory experience. We ask a simple question: ‘what does the blockchain sound like?’. Using the frequency and timing of Bitcoin transactions as a foundation, the audio extends the crystalline structures by encoding them as sound-based entities.

The background sound is an ambient soundscape created from real recordings of computer power supplies and fans to emulate the sound of Bitcoin mining. The intensity of the sound varies with the hashrate of the network.

The audio of the Merkle tree is based on the transactions of the block. A repeating loop is set to run every musical measure (a segment of time corresponding to a specific number of beats, in which each beat is represented by a particular note value). Transactions are arranged in ascending order based on the time they were made. The timescale of a block, from its earliest transaction to its latest, is mapped onto a range of 0 to 30 seconds.

Symphony of Blockchains - Block

Each transaction sound is triggered and set to loop based on the mapped time value (quantized to the nearest 32nd note). As the master loop repeats, notes accumulate and build up a pattern.

The note of each transaction sound is based on the position on the y-axis, to the nearest note in the Aeolian mode.
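
To make the two mappings concrete, here is a small illustrative sketch in Haskell (the project itself is a web application, so none of these names come from its code; measureLength and yMax are assumed inputs): a time offset within the loop is snapped to the nearest 32nd note, and a y position is snapped to the nearest degree of the Aeolian scale.

-- Snap a time offset (in seconds) to the nearest 32nd note of a measure;
-- a 4/4 measure contains 32 thirty-second notes.
quantizeTo32nd :: Double -> Double -> Double
quantizeTo32nd measureLength t =
  let step = measureLength / 32
  in  fromIntegral (round (t / step) :: Int) * step

-- Aeolian (natural minor) scale degrees, in semitones above the tonic.
aeolian :: [Int]
aeolian = [0, 2, 3, 5, 7, 8, 10]

-- Snap a y position in the range [0, yMax] to the nearest Aeolian degree.
noteForY :: Double -> Double -> Int
noteForY yMax y =
  let i = round (y / yMax * fromIntegral (length aeolian - 1)) :: Int
  in  aeolian !! max 0 (min (length aeolian - 1) i)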

When it came to the user experience we wanted to ensure it was effortless to explore. The concepts and technologies we’re attempting to explain are complex enough. We didn’t want users having to fathom out a complex navigation system on top of it all.

Once the blockchain is loaded, users simply scroll up or down to move forward or back in time through the blockchain. Using their pointer (or finger on mobile devices) they can easily select an individual node or block in the chain to investigate it further. Once accessed, the user is presented with a view of the unique Merkle tree that identifies that specific block. In addition to the Merkle tree view, the user is presented with a wealth of information giving detail about the transactions the selected block contains. To exit the block view the user simply clicks away.

A walkthrough of the experience

On the Ouroboros Design: How rigour and engineering are essential for critical infrastructure

11 January 2018 Prof Aggelos Kiayias 7 mins read

A blog post on the Steemit website appeared recently making a number of claims regarding Ouroboros. The article contains several factual inaccuracies. For instance, it is claimed that “DPOS” in the Ouroboros paper stands for “delegated proof of stake”, while in fact, DPOS means “dynamic proof of stake”, or that the protocol requires a “2/3+” ratio of parties being honest, while in reality it just requires an honest majority, i.e. the stake controlled by parties following the protocol is more than half the total stake. For the benefit of those who are interested in the Ouroboros protocol and who appreciate its general philosophy, we feel it is appropriate to provide a response to this article here, making a few broader points along the way. While pointing out inaccuracies in the blog, we take the opportunity to highlight some of the general approaches followed in the design of Ouroboros and in the related research efforts that are currently underway at IOHK.

Ouroboros is a proof of stake (PoS) protocol that uses delegation in the spirit of the PoS idea as discussed in the Bitcoin forum starting from 2011. The references that influenced its design are listed in our paper. PoS is a powerful concept that has inspired a number of other efforts prior, concurrent and post the first Ouroboros paper. Among all other implemented PoS blockchain systems that carry real assets, Ouroboros is unique in that it was designed in tandem with a formal security model and a mathematical proof that it implements a robust transaction ledger. This marks a fundamental shift in the methodology of blockchain system design.

Blockchain systems are in a period of transition from curiosities to critical infrastructure; as such, the all too typical software industry approach of releasing a “minimum viable product” as early as possible and then fixing bugs as they appear, is not appropriate. Failures of critical infrastructure have a significant impact on people’s lives and thus require rigorous engineering discipline to the highest possible standards. Dependability, rather than maximum performance according to some arbitrarily chosen metric, is the primary goal. Performance is important, of course, but the performance required is a function of the ultimate application domain, and from the point of view of dependability it is the worst-case performance that is important, not the ideal-scenario peak rate.

Like all other protocols in the blockchain space, Ouroboros requires some degree of synchronisation. The block production interval has to be consistent with the likely time to complete the required information exchanges. The 20-second slot time in Ouroboros represents a conservative choice for a block of transactions to traverse the diameter of a peer-to-peer network, where the peers may be significantly geographically distributed, the system is operating at peak transaction load and the interconnection is significantly less than perfect. It is improbable for a block of transactions to consistently traverse a global network much faster than that, and as a result any solution that does significantly better (or claims to do significantly better) is either wrong, or provides a weaker level of decentralisation or security, i.e. it solves an easier problem than Ouroboros. There is a tradeoff between achieving a robust, global, participatory service that delivers sustained effective performance even under an adversarial attack, and creating a high performance, limited participation (in geographical scope or network resource requirement) solution that makes overly optimistic assumptions on network stability.

Irreversibility, the property that transactions persist and are immutable in a blockchain protocol, has to be presented as a function of the level of adversarial strength. This is true in Nakamoto’s Bitcoin paper and also in the Ouroboros paper; see Section 10.1 for the actual time needed for confirmation of transactions. Thus, one should be very wary of statements about irreversibility that do not quantify the level of adversarial power. For instance, Ouroboros will confirm a transaction with 99.9% assurance in just five minutes against an adversary holding 10% of the total stake, which at today’s market cap of the Cardano blockchain would amount to more than two billion dollars. Byzantine agreement protocols can provide a more “black and white” irreversibility: in other words, the protocol can be guaranteed to be irreversible within a certain time window, provided an honest majority or supermajority exists, depending on the protocol. Nevertheless, the performance and decentralisation penalty suffered is very high if the level of adversity is allowed to come close to the 1/2 barrier, which is the level of adversity that Ouroboros can withstand.

The issue of possible dominance of the consensus process by a small group of stakeholders holding a large proportion of the stake is important but is not applicable to the current release of the Cardano system (the Byron release). What we have proved for Ouroboros is that it can facilitate a “fair” transaction ledger (where fairness here means that the ledger can fairly record all significant actions that are performed by the protocol participants despite the presence of an adversary). This enabled us to neutralise a number of rational protocol deviations (e.g. the equivalent of selfish mining attacks in the PoS setting) and provide a Nash equilibrium argument showing how the protocol can support many different types of mechanisms for incentivising participant behaviour. Currently, IOHK Research is actively working to finalise the incentive structure that will be incorporated in the Shelley release of Cardano, where stake pools will be supported and delegation behaviour will be properly incentivised so that it offers effective decentralisation of power. The crux of our methodology is the engineering of a novel reward mechanism for rational participants that provides appropriate incentives to partition their delegation rights. The objectives are, first, to avoid concentration of power in a small group of participants – as could happen with a naïve reward mechanism in a Pareto-distributed stakeholder population – and, second, to provide appropriate incentives to ensure a desired number of delegates. We are very excited about this work; it will be the first of its kind in the area and, as before, we will be disseminating it widely including full technical details, as well as submitting it for peer review.

This brings us to the final distinguishing advantage of the philosophy of Cardano. Scientific peer review has been refined over centuries. The way it is implemented by the International Cryptology Conference (also called Crypto), where Ouroboros was presented, and the other top conferences in the area, strives to remove conflicts of interest and produce the highest level of objectivity. The method of reviewing is known as “double blind”, i.e. papers are submitted anonymously and reviewers are experts who also remain anonymous to the authors. The committee of experts that reviews submitted papers each year is formed by two program co-chairs who are appointed by the International Association for Cryptologic Research, the pre-eminent organisation of cryptology research that was founded in 1982.

Being invited to serve in the committee as an expert is an important recognition of an individual’s long-term commitment to the area of cryptography (and even a precise count of how many times one has served is maintained). Blockchain protocols fit perfectly within the cryptography scientific literature and thus scientific peer review is to be done by this community. Of course, we welcome reviews from anyone. That is why we make public very detailed whitepapers with precise and specific claims that leave no uncertainty about what is being claimed, and we appreciate any factual discussion about any of these claims. We strongly encourage other projects to submit their work for scientific peer review as well. They will enjoy the benefits of thorough, well-founded and objective critique and they will have the opportunity to showcase any advantages and novelty that their approach possesses.

The hotel that became the world’s first business to accept Ada

I visited Hotel Ginebra in Barcelona to stay a night and meet the manager

5 January 2018 Olga González 7 mins read

The hotel that became the world’s first business to accept Ada

Hotel Ginebra is a boutique hotel in Catalunya Square, which if you have visited Barcelona you will know is a prime location in the heart of the city. Guests can wake up in the morning to a great view of the city, but what they may not know is that the hotel has earned its place in history. It recently became the first business in the world to accept Ada, the Cardano cryptocurrency launched at the end of September. I went to check out the hotel, to meet the manager, and to learn the story of how it came to accept Ada.

Alfred Moesker is one of those who fell in love with this city. Born in the Netherlands, he had owned a hotel in Rome for years before he came to Barcelona and with his partner Yvonne Daniels took over Hotel Ginebra in 2013. Alfred and Yvonne are innovative managers who believe Ada can bring big benefits to the hotel industry, as well as to the wider world.

I decided to make use of the offer to pay in Ada and found it was easy and efficient: the first of many transactions to be done, I am sure about that!

I asked Alfred what he felt about being the first business to accept Ada in payment.

“Incredibly proud,” he said. “For reasons we even fail to completely understand ourselves, we are more proud about it than we have been about anything else we have done for a long time. But working with Ada just feels right to us and it is a project that we feel we can identify with, not just something we use and then throw away later.”

In the short time since my visit, Alfred says quite a few people – including some from Japan – have walked into the hotel asking whether they could pay with Ada if they were to book a room. In a follow-up email he said: “To see how excited they get when we tell them ‘yes, you can’ is fantastic to witness.”

The hotel is in a typically Catalan building, with a regal style on the outside and a modernist feeling once you go in. It has been completely renovated inside and the rooms are cosy and quiet; they range from small, single rooms to big suites, so the hotel is very versatile. There are also great common spaces for guests (the use of which is included in the room price), computers with internet access, break rooms with tables and sofas, and snack rooms with tea, coffee and pastries available at all times. The staff are wonderful, kind and efficient; there’s a 24-hour reception service and they will greet you with a nice glass of wine and a chocolate once you’re settled in your room. If the location and the well-kept interior weren’t enough to make it a great stay, the staff totally top it off!

View from Hotel Ginebra, Barcelona
View of Catalunya Square from Hotel Ginebra

It’s not hard to see why Alfred and Yvonne moved to Barcelona. The hotel overlooks Plaça de Catalunya’s famous fountains and the main streets: La Rambla, where there are restaurants, museums and shops before you finally reach the beach; Passeig de Gràcia, where you can take a stroll if you love haute couture or want a taste of fancy Barcelona; and Portal de l’Àngel, which leads to Barcelona Cathedral and the gothic quarter, and where you will find the famous Santa Llúcia fair if you’re visiting in winter. What all these places have in common, aside from being Barcelona’s heart and soul, is that they’re a three-minute walk from Hotel Ginebra.

Alfred came across Ada after reading an online article. He was just learning about Bitcoin at the time and was put off by its huge price volatility, since fluctuating values are neither nice nor predictable for a business. He wondered instead whether altcoins might have a role to play in his business. A friend warned him that cryptocurrency was “like the wild west” because it was an emerging technology, but he persevered with his research and came across Cardano and the whiteboard videos.

“I liked the idea behind cryptocurrency,” he said. “The peer to peer principle definitely is very attractive to us personally. We have had our share of bad experiences with banks and the idea that there is something else out there that can make the system a bit fairer just sounded all too appealing.

“Then I read about the Cardano project which seemed the opposite of exciting. It featured a photo of Jeremy Wood and Charles Hoskinson, both clearly the academic type – bordering on "nerdy" in a good way! When we read how they had been carefully constructing this project in order to take out the flaws of other cryptocurrencies and offer a much more "balanced" product, we thought ‘bingo’!!!”

Alfred liked the fact that Ada had been designed to be used as a cryptocurrency and also that it was part of Cardano, a system that would “lay the foundations for many projects in the crypto industry for many years to come”. “We get a strong sense that Ada is a product that will do good in the world and that has been set up for the right reasons,” he said.

Ada could also bring big benefits to the hotel industry in Alfred’s view.

“Accepting a cryptocurrency in general opens up a new segment of the travelling global population,” he said. “Accepting Ada will very importantly give us a chance to also go back to peer to peer relations with our potential customers. Nowadays the big reservation portals have so much commercial power that you are pretty much at their mercy.”

“Imagine if and when Ada has been distributed globally. A lot more people, that now do not have a credit card or have credit rating problems and can’t easily make a hotel reservation, by using ADA will be able to make a reservation in one minute. In our opinion it will promote financial global equality, at least that seems very likely from what we see happening with the hotel reservation process and hotel industry.”

Alfred and Yvonne
Alfred and Yvonne at Hotel Ginebra

All in all, visiting the hotel and meeting Alfred was a very positive experience. I spent a wonderful night in a nice hotel with kind staff in the centre of the city, woke up in Barcelona’s heart at a good price, and used a safe, direct form of payment. Whether you’re travelling for business or on holiday, alone or with friends or family, I can totally recommend it.

Alfred was very excited about Cardano’s potential and proud to be a part of the community.

His words about what attracted him to Cardano and its community were:

“It is hard to define…it is just one of those things you come across in your life sometimes and you just know it’s right, even when you only understand a fraction of the “why” at the time.

“What we do know is that it seems to connect us also on a personal level to what is apparently a pretty special group of people who also like this project and what more can you wish for in life?”

This article was contributed by a guest blogger and you can show appreciation for this blog by donating Ada to Olga at the following address:

DdzFFzCqrhtD7LSVkenGw1e14Tb86TS1PsFiTZDL9LPbz8sX7D1ZspzKPn5XmkbrxBhb7BpJrHqtNjgECYrJVw3LwWAF98LUQwiaMUfC
Please be aware that this is an Ada address and only funds sent in Ada will be received.

Telescopic Proof Refinement

2 January 2018 Rebecca Valentine 12 mins read

Telescopic Proof Refinement

In the third post in this series (part 1, part 2) on proof refinement, I'm going to show you how to properly handle bidirectionality in an elegant way. The technique we'll use is the replacement of lists and functions with a data structure called a telescope. This post will use Haskell exclusively, because of the limitations of JavaScript in presenting these things elegantly. I've put together repl.it REPLs for this blog post so you can play around with the code. You can find them here: Addition 1, Addition 2, Addition 3, Lambda Calculus.

Consider the type that represented a successful decomposition of a problem judgment into subproblems, when working in the proof system for addition: ([Judgment], [Nat] -> Nat). The list of judgments represents the subproblems, and the function represents how to compute the result of the main problem from the results of the subproblems. This was problematic to generalize though, because it meant that all of the subproblems had to be independent. You couldn't use the result of solving an earlier subproblem to state what a later problem was. Information flowed strictly from subproblems out to main problems, never into other subproblems.

This of course was because the subproblems were all given at the same time, and the results were all simultaneous arguments to the function that computed the main result. For instance, we might have a decomposition that looked like ([j0,j1,j2], f) where f = \[r0,r1,r2] -> ..., and we would solve these basically as f [solve j0, solve j1, solve j2]. What could we do to make it possible to have dependence of later problems on earlier results, though? Well, we could produce subproblems and consume results one at a time. In fact, instead of the above pair, why not have (j0, \r0 -> (j1, \r1 -> (j2, \r2 -> ...))). This has the type (Judgment, Nat -> (Judgment, Nat -> (Judgment, Nat -> Nat))), which is specific to the case where the list of subproblems has exactly three subproblems in it. This doesn't generalize well, but we can notice the obvious recursive pattern and instead define

data Problems = Done Nat
              | SubProblem Judgment (Nat -> Problems)

Here, we're either done, and we have a resulting number, or we have a subproblem to solve, and a way of getting from the result of solving it to some more problems. Now of course, decomposing actually produced a Maybe ([Judgment], [Nat] -> Nat), so we really ought to define this type to account for the Nothing case as well:

data Problems = Fail
              | Done Nat
              | SubProblem Judgment (Nat -> Problems)

Our decompositions now will look mostly the same, but slightly different:

decomposePlus12 :: Nat -> Nat -> Problems
decomposePlus12 Zero    y = Done y
decomposePlus12 (Suc x) y = SubProblem (Plus12 x y) (\z -> Done (Suc z))

decomposePlus13 :: Nat -> Nat -> Problems
decomposePlus13 Zero z = Done z
decomposePlus13 (Suc x) (Suc z) = SubProblem (Plus13 x z) (\z -> Done z)
decomposePlus13 _ _ = Fail

decompose :: Judgment -> Problems
decompose (Plus12 x y) = decomposePlus12 x y
decompose (Plus13 x z) = decomposePlus13 x z

Finding a proof is pretty easy now too, because we can just define it in terms of a second function that handles problems more generally. Dropping the reconstruction of a proof tree, we have:

findProof :: Judgment -> Maybe Nat
findProof j = solveProblems (decompose j)

solveProblems :: Problems -> Maybe Nat
solveProblems Fail = Nothing
solveProblems (Done x) = return x
solveProblems (SubProblem j f) =
  do x <- findProof j
     solveProblems (f x)

The interesting thing here is how we solve problems. If we fail, well, we've failed, so there's nothing to return. If we've finished, we've finished and so there's a number to return. But what if we have a subproblem? Well, we simply find a proof for it, computing the result as x, and then use the result of that to get more problems to solve, and solve those.
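
To make the flow concrete, here is a small worked trace (a sketch, assuming the Nat type from the earlier posts in this series, data Nat = Zero | Suc Nat):

-- findProof (Plus12 (Suc (Suc Zero)) Zero)
--   = solveProblems (SubProblem (Plus12 (Suc Zero) Zero) (\z -> Done (Suc z)))
--   = do z <- findProof (Plus12 (Suc Zero) Zero)   -- recurse on the subproblem
--        solveProblems (Done (Suc z))
--   = Just (Suc (Suc Zero))                        -- i.e. 2 + 0 = 2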

Generalizing

Having established the general shape of this approach, we can now move on to generalizing the pattern involved. The first move we'll make is to observe that we might want to generalize the type of judgments so that it is indexed by the type of their result. After all, we might also want to include predicates in the class of possible judgments, where there are no useful return values at all, just (). So we can generalize Judgment, and in turn Problems, like so:

data Judgment r where
  Plus12 :: Nat -> Nat -> Judgment Nat
  Plus13 :: Nat -> Nat -> Judgment Nat

data Problems r where
  Fail :: Problems r
  Done :: r -> Problems r
  SubProblem :: Judgment s -> (s -> Problems r) -> Problems r

As soon as we do this, we discover that Problems looks an awful lot like a monad, and indeed, it is!

instance Functor Problems where
  fmap f Fail = Fail
  fmap f (Done x) = Done (f x)
  fmap f (SubProblem p g) = SubProblem p (fmap f . g)

instance Applicative Problems where
  pure = Done
  pf <*> px = pf >>= \f -> px >>= \x -> return (f x)

instance Monad Problems where
  return = Done
  Fail >>= g = Fail
  Done x >>= g = g x
  SubProblem p f >>= g = SubProblem p (\x -> f x >>= g)

This monad instance basically just codes up concatenation of problems. With lists of judgments, we can just concatenate them, but what to do with the functions that construct results? Here instead we say that if we have one sequence of problems that produces some result, and from that result, we can compute another sequence of problems, well we can just dig around in the first sequence and replace its Done node (which ends the sequence of problems with the result) by the problems we would get. We thus get a single big sequence of problems.
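
Concretely, unfolding the definition of (>>=) on a one-step telescope shows the grafting (a small equational sketch, not code from the original post):

--   SubProblem j (\x -> Done x) >>= g
--     = SubProblem j (\x -> Done x >>= g)   -- SubProblem case of (>>=)
--     = SubProblem j (\x -> g x)            -- Done case of (>>=)
--
-- so the continuation g is spliced in exactly where the Done leaf used to be.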

This monadic interface also gives us a really elegant way of writing these telescopes:

subProblem :: Judgment r -> Problems r
subProblem j = SubProblem j (\x -> Done x)

decomposePlus12 :: Nat -> Nat -> Problems Nat
decomposePlus12 Zero    y = return y
decomposePlus12 (Suc x) y =
  do z <- subProblem (Plus12 x y) 
     return (Suc z)

decomposePlus13 :: Nat -> Nat -> Problems Nat
decomposePlus13 Zero z = return z
decomposePlus13 (Suc x) (Suc z) =
  subProblem (Plus13 x z)
decomposePlus13 _ _ = Fail

Let's add in a full ternary predicate version of our Plus to see how this works with other kinds of returned values:

data Judgment r where
  Plus12 :: Nat -> Nat -> Judgment Nat
  Plus13 :: Nat -> Nat -> Judgment Nat
  Plus123 :: Nat -> Nat -> Nat -> Judgment ()

decomposePlus123 :: Nat -> Nat -> Nat -> Problems ()
decomposePlus123 Zero y z =
  if y == z
     then return ()
     else Fail
decomposePlus123 (Suc x) y (Suc z) =
  subProblem (Plus123 x y z)
decomposePlus123 _ _ _ = Fail   -- any other shape cannot be a valid addition

Readers who are especially familiar with functional programming idioms will observe that this is a variety of free monad construct, namely, the call-response tree variety.

And now, what parts of this really depend on the problem domain of addition? Well, clearly Judgment, because that defines what the problems are in the first place. And of course, as a result of that, the various decomposition functions. But not much else, provided we have some means of abstracting over those. Namely: the Problems type can be generalized away from Judgment, and findProof can be generalized away from the implementation of decompose, by way of a type class.

data Problems (j :: * -> *) (r :: *) where
  Fail :: Problems j r
  Done :: r -> Problems j r
  SubProblem :: j s -> (s -> Problems j r) -> Problems j r

subProblem :: j r -> Problems j r
subProblem j = SubProblem j (\x -> Done x)

class Decomposable j where
  decompose :: j r -> Problems j r

findProof :: Decomposable j => j r -> Maybe r
findProof j = solveProblems (decompose j)

solveProblems :: Decomposable j => Problems j r -> Maybe r
solveProblems Fail = Nothing
solveProblems (Done x) = Just x
solveProblems (SubProblem j f) =
  do x <- findProof j
     solveProblems (f x)
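
For example, the addition system from earlier plugs back in with a single instance (a sketch; it assumes the decomposePlus functions above have been adjusted to the two-parameter Problems j r type):

instance Decomposable Judgment where
  decompose (Plus12 x y)    = decomposePlus12 x y
  decompose (Plus13 x z)    = decomposePlus13 x z
  decompose (Plus123 x y z) = decomposePlus123 x y z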

Having abstracted this far, we now can extract this into a little library and use this for bidirectional proof systems in general. Let's now tackle the simply typed lambda calculus.

Simply Typed LC

Because we've extracted out the proof refinement toolkit, we need to only give definitions for the judgments and decomposition of our lambda calculus. This is a great simplification from the setting before. We can now express that type checking is a judgment that produces no interesting information, but that type synthesis will give us some type information:

data Judgment r where
  Check :: [(String,Type)] -> Program -> Type -> Judgment ()
  Synth :: [(String,Type)] -> Program -> Judgment Type

Our decompositions are now more interesting as well, and hopefully a bit more insightful:

decomposeCheck :: [(String,Type)] -> Program -> Type -> Problems Judgment ()
decomposeCheck g (Pair m n) (Prod a b) =
  do subProblem (Check g m a)
     subProblem (Check g n b)
decomposeCheck g (Lam x m) (Arr a b) =
  subProblem (Check ((x,a):g) m b)
decomposeCheck g m a =
  do a2 <- subProblem (Synth g m)
     if a == a2
        then return ()
        else Fail


decomposeSynth :: [(String,Type)] -> Program -> Problems Judgment Type
decomposeSynth g (Var x) =
  case lookup x g of
    Nothing -> Fail
    Just a -> return a
decomposeSynth g (Ann m a) =
  do subProblem (Check g m a)
     return a
decomposeSynth g (Fst p) =
  do t <- subProblem (Synth g p)
     case t of
       Prod a b -> return a
       _ -> Fail
decomposeSynth g (Snd p) =
  do t <- subProblem (Synth g p)
     case t of
       Prod a b -> return b
       _ -> Fail
decomposeSynth g (App f x) =
  do t <- subProblem (Synth g f)
     case t of
       Arr a b ->
         do subProblem (Check g x a)
            return b
       _ -> Fail
decomposeSynth g m = Fail


instance Decomposable Judgment where
  decompose (Check g m a) = decomposeCheck g m a
  decompose (Synth g m) = decomposeSynth g m

And we're done! That is the full definition of the type checker for the simply typed lambda calculus with pairs and functions! It has the benefit of being fairly straightforward to read.
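
To see it in action, here is a quick usage sketch. The Program and Type definitions come from the earlier posts in this series; the reconstruction below is an assumption based on the constructors used above (Var, Ann, Lam, App, Pair, Fst, Snd; Arr and Prod, plus a base type called Base here), not the author's exact code.

-- Assumed reconstruction of the syntax (not verbatim from the earlier posts).
data Type = Base | Arr Type Type | Prod Type Type
  deriving (Show, Eq)

data Program
  = Var String
  | Ann Program Type
  | Lam String Program
  | App Program Program
  | Pair Program Program
  | Fst Program
  | Snd Program
  deriving Show

-- Synthesise the type of the annotated identity function: Just (Arr Base Base).
exampleSynth :: Maybe Type
exampleSynth = findProof (Synth [] (Ann (Lam "x" (Var "x")) (Arr Base Base)))

-- Check the first projection of a pair against the wrong type: Nothing.
exampleCheck :: Maybe ()
exampleCheck =
  findProof (Check [("p", Prod Base (Arr Base Base))] (Fst (Var "p")) (Arr Base Base))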

Conclusion

This wraps up the series of blog posts on proof refinement. One limitation to this approach is that errors are uninformative, but we can actually modify this toolkit to provide not just informative errors (Either instead of Maybe), but highly informative context-aware errors that know what subproblems are being worked on. Another limitation is that the above toolkit only works for when there is at most one result from the bottom-up direction. That is to say, either a judgment has no proofs, and so there's no bottom-up result, or it has exactly one proof and thus one bottom-up result. But we might have multiple such results, for instance, we might have instead built a system for addition that has the first two arguments of Plus as the bottom-up results (i.e. solutions for Plus3 c), and we'd like to be able to get out all pairs (x,y) such that x + y = c for fixed c. There are plenty of those pairs, so we had better be able to get some kind of list-like results. We also might imagine some other kind of system where in the course of constructing a proof we need to invent something out of thin air, such as a new name for a variable. In that kind of setting we'd like to have a proof system that could make use of some state for the collection of generated names. I'll look at both of these limitations in future blog posts.

If you have comments or questions, get in touch. I'm @psygnisfive on Twitter, augur on freenode (in #cardano and #haskell).

This post is the third part of a three part series, the first post is Proof Refinement Basics, and the second post is Bidirectional Proof Refinement.