
Telescopic Proof Refinement

2 January 2018 · Rebecca Valentine


In the third post in this series (part 1, part 2) on proof refinement, I'm going to show you how to properly handle bidirectionality in an elegant way. The technique we'll use is the replacement of lists and functions with a data structure called a telescope. This post will use Haskell exclusively, because of the limitations of JavaScript in presenting these things elegantly. I've put together repl.it REPLs for this blog post so you can play around with the code. You can find them here: Addition 1, Addition 2, Addition 3, Lambda Calculus.

Consider the type that represented a successful decomposition of a problem judgment into subproblems, when working in the proof system for addition: ([Judgment], [Nat] -> Nat). The list of judgments represents the subproblems, and the function represents how to compute the result of the main problem from the results of the subproblems. This was problematic to generalize though, because it meant that all of the subproblems had to be independent. You couldn't use the result of solving an earlier subproblem to state what a later problem was. Information flowed strictly from subproblems out to main problems, never into other subproblems.

This of course was because the subproblems were all given at the same time, and the results were all simultaneous arguments to the function that computed the main result. For instance, we might have a decomposition that looked like ([j0,j1,j2], f) where f = \[r0,r1,r2] -> ..., and we would solve these basically as f [solve j0, solve j1, solve j2]. What could we do to make it possible for later problems to depend on earlier results, though? Well, we could produce subproblems and consume results one at a time. In fact, instead of the above pair, why not have (j0, \r0 -> (j1, \r1 -> (j2, \r2 -> ...)))? This has the type (Judgment, Nat -> (Judgment, Nat -> (Judgment, Nat -> Nat))), which is specific to the case where the list of subproblems has exactly three subproblems in it. This doesn't generalize well, but we can notice the obvious recursive pattern and instead define

data Problems = Done Nat
              | SubProblem Judgment (Nat -> Problems)

Here, we're either done, and we have a resulting number, or we have a subproblem to solve, and a way of getting from the result of solving it to some more problems. Now of course, decomposing actually produced a Maybe ([Judgment], [Nat] -> Nat), so we really ought to define this type to account for the Nothing case as well:

data Problems = Fail
              | Done Nat
              | SubProblem Judgment (Nat -> Problems)

Our decompositions now will look mostly the same, but slightly different:

decomposePlus12 :: Nat -> Nat -> Problems
decomposePlus12 Zero    y = Done y
decomposePlus12 (Suc x) y = SubProblem (Plus12 x y) (\z -> Done (Suc z))

decomposePlus13 :: Nat -> Nat -> Problems
decomposePlus13 Zero z = Done z
decomposePlus13 (Suc x) (Suc z) = SubProblem (Plus13 x z) (\r -> Done r)
decomposePlus13 _ _ = Fail

decompose :: Judgment -> Problems
decompose (Plus12 x y) = decomposePlus12 x y
decompose (Plus13 x z) = decomposePlus13 x z

Finding a proof is pretty easy now too, because we can just define it in terms of a second function that handles problems more generally. Dropping the reconstruction of a proof tree, we have:

findProof :: Judgment -> Maybe Nat
findProof j = solveProblems (decompose j)

solveProblems :: Problems -> Maybe Nat
solveProblems Fail = Nothing
solveProblems (Done x) = Just x
solveProblems (SubProblem j f) =
  do x <- findProof j
     solveProblems (f x)

The interesting thing here is how we solve problems. If we fail, well, we've failed, so there's nothing to return. If we've finished, we've finished and so there's a number to return. But what if we have a subproblem? Well, we simply find a proof for it, computing the result as x, and then use the result of that to get more problems to solve, and solve those.
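To see the telescope in action, here are two quick checks (a sketch, assuming the Nat definition and the Plus12/Plus13 judgments from the earlier posts):

findProof (Plus12 (Suc Zero) (Suc Zero))
  = Just (Suc (Suc Zero))

findProof (Plus13 (Suc Zero) (Suc (Suc Zero)))
  = Just (Suc Zero)

The second call solves 1 + M = 2 for M, with the result flowing back out through the telescope.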

Generalizing

Having established the general shape of this approach, we can now move on to generalizing the pattern involved. The first move we'll make is to observe that we might want the type of judgments to be indexed by the type of the result. After all, we might also want to include predicates in the class of possible judgments, where there are no useful return values at all, just (). So we can generalize Judgment, and in turn Problems, using GADTs like so:

data Judgment r where
  Plus12 :: Nat -> Nat -> Judgment Nat
  Plus13 :: Nat -> Nat -> Judgment Nat

data Problems r where
  Fail :: Problems r
  Done :: r -> Problems r
  SubProblem :: Judgment s -> (s -> Problems r) -> Problems r

As soon as we do this, we discover that Problems looks an awful lot like a monad, and indeed, it is!

instance Functor Problems where
  fmap f Fail = Fail
  fmap f (Done x) = Done (f x)
  fmap f (SubProblem p g) = SubProblem p (fmap f . g)

instance Applicative Problems where
  pure = Done
  pf <*> px = pf >>= \f -> px >>= \x -> return (f x)

instance Monad Problems where
  return = Done
  Fail >>= g = Fail
  Done x >>= g = g x
  SubProblem p f >>= g = SubProblem p (\x -> f x >>= g)

This monad instance basically just codes up concatenation of problems. With lists of judgments, we can just concatenate them, but what to do with the functions that construct results? Here instead we say that if we have one sequence of problems that produces some result, and from that result, we can compute another sequence of problems, well we can just dig around in the first sequence and replace its Done node (which ends the sequence of problems with the result) by the problems we would get. We thus get a single big sequence of problems.
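Concretely, for any x, y, and w, the bind operator splices telescopes together like this (a sketch worked out from the instance above):

SubProblem (Plus12 x y) (\z -> Done (Suc z)) >>= \r -> SubProblem (Plus13 r w) Done
  = SubProblem (Plus12 x y) (\z -> SubProblem (Plus13 (Suc z) w) Done)

The Done (Suc z) that ended the first telescope has been replaced by the problems that the continuation builds from Suc z.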

This monadic interface also gives us a really elegant way of writing these telescopes:

subProblem :: Judgment r -> Problems r
subProblem j = SubProblem j (\x -> Done x)

decomposePlus12 :: Nat -> Nat -> Problems Nat
decomposePlus12 Zero    y = return y
decomposePlus12 (Suc x) y =
  do z <- subProblem (Plus12 x y) 
     return (Suc z)

decomposePlus13 :: Nat -> Nat -> Problems Nat
decomposePlus13 Zero z = return z
decomposePlus13 (Suc x) (Suc z) =
  subProblem (Plus13 x z)
decomposePlus13 _ _ = Fail

Let's add in a full ternary predicate version of our Plus to see how this works with other kinds of returned values:

data Judgment r where
  Plus12 :: Nat -> Nat -> Judgment Nat
  Plus13 :: Nat -> Nat -> Judgment Nat
  Plus123 :: Nat -> Nat -> Nat -> Judgment ()

decomposePlus123 :: Nat -> Nat -> Nat -> Problems ()
decomposePlus123 Zero y z =
  if y == z
     then return ()
     else Fail
decomposePlus123 (Suc x) y (Suc z) =
  subProblem (Plus123 x y z)
decomposePlus123 _ _ _ = Fail

Readers who are especially familiar with functional programming idioms will observe that this is a variety of free monad construct, namely, the call-response tree variety.

And now, what parts of this really depend on the problem domain of addition? Well, clearly Judgment, because that defines what the problems are in the first place. And of course, as a result of that, the various decomposition functions. But not much else, provided we have some means of abstracting over those. Namely: the Problems type can be generalized away from Judgment, and findProof can be generalized away from the implementation of decompose, by way of a type class.

data Problems (j :: * -> *) (r :: *) where
  Fail :: Problems j r
  Done :: r -> Problems j r
  SubProblem :: j s -> (s -> Problems j r) -> Problems j r

subProblem :: j r -> Problems j r
subProblem j = SubProblem j (\x -> Done x)

class Decomposable j where
  decompose :: j r -> Problems j r

findProof :: Decomposable j => j r -> Maybe r
findProof j = solveProblems (decompose j)

solveProblems :: Decomposable j => Problems j r -> Maybe r
solveProblems Fail = Nothing
solveProblems (Done x) = Just x
solveProblems (SubProblem j f) =
  do x <- findProof j
     solveProblems (f x)
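To hook the addition system from before into this interface, we just collect its decomposers into an instance (a sketch; the decomposePlus* functions are assumed to have been regeneralized to return Problems Judgment r):

instance Decomposable Judgment where
  decompose (Plus12 x y)    = decomposePlus12 x y
  decompose (Plus13 x z)    = decomposePlus13 x z
  decompose (Plus123 x y z) = decomposePlus123 x y z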

Having abstracted this far, we now can extract this into a little library and use this for bidirectional proof systems in general. Let's now tackle the simply typed lambda calculus.

Simply Typed LC

Because we've extracted out the proof refinement toolkit, we need only give definitions for the judgments and decompositions of our lambda calculus. This is a great simplification over the setting before. We can now express that type checking is a judgment that produces no interesting information, but that type synthesis will give us some type information:

data Judgment r where
  Check :: [(String,Type)] -> Program -> Type -> Judgment ()
  Synth :: [(String,Type)] -> Program -> Judgment Type

Our decompositions are now more interesting as well, and hopefully a bit more insightful:

decomposeCheck :: [(String,Type)] -> Program -> Type -> Problems Judgment ()
decomposeCheck g (Pair m n) (Prod a b) =
  do subProblem (Check g m a)
     subProblem (Check g n b)
decomposeCheck g (Lam x m) (Arr a b) =
  subProblem (Check ((x,a):g) m b)
decomposeCheck g m a =
  do a2 <- subProblem (Synth g m)
     if a == a2
        then return ()
        else Fail


decomposeSynth :: [(String,Type)] -> Program -> Problems Judgment Type
decomposeSynth g (Var x) =
  case lookup x g of
    Nothing -> Fail
    Just a -> return a
decomposeSynth g (Ann m a) =
  do subProblem (Check g m a)
     return a
decomposeSynth g (Fst p) =
  do t <- subProblem (Synth g p)
     case t of
       Prod a b -> return a
       _ -> Fail
decomposeSynth g (Snd p) =
  do t <- subProblem (Synth g p)
     case t of
       Prod a b -> return b
       _ -> Fail
decomposeSynth g (App f x) =
  do t <- subProblem (Synth g f)
     case t of
       Arr a b ->
         do subProblem (Check g x a)
            return b
       _ -> Fail
decomposeSynth _ _ = Fail


instance Decomposable Judgment where
  decompose (Check g m a) = decomposeCheck g m a
  decompose (Synth g m) = decomposeSynth g m

And we're done! That is the full definition of the type checker for the simply typed lambda calculus with pairs and functions! It has the benefit of being fairly straightforward to read.
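As a quick sanity check, here are the kinds of results we should expect (a sketch, reusing the example programs from the bidirectional post, and assuming Type derives Eq):

findProof (Check [] (Lam "p" (Fst (Var "p"))) (Arr (Prod Nat Nat) Nat))
  = Just ()

findProof (Synth [("p",Prod Nat Nat)] (Fst (Var "p")))
  = Just Nat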

Conclusion

This wraps up the series of blog posts on proof refinement. One limitation of this approach is that errors are uninformative, but we can actually modify this toolkit to provide not just informative errors (Either instead of Maybe), but highly informative context-aware errors that know what subproblems are being worked on. Another limitation is that the above toolkit works only when there is at most one result from the bottom-up direction. That is to say, either a judgment has no proofs, and so there's no bottom-up result, or it has exactly one proof and thus one bottom-up result. But we might have multiple such results. For instance, we might instead have built a system for addition that has the first two arguments of Plus as the bottom-up results (i.e. solutions for Plus3 c), and we'd like to be able to get out all pairs (x,y) such that x + y = c for fixed c. There are plenty of those pairs, so we had better be able to get some kind of list-like results. We also might imagine some other kind of system where, in the course of constructing a proof, we need to invent something out of thin air, such as a new name for a variable. In that kind of setting we'd like to have a proof system that could make use of some state for the collection of generated names. I'll look at both of these limitations in future blog posts.

If you have comments or questions, get in touch. I'm @psygnisfive on Twitter, augur on freenode (in #cardano and #haskell).

This post is the third part of a three part series, the first post is Proof Refinement Basics, and the second post is Bidirectional Proof Refinement.

Bidirectional proof refinement

16 March 2017 · Rebecca Valentine

In my last blog post, we looked at the very basics of proof refinement systems. I noted at the end that there was a big drawback of the systems as shown, namely that information flows only in one direction: from the root of a proof tree to the leaves. In this post, we'll see how to get information to flow in the opposite direction as well. The code for this blog post can be found on GitHub here.

Addition Again

The addition judgment Plus which we defined last time was a three-argument judgment. The judgment Plus L M N was a claim that N is the sum of L and M. But we might want to instead think about specifying only some of the arguments to this judgment. In a language like Prolog, this is implemented by the use of metavariables which get valued in the course of computation. In the proof refinement setting, however, we take a different approach. Instead, we make use of a notion called bidirectionality, where we specify that a judgment's arguments can come in two modes: input and output. This is somewhat related to viewing things from a functional perspective, but in a more non-deterministic or relational fashion.

Let's start by deciding that the mode of the first two arguments to Plus is that of input, and the third is output. This way we'll think about actually computing the sum of two numbers. Instead of three Nat inputs, we'll instead use only two. We'll use a judgment name that reflects which two:

-- Haskell

data Judgment = Plus12 Nat Nat
  deriving (Show)
// JavaScript

function Plus12(x,y) {
    return { tag: "Plus12", args: [x,y] };
}

The behavior of the decomposer for Plus12 will have to be slightly different now, though, because there are only two arguments. But additionally, we want to not only expand a goal into subgoals, but also somehow synthesize a result from the results of the subgoals. So where before we had only to map from a goal to some new goals, now we must also provide something that maps from some subresults to a result.

Rather than giving two separate functions, we'll give a single function that returns both the new subgoals, and the function that composes a value from the values synthesized for those subgoals.

-- Haskell
decomposePlus12 :: Nat -> Nat -> Maybe ([Judgment], [Nat] -> Nat)
decomposePlus12 Zero y = Just ([], \zs -> y)
decomposePlus12 (Suc x) y = Just ([Plus12 x y], \[z] -> Suc z)

decompose :: Judgment -> Maybe ([Judgment], [Nat] -> Nat)
decompose (Plus12 x y) = decomposePlus12 x y
// JavaScript

function decomposePlus12(x,y) {
    if (x.tag === "Zero") {
        return Just([[], zs => y]);
    } else if (x.tag === "Suc") {
        return Just([[Plus12(x.arg,y)], zs => Suc(zs[0])]);
    }
}

function decompose(j) {
    if (j.tag === "Plus12") {
        return decomposePlus12(j.args[0], j.args[1]);
    }
}

The decompose function is somewhat redundant in this example, but the shape of the problem is preserved by having this redundancy. The same is true of the Maybe in the types. We can always decompose now, so the Nothing option is never used, but I'm keeping it here to maintain parallelism with the general framework.

Building a proof tree in this setting proceeds very similarly to how it did before, except now we need to make use of the synthesis functions in parallel with building a proof tree:

-- Haskell

findProof :: Judgment -> Maybe (ProofTree, Nat)
findProof j =
  case decompose j of
    Nothing -> Nothing
    Just (js, f) -> case sequence (map findProof js) of
      Nothing -> Nothing
      Just tns -> let (ts, ns) = unzip tns
                  in Just (ProofTree j ts, f ns)
// JavaScript

function unzip(xys) {
    var xs = [];
    var ys = [];

    for (var i = 0; i < xys.length; i++) {
        xs.push(xys[i][0]);
        ys.push(xys[i][1]);
    }

    return [xs,ys]
}

function findProof(j) {
    var mjs = decompose(j);

    if (mjs.tag === "Nothing") {
        return Nothing;
    } else if (mjs.tag === "Just") {
        var js = mjs.arg[0]
        var f = mjs.arg[1]
        var mtns = sequence(js.map(j => findProof(j)));

        if (mtns.tag === "Nothing") {
            return Nothing;
        } else if (mtns.tag === "Just") {
            var tsns = unzip(mtns.arg);
            return Just([ProofTree(j, tsns[0]), f(tsns[1])]);
        }
    }
}

Now when we run this on some inputs, we get back not only a proof tree, but also the resulting value, which is the Nat that would have been the third argument of Plus in the previous version of the system.
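For example, running it on 1 + 2 gives both the tree and the sum (a sketch worked out from the definitions above):

-- Haskell

findProof (Plus12 (Suc Zero) (Suc (Suc Zero)))
  = Just ( ProofTree (Plus12 (Suc Zero) (Suc (Suc Zero)))
             [ ProofTree (Plus12 Zero (Suc (Suc Zero))) [] ]
         , Suc (Suc (Suc Zero)) )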

Let's now add another judgment, Plus13, which will synthesize the second argument of the original Plus judgment from the other two. The judgment Plus13 L N therefore computes the result of subtracting L from N. We'll add this judgment to the existing Judgment declarations.

-- Haskell

data Judgment = Plus12 Nat Nat | Plus13 Nat Nat
  deriving (Show)
// JavaScript

function Plus13(x,z) {
    return { tag: "Plus13", args: [x,z] };
}

We can now define a decomposition function for this judgment. Notice that this one is partial, in that, for some inputs, there's no decomposition because the first argument is larger than the second. We'll also extend the decompose function appropriately.

-- Haskell

decomposePlus13 :: Nat -> Nat -> Maybe ([Judgment], [Nat] -> Nat)
decomposePlus13 Zero z = Just ([], \xs -> z)
decomposePlus13 (Suc x) (Suc z) = Just ([Plus13 x z], \[y] -> y)
decomposePlus13 _ _ = Nothing

decompose :: Judgment -> Maybe ([Judgment], [Nat] -> Nat)
decompose (Plus12 x y) = decomposePlus12 x y
decompose (Plus13 x z) = decomposePlus13 x z
// JavaScript

function decomposePlus13(x,z) {
    if (x.tag === "Zero") {
        return Just([[], xs => z]);
    } else if (x.tag === "Suc" && z.tag === "Suc") {
        return Just([[Plus13(x.arg, z.arg)], xs => xs[0]]);
    } else {
        return Nothing;
    }
}

function decompose(j) {
    if (j.tag === "Plus12") {
        return decomposePlus12(j.args[0], j.args[1]);
    } else if (j.tag === "Plus13") {
        return decomposePlus13(j.args[0], j.args[1]);
    }
}

If we now try to find a proof for Plus13 (Suc Zero) (Suc (Suc (Suc Zero))), we get back Suc (Suc Zero), as expected, because 1 subtracted from 3 is 2. Similarly, if we try to find a proof for Plus13 (Suc (Suc Zero)) (Suc Zero), we find that we get no proof, because we can't subtract a number from something smaller than it.
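Spelled out in code, those two checks are (a sketch; the tree in the first result is abbreviated):

-- Haskell

findProof (Plus13 (Suc Zero) (Suc (Suc (Suc Zero))))
  = Just (ProofTree ..., Suc (Suc Zero))

findProof (Plus13 (Suc (Suc Zero)) (Suc Zero))
  = Nothing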

Type Checking Again

We'll aim to do the same thing with the type checker as we did with addition. We'll split the HasType judgment into two judgments, Check and Synth. The Check judgment will be used precisely in those cases where a type must be provided for the proof to go through, while the Synth judgment will be used for cases where the structure of the program is enough to tell us its type.

-- Haskell

data Judgment = Check [(String,Type)] Program Type
              | Synth [(String,Type)] Program
  deriving (Show)
// JavaScript

function Check(g,m,a) {
    return { tag: "Check", args: [g,m,a] };
}

function Synth(g,m) {
    return { tag: "Synth", args: [g,m] };
}

Additionally, because of these changes in information flow, we'll remove some type annotations from the various program forms, and instead introduce a general type annotation form that will be used to shift between judgments explicitly.

-- Haskell

data Program = Var String | Ann Program Type
             | Pair Program Program | Fst Program | Snd Program
             | Lam String Program | App Program Program
  deriving (Show)
// JavaScript

function Var(x) {
    return { tag: "Var", arg: x };
}

function Ann(m,a) {
    return { tag: "Ann", args: [m,a] };
}

function Pair(m,n) {
    return { tag: "Pair", args: [m,n] };
}

function Fst(m) {
    return { tag: "Fst", arg: m };
}

function Snd(m) {
    return { tag: "Snd", arg: m };
}

function Lam(x,m) {
    return { tag: "Lam", args: [x,m] };
}

function App(m,n) {
    return { tag: "App", args: [m,n] };
}

The last change that we'll make will be to change the notion of synthesis a little bit from what we had before. In the addition section, we said merely that we needed a function that synthesized a new value from some subvalues. More generally, though, we need that process to be able to fail, because we wish to place constraints on the synthesized values. To capture this, we'll wrap the return type in Maybe. (Checking judgments have no interesting value to synthesize, so their synthesis functions will just return a dummy value, Just undefined.)

-- Haskell

decomposeCheck
  :: [(String,Type)]
  -> Program
  -> Type
  -> Maybe ([Judgment], [Type] -> Maybe Type)
decomposeCheck g (Pair m n) (Prod a b) =
  Just ([Check g m a, Check g n b], \as -> Just undefined)
decomposeCheck g (Lam x m) (Arr a b) =
  Just ([Check ((x,a):g) m b], \as -> Just undefined)
decomposeCheck g m a =
  Just ( [Synth g m]
       , \[a2] -> if a == a2 then Just undefined else Nothing
       )

decomposeSynth
  :: [(String,Type)]
  -> Program
  -> Maybe ([Judgment], [Type] -> Maybe Type)
decomposeSynth g (Var x) =
  case lookup x g of
    Nothing -> Nothing
    Just a -> Just ([], \as -> Just a)
decomposeSynth g (Ann m a) =
  Just ([Check g m a], \as -> Just a)
decomposeSynth g (Fst p) =
  Just ( [Synth g p]
       , \[t] -> case t of
                   Prod a b -> Just a
                   _ -> Nothing
       )
decomposeSynth g (Snd p) =
  Just ( [Synth g p]
       , \[t] -> case t of
                   Prod a b -> Just b
                   _ -> Nothing
       )
decomposeSynth g (App f x) =
  Just ( [Synth g f, Synth g x]
       , \[s,t] -> case s of
                     Arr a b | a == t -> Just b
                     _ -> Nothing)
decomposeSynth _ _ = Nothing

decompose :: Judgment -> Maybe ([Judgment], [Type] -> Maybe Type)
decompose (Check g m a) = decomposeCheck g m a
decompose (Synth g m) = decomposeSynth g m
// JavaScript

function eq(x,y) {
  if (x instanceof Array && y instanceof Array) {
    if (x.length != y.length) { return false; }

    for (var i = 0; i < x.length; i++) {
      if (!eq(x[i], y[i])) { return false; }
    }

    return true;
  } else if (x.tag === y.tag) {
    if (x.args && y.args) {
      return eq(x.args,y.args);
    } else if (x.arg && y.arg) {
      return eq(x.arg,y.arg);
    } else if (!x.arg && !y.arg) {
      return true;
    } else {
      return false;
    }
  } else {
    return false;
  }
}

function decomposeCheck(g,m,a) {
    if (m.tag === "Pair" && a.tag === "Prod") {
        return Just([ [Check(g, m.args[0], a.args[0]),
                       Check(g, m.args[1], a.args[1])],
                      as => Just(undefined)]);
    } else if (m.tag === "Lam" && a.tag === "Arr") {
        return Just([ [Check([[m.args[0], a.args[0]]].concat(g),
                             m.args[1],
                             a.args[1])],
                      as => Just(undefined)]);
    } else {
        return Just([ [Synth(g,m)],
                      as => eq(a,as[0]) ? Just(undefined) : Nothing ])
    }
}

function decomposeSynth(g,m) {
    if (m.tag === "Var") {
        var ma = lookup(m.arg, g);

        if (ma.tag === "Nothing") {
            return Nothing;
        } else if (ma.tag === "Just") {
            return Just([[], as => Just(ma.arg)]);
        }
    } else if (m.tag === "Ann") {
        return Just([ [Check(g, m.args[0], m.args[1])],
                      as => Just(m.args[1]) ]);
    } else if (m.tag === "Fst") {
        return Just([ [Synth(g, m.arg)],
                      as => as[0].tag === "Prod" ?
                              Just(as[0].args[0]) :
                              Nothing ]);
    } else if (m.tag === "Snd") {
        return Just([ [Synth(g, m.arg)],
                      as => as[0].tag === "Prod" ?
                              Just(as[0].args[1]) :
                              Nothing ]);
    } else if (m.tag === "App") {
        return Just([ [Synth(g, m.args[0]), Synth(g, m.args[1])],
                      as => as[0].tag === "Arr" &&
                              eq(as[0].args[0], as[1]) ?
                                Just(as[0].args[1]) :
                                Nothing ]);
    } else {
        return Nothing;
    }
}

function decompose(j) {
    if (j.tag === "Check") {
        return decomposeCheck(j.args[0], j.args[1], j.args[2]);
    } else if (j.tag === "Synth") {
        return decomposeSynth(j.args[0], j.args[1]);
    }
}

Finally, the findProof function has to be augmented slightly to deal with the new Maybe in the return type of the synthesizing function:

-- Haskell

findProof :: Judgment -> Maybe (ProofTree, Type)
findProof j =
  case decompose j of
    Nothing -> Nothing
    Just (js,f) -> case sequence (map findProof js) of
      Nothing -> Nothing
      Just tsas ->
        let (ts,as) = unzip tsas
        in case f as of
             Nothing -> Nothing
             Just a -> Just (ProofTree j ts, a)
// JavaScript

function findProof(j) {
    var mjs = decompose(j);

    if (mjs.tag === "Nothing") {
        return Nothing;
    } else if (mjs.tag === "Just") {
        var js = mjs.arg[0]
        var f = mjs.arg[1]
        var mtns = sequence(js.map(j => findProof(j)));

        if (mtns.tag === "Nothing") {
            return Nothing;
        } else if (mtns.tag === "Just") {
            var tsns = unzip(mtns.arg);
            var mn = f(tsns[1]);

            if (mn.tag === "Nothing") {
                return Nothing;
            } else if (mn.tag === "Just") {
                return Just([ProofTree(j, tsns[0]), mn.arg]);
            }
        }
    }
}

If we now try to find proofs for some typical synthesis and checking examples, we find what we expect:

findProof (Check [] (Lam "p" (Fst (Var "p"))) (Arr (Prod Nat Nat) Nat))

findProof (Check [] (Lam "p" (Fst (Var "p"))) (Arr Nat Nat))

findProof (Synth [("p",Prod Nat Nat)] (Fst (Var "p")))
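Tracing through the definitions, the first and third of these succeed (the third synthesizing the type Nat), while the second returns Nothing, since Fst requires p to have a product type.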

Conclusion

The bidirectional style in this post has some interesting limitations and drawbacks. Consider the case of the rule for App: it has to synthesize the type of both the function and the argument and then compare them appropriately. This means that certain programs require additional type annotations to be written. But this is somewhat unnecessary, because once we synthesize the type of the function, we can use that information to check the argument. In the next blog post, I'll try to explain how to do this in a more elegant way that combines both the upward and downward flows of information.

If you have comments or questions, get in touch. I'm @psygnisfive on Twitter, augur on freenode (in #haskell).

This post is the second part of a three part series, the first part is here, and the third part is here.

Proof refinement basics

7 March 2017 · Rebecca Valentine

In this blog post, I'm going to discuss the overall structure of a proof refinement system. Such systems can be used for implementing automatic theorem provers, proof assistants, and type checkers for programming languages. The proof refinement literature is either old or hard to understand, so this post, and subsequent ones on the same topic, will present it in a more casual and current way. Work on proof refinement as a methodology for building type checkers and interactive proof systems has a long history, going back to LCF, HOL, Nuprl, and Coq. These techniques never really penetrated into the mainstream of programming language implementation, but perhaps this may change.

The GitHub repo for this project can be found here.

Prologue

As part of my work for IOHK, I've been designing and implementing a programming language called Plutus, which we use as the scripting language for our blockchains. Because IOHK cares deeply about the correctness of its systems, Plutus needs to be held to a very high standard, and needs to enable easy reasoning about its behavior. For that reason, I chose to make Plutus a pure, functional language with a static type system. Importantly, the language has a formal type theoretic specification.

To implement the type checker for the language, I used a fairly standard technique. However, recently I built a new framework for implementing programming languages which, while different from the usual way, has a more direct connection to the formal specification. This greater similarity makes it easier to check that the implementation is correct.

The framework also makes a number of bugs easier to eliminate. One class of bugs that arose a number of times in the first implementation was that of metavariables and unification. Because the language supports a certain amount of unification-driven type inference, it's necessary to propagate updates to the information state of the type checking algorithm, so that everything is maximally informative and the algorithm can run correctly. Propagating this information by hand is error prone, and the new framework makes it possible to do it once and for all, in a bug-free way.

The framework additionally makes some features easier to program. For example, good error reporting is extremely important. But good error messages need to have certain information about where the error occurs: not just where in the source code, but where in the logical structure of the program. All sorts of important information about the nature of the error, and the possible solutions, depend on that. The framework I developed also makes this incredibly easy to implement, so much so that it's built right into the system, and any language implemented using it can take advantage of it.

Proofs and Proof Systems

In the context of modern proof theory and type theory, a proof can be seen as a tree structure with labeled nodes, where the class of valid proof trees is defined by a set of rules that specify how a node in a tree may relate to its child nodes. The labels, in the context of proofs, are usually hypothetical judgments, and the rules are inference rules, but you can also use the same conceptual framework for many other things.

An example of a rule: for any choice of A and B, if you can show that A is true and that B is true, then it's permissible to conclude that A∧B is true. This is usually written with the notation

A is true    B is true
----------------------
      A∧B is true

This rule explains one way that it's ok to have a node labeled A∧B is true in a proof tree, namely, when its child nodes are A is true and B is true. So this rule justifies the circled node in the following tree:

(figure: a proof tree with the node A∧B is true circled, its child nodes being A is true and B is true)

When used as a tool for constructing a proof, however, we think about rule use in a directional fashion. We start from the conclusion — the parent node — because this is what we'd like to show, and we work backwards to the premises of the rule — the child nodes — to justify the conclusion. We think of this as a dynamic process of building a proof tree, not simply having one. From that perspective, the rule says that to build a proof tree with the root labeled A∧B is true, we can expand that node to have two child nodes A is true and B is true, and then try to build those trees.

In the dynamic setting, then, this rule means that we may turn the first tree below into the second:

(figure: the goal A∧B is true expanded into the two subgoals A is true and B is true)

In this setting, though, it's important to be aware of a distinction between two kinds of nodes: nodes which may appear in finished proof trees as is, and nodes which must still be expanded before they can appear in a finished proof tree. In the above trees, we indicated this by the fact that the nodes which may appear in finished proof trees have a bar above them labeled by the name of the rule that justified them. Nodes which must still be expanded have no such bar, and we call them goals.

Building a Proof Refinement System

Let's now build a proof refinement system for a toy system, namely the system of equality (x == y) and addition (x + y = z) as relations among natural numbers. We'll implement it in both Haskell and JavaScript to give a sense of how things work. We'll start by just making the proof refinement system, with no interactivity nor any automatic driving of the proof search. We'll have to rely on the REPL to manage these things for us.

So the first thing we need is a definition of the naturals. In Haskell, we'll just use a normal algebraic data type, but in JavaScript, we'll use an object with a mandatory tag key and an optional arg key, together with helpers:

-- Haskell

data Nat = Zero | Suc Nat
  deriving (Show, Eq)
// JavaScript

var Zero = { tag: "Zero" };

function Suc(n) {
    return { tag: "Suc", arg: n };
}

Similarly, we'll define the judgments in question:

-- Haskell

data Judgment = Equal Nat Nat | Plus Nat Nat Nat
  deriving (Show)
// JavaScript

function Equal(x,y) {
    return { tag: "Equal", args: [x,y] };
}

function Plus(x,y,z) {
    return { tag: "Plus", args: [x,y,z] };
}

We'll next need some way to decompose a statement such as Plus (Suc Zero) (Suc Zero) Zero into subproblems that would justify it. We'll do this by defining a function that performs the decomposition, returning a list of new goals. But, because the decomposition might fail, we'll wrap this in a Maybe.

-- Haskell

decomposeEqual :: Nat -> Nat -> Maybe [Judgment]
decomposeEqual Zero Zero = Just []
decomposeEqual (Suc x) (Suc y) = Just [Equal x y]
decomposeEqual _ _ = Nothing

decomposePlus :: Nat -> Nat -> Nat -> Maybe [Judgment]
decomposePlus Zero y z = Just [Equal y z]
decomposePlus (Suc x) y (Suc z) = Just [Plus x y z]
decomposePlus _ _ _ = Nothing
// JavaScript

var Nothing = { tag: "Nothing" };

function Just(x) {
    return { tag: "Just", arg: x };
}

function decomposeEqual(x,y) {
    if (x.tag === "Zero" && y.tag == "Zero") {
        return Just([]);
    } else if (x.tag === "Suc" && y.tag === "Suc") {
        return Just([Equal(x.arg, y.arg)]);
    } else {
        return Nothing;
    }
}

function decomposePlus(x,y,z) {
    if (x.tag === "Zero") {
      return Just([Equal(y,z)]);  
    } else if (x.tag === "Suc" && z.tag === "Suc") {
        return Just([Plus(x.arg, y, z.arg)]);
    } else {
        return Nothing;
    }
}

If we explore what these mean when used, we can build some intuitions. If we want to decompose a judgment of the form Plus (Suc Zero) (Suc Zero) (Suc (Suc (Suc Zero))) in Haskell/Plus(Suc(Zero), Suc(Zero), Suc(Suc(Suc(Zero)))) in JavaScript, we simply apply the decomposePlus function to the three arguments of the judgment:

-- Haskell

decomposePlus (Suc Zero) (Suc Zero) (Suc (Suc (Suc Zero)))
  = Just [Plus Zero (Suc Zero) (Suc (Suc Zero))]
// JavaScript

decomposePlus(Suc(Zero), Suc(Zero), Suc(Suc(Suc(Zero))))
  = Just([Plus(Zero, Suc(Zero), Suc(Suc(Zero)))])

We'll define a general decompose function that will inspect a judgment and then call the appropriate decomposition function:

-- Haskell

decompose :: Judgment -> Maybe [Judgment]
decompose (Equal x y) = decomposeEqual x y
decompose (Plus x y z) = decomposePlus x y z
// JavaScript

function decompose(j) {
    if (j.tag === "Equal") {
        return decomposeEqual(j.args[0], j.args[1]);
    } else if (j.tag === "Plus") {
        return decomposePlus(j.args[0], j.args[1], j.args[2]);
    }
}

Now let's define what constitutes a proof tree:

-- Haskell

data ProofTree = ProofTree Judgment [ProofTree]
  deriving (Show)
// JavaScript

function ProofTree(c,ps) {
    return { tag: "ProofTree", args: [c,ps] };
}

The first argument of the constructor ProofTree is the label for the node, which is the conclusion of the inference that justifies the node. The second argument is the list of sub-proofs that prove each of the premises of the inference. So, for example, the proof tree for suc(suc(zero)) + suc(zero) = suc(suc(suc(zero))) would be represented by

-- Haskell

ProofTree (Plus (Suc (Suc Zero)) (Suc Zero) (Suc (Suc (Suc Zero))))
  [ ProofTree (Plus (Suc Zero) (Suc Zero) (Suc (Suc Zero)))
      [ ProofTree (Plus Zero (Suc Zero) (Suc Zero))
          [ ProofTree (Equal (Suc Zero) (Suc Zero))
              [ ProofTree (Equal Zero Zero)
                  []
              ]
          ]
      ]
  ]
// JavaScript

ProofTree(Plus(Suc(Suc(Zero)), Suc(Zero), Suc(Suc(Suc(Zero)))),
  [ ProofTree(Plus(Suc(Zero), Suc(Zero), Suc(Suc(Zero))),
      [ ProofTree(Plus(Zero, Suc(Zero), Suc(Zero)),
          [ ProofTree(Equal(Suc(Zero), Suc(Zero)),
              [ ProofTree(Equal(Zero,Zero),
                  [])
              ])
          ])
      ])
  ])

We now can build a function, findProof, which will use decompose to try to find a proof for any judgment that it's given:

-- Haskell

findProof :: Judgment -> Maybe ProofTree
findProof j =
  case decompose j of
    Nothing -> Nothing
    Just js -> case sequence (map findProof js) of
      Nothing -> Nothing
      Just ts -> Just (ProofTree j ts)
// JavaScript

function sequence(mxs) {
    var xs = [];

    for (var i = 0; i < mxs.length; i++) {
        if (mxs[i].tag === "Nothing") {
            return Nothing;
        } else if (mxs[i].tag === "Just") {
            xs.push(mxs[i].arg);
        }
    }

    return Just(xs);
}

function findProof(j) {
    var mjs = decompose(j);

    if (mjs.tag === "Nothing") {
        return Nothing;
    } else if (mjs.tag === "Just") {
        var mts = sequence(mjs.arg.map(j => findProof(j)));
        if (mts.tag === "Nothing") {
            return Nothing;
        } else if (mts.tag === "Just") {
            return Just(ProofTree(j, mts.arg));
        }
    }
}

If we now run this on some example triples of numbers, we find that it returns Justs of proof trees exactly when the statement is true, and the trees show just how the statement is true. Try it on the above example for suc(suc(zero)) + suc(zero) = suc(suc(suc(zero))).
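Concretely, that check is just (a sketch; the result is Just of exactly the proof tree shown above):

-- Haskell

findProof (Plus (Suc (Suc Zero)) (Suc Zero) (Suc (Suc (Suc Zero))))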

The choice above to use the Maybe type operator reflects the idea that there is at most one proof of a judgment, but possibly also none. If there can be many, then we need to use List instead, which would give us alternative definitions of the various functions:

-- Haskell

decomposeEqual :: Nat -> Nat -> [[Judgment]]
decomposeEqual Zero Zero = [[]]
decomposeEqual (Suc x) (Suc y) = [[Equal x y]]
decomposeEqual _ _ = []

decomposePlus :: Nat -> Nat -> Nat -> [[Judgment]]
decomposePlus Zero y z = [[Equal y z]]
decomposePlus (Suc x) y (Suc z) = [[Plus x y z]]
decomposePlus _ _ _ = []

decompose :: Judgment -> [[Judgment]]
decompose (Equal x y) = decomposeEqual x y
decompose (Plus x y z) = decomposePlus x y z

findProof :: Judgment -> [ProofTree]
findProof j =
  do js <- decompose j
     ts <- sequence (map findProof js)
     return (ProofTree j ts)
// JavaScript

function bind(xs,f) {
    if (xs.length === 0) {
        return [];
    } else {
        return f(xs[0]).concat(bind(xs.slice(1), f))
    }
}

function sequence(xss) {
    if (xss.length === 0) {
        return [[]];
    } else {
        return bind(xss[0], x =>
                 bind(sequence(xss.slice(1)), xs2 =>
                   [[x].concat(xs2)]));
    }
}

function findProof(j) {
    return bind(decompose(j), js =>
             bind(sequence(js.map(j2 => findProof(j2))), ts =>
               [ProofTree(j,ts)]));
}

In fact, Haskell lets us abstract over the choice of Maybe vs List, provided that the type operator is a monad.
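Here is one way that abstraction might look (a sketch; findProofWith is a hypothetical name, with the decomposer passed in explicitly so that m can be instantiated to Maybe, lists, or any other monad):

-- Haskell

findProofWith :: Monad m => (Judgment -> m [Judgment]) -> Judgment -> m ProofTree
findProofWith decomp j =
  do js <- decomp j
     ts <- mapM (findProofWith decomp) js
     return (ProofTree j ts)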

A Type Checker with Proof Refinement

Let's now move on to building a type checker. We'll first define the types we're going to use. We'll have natural numbers, product types (pairs), and arrow types (functions). We'll also use a BNF definition as a reference:

<type> ::= "Nat" | <type> "×" <type> | <type> "→" <type>
-- Haskell

data Type = Nat | Prod Type Type | Arr Type Type
  deriving (Show, Eq)
// JavaScript

var Nat = { tag: "Nat" };

function Prod(a,b) {
    return { tag: "Prod", args: [a,b] };
}

function Arr(a,b) {
    return { tag: "Arr", args: [a,b] };
}

Next we must specify what qualifies as a program, which we'll be type checking:

<program> ::= <name> | "zero" | "suc" <program>
            | "〈" <program> "," <program> "〉"
            | "fst" <program> | "snd" <program>
            | "λ" <name> "." <program> | <program> <program>
-- Haskell

data Program = Var String
             | Zero | Suc Program
             | Pair Program Program | Fst Type Program | Snd Type Program
             | Lam String Program | App Type Program Program
  deriving (Show)
// JavaScript

function Var(x) {
    return { tag: "Var", arg: x };
}

var Zero = { tag: "Zero" };

function Suc(m) {
    return { tag: "Suc", arg: m };
}

function Pair(m,n) {
    return { tag: "Pair", args: [m,n] };
}

function Fst(a,m) {
    return { tag: "Fst", args: [a,m] };
}

function Snd(a,m) {
    return { tag: "Snd", args: [a,m] };
}

function Lam(x,m) {
    return { tag: "Lam", args: [x,m] };
}

function App(a,m,n) {
    return { tag: "App", args: [a,m,n] };
}

The programs built with Fst, Snd, and App have more arguments than usual because we want to do only type checking, so we need to supply information that's not present in the types when working in a bottom-up fashion. We'll see how we can avoid doing this later.

We'll also make use of a judgment HasType G M A, where G is a type context that assigns Types to variables (represented by a list of string-type pairs), M is some program, and A is some type.

-- Haskell

data Judgment = HasType [(String,Type)] Program Type
  deriving (Show)
// JavaScript

function HasType(g,m,a) {
    return { tag: "HasType", args: [g,m,a] };
}

We can now define the decompose functions for these. We'll just use the Maybe type for the decomposers here, because we don't need non-deterministic solutions. Type checkers either succeed or fail, and that's all.

-- Haskell

decomposeHasType
  :: [(String,Type)] -> Program -> Type -> Maybe [Judgment]
decomposeHasType g (Var x) a =
  case lookup x g of
    Nothing -> Nothing
    Just a2 -> if a == a2
               then Just []
               else Nothing
decomposeHasType g Zero Nat =
  Just []
decomposeHasType g (Suc m) Nat =
  Just [HasType g m Nat]
decomposeHasType g (Pair m n) (Prod a b) =
  Just [HasType g m a, HasType g n b]
decomposeHasType g (Fst b p) a =
  Just [HasType g p (Prod a b)]
decomposeHasType g (Snd a p) b =
  Just [HasType g p (Prod a b)]
decomposeHasType g (Lam x m) (Arr a b) =
  Just [HasType ((x,a):g) m b]
decomposeHasType g (App a m n) b =
  Just [HasType g m (Arr a b), HasType g n a]
decomposeHasType _ _ _ =
  Nothing

decompose :: Judgment -> Maybe [Judgment]
decompose (HasType g m a) = decomposeHasType g m a
// JavaScript

function lookup(x,xys) {
    for (var i = 0; i < xys.length; i++) {
        if (xys[i][0] === x) {
            return Just(xys[i][1]);
        }
    }

    return Nothing;
}

function decomposeHasType(g,m,a) {
    if (m.tag === "Var") {
        var ma2 = lookup(m.arg, g);

        if (ma2.tag === "Nothing") {
            return Nothing;
        } else if (ma2.tag === "Just") {
            return eq(a,ma2.arg) ? Just([]) : Nothing;
        }
    } else if (m.tag === "Zero" && a.tag === "Nat") {
      return Just([]);
    } else if (m.tag === "Suc" && a.tag === "Nat") {
      return Just([HasType(g, m.arg, Nat)]);
    } else if (m.tag === "Pair" && a.tag === "Prod") {
        return Just([HasType(g, m.args[0], a.args[0]),
                     HasType(g, m.args[1], a.args[1])])
    } else if (m.tag === "Fst") {
        return Just([HasType(g, m.args[1], Prod(a,m.args[0]))]);
    } else if (m.tag === "Snd") {
        return Just([HasType(g, m.args[1], Prod(m.args[0],a))]);
    } else if (m.tag === "Lam" && a.tag === "Arr") {
        return Just([HasType([[m.args[0],a.args[0]]].concat(g),
                             m.args[1],
                             a.args[1])]);
    } else if (m.tag === "App") {
        return Just([HasType(g, m.args[1], Arr(m.args[0],a)),
                     HasType(g, m.args[2], m.args[0])]);
    } else {
        return Nothing;
    }
}

function decompose(j) {
    if (j.tag === "HasType") {
        return decomposeHasType(j.args[0], j.args[1], j.args[2]);
    }
}

Using the same definitions of proof trees, sequencing, and finding a proof that we had earlier for Maybe, we can now perform some type checks on some programs. So for example, we can find that the program λx.x does indeed have the type Nat → Nat, or that it does not have the type (Nat × Nat) → Nat.
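Spelled out, those two checks are (a sketch; the successful one returns the full proof tree):

-- Haskell

findProof (HasType [] (Lam "x" (Var "x")) (Arr Nat Nat))
  -- Just (ProofTree ...)

findProof (HasType [] (Lam "x" (Var "x")) (Arr (Prod Nat Nat) Nat))
  -- Nothing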

Conclusion

That about wraps it up for the basics of proof refinement. One thing to observe about the above system is that it's unidirectional: information flows from the bottom to the top, justifying expansions of goals into subtrees. No information flows back down from subtrees to parent nodes. In my next blog post, I'll discuss how we might go about adding bidirectional information flow to such a system. This will allow us to perform type checking computations where a judgment might return some information to be used for determining if a node expansion is justified. The practical consequence is that our programs become less cluttered with extraneous information.

If you have comments or questions, get in touch. I'm @psygnisfive on Twitter, augur on freenode (in #languagengine and #haskell).

This post is the first part of a three part series, the second part is here, and the third part is here.