
Being lazy without getting bloated

Haskell nothunks library goes a long way towards making memory leaks a thing of the past

24 September 2020 Edsko de Vries 25 mins read


In our Developer Deep Dive series of occasional technical blogs, we invite IOHK’s engineers to discuss their latest work and insights.

Haskell is a lazy language. The importance of laziness has been widely discussed elsewhere: Why Functional Programming Matters is one of the classic papers on the topic, and A History of Haskell: Being Lazy with Class discusses it at length as well. For the purposes of this blog we will take it for granted that laziness is something we want. But laziness comes at a cost, and one of the disadvantages is that laziness can lead to memory leaks that are sometimes difficult to find. In this post we introduce a new library called nothunks aimed at discovering a large class of such leaks early, and helping to debug them. This library was developed for our work on the Cardano blockchain, but we believe it will be widely applicable in other projects too.

A motivating example

Consider the tiny application below, which processes incoming characters and reports how many characters there are in total, in addition to some per-character statistics:

import Data.List (foldl')
import qualified Data.Map.Strict as Map

data AppState = AppState {
      total :: !Int
    , indiv :: !(Map Char Stats)
    }
  deriving (Show)

type Stats = Int

update :: AppState -> Char -> AppState
update st c = st {
      total = total st + 1
    , indiv = Map.alter (Just . aux) c (indiv st)
    }
  where
    aux :: Maybe Stats -> Stats
    aux Nothing  = 1
    aux (Just n) = n + 1

initAppState :: AppState
initAppState = AppState {
      total = 0
    , indiv = Map.empty
    }

main :: IO ()
main = interact $ show . foldl' update initAppState

In this version of the code, the per-character statistics are simply how often we have seen each character. If we feed this code ‘aabbb’, it will tell us that it saw 5 characters, 2 of which were the letter ‘a’ and 3 of which were ‘b’:

# echo -n aabbb | cabal run example1
AppState {
    total = 5
  , indiv = fromList [('a',2),('b',3)]
  }

Moreover, if we feed the application a ton of data and construct a memory profile,

dd if=/dev/zero bs=1M count=10 | cabal run --enable-profiling example1 -- +RTS -hy

we see from Figure 1 that the application runs in constant space.

Figure 1. Memory profile for the first example

So far so good. But now suppose we make an innocuous-looking change. Suppose, in addition to reporting how often every character occurs, we also want to know the offset of the last time that the character occurs in the file:

type Stats = (Int, Int)

update :: AppState -> Char -> AppState
update st c = -- .. as before
  where
    aux :: Maybe Stats -> Stats
    aux Nothing       = (1     , total st)
    aux (Just (n, _)) = (n + 1 , total st)

The application works as expected:

# echo -n aabbb | cabal run example2
AppState {
    total = 5
  , indiv = fromList [('a',(2,1)),('b',(3,4))]
  }

and so the change is accepted in GitHub's PR code review and gets merged. However, although the code still works, it is now a lot slower.

# time (dd if=/dev/zero bs=1M count=100 | cabal run example1)
(..)
real    0m2,312s

# time (dd if=/dev/zero bs=1M count=100 | cabal run example2)
(..)
real    0m15,692s

We have a slowdown of almost an order of magnitude, although we are barely doing more work. Clearly, something has gone wrong, and indeed, we have introduced a memory leak (Figure 2).

Figure 2. Memory profile for example 2

Unfortunately, tracing a profile like this to the actual problem in the code can be very difficult indeed. What’s worse, although our change introduced a regression, the application still worked fine and so the test suite probably wouldn’t have failed. Such memory leaks tend to be discovered only when they get so bad in production that things start to break (for example, servers running out of memory), at which point you have an emergency on your hands.

In the remainder of this post we will describe how nothunks can help both with spotting such problems much earlier, and debugging them.

Instrumenting the code

Let’s first see what usage of nothunks looks like in our example. We modify our code and derive a new class instance for our AppState:

data AppState = AppState {
      total :: !Int
    , indiv :: !(Map Char Stats)
    }
  deriving (Show, Generic, NoThunks)
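
This deriving approach relies on a couple of language extensions and imports that the snippets here omit; a sketch of what the module header needs (assuming the class is imported from NoThunks.Class, as in the nothunks package):

{-# LANGUAGE DeriveAnyClass #-}
{-# LANGUAGE DeriveGeneric  #-}

import GHC.Generics   (Generic)
import NoThunks.Class (NoThunks)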

The NoThunks class is defined in the nothunks library, as we will see in detail later. Additionally, we will replace foldl' with a new function:

repeatedly :: forall a b. (NoThunks b, HasCallStack)
           => (b -> a -> b) -> (b -> [a] -> b)
repeatedly f = ..
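
In main, the only other change is to swap foldl' for repeatedly (a sketch of what the instrumented example might look like):

main :: IO ()
main = interact $ show . repeatedly update initAppState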

We will see how to define repeatedly later, but, for now, think of it as foldl' with some magic sprinkled on top. If we run the code again, the application will throw an exception almost immediately:

# dd if=/dev/zero bs=1M count=100 | cabal run example3
(..)
example3: Unexpected thunk with context
  ["Int","(,)","Map","AppState"]
CallStack (from HasCallStack):
  error, called at shared/Util.hs:22:38 in Util
  repeatedly, called at app3/Main.hs:38:26 in main:Main

The essence of the nothunks library is that we can check if a particular value contains any thunks we weren’t expecting, and this is what repeatedly is using to make sure we’re not inadvertently introducing any thunks in the AppState; it’s this check that is failing and causing the exception. We get a HasCallStack backtrace telling us where we introduced that thunk, and – even more importantly – the exception gives us a helpful clue about where the thunk was:

["Int","(,)","Map","AppState"]

This context tells us that we have an AppState containing a Map containing tuples, all of which were in weak head normal form (not thunks), but the tuple contained an Int which was not in weak head normal form: a thunk.

From a context like this it is obvious what went wrong: although we are using a strict map, we have instantiated the map at a lazy pair type, and so although the map is forcing the pairs, it’s not forcing the elements of those pairs. Moreover, we get an exception the moment we introduce the thunk, which means that we can catch such regressions in our test suite. We can even construct minimal counter-examples that result in thunks, as we will see later.
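
For this particular example, one way to fix the leak (a sketch, not shown in the original post) is to make Stats a data type with strict fields, so that constructing the statistics also forces both counters:

data Stats = Stats !Int !Int  -- occurrence count and offset of last occurrence
  deriving (Show)

update :: AppState -> Char -> AppState
update st c = st {
      total = total st + 1
    , indiv = Map.alter (Just . aux) c (indiv st)
    }
  where
    aux :: Maybe Stats -> Stats
    aux Nothing            = Stats 1       (total st)
    aux (Just (Stats n _)) = Stats (n + 1) (total st)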

Using nothunks

Before we look at how the library works, let’s first see how it’s used. In the previous section we were using a magical function repeatedly, but didn’t see how we could define it. Let’s now look at this function:

repeatedly :: forall a b. (NoThunks b, HasCallStack)
           => (b -> a -> b) -> (b -> [a] -> b)
repeatedly f = go
  where
    go :: b -> [a] -> b
    go !b []     = b
    go !b (a:as) =
        let !b' = f b a
        in case unsafeNoThunks b' of
              Nothing    -> go b' as
              Just thunk -> error . concat $ [
                  "Unexpected thunk with context "
                , show (thunkContext thunk)
                ]

The only difference between repeatedly and foldl' is the call to unsafeNoThunks, which is the function that checks if a given value contains any unexpected thunks. The function is marked as ‘unsafe’ because whether or not a value is a thunk is not normally observable in Haskell; making it observable breaks equational reasoning, and so this should only be used for debugging or in assertions. Each time repeatedly applies the provided function f to update the accumulator, it verifies that the resulting value doesn’t contain any unexpected thunks; if it does, it errors out (in real code such a check would only be enabled in test suites and not in production).
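
One way to keep such a check out of production builds (a sketch, not from the post) is to phrase it as an assertion: GHC erases Control.Exception.assert calls when a module is compiled with -fignore-asserts, so the fold then behaves exactly like foldl', at the cost of losing the thunk context in the error message when the check does fire:

import Control.Exception (assert)
import Data.Maybe        (isNothing)

repeatedlyAsserting :: forall a b. NoThunks b
                    => (b -> a -> b) -> (b -> [a] -> b)
repeatedlyAsserting f = go
  where
    go :: b -> [a] -> b
    go !b []     = b
    go !b (a:as) =
        -- when compiled with -fignore-asserts, the assertion (and with it
        -- the thunk check) disappears entirely
        let !b' = f b a
        in assert (isNothing (unsafeNoThunks b')) (go b' as)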

One point worth emphasizing is that repeatedly reduces the value to weak head normal form (WHNF) before calling unsafeNoThunks. This is, of course, what makes a strict fold-left strict, and so repeatedly must do this to be a good substitute for foldl'. However, it is important to realize that if repeatedly did not do that, the call to unsafeNoThunks would trivially and immediately report a thunk; after all, we have just created the f b a thunk! Generally speaking, it is not useful to call unsafeNoThunks (or its IO cousin noThunks) on values that aren’t already in WHNF.

In general, long-lived application state should never contain any unexpected thunks, and so we can apply the same kind of pattern in other scenarios. For example, suppose we have a server that is a thin IO layer on top of a mostly pure code base, storing the application state in an IORef. Here, too, we might want to make sure that that IORef never points to a value containing unexpected thunks:

newtype StrictIORef a = StrictIORef (IORef a)

-- Exception carrying the thunk context and the call stack at which the thunk
-- was detected (a minimal definition; the original snippet does not show it)
data ThunkException = ThunkException [String] CallStack

instance Show ThunkException where
  show (ThunkException ctxt cs) =
      "Thunk exception in context " ++ show ctxt ++ "\n" ++ prettyCallStack cs

instance Exception ThunkException

readIORef :: StrictIORef a -> IO a
readIORef (StrictIORef v) = Lazy.readIORef v

writeIORef :: (NoThunks a, HasCallStack)
           => StrictIORef a -> a -> IO ()
writeIORef (StrictIORef v) !x = do
    check x
    Lazy.writeIORef v x

check :: (NoThunks a, HasCallStack) => a -> IO ()
check x = do
    mThunk <- noThunks [] x
    case mThunk of
      Nothing -> return ()
      Just thunk ->
        throw $ ThunkException
                  (thunkContext thunk)
                  callStack

Since check already lives in IO, it can use noThunks directly, instead of using the unsafe pure wrapper; but otherwise this code follows a very similar pattern: the moment we might introduce a thunk, we instead throw an exception. One could imagine doing a very similar thing for, say, StateT, checking for thunks in put:

newtype StrictStateT s m a = StrictStateT (StateT s m a)
  deriving (Functor, Applicative, Monad)

instance (Monad m, NoThunks s)
      => MonadState s (StrictStateT s m) where
  get    = StrictStateT $ get
  put !s = StrictStateT $
      case unsafeNoThunks s of
        Nothing -> put s
        Just thunk -> error . concat $ [
            "Unexpected thunk with context "
          , show (thunkContext thunk)
          ]
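
Running the wrapper is then just a matter of unwrapping it; a possible helper (our own, not from any library) would be:

runStrictStateT :: StrictStateT s m a -> s -> m (a, s)
runStrictStateT (StrictStateT m) = runStateT m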

Minimal counter-examples

In some applications, there can be complicated interactions between the input to the program and the thunks it may or may not create. We will study this through a somewhat convoluted but, hopefully, easy-to-understand example. Suppose we have a server that is processing two types of events, A and B:

data Event = A | B
  deriving (Show)

type State = (Int, Int)

initState :: State
initState = (0, 0)

update :: Event -> State -> State
update A (a, b)    = let !a' = a + 1 in (a', b)
update B (a, b)
  | a < 1 || b < 1 = let !b' = b + 1 in (a, b')
  | otherwise      = let  b' = b + 2 in (a, b')

The server’s internal state consists of two counters, a and b. Each time we see an A event, we just increment the first counter. When we see a B event, however, we increment b by 1 if a and b haven’t both reached 1 yet, and by 2 otherwise. Unfortunately, the code contains a bug: in one of these cases, part of the server’s state is not forced and we introduce a thunk. (Disclaimer: the code snippets in this blog post are not intended to be good examples of coding, but to make it obvious where memory leaks are introduced. Typically, memory leaks should be avoided by using appropriate data types, not by modifying code.)

A minimal counter-example that will demonstrate the bug would therefore involve two events A and B, in any order, followed by another B event. Since we get an exception the moment we introduce a thunk, we can use a framework such as quickcheck-state-machine to find bugs like this and construct such minimal counter-examples.

Here’s how we might set up our test. Explaining how quickcheck-state-machine (QSM) works is well outside the scope of this blog post; if you’re interested, a good starting point might be An in-depth look at quickcheck-state-machine. For this post, it is enough to know that in QSM we are comparing a real implementation against some kind of model, firing off ‘commands’ against both, and then checking that the responses match. Here, both the server and the model will use the update function, but the ‘real’ implementation will use the StrictIORef type we introduced above, and the mock implementation will just use the pure code, with no thunks check. Thus, when we compare the real implementation against the model, the responses will diverge whenever the real implementation throws an exception (caused by a thunk):

data T

type instance MockState   T = State
type instance RealMonad   T = IO
type instance RealHandles T = '[]

data instance Cmd T f hs where
  Cmd :: Event -> Cmd T f '[]

data instance Resp T f hs where
  -- We record any exceptions that occurred
  Resp :: Maybe String -> Resp T f '[]

deriving instance Eq   (Resp T f hs)
deriving instance Show (Resp T f hs)
deriving instance Show (Cmd  T f hs)

instance NTraversable (Resp T) where
  nctraverse _ _ (Resp ok) = pure (Resp ok)

instance NTraversable (Cmd T) where
  nctraverse _ _ (Cmd e) = pure (Cmd e)

sm :: StrictIORef State -> StateMachineTest T
sm state = StateMachineTest {
      runMock    = \(Cmd e) mock ->
        (Resp Nothing, update e mock)
    , runReal    = \(Cmd e) -> do
        real <- readIORef state
        ex   <- try $ writeIORef state (update e real)
        return $ Resp (checkOK ex)
    , initMock   = initState
    , newHandles = \_ -> Nil
    , generator  = \_ -> Just $
        elements [At (Cmd A), At (Cmd B)]
    , shrinker   = \_ _ -> []
    , cleanup    = \_ -> writeIORef state initState
    }
  where
    checkOK :: Either SomeException () -> Maybe String
    checkOK (Left err) = Just (show err)
    checkOK (Right ()) = Nothing

(This uses the new Lockstep machinery in QSM that we introduced in the Munihac 2019 hackathon.)

If we run this test, we get the minimal counter-example we expect, along with the HasCallStack backtrace and the context telling us precisely that we have a thunk inside a lazy pair:

*** Failed! Falsified (after 6 tests and 2 shrinks):
Commands
  { unCommands =
      [ Command At { unAt = Cmd B } At { unAt = Resp Nothing } []
      , Command At { unAt = Cmd A } At { unAt = Resp Nothing } []
      , Command At { unAt = Cmd B } At { unAt = Resp Nothing } []
      ]
  }

(..)

Resp (Just "Thunk exception in context [Int,(,)]
    called at shared/StrictIORef.hs:26:5 in StrictIORef
    writeIORef, called at app5/Main.hs:71:37 in Main")
:/= Resp Nothing

The combination of a minimal counter-example, a clear context, and the backtrace, makes finding most such memory leaks almost trivial.

Under the hood

The core of the nothunks library is the NoThunks class:

-- | Check a value for unexpected thunks
class NoThunks a where
  noThunks   :: [String] -> a -> IO (Maybe ThunkInfo)
  wNoThunks  :: [String] -> a -> IO (Maybe ThunkInfo)
  showTypeOf :: Proxy a -> String

data ThunkInfo = ThunkInfo {
      thunkContext :: Context
    }
  deriving (Show)

type Context = [String]

All of the NoThunks class methods have defaults, so instances can be, and very often are, entirely empty, or – equivalently – derived using DeriveAnyClass.

The noThunks function is the main entry point for application code, and we have already seen it in use. Instances of NoThunks, however, almost never need to redefine noThunks and can use the default implementation, which we will take a look at shortly. Conversely, wNoThunks is almost never useful for application code but it’s where most of the datatype-specific logic lives, and is used by the default implementation of noThunks; we will see a number of examples of it below. Finally, showTypeOf is used to construct a string representation of a type when constructing the thunk contexts; it has a default in terms of Generic.

noThunks

Suppose we are checking if a pair contains any thunks. We should first check if the pair itself is a thunk, before we pattern match on it. After all, pattern matching on the pair would force it, and so if it had been a thunk, we wouldn’t be able to see this any more. Therefore, noThunks first checks if a value itself is a thunk, and if it isn’t, it calls wNoThunks; the w stands for WHNF: wNoThunks is allowed to assume (has as precondition) that its argument is not itself a thunk and so can be pattern-matched on.

noThunks :: [String] -> a -> IO (Maybe ThunkInfo)
noThunks ctxt x = do
    isThunk <- checkIsThunk x
    if isThunk
      then return $ Just ThunkInfo { thunkContext = ctxt' }
      else wNoThunks ctxt' x
  where
    ctxt' :: [String]
    ctxt' = showTypeOf (Proxy @a) : ctxt

Note that when wNoThunks is called, the (string representation of) type a has already been added to the context.
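
The checkIsThunk helper is not shown in this post. As a rough sketch, such a check can be implemented on top of the ghc-heap package by inspecting the value’s heap representation without forcing it (the library’s actual implementation differs in the details, for instance in how it deals with indirections):

import GHC.Exts.Heap (GenClosure (..), getClosureData)

checkIsThunk :: a -> IO Bool
checkIsThunk x = do
    closure <- getClosureData x
    return $ case closure of
      ThunkClosure    {} -> True
      APClosure       {} -> True
      SelectorClosure {} -> True
      _                  -> False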

wNoThunks

Most of the datatype-specific work happens in wNoThunks; after all, we can now pattern match. Let’s start with a simple example, a manual instance for a type of strict pairs:

data StrictPair a b = StrictPair !a !b

instance (NoThunks a, NoThunks b)
      => NoThunks (StrictPair a b) where
  showTypeOf _ = "StrictPair"
  wNoThunks ctxt (StrictPair x y) = allNoThunks [
        noThunks ctxt x
      , noThunks ctxt y
      ]

Because we have verified that the pair itself is in WHNF, we can just extract both components, and recursively call noThunks on both of them. Function allNoThunks is a helper defined in the library that runs a bunch of thunk checks, stopping at the first one that reports a thunk.
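
Conceptually, allNoThunks is little more than the following sketch (the library’s definition may differ): run the checks in order and stop at the first one that finds a thunk.

allNoThunks :: [IO (Maybe ThunkInfo)] -> IO (Maybe ThunkInfo)
allNoThunks []         = return Nothing
allNoThunks (chk:chks) = do
    mThunk <- chk
    case mThunk of
      Nothing -> allNoThunks chks
      Just ti -> return (Just ti)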

Occasionally we do want to allow for selected thunks. For example, suppose we have a set of integers with a cached total field, but we only want to compute that total if it’s actually used:

data IntSet = IntSet {
      toSet :: !(Set Int)

      -- | Total
      --
      -- Intentionally /not/ strict:
      -- Computed when needed (and then cached)
    , total :: Int
    }
  deriving (Generic)

Since total must be allowed to be a thunk, we skip it in wNoThunks:

instance NoThunks IntSet where
  wNoThunks ctxt (IntSet xs _total) = noThunks ctxt xs

Such constructions should probably only be used sparingly; if the various operations on the set are not carefully defined, the set might hold on to all kinds of data through that total thunk. Code like that needs careful thought and careful review.
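
For instance, a carefully defined insert operation would recompute the cached total from the new set itself, so that the thunk retains only the set (which is live anyway) rather than a chain of previous totals (a sketch, assuming Data.Set is imported qualified as Set):

insert :: Int -> IntSet -> IntSet
insert x (IntSet xs _) = IntSet xs' total'
  where
    xs'    = Set.insert x xs
    total' = Set.foldl' (+) 0 xs'  -- a thunk, but one that refers only to xs'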

Generic instance

If no implementation is given for wNoThunks, it uses a default based on GHC generics. This means that for types that implement Generic, deriving a NoThunks instance is often as easy as in the AppState example above, simply saying:

data AppState = AppState {
      total :: !Int
    , indiv :: !(Map Char Stats)
    }
  deriving (Show, Generic, NoThunks)

Many instances in the library itself are also defined using the generic instance; for example, the instance for (default, lazy) pairs is just:

instance (NoThunks a, NoThunks b) => NoThunks (a, b)

Deriving-via wrappers

Sometimes, we don’t want the default behavior implemented by the generic instance, but defining an instance by hand can be cumbersome. The library therefore provides a few newtype wrappers that can be used to conveniently derive custom instances. We will discuss three such wrappers here; the library comes with a few more.

Only check for WHNF

If all you want to do is check if a value is in weak head normal form (that is, check that it is not a thunk itself, although it could contain thunks), you can use OnlyCheckWhnf. For example, the library defines the instance for Bool as:

deriving via OnlyCheckWhnf Bool
         instance NoThunks Bool

For Bool, this is sufficient: when a boolean is in weak head normal form, it won’t contain any thunks. The library also uses this for functions:

deriving via OnlyCheckWhnfNamed "->" (a -> b)
         instance NoThunks (a -> b)

(Here, the Named version allows you to explicitly define the string representation of the type to be included in the thunk contexts.) Using OnlyCheckWhnfNamed for functions means that any values in the function closure will not be checked for thunks. This is intentional and a subtle design decision; we will come back to this in the section on permissible thunks below.

Skipping some fields

For types such as IntSet where most fields should be checked for thunks, but some fields should be skipped, we can use AllowThunksIn:

deriving via AllowThunksIn '["total"] IntSet
         instance NoThunks IntSet

This can be handy for large record types, where giving the instance by hand is cumbersome and, moreover, can easily get out of sync when changes to the type (for example, a new field) are not reflected in the definition of wNoThunks.

Inspecting the heap directly

Instead of going through the class system and the NoThunks instances, we can also inspect the GHC heap directly. The library makes this available through the InspectHeap newtype, which has an instance:

instance Typeable a => NoThunks (InspectHeap a) where
  -- ..

Note that this does not depend on a NoThunks instance for a. We can use this like any other deriving-via wrapper, for example:

deriving via InspectHeap TimeOfDay
         instance NoThunks TimeOfDay

The advantage of such an instance is that we do not require instances for any nested types; for example, although TimeOfDay has a field of type Pico, we don’t need a NoThunks instance for it.

The disadvantage is that we lose all compositionality. If there are any types nested inside for which we want to allow for thunks, we have no way of overriding the behaviour of the no-thunks check for those types. Since we are inspecting the heap directly, and the runtime system does not record any type information, any NoThunks instances for those types are irrelevant, and the check will report any thunks it finds. Moreover, when we do find such a thunk, we cannot report a useful context, because – again – we have no type information. If noThunks finds a thunk deeply nested inside some T (whose NoThunks instance was derived using InspectHeap), it will merely report "..." : "T" as the context (plus perhaps any context leading to T itself).

Permissible thunks

Some data types inherently depend on the presence of thunks. For example, the Seq type defined in Data.Sequence internally uses a finger tree. Finger trees are a specialized data type introduced by Ralf Hinze and Ross Paterson; for our purposes, all you need to know is that finger trees make essential use of thunks in their spines to achieve their asymptotic complexity bounds. This means that the NoThunks instance for Seq must allow for thunks in the spine of the data type, although it should still verify that there are no thunks in any of the elements in the sequence. This is easy enough to do; the instance in the library is:

instance NoThunks a => NoThunks (Seq a) where
  showTypeOf _   = "Seq"
  wNoThunks ctxt = noThunksInValues ctxt . toList

Here, noThunksInValues is a helper function that checks a list of values for thunks, without checking the list itself.
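
A possible definition of noThunksInValues is simply (a sketch; the library’s version may differ):

noThunksInValues :: NoThunks a => Context -> [a] -> IO (Maybe ThunkInfo)
noThunksInValues ctxt = allNoThunks . map (noThunks ctxt)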

However, the existence of types such as Seq means that the non-compositionality of InspectHeap can be a big problem. It is also the reason that for functions we merely check if the function is in weak head normal form. Although the function could have thunks in its closure, we don’t know what their types are. We could check the function closure for thunks (using InspectHeap), but if we did, and that closure contained, say, a Seq among its values, we might incorrectly report an unexpected thunk. Because it is more problematic if the test reports a bug when there is none than when an actual bug is not reported, the library opts to check only functions for WHNF. If in your application you store functions, and it is important that these functions are checked for thunks, then you can define a custom newtype around a -> b with a NoThunks instance defined using InspectHeap (but only if you are sure that your functions don’t refer to types that must be allowed to have thunks).
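
Such a wrapper might look as follows (a sketch; CheckedFn is a hypothetical newtype, not part of the library):

newtype CheckedFn a b = CheckedFn (a -> b)

deriving via InspectHeap (a -> b)
         instance (Typeable a, Typeable b) => NoThunks (CheckedFn a b)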

Comparison with the heap/stack limit size method

In 2016, Neil Mitchell gave a very nice talk at HaskellX, where he presented a method for finding memory leaks (he has also written a blog post on the topic). The essence of the method is to run your test suite with much reduced stack and heap limits, so that if there is a memory leak in your code, you will notice it before it hits production. He then advocates the use of the -xc runtime flag to get a stack trace when such a ‘stack limit exhausted’ exception is thrown.

The technique advocated in this post has a number of advantages. We get an exception the moment a thunk is created, so the stack trace we get is often much more useful. Together with the context reported by noThunks, finding the problem is usually trivial. Interpreting the stack reported by -xc can be more difficult, because this exception is thrown when the limit is exhausted, which may or may not be related to the code that introduced the leak in the first place. Moreover, since the problem only becomes known when the limit is exhausted, minimal counter-examples are out of the question. It can also be difficult to pick a suitable value for the limit; how much memory does the test suite actually need, and what would constitute a leak? Finally, -xc requires your program to be compiled with profiling enabled, which means you’re debugging something different to what you’d run in production, which is occasionally problematic.

Having said all that, the nothunks method does not replace the heap/stack limit method, but complements it. The nothunks approach is primarily useful for finding space leaks in pieces of data where it’s clear that we don’t want any thunk build-up, typically long-lived application state. It is less useful for finding more ‘local’ space leaks, such as a function accumulator not being updated strictly. For finding such leaks, setting stack/heap limits is still a useful technique.

Conclusions

Long-lived application data should, typically, not have any thunk build-up. The nothunks library can verify this through the noThunks and unsafeNoThunks function calls, which check if the supplied argument contains any unexpected thunks. These checks can then be used in assertions to check that no thunks are created. This means that if we do introduce a thunk by mistake, we get an immediate test failure, along with a callstack to the place where the thunk was created as well as a context providing a helpful hint on where the thunk is. Together with a testing framework, this makes memory leaks much easier to debug and avoid. Indeed, they have mostly been a thing of the past in our work on Cardano since we started using this approach.

Project Catalyst; introducing our first public fund for Cardano community innovation

An exciting experiment to start building the future of Cardano

16 September 2020 Dor Garbash 4 mins read


Today, we are announcing the launch of Project Catalyst’s first public fund, an important first step into the world of on-chain governance, treasury, and community innovation for Cardano.

The public fund launch follows five months of intense activity, across two previous trial ‘funds’. ‘Fund 0’ was the very first experiment, using a focus group made up of IOG team members. ‘Fund 1’ was the first time we introduced the idea to the Cardano community, recruiting a group of some 50 volunteers to help us develop the platform and processes. While this voting cycle did not offer ‘real’ funding, it served as a hugely valuable way to give our team and the Cardano community members a chance to test drive and refine the emerging process.

We still have a way to go. But with the community's support, we want to maintain the pace of progress we have already set. If Fund 0 was the technical runthrough, Fund 1 was the dress rehearsal. Announced today, Fund 2 is the opening night where the star acts of the community get the chance to compete for the funding to bring their project center stage.

Funding the future

We have learned a lot since we kicked off the private phase of the program in August. Support from our pioneer group of 50 community participants helped us to identify areas for improvement so we could develop and refine the process before opening things up more widely. We learned that providing clear documentation and guidelines helps the community engage more deeply and focus on the ideas. We also learned we can provide alternative avenues for the community to discuss “meta” proposals. This enhances the Catalyst and Voltaire processes while focusing attention on writing impactful proposals. Furthermore, we realized that it was important for IOHK to support individuals writing proposals to ensure their ideas were fairly represented.

Cardano will thrive by unlocking the creative potential of our global community. Our voting protocol will only be as good as the ideas which feed into it. To that end, we are developing a guide to help anyone develop the best possible proposal for Fund 2 and beyond.

The first public fund we’re announcing today contains up to $250k-worth of ada, which the community can access. Anyone can bring their idea and create a proposal. Through a public vote, ‘winning’ proposals will begin a development process.

Initially, we’re keeping the focus tight, asking the community to address a challenge statement: “How can we encourage developers and entrepreneurs to build Dapps and businesses on top of Cardano in the next 6 months?” Funding proposals (or ‘FPs’) can address this with a wide variety of ideas – from marketing initiatives and infrastructure development, to business planning and content creation.

The first stage will be to ‘explore the challenge’, asking members of the community to provide their perspectives. Next, we’ll encourage everyone to put their best ideas forward on the innovation platform to collaborate and discuss via a dedicated Telegram chat channel.

A public vote

After the initial ideation, collaboration and proposal stages, we’ll be putting things to a vote. Proposals can be reviewed either on the innovation platform or on the new mobile voting application, currently in development. When it comes time to vote, all participants will need to register to vote through the voting app. The ‘right’ to vote will be linked to each participant’s ada holdings and will earn them additional rewards for voting. Participating in this first funding round will not prevent ada holders from delegating their ada and earning rewards as normal. Voting effectively functions like a ‘transaction’, allowing all participants to cast their vote to indicate ‘yes,’ or ‘no.’ We’ll share further information on the app and the voting process in a future blog post.

Collaboration and innovation

Voltaire will be a crucial building block in the Cardano ecosystem, because it allows every ada holder to be involved in making decisions on the future development of the platform and contribute to the growth of the ecosystem. Project Catalyst is the important first component in delivering that capability. The early stages of this experiment have already demonstrated the passion and commitment of the Cardano community to develop this further. With the introduction of an on-chain voting and treasury system, network participants will be able to use their stake and voting rights to steer Cardano towards tackling shared goals, in a democratic and self-sustaining way.

To find out more, you can watch our announcement Crowdcast with Charles Hoskinson and Project Catalyst’s product manager, Dor Garbash.

Project Catalyst and Voltaire bring power to the people

Establishing a long-term future for Cardano growth has begun with a treasury and democratic voting in the Catalyst project

10 September 2020 Bingsheng Zhang 7 mins read


Designing a groundbreaking proof-of-stake blockchain means it is vital to ensure that the system is self-sustainable. This will allow it to drive growth and maturity in a truly decentralized and organic way. Voltaire is IOHK’s way of establishing this capability, allowing the community to maintain the Cardano blockchain while continuing to develop it by proposing and implementing system improvements. This puts the power to make decisions in the hands of ada holders.

Rigorous research lies at the heart of building a solid blockchain. July’s Shelley summit included a presentation on the importance of funding for the growth of Cardano. This was based on research between Lancaster University and IOHK into the notion of a treasury system and an effective, democratic approach to funding Cardano’s long-term development. IOHK has now applied treasury mechanism capabilities in Project Catalyst, which combines research, social experiments and community consent to establish an open, democratic culture within the Cardano community.

A democratic approach

With the rapid growth of blockchain technology, the world is witnessing the emergence of platforms across a variety of industries. Technological growth and maturity are essential for long-term blockchain sustainability and development. That is why someone has to support and fund growth and system enhancements. A democratic approach is an integral part of the blockchain ecosystem because it allows sustainability decisions to be made collaboratively, without relying on a central governing entity. Thus, the governing and decision-making process must be collective. This will allow users to understand how improvements are made, who makes decisions, and ultimately where the funding comes from to make these choices.

Long-term sustainability

There are several ways to raise capital for development purposes. Donations, venture capital funding, and initial coin offerings (ICOs) are the most common. However, although such models may work for raising initial capital, they rarely ensure a long-term funding source or predict the amount of capital needed for development and maintenance. In addition, these models suffer from centralized control, making it difficult to find a consensus meeting the needs and goals of everyone.

To establish a long-term funding source for blockchain development, some cryptocurrency projects apply taxation, taking a percentage from fees or rewards, and accumulating them in a separate pool – a treasury. Treasury funds can then be used for system development and maintenance purposes. In addition, the value of treasury reserves grows as the value of the cryptocurrency grows, providing another potential source of funds.

However, funding systems are often at risk of centralization when making decisions on guiding development. In these systems, only certain individuals in the organization or company are empowered to make decisions on how to use available funds and for which purposes. Considering that the decentralized architecture of blockchain makes it inappropriate to have centralized control over funding, disagreement can arise among organization members and lead to complex disputes.

Treasury systems and Cardano

A number of treasury systems have arisen to address the problems. These systems may consist of iterative treasury periods during which project funding proposals are submitted, discussed, and voted on. However, common drawbacks include poor voter privacy or ballot submission security. In addition, the soundness of funding decisions can be compromised if master nodes are subject to coercion, or a lack of expert involvement might encourage irrational behavior.

As a third-generation cryptocurrency platform, Cardano was created to solve the difficulties encountered by previous platforms.

Cardano aims to bring democracy to the process, giving power to everyone and so ensuring that decisions are fair. For this, it is crucial to put in place transparent voting and funding processes. This is where Voltaire comes in.

The paper on treasury systems for cryptocurrencies introduces a community-controlled, decentralized, collaborative decision-making mechanism for sustainable funding of blockchain development and maintenance. This approach to collaborative intelligence relies on ‘liquid democracy’ – a hybrid of direct and representative democracy that provides the benefits of both systems.

This approach enables the treasury system to take advantage of expert knowledge in a voting process, as well as ensuring that all ada holders are granted an opportunity to vote. Thus, for each project, a voter can either vote directly or delegate their voting power to a member of the community who is an expert on the topic.

To ensure sustainability, the treasury system is controlled by the community and is refilled constantly from potential sources such as:

  • some newly-minted coins being held back as funding
  • a percentage of stake pool rewards and transaction fees
  • additional donations or charity

Because funds are being accumulated continually, it will be possible to fund projects and pay for improvement proposals.

So, the funding process can consist of ‘treasury periods’, with each being divided into the following phases:

  • pre-voting
  • voting
  • post-voting

During each period, project proposals may be submitted, discussed by experts and voters, and finally voted for to fund the most essential projects. Even though anyone can submit a proposal, only certain proposals can be supported depending on their importance and desirability for network development.

Voting and decision making

To understand which project should be funded first, let’s discuss the process of decision-making.

Ada holders who participate in treasury voting include scientists and developers, management team members, and investors and the general public. Each of these may have different imperatives for the growth of the system, and that is why there has to be a way to make these choices and desires work together.

For this, the voting power is proportional to the amount of ada someone owns; the more ada, the more influence in making decisions. As part of the liquid democracy approach, as well as direct yes/no voting, an individual may delegate their voting power to an expert they trust. In this case, the expert will be able to vote directly on the proposal they regard as the most important. After the voting, project proposals may be scored based on the number of yes/no votes and be shortlisted; the weakest project proposals will be discarded. Then, shortlisted proposals can be ranked according to their score, and the top-ranked proposals will be funded in turn until the treasury fund is exhausted. Dividing the decision-making process into stages in this way allows consensus to be reached on the priority of improvements.

To ensure voter privacy, the research team has invented an ‘honest verifier zero-knowledge proof for unit vector encryption with logarithmic size communication’. Zero-knowledge techniques are mathematical methods used to verify things without revealing any underlying data. In this case, the zero-knowledge proof means that someone can vote without revealing any information about themselves, other than that they are eligible to vote. This eliminates any possibility of voter coercion.

Treasury prototypes have been created at IOHK for benchmarking. Implementing the research as the basis of Voltaire will help to deliver reliable and secure means for treasury voting and decision-making. Project Catalyst is an experimental treasury system that combines proposal and voting procedures focusing on the establishment of a democratic culture within the Cardano community. Initially, Cardano’s treasury will be refilled from a percentage of stake pool rewards ensuring a sustainable treasury source. Other blockchains have treasury systems, but IOHK’s combines complete privacy through zero-knowledge proofs, liquid democracy from expert involvement and vote delegation, and participation for all, not just a governing entity. This should encourage participation, incentivization, and decentralization for making fair and transparent decisions.

It is also important to note that this treasury system mechanism can be implemented on a variety of blockchains, not just Cardano. An implementation has already been proposed for Ethereum Classic. In the process, treasury systems can help everyone to understand how a network will develop.

After a successful closed user group trial earlier which started this summer, Project Catalyst will very soon be opened up to its first public beta program. Although it is still early days for Cardano on-chain governance, we look forward to a bright future, with the community lighting the way. So, please follow the blog for updates on Voltaire and how Project Catalyst is paving the way to Cardano’s sustainability.