
A solution for scalable randomness

SCRAPE can handle far more users than previous solutions and can be applied to blockchain protocols, electronic voting, anonymous instant messaging and more

6 June 2017 Bernardo David 7 mins read


It is well known that randomness is a fundamental resource for secure cryptographic schemes. All common encryption and authentication schemes are only as secure as their randomness sources are good, and a poor randomness source can render insecure virtually any cryptographic scheme that would otherwise be secure. In many cases it is sufficient to have a trustworthy local source of fresh randomness (e.g. your operating system’s randomness pool or a Random Number Generator (RNG) embedded in your CPU). However, many distributed applications require a publicly accessible source of randomness that allows users (and external observers) to make sure they are all getting the same random values. With such applications in mind, in 1983 Michael Rabin introduced the concept of a randomness beacon, a publicly accessible entity that provides a periodically refreshed random value to its clients. More recently, NIST, the National Institute of Standards and Technology in the United States, started offering a randomness beacon service that derives its randomness from quantum physical processes.

When users are willing to trust a centralised randomness beacon, they can simply appoint a trusted party to perform this role or use a publicly available beacon, such as the one from NIST. However, standard randomness beacons provide no means for users to verify that the output values are indeed random, rather than skewed in a way that gives an advantage to an attacker.

As we know, cryptographic schemes have suffered both from poor implementations and from governmental agencies (e.g. the NSA) that actively work to undermine existing and future standards. In this context, a centralised randomness beacon could be providing bad randomness because of a simple implementation mistake, or in order to do the bidding of a third-party attacker who has undermined the central randomness generator, or simply because the people running the service have been presented with a subpoena that compels them to do so.

Using a coin tossing protocol (a concept introduced by Blum in 1981) is an obvious way to (partially) solve this problem. Coin tossing allows several users to come together and generate an output that is guaranteed to be uniformly random without having to trust each other or any third party.

In a coin tossing protocol, good randomness is obtained as long as at least one of the parties is honest. If you run a coin tossing protocol among many servers, anyone who trusts at least one of those servers can be assured that the output is truly random. In many applications (such as cryptocurrencies) we assume that at least half of all parties are honest by design.
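To make the idea concrete, here is a minimal two-party commit-then-reveal coin toss sketched in Scala. This is an illustrative toy, not the construction used in SCRAPE: the hash-based commitment, the object name and the two-party setting are assumptions made purely for the example.

```scala
import java.security.{MessageDigest, SecureRandom}

// Toy two-party coin toss: each party commits to a random string, then both
// reveal and the output is the XOR. If at least one contribution is truly
// random, the XOR is too.
object CoinTossToy {
  private val rng = new SecureRandom()
  private def sha256(bs: Array[Byte]): Array[Byte] =
    MessageDigest.getInstance("SHA-256").digest(bs)
  private def randomBytes(n: Int): Array[Byte] = {
    val b = new Array[Byte](n); rng.nextBytes(b); b
  }

  def main(args: Array[String]): Unit = {
    // Phase 1: both parties publish commitments to their contributions.
    val (a, aNonce) = (randomBytes(32), randomBytes(32))
    val (b, bNonce) = (randomBytes(32), randomBytes(32))
    val (commitA, commitB) = (sha256(aNonce ++ a), sha256(bNonce ++ b))

    // Phase 2: both reveal (value, nonce); everyone checks the openings.
    require(sha256(aNonce ++ a).sameElements(commitA))
    require(sha256(bNonce ++ b).sameElements(commitB))

    // Output: the XOR of the two contributions.
    val coin = a.zip(b).map { case (x, y) => (x ^ y).toByte }
    println(coin.map("%02x".format(_)).mkString)
  }
}
```

The guarantee matches the claim above: as long as one contribution is uniformly random and the commitments are binding and hiding, the XOR output is uniformly random. Note, however, that either party can still abort after seeing the other’s reveal, which is exactly the problem discussed next.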

Even though using coin tossing protocols solves the trust problem, it still does not make it any easier to ensure that all users will receive the random output. For example, a user with a faulty internet connection, or an attacker who wants to mount a Denial-of-Service attack against the protocol (and consequently the application using the final randomness), can simply abort (i.e. stop sending protocol messages) before the output is obtained, keeping all the other users from receiving randomness. In fact, a malicious user can do even worse: he can execute the protocol up to the point where he learns the random output, but withhold the final message that would enable the other users to learn it as well.

Tal Rabin and Ben-Or introduced a classical method for making sure all users receive the proper protocol output – assuming at least a majority of them is honest – in a seminal paper in 1989. The main idea is to use "secret sharing" to split each user’s input into "shares" that do not reveal any information about the input by themselves but allow for total recovery of the input if a sufficient number of them are put together. If each user is given a share of every other user’s input – such that the inputs can be recovered if more than 50% of the shares are pooled – malicious users still cannot learn anything because we assume that more than 50% of the users are honest. However, if a malicious user aborts the protocol at some point, the honest users can pool their shares to recover the malicious user’s input and finish the protocol by themselves.
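As a concrete, much simplified illustration of this kind of threshold sharing, here is a sketch in Scala of Shamir's polynomial-based scheme, a classic way to realise the "shares" described above. The field, parameters and names are assumptions made for the example, and the sketch has no public verifiability, which is precisely what the following paragraphs add.

```scala
import scala.util.Random

// Minimal sketch of (t, n) threshold secret sharing (Shamir's scheme) over a
// toy prime field. Illustrative only: no public verifiability, no hardening.
object ShamirToy {
  val p: BigInt = BigInt(2).pow(127) - 1           // a Mersenne prime, so Z_p is a field
  private val rnd = new Random()
  private def randField(): BigInt = BigInt(126, rnd) // uniform in [0, 2^126)

  // Split `secret` into n shares; any t of them reconstruct it,
  // while t - 1 or fewer reveal nothing about it.
  def share(secret: BigInt, t: Int, n: Int): Seq[(Int, BigInt)] = {
    val coeffs = secret +: Seq.fill(t - 1)(randField())
    (1 to n).map { x =>
      val y = coeffs.zipWithIndex
        .map { case (c, i) => c * BigInt(x).pow(i) }
        .foldLeft(BigInt(0))(_ + _)
        .mod(p)
      (x, y)
    }
  }

  // Lagrange interpolation at 0 recovers the secret from any t shares.
  def reconstruct(shares: Seq[(Int, BigInt)]): BigInt =
    shares.map { case (xi, yi) =>
      val lagrange = shares
        .collect { case (xj, _) if xj != xi =>
          (BigInt(-xj).mod(p) * BigInt(xi - xj).mod(p).modInverse(p)).mod(p)
        }
        .foldLeft(BigInt(1))((a, b) => (a * b).mod(p))
      (yi * lagrange).mod(p)
    }.foldLeft(BigInt(0))((a, b) => (a + b).mod(p))

  def main(args: Array[String]): Unit = {
    val secret = randField()
    val shares = share(secret, t = 3, n = 5)
    assert(reconstruct(rnd.shuffle(shares).take(3)) == secret)
    println("reconstructed the secret from 3 of 5 shares")
  }
}
```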

This general approach can be used to make sure that all the users running a coin tossing protocol will receive the final random value. However, it still does not allow users and external observers to check that the output is being generated correctly. For that, the users must employ a kind of secret sharing that is publicly verifiable, meaning that anyone can make sure that the shares of inputs are correctly generated. This prevents malicious users from posting invalid shares and then aborting in such a way that the honest users cannot reconstruct their inputs. This approach has been used for generating publicly verifiable randomness in Ouroboros, the Proof-of-Stake based permissionless consensus protocol, and for constructing a stand-alone randomness beacon in a paper by Syta et al. that will be presented at the IEEE Symposium on Security and Privacy 2017.

The main ingredient in building randomness beacons through this approach is Publicly Verifiable Secret Sharing (PVSS), a type of secret sharing scheme introduced by Stadler that allows users to split their inputs into shares whose validity can be promptly verified by any third party. Using such schemes, it is possible to construct coin tossing protocols with Guaranteed Output Delivery through the approach of Rabin and Ben-Or, meaning that every user is guaranteed to receive the final randomness produced by the protocol. However, the PVSS scheme from Schoenmakers, which until recently was the most efficient one available, still requires a number of modular exponentiations that is quadratic in the number of users: if there are n users running the protocol, verifying the n shares produced by one of them requires O(n^2) exponentiations. This poses a serious scalability issue, since the quadratic overhead means that even a modest increase in the number of users causes a steep increase in the number of expensive modular exponentiations needed to verify the shares.

In a recent joint paper with Ignacio Cascudo, SCRAPE: Scalable Randomness Attested by Public Entities, which will be presented at ACNS 2017 in Japan this July, we solved this scalability problem by introducing a PVSS scheme that only requires O(n) modular exponentiations to verify n shares (5n modular exponentiations, to be precise) while maintaining the efficiency of all other protocol components. The main idea behind our protocol is to use a cheap information-theoretic technique to verify the validity of shares instead of performing expensive modular exponentiations. As a result, our protocol requires only 5n modular exponentiations in the verification phase and 4n modular exponentiations in the distribution phase (where shares are generated), while the protocol by Schoenmakers requires 4n+(n^2)/2 exponentiations for verification and 4n+t exponentiations for distribution, where n is the number of users/shares and t (here n/2) is the number of shares required to reconstruct the secret. Our efficient PVSS protocol can be used to improve the randomness generation component of Ouroboros and the recent result by Syta et al.
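To see what the move from O(n^2) to O(n) means in practice, we can plug a concrete value into the verification counts quoted above; the snippet below (Scala REPL style) is just that back-of-the-envelope arithmetic, not a benchmark.

```scala
// Verification cost for n = 1,000 participants, using the counts quoted above.
val n = 1000
val schoenmakersVerification = 4 * n + n * n / 2 // 504,000 exponentiations
val scrapeVerification       = 5 * n             //   5,000 exponentiations
println(s"Schoenmakers: $schoenmakersVerification, SCRAPE: $scrapeVerification")
```

At a thousand participants the quadratic term already accounts for roughly 99% of the Schoenmakers verification work, which is where the factor-of-100 gap between the two schemes comes from.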

Team Grothendieck move closer to ETC goal

Working on the code in Argentina

22 May 2017 Jeremy Wood 6 mins read


It took a little longer than expected but I finally made the trip to Buenos Aires, Argentina. In fact, I'm standing at a work desk by the window in Frankfurt airport waiting for my flight back to Dublin. I'm enjoying a gloriously sunny day here through the wall of glass. 

It was another productive trip; a lot has happened since the Team Grothendieck trips to Poland and St Petersburg. In our work to build a Scala client for Ethereum Classic, a lot of code has been written, a lot of understanding gained, and a couple of milestones reached: we now have the ability to download and execute blocks of transactions from the ETC chain. We have also evolved a lot as a team.

The remaining milestones include mining and the JSON API, which will allow Mist and other dapp wallets to use our client. In parallel with that we need to focus on our codebase. It was this process that the Grothendieck team’s Alan Verbner, Nico Tallar and I spent our time on in BA.

As background: in an ideal world we would create code from day one supporting the coupling that makes sense as we approach the release. However, this is an almost impossible task because we can't usually know the most sensible "final" coupling when starting out. For the ETC client we took the (oft-used) approach of writing clean, unit-tested code that implemented the functionality we understood at the time and then refactoring as we learnt more. For example, when we finished the block download phase we had very little in the way of model classes for ‘blockchain’. However, as we spun up the "Tx Execution" phase, we discovered it made a lot of sense to create a set of functions coalescing around a ‘blockchain’ model.

There's a school of thought that says this is the way to carry on: don't waste your time building "reusable" components that aren't reusable and won't be reused. I have sympathy for this approach because building reusable components is hard and it is embarrassing for your new component – the one you spent time and effort on – to fail at the first attempt at reuse because it doesn't quite do what you need it to. Better to allow the new functionality to drive refactors as and when it comes. There's a humility to this approach that appeals to me. 

Guess what's coming next? We're going to look at ways to modularise our client. Why? Firstly, and most importantly, it's a functional requirement for the codebase to support a significant level of flexibility. Four things that might define the core of a blockchain client are the network module, the ledger, the consensus mechanism and the wallet. With the wallet and ledger closely coupled, we would like to experiment with different types of ledger and different types of consensus. And these should all be able to use a well defined network module.

So we will first attempt to isolate the 'network' module. This is a module that maintains connections to peers and sends and receives a configurable set of messages. It allows messages to be addressed to a single peer or broadcast to many peers, and it allows clients of the module to register for message types, either globally or per peer. It's also functionality that we have already created. We just need to organize it so that it's reusable!
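To make that concrete, here is a rough Scala sketch of the kind of interface such a module might expose. The names and signatures (PeerId, NetworkModule, the example message types) are illustrative guesses for this post, not the actual Grothendieck code.

```scala
// Hypothetical sketch of a reusable network module interface: peer
// connections, point-to-point and broadcast sends, and handler registration
// per message type, globally or per peer. Names are illustrative only.
final case class PeerId(value: String)

sealed trait Message
final case class NewBlockHashes(hashes: Seq[String]) extends Message
final case class GetBlockHeaders(fromBlock: BigInt, count: Int) extends Message

trait NetworkModule {
  def send(to: PeerId, message: Message): Unit   // address a single peer
  def broadcast(message: Message): Unit          // address many peers

  // Clients of the module register for incoming messages...
  def onMessage(handler: (PeerId, Message) => Unit): Unit
  // ...or for messages from a particular peer.
  def onMessageFrom(peer: PeerId)(handler: Message => Unit): Unit
}
```

The point of carving out an interface like this is that the JSON RPC and mining work mentioned above should be able to sit on top of it without caring how peer connections are managed underneath.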

Why now? The JSON RPC API should – in theory – be controller layer code. The mining integration should – also in theory – not affect the workings of the network module. So the functionality to be reused should already exist, and when we repackage it without breaking the existing system we will know it's useful. By the time we get around to examining the coupling of the ledger and consensus the same should be true: we won't be making up use cases for invented modules, we will have specific working code to repackage. Will we produce interfaces and coupling that can be reused? That's the challenge. And after that – optimization of the internals...

Mining, the web API and modularisation are not trivial tasks, but they will end. And with them we reach the end of the existing roadmap – stability, bug fixes and auditing aside. For the past five months we have been playing catch-up; we didn't need to talk about the future evolution of the technology because we had a clear and challenging mandate – to recreate a client from the ground up. Now that we're relatively close to doing that, the exciting process of talking about the future of the codebase can begin.

While in BA, Sergio Lerner kindly hosted the three of us at his office, and over decent coffee and alfajores we had a good discussion about Ethereum tech, some of the things he's been up to and, of course, RSK's upcoming release of their platform at Consensus 2017 in NYC. (Best of luck RSK!)

I'm always interested in how a global blockchain aimed at general purpose use can scale, with no way to delete defunct contracts from the global state trie. Sergio made the interesting point that ETC probably won't need to scale for a couple of years. He also suggested that with storage being so cheap for a network in a steady state (with most nodes staying up to date) it would be more expensive for all nodes to delete a contract than to keep it.

Apart from Rootstock, the Bitcoin Embassy in Buenos Aires, where Alan and Nico normally work, is full of interesting people working on multiple ways of leveraging the Bitcoin blockchain. There's a great atmosphere in the building – calm and friendly but industrious – and I really enjoyed my time there, so when it was suggested we attend a live podcast… we said yes!

A special shout out to Alan Verbner, a man who is proud of his city – and I think the city can be proud of him. We walked the city the whole weekend and I got a real sense of it. BA is modern but you don't have to look too hard to find old world charm – French restaurants with dark wood and marble counters, majestic old cafes full of Art Deco gold fittings and the smell of cardamom-infused coffee. And then there's the steak. Vegetarians, look away now. I'm delighted to report that Argentina’s reputation for steak is well deserved. The variety of cuts, the sauces, the cooking... it might be worth going back just to eat steak.

Cryptocurrencies need a safeguard to prevent another DAO disaster

High assurance brings mission-critical security to digital funds

12 May 2017 Duncan Coutts 13 mins read


TL;DR You want to avoid the next DAO-like disaster, so you want confidence that the system underpinning your cryptocurrency doesn’t have a hidden flaw that could be triggered at any time and render your assets worthless. To get that confidence you need a high assurance implementation of the system operating your cryptocurrency. Formal methods (mathematical specifications and proofs) are the best way to build high assurance software systems, and that is what we are aiming to do with the software behind the cryptocurrencies we build.

How can you sleep at night?

A gold bar or a wodge of cash stashed in a safe has the rather nice property that it doesn’t just evaporate overnight. Money managed by computer software is not inherently so durable. Software flaws can be revealed without warning and can destroy the trust in whole systems.

We only have to look around us to see the prevalence of software flaws. The IT trade press is full of news of data breaches, critical security patches, zero-day exploits etc. At root these are almost all down to software flaws. Standard software development practices inevitably lead to this state of affairs.

With the DAO in particular, the flaw was in the implementation of the smart contract that defined the fund, not directly in Ethereum itself. So the implementation of contracts and the design of smart contract languages is certainly an important issue, but the next flaw could be somewhere else. It’s hard to know.

So how are we to sleep soundly at night? How can we be confident that our cryptocurrency coins are not just going to evaporate overnight? What we need is assurance. Not to be confused with insurance. Assurance is evidence and rational arguments that a system correctly does what it is supposed to do.

Systems with high assurance are used in cases where safety or a lot of money is at stake. For example we rightly demand high assurance that aircraft flight control systems work correctly so we can all trust in safely getting from A to B.

If as a community we truly believe that cryptocurrencies are not a toy and can and should be used when there are billions at stake then it behoves us to aim for high assurance implementations. If we do not have that aim, are we really serious or credible? And then in the long run we must actually achieve high assurance implementations.

In this post we’ll focus on the software aspects of systems and how formal methods help with designing high assurance software. Formal methods can be very useful in aspects of high assurance system design other than software, but that’ll have to wait for some other blog post.

What does assurance look like?

While we might imagine that assurance is either “yes“ or “no“ – you have it or you don’t – it actually makes sense to talk about degrees of assurance. See, for example, the summaries of the assurance levels, EAL1 to EAL7, in the Common Criteria (CC) security evaluation standard. The degree of assurance is about risk: how much risk of system failure are you prepared to tolerate? Higher assurance means a lower risk of failures. Of course, all else being equal, you would want higher assurance, but there is inevitably a trade-off. Achieving higher levels of assurance requires different approaches to system development, more specialised skills and extra up-front work. So the trade-off is that higher assurance is perceived to come with greater cost, longer development time and fewer features in a system. This is why almost all normal commercial software development is not high assurance.

There are two basic approaches to higher assurance software: the traditional approach focused on process and the modern approach focused on evidence, especially formal mathematical evidence.

Historically, going back to the 1980s and before, the best we could do was essentially to think hard and to be very careful. So the assurance standards were all about rigorously documenting everything, especially the process by which the software was designed, built and tested. The evidence at the end is in the form of a big stack of documents that essentially say “we’ve been very methodical and careful”.

Another approach comes from academic computer science – starting in the 1980s and becoming more practical and mature ever since. It starts from the premise that computer programs are – in principle – mathematical objects and can be reasoned about mathematically. When we say “reason about” we mean mathematical proofs of properties like “this program satisfies this specification”, or “this program always computes the same result as that program”. The approach is that as part of the development process we produce mathematical evidence of the correctness of the software. The evidence is (typically) in the form of a mathematical specification along with proofs of some useful properties of the specification (e.g. security properties), and proofs that the final code (or critical parts thereof) satisfies the specification. If this sounds like magic then bear with me for a moment. We will look at a concrete example in the next section.

One advantage of this approach compared to the traditional approach is that it produces evidence about the final software artefacts that stands by itself and can be checked by anyone. Indeed someone assessing the evidence does not need to know or care about the development process (which also makes it more compatible with open-source development). The evidence does not have to rely on document sign-offs saying essentially “we did careful code review and all our tests pass”. That kind of evidence is great, but it is indirect evidence and it is not precise or rigorous.

In principle this kind of mathematical approach can give us an extremely high level of assurance. One can use a piece of software called a proof assistant (such as Coq or Isabelle) which provides a machine-readable logical language for writing specifications and proofs – and it can automatically check that the proofs are correct. This is not the kind of proof where a human mathematician checking the proof has to fill in the details in their head, but the logician’s kind of proof that is ultra pernickety with no room left for human error.
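To give a flavour of what machine-checked means, here is a deliberately trivial example in Lean, a proof assistant in the same family as Coq and Isabelle. The example is mine, chosen only for brevity; the point is that the tool, not a human reader, verifies the proof.

```lean
-- A machine-checked statement and proof: for every natural number n, n + 0 = n.
-- Lean's kernel accepts `rfl` only because the equation holds by computation;
-- the checking itself leaves no room for human error.
theorem add_zero_example (n : Nat) : n + 0 = n := rfl
```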

While this is perhaps the pinnacle of high assurance it is important to note that cryptocurrencies are not going to get there any time soon. It’s mostly down to time and cost, but also due to some annoying gaps between the languages of formal proof tools and the programming languages we use to implement systems.

But realistically, we can expect to get much better evidence and assurance than we have today. Another benefit of taking an approach based on mathematical specification is that we very often end up with better designs: simpler, easier to test, easier to reason about later.

Programming from specifications

In practice we do not first write a specification then write a program to implement the spec and then try to prove that the program satisfies the specification. There is typically too big a gap between the specification and implementation to make that tractable. But it also turns out that having a formal specification is a really useful aid during the process of designing and implementing the program.

The idea is that we start with a specification and iteratively refine it until it is more or less equivalent to an implementation that we would be happy with. Each refinement step produces another specification that is – in a particular formal sense – equivalent to the previous specification, but more detailed. This approach gives us an implementation that is correct by construction: since we transform the specification into an implementation, and provided that each refinement step is correct, we have a very straightforward argument that the implementation is correct. These refinement steps are not just mechanical. They often involve creativity. This is where we get to make design decisions.

To get a sense of what all this means, let’s look at the example of Ouroboros. Ouroboros is a blockchain consensus protocol. Its key innovation is that it does not rely on Proof of Work, relying instead on Proof of Stake. It has been developed by a team of academic cryptography researchers, led by IOHK Chief Scientist Aggelos Kiayias. They have an academic paper, Ouroboros, describing the protocol, with mathematical proofs of security properties similar to those that Bitcoin achieves. This is a very high-level mathematical description aimed at peer review by other academic cryptographers.

This is a great starting point. It is a relatively precise mathematical description of the protocol and we can rely on the proofs of the security properties. So in principle, if we could prove an implementation is equivalent (in the appropriate way) to the description in the paper, then the security proofs would apply to our implementation, which is a great place to be.

So how do we go from this specification to an implementation following the “correct by construction” approach? First we have to make the protocol specification from the paper more precise. It may seem surprising that we have to make a specification more precise than the one the cryptographers wrote, but this is because it was written for other human cryptographers and not for machines. For the refinement process we need to be more like the pernickety logicians. So we have to take the protocol specification written in terms of English and mathematical symbols and redefine it in some suitable logical formalism that doesn’t leave any room for ambiguity.

We then have to embark on the process of refinement. The initial specification is the most abstract and least detailed. It says what must be done but has very little detail about how. If I have a more detailed specification that is a refinement of the initial specification then what that means intuitively is: if you are happy with the initial specification then you would be happy with the new specification. You can have different refinements of the same specification: they differ in details that are not covered in the original specification. Refinement also has a quite specific formal meaning, though it depends on exactly what formalism you’re using. In process calculi, refinement is described in terms of possible observed behaviours: one specification is a refinement of another if its set of possible observed behaviours is equivalent to the other's. Formally, the kind of equivalence we need is what is known as a bisimulation.

In the case of Ouroboros we start with a very abstract specification. In particular it says very little about how the network protocol works: it describes things in terms of a reliable network broadcast operation. Of course real networks work in terms of unreliable unicast operations. There are many ways to implement broadcast. The initial specification doesn’t say. And it rightly doesn’t care. Any suitable choice will do. This is an example where we get to make a design choice.

The original specification also describes the protocol in terms of broadcasting entire blockchains. That is the whole chain back to the genesis block. This is not intended to be realistic. It is described this way because it makes the proofs in the paper easier. Obviously a real implementation needs to work in terms of sending blocks. So this is another case for refinement. We have to come up with a scheme where protocol participants broadcast and receive blocks and show how this is equivalent to the version that broadcasts chains. This is an interesting example because we are changing the observed behaviour of protocol participants: in one version we observe them broadcasting chains and in the other broadcasting blocks. The two do not match up in a trivial way but we should still be able to prove a bisimulation.
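As a toy illustration of the kind of equivalence being claimed – and emphatically not the real Ouroboros development – the Scala sketch below contrasts the two observable behaviours: a receiver adopting a whole broadcast chain versus a receiver assembling the same chain from individually broadcast blocks. The names, the chain representation and the adoption rule are all assumptions made for the example.

```scala
// Toy comparison of "broadcast whole chains" versus "broadcast blocks":
// in both cases the receiver should end up observing the same chain.
object BroadcastRefinementToy {
  type Block = Int
  type Chain = List[Block]            // newest block at the head

  // Abstract behaviour: the sender broadcasts its entire chain; the receiver
  // adopts it if it is longer than the chain it currently holds.
  def receiveChain(local: Chain, incoming: Chain): Chain =
    if (incoming.length > local.length) incoming else local

  // Refined behaviour: the sender sends the missing blocks one at a time
  // (oldest first) and the receiver appends each new block at its tip.
  def receiveBlocks(local: Chain, blocks: Seq[Block]): Chain =
    blocks.foldLeft(local)((chain, b) => b :: chain)

  def main(args: Array[String]): Unit = {
    val senderChain: Chain   = List(5, 4, 3, 2, 1)  // blocks 1..5, 5 is the tip
    val receiverChain: Chain = List(2, 1)           // receiver is behind

    val viaChains = receiveChain(receiverChain, senderChain)
    val viaBlocks = receiveBlocks(receiverChain, Seq(3, 4, 5)) // missing blocks, oldest first

    // The two ways of observing the protocol agree in this toy setting,
    // which is the flavour of equivalence a bisimulation proof establishes.
    assert(viaChains == viaBlocks)
    println(viaChains)
  }
}
```

A real bisimulation argument has to cover every possible interleaving of messages and adversarial behaviour, not just one happy path, which is what makes these proofs substantial pieces of work.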

There are numerous other examples like this: cases where the specification is silent on details or suggests unrealistic things. These all need to be refined to get closer to something we can realistically implement. When do we move from specification to implementation? That line is very fuzzy. It is a continuum, which comes back to the point that both specifications and programs are mathematical objects. With Ouroboros the form of specification is such that at each refinement step we can directly implement the specification – at least as a simulation. In a simulation it’s perfectly OK to broadcast whole chains or to omit details of the broadcast algorithm since we can simulate reliable broadcast directly. Being able to run simulations lets us combine the refinement based approach with a test or prototype based approach. We can check we’re going in the right direction, or establish some kinds of simulated behaviour and evaluate different design decisions.

There are also appropriate intermediate points in the refinement when it makes sense to think about performance and resource use. We cannot think about resource use with the original high-level Ouroboros specification: its description in terms of chain broadcast makes a nonsense of any assessment of resource use. On the other hand, by the time we have fully working code it is too late in the design process. There is a natural point during the refinement where we have a specification that is not too detailed but concrete enough to talk about resource use. At this point we can make some formal arguments about resource use. This is also an appropriate point to design policies for dealing with overload, fairness and quality of service. This is critical for avoiding denial of service attacks, and is not something that the high-level specification covers.

Of course any normal careful design process will cover all these issues. The point is simply that these things can integrate with a formal refinement approach that builds an argument, step by step, as to why the resulting design and implementation do actually meet the specification.

Finally it’s worth looking at how much flexibility this kind of development process gives us with the trade-off between assurance and time and effort. At the low end we could take this approach and not actually formally prove anything, but just try to convince ourselves that we could if we needed to. This would mean that the final assurance argument looks like the following. We have cryptographers check that the protocol description in their paper is equivalent to our description in our logical formalism. This isn’t a proof, just mathematicians saying they believe the two descriptions are equivalent. Then we have all the intermediate specifications in the sequence of refinements. Again, there are no formal proofs of refinement here, but the steps are relatively small and anyone could review them along with prose descriptions of why we believe them to be proper refinements. Finally we would have an implementation of the most refined specification, which should match up in a 1:1 way. Again, computer scientists would need to review these side by side to convince themselves that they are indeed equivalent.

So this gives us some intermediate level of assurance but the development time isn’t too exorbitant and there is a relatively clear path to higher assurance. To get higher assurance we would reformulate the original protocol description using a proof assistant. Then instead of getting a sign-off from mathematicians about two descriptions being equivalent, we could prove the security properties directly with the new description using the proof assistant. For each refinement step the task is clear: prove using the proof assistant that each one really is a refinement. The final jump between the most detailed refined specification and equivalent executable code is still tricky because we have to step outside the domain of the proof assistant.

With the current state of proof tools and programming language tools we don’t have a great solution for producing a fully watertight proof that a program described in a proof assistant and in a similar programming language are really equivalent. There are a number of promising approaches that may become practical in the next few years, but they’re not quite there yet. So for the moment this would still require some manual checking. Really high assurance still has some practical constraints: for example we would need a verified compiler and runtime system. This illustrates the point that assurance is only as good as the weakest link and we should focus our efforts on the links where the risks are greatest.

Direction of travel

As a company, IOHK believes that cryptocurrencies are not a toy, and therefore believes that users are entitled to expect proper assurance.

As a development team we have the ambition, skills and resources to make an implementation with higher assurance. We are embarking on the first steps of this formal development process now and over time we will see useful results. Our approach means the first tangible results will offer a degree of assurance and we will be able to improve this over time.

Finding a brand for Daedalus

26 April 2017 Richard Wild 6 mins read


Just who was Daedalus? It was the first of many questions the design team had to answer when we started work to find a logo for the digital wallet that will store the Ada cryptocurrency. Daedalus is an unusual name. We were in Riga, on an IOHK working sprint last autumn, when we got the brief to develop the visual identity and brand for the wallet, so the first thing we did was some research online.

It turns out that the name was chosen with good reason for a cryptocurrency wallet. An important figure in Greek mythology, Daedalus was a great artist and innovator, the father of Icarus… and the creator of the labyrinth that kept the minotaur forever captive.

IOHK wants the wallet it is creating to be the best cryptocurrency wallet out there. Not just a place to store, send and receive cryptocurrency (the first it will hold being Ada) but a place where you can later access plug-ins and other developer tools/SDKs – anything from managing your mortgage to splitting a bill between people. The vision is that it will become a platform full of applications, and be as important a product as Internet Explorer was for Microsoft.

The creative team needed to find an identity that would connect to that vision, to the meaning behind the name Daedalus and his story, and to what IOHK is as a company and the technology it builds.

We got together to discuss it and an idea emerged.

According to the myth, the minotaur never escapes the maze – that’s how good a craftsman Daedalus was.

We argue creatively that the minotaur is your money, or your digital identity. The minotaur is forever held in an infinite prison where only you control your keys. We have a lot of clever cryptography powering the security of the wallet, and like the minotaur in his maze, your money will never be able to escape our wallet, so Daedalus represents our expertise in security, and in building methodically and securely.

Tomas Vrana and Alexander Rukin, both part of the design team, came up with some early concepts. Among many different designs, Tom produced a minotaur; Alexander came up with a contemporary “D” design. They came up with creative sketches and drawings, trying to gauge how best to approach the form and the function of the logo. Some of their early designs are below.

[Early design concepts: Ribbon variants, Flat maze variant, Origami Minotaur variant, Gothic “D” variant, Minotaur variant]

Then we threw open the design challenge to a competition, connecting to a community of designers who could offer their solutions. This would provide us with a wealth of ideas and give us interesting feedback. We had about 100 entries from graphic designers, ranging from the bad, to the good, to the funny. There were solid corporate identities as well as very creative executions. There were lots of designs based on the letter “D”, and lots based on keys (a nod to public key cryptography). However, that was too obvious: we are a crypto wallet, yes, we have keys. A simple key in a logo didn’t convey a strong brand message and it didn’t set us apart. None was quite right.

We were looking for something that would connect to the Daedalus story and have huge potential for creative storytelling as a brand we could develop.

So we updated the brief and asked the designers to focus on the story of Daedalus. There were about 30 entries this time. A maze in the shape of a key was good, but not excellent. And there were lots of minotaurs. On some, the execution was not great; there was one that looked like a pixelated deer.

The one we finally chose, with the help of IOHK’s two founders, was by a designer called Zahidul Islam. It’s contemporary, it feels modern and fresh, yet it connects to a very old story. It has the minotaur. It’s a well-crafted form, it’s balanced, and it still has the maze motif in it – it reflects the identity of our brand.

There was one issue: it was quite complex, which meant it didn’t work at smaller sizes. So we worked with the designer to develop a brand hierarchy – comprising a primary, secondary and tertiary form of the identity to be used in various situations as the rules dictate. Below are the primary, secondary and tertiary images, each with successively less complexity.

The primary logo will be the larger formats, for example to be used on T-shirts, or on main focal areas such as loading screens. (We have a plan to develop the identity, to make it a “living” brand that is dynamic, so you see it moving if you have to wait for the screen to load for instance.)

The secondary logo will be more of a symbol and used at smaller sizes. Some of the complexity has been removed, and it will be used if the requirements are for 64 pixels or smaller. And then we produced a tiny icon, to be used for example as a security feature on paper wallets.

The colours you see might change. The colour palette of the brand has only black and green in it now.

So why did we go to all this effort over five months?

Behind the Daedalus wallet is so much time, money, skill and effort. You wouldn’t want to represent all that hard work with a mark that is unconsidered, or one that lacks the capacity to carry the story of Daedalus. You want to bring some enigma, some sense of creation, to the symbol that represents your quest. This is about more than a platform: it is a brand, and about building brand allegiance. It has to stand out, be different and make people think of you. This brand has a lot of runway, a lot of potential for creative development. The design may change as the brand evolves, but the current identity will scale and develop into something wonderful from this point on.

A trip to Malta and a Grothendieck milestone

18 April 2017 Jeremy Wood 6 mins read


Last Monday should have been particularly jarring given the recent excitement. However, it was anything but. Sunlight was flooding the back garden and all the small birds of the neighborhood came together to perform an impromptu concert at maximum volume. I could hear them clearly through the double glazing. Spring had sprung in Dublin. And I’d just come back from Malta talking Ethereum, Cardano, blockchain, crypto, functional languages, goal management and a ton of other cool stuff. Life is good.

It was my first time attending the Financial Cryptography and Data Security conference. The conference is a week-long annual event for cryptography as applied to finance. This year, IOHK's chief scientist Aggelos Kiayias put a great programme of speakers together. Instantly recognizable figures from the crypto community attended – Adam Back, Emin Gün Sirer, Vitalik Buterin and many more. IOHK researchers attended too, and it was great to see people who may only know each other through Twitter feeds and published papers get to speak to each other.

I hope and presume this conference has generated many fine and detailed articles on Coindesk and beyond. This won't be one of them. A particular highlight for me was listening to MIT professor and cryptography pioneer Silvio Micali speak about the Algorand protocol and the conscious decision to keep incentives out of the equation. That instantly generated a little controversy and is going to need a long second look.

Dmitry Meshkov, an IOHK researcher, presented Improving Authenticated Dynamic Dictionaries, with Applications to Cryptocurrencies, which is of particular interest to Team Grothendieck as we have wrestled with our implementation of the "Modified Merkle Patricia Trie" as specified in the original Yellow Paper. In all its forms it’s a clever idea – being able to move forward and back through the state by memorizing a root hash, and being able to show that tries are equivalent based on the equivalence of their root hashes. Dmitry et al have a Scala implementation, and if I heard Vitalik Buterin correctly (he commented after the presentation) there might be room for improvement on the original implementation, so there are possibilities for enhancements there.

The conference was good, but it had some stiff competition from all the fun I had and everything I learned from mixing with other IOHK employees. IOHK employees usually work remotely, but for more than a week almost 40 people – including IOHK founders Jeremy Wood and Charles Hoskinson – gathered in Malta, having traveled from far-flung places like Osaka, St Petersburg and California. While that week was mostly about the conference it was also an opportunity to get some work done. On the top floor of some very nice rented office space, the Serokell and Daedalus teams, along with other key personnel on the Cardano project, hammered out plans and approaches to give the project a major push forward. There was time for some introspection too, and a lot of productive meetings around development methodology.

A real highlight of working for this company is tripping across experts in many technical fields – functional languages, formal verification, full-time lifelong cryptographers, language designers and creators, high energy physicists... High energy physicists. Who tell jokes.

And it gets better. The previous week, the week of the 27th of March, Team Grothendieck arrived in Athens to work on our next and arguably most important milestone – "Transaction Execution", or "tx execution" for short. This was the first time the whole team had come together to work physically in the same place and the same time zone.

The team reached its first milestone on Friday 24th March. That milestone involved downloading all blocks to the local machine and providing those blocks to other clients, further dispersing the transactions across the peer-to-peer network. The client also supports "fast download", the process of downloading the state trie from a point in recent history in order to shorten the time required for a client to get fully up to date with the blockchain – the premise being that downloading the state trie is faster than executing every transaction since block 0. Our first milestone was very exciting to reach, and it was just as exciting to see the blocks and transactions flying around the network and to know that we can successfully synchronize our local database with the rest of the Ethereum Classic network.

We had a productive few days at the University of Athens; the area is quiet, cool in the shade and conducive to working. The sun shone, the wind blew and the coffee was good. Our hotel was close to the university so we got to walk the streets of Athens in the mornings and see a little of daily life in the city. The subject of our days in Athens, transaction execution, is the process of updating the ledger by applying valid transactions to it, block by block. After each block of transactions has been applied, the ledger exhibits a new state. This state is stored in the form of a state trie, and the root of this trie is a hash reflecting precisely the contents of the state trie. The questions we had to answer were: did we understand the goal; did we understand how to measure success; was the functionality covered by existing tasks; and how long would it take? There was also some knowledge swapping, as working apart inevitably means small knowledge silos had begun to develop despite our efforts. By Thursday evening we had satisfactorily answered all these questions, and we expect to reach the tx execution milestone by the end of April.
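For readers unfamiliar with the idea, here is a toy Scala sketch of that process: apply a block of transactions to a ledger state and derive a root hash that reflects the resulting state exactly. It is illustrative only – the real client maintains a Modified Merkle Patricia Trie, whereas this sketch simply hashes a sorted account map as a stand-in, and all names are invented for the example.

```scala
import java.security.MessageDigest

// Toy transaction execution: fold a block of transfers over an account map
// and compute a hash that uniquely reflects the resulting ledger state.
object TxExecutionToy {
  final case class Tx(from: String, to: String, amount: BigInt)
  type State = Map[String, BigInt]   // account -> balance

  def applyTx(state: State, tx: Tx): State = {
    val fromBalance = state.getOrElse(tx.from, BigInt(0))
    require(fromBalance >= tx.amount, "insufficient balance")
    val debited = state.updated(tx.from, fromBalance - tx.amount)
    debited.updated(tx.to, debited.getOrElse(tx.to, BigInt(0)) + tx.amount)
  }

  def applyBlock(state: State, block: Seq[Tx]): State =
    block.foldLeft(state)(applyTx)

  // Two states with the same contents yield the same root hash.
  def stateRoot(state: State): String = {
    val encoded = state.toSeq.sortBy(_._1).map { case (k, v) => s"$k:$v" }.mkString(",")
    MessageDigest.getInstance("SHA-256")
      .digest(encoded.getBytes("UTF-8"))
      .map("%02x".format(_)).mkString
  }

  def main(args: Array[String]): Unit = {
    val genesis: State = Map("alice" -> BigInt(100), "bob" -> BigInt(0))
    val block = Seq(Tx("alice", "bob", 30), Tx("bob", "carol", 10))
    val newState = applyBlock(genesis, block)
    println(stateRoot(newState))   // the new root reflects the updated ledger
  }
}
```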

On the Friday the team attended the Smart Contracts conference and spoke with Charles Hoskinson, Prof Aggelos Kiayias, Darryl McAdams and others about the future of smart contracts and the law.

There was tentative agreement on the eventuality of smart contract template libraries: if, for example, the author wanted to provide an upgrade path for the contract in the event of some issue being found (ahem!), or some means of dissolving or locking the contract if the participants lose faith, a tried and trusted set of templates would exist for the contract author to mix and match. Templates of this type could claim regulatory compliance out of the box, which is a great (if not new) way to leverage the usefulness of software in the world of contract law – solve a problem once and reuse the solution ad nauseam. I suspect this is an area most smart contract developers currently enjoy ignoring!

A final word on Athens: the Acropolis Museum is a wonderful building and worth a visit even if it housed nothing, but coupled with the treasures of the ancient world and a good restaurant it's a must-see if you find yourself in the area.

Here's hoping the end of April sees us reach another exciting milestone, an even bigger one this time, and we are able to execute every transaction in the blockchain using the Grothendieck client. That would really give the birds something to sing about...