Mantis – Ethereum Classic Beta Release

A command line interface client for the ETC community

8 August 2017 Jeremy Wood 4 mins read

We are excited to announce that there is now an Ethereum (ETH) client built specifically for the Ethereum Classic (ETC) community. The release of this beta client, Mantis, takes place today and is the culmination of seven months of work by the Grothendieck Team, the IOHK developers dedicated to Ethereum Classic. There are three reasons for building the client. First, IOHK wants to demonstrate that it has the technical competency and culture to be a leader in the development of ETC. Second, IOHK wants to dispel the myth that ETC is a “copy and paste” coin that uses other people's code, and show that it is an independent and viable alternative to Ethereum. Third, the client is built in Scala, a functional programming language that offers security guarantees that other languages do not.

This release is comprised of the four functional milestones we have been working on since January.

  • Blockchain download
  • Transaction execution
  • Command and query interface
  • Mining integration

However, please be aware that this is an early release – the important thing is that we get the Mantis client into the hands of ETC community members who can provide valuable feedback. Tell us what you think, and how we can improve Mantis. We want to stress that this is not yet production ready and has not been optimized for performance, so there will be bugs. **Anyone using the beta release of the Mantis client should be using it on a testnet only, please do not use the Mantis client with actual funds.**

These are some of the features that made it into the beta release for the Mantis client:

  • Mist Integration
    Connect the Mist browser to the Mantis client over HTTP.

  • Multi-platform
    We have tested the application on recent versions of Linux (16.04), Mac OS (El Capitan, Sierra) and Windows (10, 8).

  • Testnet and Private Chain Support
    The client supports synchronizing with the Morden testnet and also creating private chains.

  • Documented Configuration
    The client is configured through neatly formatted files in the “conf” folder; all the keys and values carry descriptions to help the user get the most out of the client.

We are also able to include a “Fast Sync” feature in this release. From start-up, the Mantis client (using default settings) will attempt to discover existing ETC nodes on the internet and fast sync the ETC chain from them. Fast Sync is a fantastic feature for a blockchain client because it downloads a recent snapshot of the blockchain, which speeds up the process of setting up a properly functioning full node. It also downloads the entire blockchain history so that it can be served to other peers on request. Fast Sync is faster and more convenient than downloading all of the blocks from peers, although that mode is also supported and can be selected with a flag in the configuration file.

Although Fast Sync is quicker, it is still slow by today's internet standards. For those who would like to get a node synchronized as fast as possible, a “bootstrap database” has been provided. This database contains the whole chain up to August 2nd 2017. Users can download this large file, unzip it in their data folder and then start the Mantis client.

The Mantis client is now being passed into the hands of technically savvy community members. Enthusiasts who are comfortable with a command line interface and are willing to install code that has not been fully tested will have a lot of fun using the Mantis client and can provide us with valuable feedback. We would like to encourage anyone with the necessary technical skills to try out the client and report any bugs to the ETC Slack channel.

We will have more updates and news coming soon, and will share our progress with the community in the upcoming weeks. Please stay tuned for more details!

Cardano: Resilient and Scalable by Design

System performance engineering so DevOps may soundly sleep

3 August 2017 Peter Thompson 10 mins read

What we expect from traditional providers of financial services such as banks is both security (my money is safe) and responsiveness (I can move my money in and out at will in a timely fashion). The days when banks delivered such services using legions of clerks writing in double-entry ledgers are long gone – nowadays it’s all done by software, and so the security and performance of such systems is critical to a bank’s reputation. Banks invest heavily in their computing and network infrastructure and personnel to mitigate this risk (for example, we happen to know the Head of Unix System Engineering at a major international bank is paid a LOT more than any of us!). Customer expectations are high, tolerance of poor performance is low, and it is the poor DevOps who end up dealing with the consequences of any deficiencies and emergent instabilities. Another advantage that traditional banks have is periods when they aren’t expected to be fully operational – after local markets close, for example, and during public holidays. As global infrastructure, Cardano needs to run both continuously and indefinitely. Its resulting performance needs to be acceptable even in the presence of hardware or software failures and cyber-attacks – and it must do all this without constant maintenance from a large DevOps team.

This requires the application of the emerging discipline of distributed systems performance engineering to anticipate and mitigate the issues associated with long-term, continuous and scalable operation. This combines failure-mode effects analysis with stochastics (which uses probability and randomness over real-time) to model the impact of both resource-sharing (for example in packet networks and virtualised infrastructure) and the possibility of failures and exceptions. Of course, it’s natural to ask: if the software correctly implements the specification, how can there be failures? What have we done wrong? The answer is that we haven’t done anything wrong, it’s just that we’re not operating in a closed environment. The real world is an open environment, many elements of which are not under our control. Messages between components of Cardano may be lost or corrupted, either by accident or malicious intent; VMs running such components may crash or be starved of resources; and DoS attacks may exhaust resources. Even if our code is perfect, the world in which it runs is not.

Performance engineering can be used in a post-hoc way to assess the expected performance and resource consumption of an existing system, but for Cardano we’ll use it to help guide design decisions in the re-implementation of the network layer. In a previous blog, Duncan Coutts of Well-Typed, and Cardano's Director of Engineering, talked about how formal methods can help to ensure that a design decision doesn’t break the top-level specification of what the system should (and should not) do; what performance engineering adds is an assessment of whether such a design decision moves us closer to (or further away from) meeting the resilience, performance and scalability targets for the eventual deployment.

Current state of the art

With the exception of “hard real time” systems such as anti-lock brake controllers, it’s rare to see performance, resilience and robustness treated as first-class citizens in the software development lifecycle (SDLC). Even where such properties are considered, this typically occurs late in the SDLC. Performance, in particular, is regarded as something that will “emerge” after the design, and much, if not all, of the implementation has been done. Although robustness, resilience and performance are closely linked, let’s focus on performance, as this is the most widely misunderstood.

In the academic world, system steady-state performance has been widely studied using queueing theory. Approaches tend to take a resource-centric view of the system, focusing on the utilisation/idleness of individual components. Where job/customer performance is considered (such as in mean-value analysis or Jackson/BCMP networks) it is in the context of “steady-state” and “averages”. Thus these methods cannot deliver metrics such as the distribution of the system’s response time to a stimulus, or the probability that such a response will occur within a specified time.

Meanwhile, in today’s customer experience-centric, performance-critical service-delivery environments, such metrics are essential. An end customer doesn’t care how efficient the system is, only how long it takes to process her transaction; and an acceptable average does not compensate for the disappointment of a particular transaction taking a hundred times too long!
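As a simple worked illustration of how far a tail can sit from an average (using a textbook M/M/1 queue as an assumed example, not a model of Cardano): the response time T of an M/M/1 queue with arrival rate λ and service rate μ is exponentially distributed, so

```latex
\Pr[T > t] = e^{-(\mu-\lambda)t}, \qquad
\mathbb{E}[T] = \frac{1}{\mu-\lambda}, \qquad
t_{99.9\%} = \frac{\ln 1000}{\mu-\lambda} \approx 6.9\,\mathbb{E}[T].
```

Even in this best-behaved textbook case the 99.9th percentile is roughly seven times the mean; real systems, with heavier-tailed service times and contention for shared resources, are far worse. That is why distributions, rather than averages, are the right currency for performance requirements.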

This has led to academic research into the characterisation of “passage times”, i.e. the time taken for a system to follow a particular path to a state, that path being characteristic of an outcome. Such a style of analysis has been combined with stochastic/probabilistic algebras to generate tools that can be applied in the SDLC, such as PEPA and PRISM. These are retrospective validation tools, operating on fully specified systems, that will give probabilistic measures of outcomes for certain classes of system under steady-state assumptions.

However, constructing a large-scale system such as Cardano is expensive, and no-one wants to iterate large parts of the design just because the required performance is not achieved and/or the resources consumed are uneconomic. Mitigating that risk requires an approach that supports both prospective and retrospective validation and verification. It needs to be able to capture performance requirements, to validate them, to construct performance properties/invariants that “witness” those requirements, and to support the reification and abstraction of such properties/invariants throughout the SDLC. In other words, the analysis approach needs to be composable at all points in the SDLC, a property which all of the other approaches above lack with respect to performance.

A composable approach

Composability is the key to managing complexity in the SDLC. The principle of composability is as follows: the meaning of a complex expression is determined by the meanings of its constituent expressions and the rules used to combine them. For composable properties, what is “true” about small subsystems (e.g. their timeliness, their resource consumption) tells us what is “true” about their (appropriately constructed) combination. Conversely, it means that there is an invariant that must hold (e.g. timeliness, aspects of functional correctness) over the reified components of the system.

This is the same as checking functional correctness by breaking down a top-level specification into a number of component specifications and proving that the combination meets the top-level spec.

Engagement with the general notion of composability, and the associated improvement in productivity, can be seen in the increasing tendency of leading ICT practitioners (e.g. Google, Facebook, WhatsApp, leading banks’ real-time trading systems – and of course, Cardano) to use functional/declarative programming approaches such as Haskell for their key systems. Such approaches are improving the verification and validation (V&V) of functional aspects of software systems; composable performance engineering represents a similar step-change in the V&V of the “non-functional” aspects of performance and resource consumption.

PNSol has developed a framework around a composable measure of performance that we call “∆Q”. This enables a new development process that is composable with respect to both performance hazards (i.e. time to complete and probability of non-completion/divergence) and aspects of resource consumption (e.g. CPU time, network/interconnect capacity). PNSol represents the operational semantics with a stochastic process algebra (using a combination of improper random variables and serial-parallel flow graphs), to capture both communication and computational behaviour. This approach has a supporting software library/API that PNSol has been using for more than 10 years in consultancy engagements, which supports both symbolic and concrete representations of the metrics of interest, helping to capture design and operational uncertainties as part of the SDLC.

This approach also helps to pinpoint performance sensitivities, i.e. to reveal which parameters have the most impact on the eventual system performance. The DevOps team can then know what to measure, track, and trend in order to have early warning of performance or resource consumption problems, and hence can get some sleep from time to time!

Applying the ∆Q Framework

To apply this in practice we need to first establish some “quantifiable intent”, that is to say, to set bounds on the performance of some observable behaviour of the system. An observable is something that starts and finishes; in Cardano we might think about submitting a transaction and seeing it embedded in the blockchain, although a simpler and more familiar example would be clicking a button on a web page and getting some response. The quantified intent for that web server response might be something like: 50% of the time it should take less than 2s; 95% of the time it should take less than 5s; 99.9% of the time it should take less than 20s. Note that we allow the possibility that it might fail altogether – technically this means we represent the observable with an “improper random variable” – which is very important for dealing with the real world, in which things can (and do) fail. Taking proper account of this allows us to design systems that degrade gracefully rather than collapsing apparently arbitrarily.

The next step is to extract from the design what other observables the initial one depends on, typically transfers of information across a network and computations using that information (each of which also consumes some resources). Given the way these observables are causally related (called the serial-parallel flow graph, SPFG), the ∆Q framework allows us to combine their performance distributions to calculate the resulting distribution for the original observable, and to check whether it meets the original intent (here is a worked example). If it doesn’t meet this intent, we may need to tighten the distributions for some of the component observables, or change the design to alter the SPFG.

Note that we can either apply this approach top-down (as a set of performance “budgets”) or bottom-up (some elements such as network delays may not be changeable), or use a combination of the two. We can also treat a whole subsystem as delivering a single observable, and then break the delivery of that observable into its constituent parts, thus iterating the whole process – this makes the approach composable, as discussed above.
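To make the mechanics more concrete, here is a minimal Haskell sketch of the idea (an illustration under assumptions, not PNSol's library or API; all names, types and numbers below are invented). An outcome is an improper distribution over completion times, sequential steps are combined by convolution, an “all finished” parallel combination takes the slower branch, and the result is checked against a quantified intent like the one above.

```haskell
import qualified Data.Map.Strict as M

type Delay = Double

-- An outcome as an improper distribution of completion times: a map from
-- delay to probability mass. The masses may sum to less than 1; the
-- missing mass is the probability of never completing.
newtype DeltaQ = DeltaQ (M.Map Delay Double)

fromPairs :: [(Delay, Double)] -> DeltaQ
fromPairs = DeltaQ . M.fromListWith (+)

-- Sequential composition ("first A, then B"): convolve the distributions.
-- If either step fails to complete, so does the whole.
sequentially :: DeltaQ -> DeltaQ -> DeltaQ
sequentially (DeltaQ a) (DeltaQ b) =
  fromPairs [ (da + db, pa * pb) | (da, pa) <- M.toList a, (db, pb) <- M.toList b ]

-- "All finished" parallel composition: both branches must complete, and
-- the combined observable finishes when the slower branch does.
allOf :: DeltaQ -> DeltaQ -> DeltaQ
allOf (DeltaQ a) (DeltaQ b) =
  fromPairs [ (max da db, pa * pb) | (da, pa) <- M.toList a, (db, pb) <- M.toList b ]

-- Probability that the outcome completes within time t (the improper CDF).
within :: DeltaQ -> Delay -> Double
within (DeltaQ m) t = sum [ p | (d, p) <- M.toList m, d <= t ]

-- A quantified intent is a list of (deadline, required probability) pairs.
meetsIntent :: DeltaQ -> [(Delay, Double)] -> Bool
meetsIntent dq = all (\(t, q) -> within dq t >= q)

-- Toy example: a request is one network transfer followed by a computation,
-- in parallel with a second, independent transfer. All numbers are made up.
example :: Bool
example = meetsIntent observable [(2.0, 0.50), (5.0, 0.95), (20.0, 0.999)]
  where
    transfer   = fromPairs [(0.1, 0.7), (0.5, 0.25), (2.0, 0.049)]  -- ~0.1% loss
    compute    = fromPairs [(0.3, 0.9), (1.0, 0.1)]
    observable = allOf (sequentially transfer compute) transfer
```

Because the failure mass is carried implicitly (the probabilities simply sum to less than one), composition automatically propagates the probability of non-completion, which is what allows graceful degradation to be reasoned about compositionally rather than discovered in production.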

At the same time we can add up the distributions of resource consumption to obtain not merely averages but also probabilities of thresholds being exceeded. Once the relationship between delivered performance and resource consumption is properly modelled, it is straightforward to address issues of scalability, exception/failure propagation and/or mitigation, and the impact of correlations in demand (discussed in more detail here).

Applying this to something as complex as Cardano-SL will be a challenging project, but will enable us to address the issues of robustness, resilience and performance with our eyes wide open – resulting in an economic and appropriately scaled solution.

First Hand Experiences of the Oregon Programming Languages Summer School

27 July 2017 Tamara Haasen 11 mins read

The Oregon Programming Languages Summer School (OPLSS) takes place annually at University of Oregon, and brings together academics and professionals who are interested in programming language theory. The goal of the school is to provide an opportunity for participants to understand the current landscape in programming language research.

During this two-week program, professors lecture on a mix of the fundamentals and recent research in the field. Since its start in 2002, OPLSS has covered a range of topics, including logic, language semantics, and mathematical proofs about language properties.

This year, the theme was "A Spectrum of Types". The theme focused on programming languages that use a type system to help programmers detect and prevent errors early in the development process.

Toward the end, lecturers explored how typed languages can safely interface with untyped languages using contracts or proofs about the behavioral equivalence of programs implemented in different languages.

Here are the experiences of three of our team members who attended the OPLSS:

Lars Brünjes is a Cardano SL Developer within Team Haskell at IOHK

When I flew to Eugene, Oregon, to attend the OPLSS 2017, the Oregon Programming Languages Summer School, I was both very excited and a bit apprehensive.

I was excited for the chance to play student again, immerse myself in an academic environment and learn new things. I was also apprehensive because I had no formal background in type theory and was afraid that all the lectures would be well above my head.

The first lecture by Bob Harper, the "God" of type theory, seemed to confirm my worries. I could barely read his handwriting on the whiteboard (my wife claims I’m too vain to wear glasses, maybe she has a point), and most of the things he mentioned in passing as something everybody knew were actually new to me. Bob Harper also doesn’t like Haskell – a fact that made me like his lecture even less.

However, things went steeply uphill from there.

The next two lectures of the day were much easier to follow; I understood almost everything. In the afternoon "hands-on session", I started working on an exercise put forward by Bob, implementing a type checker and interpreter for "Gödel's T", and programming Euclid's Algorithm in that language.

We were encouraged to do that exercise in ML, but of course I did it in Haskell (take that, Bob!), and actually working with the material made things a lot clearer for me.
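To give a flavour of the exercise, here is a minimal Haskell sketch of an interpreter for Gödel's T (natural numbers, functions and a primitive recursor). It is an illustrative reconstruction, not the code written at the school, and the type-checking part of the exercise is omitted for brevity.

```haskell
-- A minimal sketch of Gödel's System T: terms with a primitive recursor
-- over natural numbers, plus an environment-based evaluator.
data Ty = TNat | TArr Ty Ty
  deriving (Eq, Show)

data Term
  = Var Int              -- de Bruijn index into the environment
  | Lam Ty Term          -- \x : ty. body
  | App Term Term
  | Zero
  | Succ Term
  | Rec Term Term Term   -- Rec z s n: z if n = 0, otherwise s applied to
                         -- the predecessor and the recursive result
  deriving Show

data Value = VNat Integer | VFun (Value -> Value)

eval :: [Value] -> Term -> Value
eval env (Var i)     = env !! i
eval env (Lam _ b)   = VFun (\v -> eval (v : env) b)
eval env (App f a)   = apply (eval env f) (eval env a)
eval _   Zero        = VNat 0
eval env (Succ t)    = case eval env t of
                         VNat n -> VNat (n + 1)
                         _      -> error "succ applied to a function"
eval env (Rec z s n) = case eval env n of
                         VNat k -> go k
                         _      -> error "rec applied to a function"
  where
    go 0 = eval env z
    go k = apply (apply (eval env s) (VNat (k - 1))) (go (k - 1))

apply :: Value -> Value -> Value
apply (VFun f) v = f v
apply _        _ = error "application of a non-function"

-- Addition via the recursor: add m n = rec n (\_pred acc. succ acc) m
add :: Term
add = Lam TNat (Lam TNat
        (Rec (Var 0)                               -- base case: n
             (Lam TNat (Lam TNat (Succ (Var 0))))  -- step: ignore pred, succ acc
             (Var 1)))                             -- recurse on m

nat :: Integer -> Term
nat 0 = Zero
nat n = Succ (nat (n - 1))

-- eval [] (App (App add (nat 2)) (nat 3))  evaluates to  VNat 5
```

Every well-typed System T program terminates, so Euclid's algorithm has to be expressed through the recursor rather than general recursion, which is part of what makes the exercise instructive.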

The next day's lecture by Bob was actually fascinating and a lot of fun. I finally understood how different ingredients come together to endow a language with features like polymorphism, inductive and dependent types, or general recursion.

Over the next two weeks, I learnt a lot about dependent types, gradual types, session types, and secure compilation.

I also received an introduction to Idris, the "dependently typed Haskell", by Edwin Brady, the creator of the language himself! After that, I spent every idle moment hacking in Idris.

This is what made OPLSS 2017 a great experience for me: not only did I learn lots of new things, I also felt inspired. Being surrounded day and night by highly motivated and intelligent people does that to you.

Speaking of people – they were great! Both the instructors, who all tried very hard to deliver excellent lectures, and the participants, who all shared their excitement for the material and their willingness to learn, no matter how tired and jetlagged they were. There were people from all over the world, and talking to them, exchanging stories, ideas, and experiences, was wonderful.

I also really enjoyed getting to know my colleague Jake Mitchell better. Over many beers at the nearby pub, we talked about everything, ranging from Idris and dependent types, to politics and religion. I love working for a remote company like IOHK, but sometimes it is really nice to be in the same room with a colleague and be able to have a beer with him!

After the two weeks were over and we had to start thinking about travel arrangements home, we were all very sad. One Chinese PhD student even cried and couldn’t be comforted when she had to say goodbye. It was a fantastic experience for me, and I am very grateful to IOHK for giving me the chance to attend. I would be even more grateful if I could attend again next year (pushing my luck here)!

Kawin Worrasangasilpa is a PhD student working on formal verification of Proof of Stake at IOHK

I attended both the review sessions and the main program of OPLSS 2017 over the course of three weeks. With only the self-taught knowledge I had of type theory, I was slightly worried about understanding the lectures.

The first three days were allocated for the review sessions for those who hadn’t had proper training in type theory. One of the lecturers, Paul Downen, prepared us with 11 presentations, so we could be confident enough to jump to the main sessions after this. I felt calm and prepared when I finished these three days, and most of my main questions were answered.

The content provided during the three-day session, and the academic atmosphere naturally created by all of the participants who were eager to learn new things, were fantastic. We all had a chance to share our experiences and interests, what kind of work we had been doing, and why we attended the school.

The real challenge began on the first day of the main programme. Each day, we had to consume and digest three different 80-minute lectures. Most of the students and I felt overwhelmed initially. However, after the third lecture of the day, the hands-on session with exercises helped us to understand the concepts in practice. During this session, at least one lecturer was present to answer any questions participants asked.

Apart from lectures and hands-on sessions, there were activities for participants nearly every day. One of the activities was a group presentation, where participants were given the opportunity to present their work. Many people took part by sharing their ideas and asking questions, leading to improvements in their work and a wider view of how type theory can be applied to feasible research questions.

Aside from the friendly atmosphere, I learned about a range of interesting topics. These included Dependent Type and Linearity, Contracts and Gradual Types, Substructural Type Systems and Concurrent Programming, and techniques of using Idris and Racket.

The one that I want to share is the series of lectures, "Contracts and Gradual Types", by Sam Tobin-Hochstadt. Contracts are about identifying who to "blame" when a program made of many modules has errors. For example, Alice builds a program containing a function f and provides a contract stating that f has ‘int’ as its domain; if Bob then calls f on a string and causes an error, Bob is to blame because he broke the contract. The situation is reversed if Alice didn't write a contract specifying the domain of f but f only works on ‘int’: then Alice is the one to blame, since Bob called f correctly but f didn't work.
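To give a rough flavour of that idea in code (a toy Haskell sketch rather than the Racket contract system from the lectures; all names below are invented for illustration), a function can be wrapped so that a bad argument blames the caller and a bad result blames the implementer:

```haskell
-- A toy sketch of first-order function contracts with blame.
data Party = Alice | Bob deriving Show

newtype Blame = Blame String deriving Show

-- Check a predicate on a value, blaming the given party on failure.
check :: Show a => Party -> String -> (a -> Bool) -> a -> Either Blame a
check party label p x
  | p x       = Right x
  | otherwise = Left (Blame (show party ++ " violated the " ++ label
                             ++ " contract with value " ++ show x))

-- Wrap a function in a domain contract (the caller's obligation, blame Bob)
-- and a range contract (the implementer's obligation, blame Alice).
wrap :: (Show a, Show b)
     => (a -> Bool) -> (b -> Bool) -> (a -> b) -> (a -> Either Blame b)
wrap dom rng f x = do
  x' <- check Bob   "domain" dom x
  check Alice "range" rng (f x')

-- Alice's function only behaves on non-negative inputs.
sqrtFloor :: Integer -> Integer
sqrtFloor n = floor (sqrt (fromIntegral n :: Double))

safeSqrtFloor :: Integer -> Either Blame Integer
safeSqrtFloor = wrap (>= 0) (>= 0) sqrtFloor

-- safeSqrtFloor 10   ==>  Right 3
-- safeSqrtFloor (-1) ==>  Left (Blame "Bob violated the domain contract with value -1")
```

Real contract systems, such as Racket's, extend this to higher-order functions and produce much richer blame messages, but the division of obligations is the same: the domain is the caller's responsibility and the range is the implementer's.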

At first, Sam showed us a useful message system using Racket, which can detect the specific module that is responsible for dynamic errors. Then, in subsequent lectures, he explained more precisely how contracts work by introducing contract semantics. In my opinion, this is a very practical topic for real-world problems because we work with other developers and plug in pieces of code from modules we don’t own most of the time. Therefore, it would save time if we could always identify who must take responsibility for programs’ errors and fix only the modules which work incorrectly. For more details on this topic and others, please see this list of all OPLSS lectures.

I sincerely recommend the OPLSS programme to anyone who works specifically in programming language theory, uses a functional programming language regularly, or just wants to be exposed to this field. I am certain it will not disappoint, and I dare to promise that it will be worth participating in.

Jacob Mitchell is a DevOps Engineer at IOHK

OPLSS began with an optional three-day review session on type theory. One of the lecturers, Paul Downen, gave presentations based on chapters from Benjamin C. Pierce’s Types and Programming Languages and Bob Harper's Practical Foundations for Programming Languages.

Before arriving I had already studied TAPL and other similar material. Living like a student again was a shock since it had been a while, but the familiar lecture content helped me re-acclimate to academic life. I enjoyed meeting so many students and professionals who were also interested in type theory yet wanted to be sure they grasped the basics. It encouraged us to meet others who were eager to dissect the core material before moving on to more advanced topics.

The following two weeks were a whirlwind of material. When one of the lecturers, Professor Van Horn, recalled his experience attending OPLSS as a student, he compared it to drinking from a firehose. I can relate. It was all interesting, but admittedly there was a lot I didn't fully grasp. I love that I now have a lot more interesting material to study and connect to what I already know about type theory!

For now I'll touch on some concepts that interested me most:

1. Idris and Dependent Types

Idris is a relatively new language with many similarities to Haskell, but one major difference is that it's designed to support dependent types. Idris allows programmers to describe more precisely what a program may or may not do, and to get helpful, rapid feedback from the compiler when any code contradicts those descriptions. This powerful type system prevents a lot of bugs and encourages clean, coherent designs.

Let's consider a useful application for dependent types.

Engineering is largely about getting different pieces to interact with each other to solve a problem none of them could handle on their own. Interactions between components are often dictated by state machines. When a system breaks down it can frequently be traced to a part that went off script from a specified state machine. Software developers routinely face these problems, both when using a dependency that has unclear or cumbersome state machine requirements, and when designing a library to have clear and simple state machine expectations. Unfortunately, these kinds of bugs can be difficult to detect and often get deployed only to cause a disaster later on. It would be much better if the compiler could catch these bugs and convey them to the developer.

Professor Brady, the creator of Idris, suggested how dependent types can address exactly this problem. In particular, library developers can encode state machines using dependent types, and then users of those libraries get compile-time checks that verify whether their code is compatible with the upstream state machines. To learn more I recommend reading Brady's Type-Driven Development with Idris and studying the state machine implementation for an ATM.
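To keep the code sketches here in one language, here is a rough Haskell approximation of that idea using GADTs and DataKinds rather than full dependent types. The states loosely follow the ATM example mentioned above, and the operations are invented for illustration:

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- A sketch of type-level state machines: each operation is indexed by the
-- state it requires and the state it leaves, so out-of-order operations
-- are rejected at compile time.
data ATMState = Ready | CardInserted | Session

data ATMOp (pre :: ATMState) (post :: ATMState) where
  InsertCard :: ATMOp 'Ready 'CardInserted
  CheckPin   :: Int -> ATMOp 'CardInserted 'Session
  Dispense   :: Int -> ATMOp 'Session 'Session
  EjectCard  :: ATMOp s 'Ready                 -- allowed from any state

-- A script is a sequence of operations whose states line up end to end.
data Script (pre :: ATMState) (post :: ATMState) where
  Done :: Script s s
  Then :: ATMOp a b -> Script b c -> Script a c

-- Accepted by the compiler: insert card, check PIN, dispense, eject.
ok :: Script 'Ready 'Ready
ok = InsertCard `Then` (CheckPin 1234 `Then` (Dispense 50 `Then` (EjectCard `Then` Done)))

-- Rejected at compile time: dispensing cash before any PIN check.
-- bad :: Script 'Ready 'Ready
-- bad = InsertCard `Then` (Dispense 50 `Then` (EjectCard `Then` Done))
```

In Idris the same idea goes further: the target state of an operation can depend on run-time values (for example, on whether a PIN check succeeded), which is where full dependent types earn their keep.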

2. PLT Redex and Abstracting Abstract Machines

Type theorists need a language for communicating about programming languages because they do it frequently. Although conventions exist, the precise details of the metalanguages used can vary from one researcher to the next. Worse, sometimes a single researcher's descriptions of a programming language are incoherent due to inconsistent use of metalanguage or typos.

PLT Redex attempts to fix that problem by giving researchers a simple language to specify the semantics of a programming language. After implementing the language specification the researcher can even interactively test the language's behavior and typeset the specification for publication. As Professor Van Horn demonstrated, it is a useful tool for building abstract machines and interpreters, and discovering runtime properties of programs compatible with those abstractions.

3. Correct and Secure Compilation for Multi-Language Software

Much of the foundational software used in industry is implemented in languages which aren't optimized for verification. As much as we'd like everything to be rigorously verified, at least for now the practical reality is that we must rely on components which are difficult to verify. To make matters worse, the properties we have verified about our software are usually invalidated once it is linked with unverified code.

Professor Ahmed's research frames these problems precisely and introduces general strategies and, in specific scenarios, offers concrete solutions. I'm looking forward to doing a close reading of a paper she coauthored called "FunTAL: Reasonably Mixing a Functional Language with Assembly".

Why we are building Cardano

10 July 2017 Charles Hoskinson 3 mins read

I’m delighted to announce the release of Why Cardano, a document explaining the philosophy behind the design and development of Cardano. Publishing this is a key milestone for the project and I hope it helps to explain why we are building Cardano. The document has been fully translated into Japanese, Chinese and Korean. You can also join the community by visiting the Cardano community social channels via cardanohub.org.

Introduction

Cardano is a project that began in 2015 as an effort to change the way cryptocurrencies are designed and developed. The overall focus beyond a particular set of innovations is to provide a more balanced and sustainable ecosystem that better accounts for the needs of its users as well as other systems seeking integration.

In the spirit of many open source projects, Cardano did not begin with a comprehensive roadmap or even an authoritative white paper. Rather it embraced a collection of design principles, engineering best practices and avenues for exploration. These include the following:

  • Separation of accounting and computation into different layers
  • Implementation of core components in highly modular functional code
  • Small groups of academics and developers competing with peer reviewed research
  • Heavy use of interdisciplinary teams including early use of InfoSec experts
  • Fast iteration between white papers, implementation and new research required to correct issues discovered during review
  • Building in the ability to upgrade post-deployed systems without destroying the network
  • Development of a decentralized funding mechanism for future work
  • A long-term view on improving the design of cryptocurrencies so they can work on mobile devices with a reasonable and secure user experience
  • Bringing stakeholders closer to the operations and maintenance of their cryptocurrency
  • Acknowledging the need to account for multiple assets in the same ledger
  • Abstracting transactions to include optional metadata in order to better conform to the needs of legacy systems
  • Learning from the nearly 1,000 altcoins by embracing features that make sense
  • Adopting a standards-driven process inspired by the Internet Engineering Task Force using a dedicated foundation to lock down the final protocol design
  • Exploring the social elements of commerce
  • Finding a healthy middle ground for regulators to interact with commerce without compromising some core principles inherited from Bitcoin

From this unstructured set of ideas, the principals working on Cardano began both to explore cryptocurrency literature and to build a toolset of abstractions. The output of this research is IOHK’s extensive library of papers, numerous survey results such as this recent scripting language overview as well as an Ontology of Smart Contracts, and the Scorex project. These lessons yielded an appreciation of the cryptocurrency industry’s unusual and at times counterproductive growth.

Blockchain in Ukraine

7 July 2017 Jeremy Wood 4 mins read

A magnificent view across the Black Sea from a sunny rooftop terrace in Odessa – it’s hard to imagine a more beautiful location for a blockchain conference. Guests and speakers from more than 20 countries were at the fifth Blockchain Incredible Party, BIP001, a leading cryptocurrency and blockchain event in eastern Europe. The event started only two years ago but has already built up a following who keep coming back for the friendly atmosphere, the line-up of international speakers and the emphasis on social events put together with a hospitable touch.

Lilia Vershinina and Pavel Kravchenko, the organisers and founders, say Ukraine is fertile ground for blockchain start-ups because there is funding, a pool of talented developers and a general appetite for new business ideas. "There are a lot of people trying to experiment, and they are not afraid," said Pavel.

The event aims to showcase the Ukrainian scene and strengthen ties between blockchain advocates in Ukraine and the west, he says, pointing out that they are a long way from Silicon Valley. Another goal is providing education for the enthusiastic and fast-growing Ukrainian blockchain community. Speakers at the sold-out conference included Vlad Zamfir, developer at Ethereum, Giacomo Zucco, CEO at BlockchainLAB, Rob Viglione, co-founder of ZenCash, and Pavel Kravchenko, co-founder at DistributedLab.

Charles Hoskinson, CEO of IOHK, took to the stage to talk about the biggest challenge that he sees facing cryptocurrencies.

He drew a sobering comparison between what he said was a current cryptocurrency bubble, and the early period of the 20th century. Tracing an arc from the Knickerbocker crisis of 1907, when a three-week financial panic crashed the US stock market, to the founding of the Securities and Exchange Commission in 1934, Charles charted the excesses of capitalism that ravaged the economy during those years. From insider trading scandals to the novice investors who ploughed their savings into badly judged investments, the financial turmoil of that era took the US to the verge of bankruptcy and triggered the Great Depression.

Comparing then to now, Charles said: "We are in a bubble and there will be a collapse. A lot of businesses will wash away; the strong will survive. Moving beyond that we either repeat history and create another Federal Reserve, or we can ask ourselves, 'what can we put into code?'"

 "That is the ultimate challenge we face as a space. If we are successful, not only do you create something much better, more transparent, and efficient – it will be a global system. That is my hope for what cryptocurrencies can achieve. If we can get there scams will disappear, and there will be an expectation that things will work the way they ought to, as opposed to today."

Other speakers also tackled the growing trend of ICOs, or looked at the issue of regulation.

Also at the conference was Professor Roman Oliynykov, part of the IOHK Veritas team, which is based in Ukraine. Set up in 2016, the six-person team conducts due diligence, research and development in the area of cryptocurrencies. He summed up the local interest in the subject:

"In my city, Kharkiv, there were more than 1,000 registrations for the last Bitcoin meet-up, which would have been unimaginable only six months ago. There is a huge popularity of blockchain systems in Ukraine and there are many qualified professionals for scientific research into blockchains."

A main focus for the Veritas team has been research into treasury systems. This mechanism provides a pool of funding for the development of cryptocurrencies and allows communities, rather than single individuals or entities, to vote on the decisions that determine the future of a cryptocurrency.

Roman said: "Researchers at Lancaster University in the UK in collaboration with the Veritas team have almost finished a new voting protocol for a treasury, which provides privacy in voting, yet in a transparent process. It also provides delegation. Moreover, we can find out how popular each expert is, how many people are delegated to each expert, but the delegation itself remains private."

An expert as referenced in this case is a person trusted by the community. Some people will prefer to delegate their vote to experts.

For the organisers of BIP, an expanded programme is being planned for the next event. Pavel is looking at the possibility of a week-long conference that could include courses too. That would be great news for the loyal fans of this well put together event.