After the warm welcome Untangled received two weeks ago, we are ready to release the next episode of the podcast to keep you enlightened during the holidays, and to make sure you enter the new year fit and healthy.
In the second episode of Untangled we examine the current challenges in the healthcare sector. Specifically, we explore how the Tangle and distributed ledger technologies can ease the upcoming paradigm shift of truly digitalizing healthcare.
We speak to Andre Fialho and David Hawig, co-founders of the healthcare start-up PACT Care; Jakob Uffelmann, Director of Innovation at sundhed.dk, the public Danish eHealth portal; and Navin Ramachandran, eHealth Lead at the IOTA Foundation. They each offer their own perspective on this rapidly evolving sector.
We have already received a lot of valuable feedback, and we would be pleased if many more of you shared your thoughts and reflections with us.
Provide your feedback here
Since the first episode of Untangled we have upgraded our choice of channels and you can now find us here:
September was all about testing the Qubic programming language Abra and creating its initial support library, which is written in Abra of course. While creating the support library, the need arose to be able to verify all kinds of ideas. This resulted in a parallel trajectory where a simple Abra language parser was created in Java. The parser allowed us to run syntactical sanity checks on the library code even before we had a running Abra compiler. To facilitate the building of the parser we created an EBNF syntax diagram for the Abra language.
The process of building this parser directly resulted in a few changes to the Abra language syntax that make it easier to parse and analyze the language. In addition, while building the support library, it became clear that there was a lot of repetitive programming going on due to the fixed-size nature of the Abra trit vector data type. This resulted in the addition of a template-like feature which allows us to create generic functions and then instantiate them for the required trit vector sizes.
In the meantime, one of our Discord community members, ben75, managed to use the EBNF syntax diagram to implement an awesome syntax highlighter for Abra on the IntelliJ IDEA platform. That turned out to be a great help to us while creating the support library code. This community never ceases to amaze me.
Once the parser/analyzer worked correctly, we decided to give it the quick-and-dirty ability to run Abra code as an interpreter. This allows us to run and test Abra code already, even without being able to compile it for a specific platform yet. It also allows us to debug the support library code by stepping through the Java interpreter code in the debugger while it executes the Abra code.
We’re happy to report that most basic library functions worked exactly as designed, and only a few minor details needed fixing. The most astonishing thing that happened during this phase was that *by far* the most complex function that was written, the integer multiplication function, worked flawlessly right off the bat! Astonishing because when we wrote this code there was no way to run the code other than in our minds.
While we haven’t yet created the corresponding integer division function, the functions that implement arithmetic, logical, and conditional operations have already proven to work correctly in practice when we used them to implement several test functions. The most impressive part of the support library is probably the way we can tailor those functions to any required trit vector size, which allows us to perform integer arithmetic natively on a vast selection of integer ranges that is unmatched by most other programming languages. For example, we defined an integer data type named Huge, a trit vector 81 trits long, which can represent values in the range of minus to plus 221,713,244,121,518,884,974,124,815,309,574,946,401! We even tested a 6561-trit data type that is supposed to hold an IOTA signature, and found that it can represent integer numbers that are a whopping 3131 digits in length. The integer arithmetic functions work correctly with all of them!
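As a quick sanity check on those figures: the range of a balanced-ternary trit vector follows directly from its length, since n trits can represent every integer from minus to plus (3^n − 1)/2. This short Python sketch (illustrative, not Abra code) reproduces the numbers above:

```python
def trit_vector_max(n: int) -> int:
    """Largest integer representable by a balanced-ternary vector of n trits."""
    return (3 ** n - 1) // 2

# The 81-trit "Huge" type from the text:
print(trit_vector_max(81))
# -> 221713244121518884974124815309574946401

# The 6561-trit signature-sized type holds numbers 3131 digits long:
print(len(str(trit_vector_max(6561))))
# -> 3131
```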
Further additions to the parser allow it to generate a ternary representation of the Abra code ready for inclusion in a Qubic message to be sent through the Tangle, and convert that ternary code back into the original representation that can be run by the interpreter. This could prove very helpful in speeding up the process of getting to a working proof of concept for Qubic until a more robust version of the end-to-end functionality can be completed.
The document about the Qubic computational model is coming along nicely. It has grown so large that we decided to split it into multiple parts. The first two parts are currently being reviewed, and the third part is expected to be ready in the first week of October. The following parts are planned at the moment:
A conceptual overview of the Abra processing model.
An overview of the basic entities in Abra.
An overview of the Abra programming constructs.
Some example Abra functions with details on how they work.
The Qubic Dispatcher and its interaction with Abra.
It has been very exciting for us to finally see the first working Abra programs in action this month! We hope to be able to share the documentation and interpreter with the community as soon as possible, so that you can start playing with it and contributing to our Abra efforts as well.
Validity in the Tangle: The good, the bad, and the ugly
This article discusses how we can assess the validity of a transaction in a distributed ledger, and the particular issues we face in the case of the Tangle.
To begin with, let’s look at blockchain data structures. In a blockchain, before users attach a new block to the end of a chain they will make sure that all of the transactions in the chain are valid. Validity means that in each transaction, the payer has to have the required funds, and no coins can be created outside the protocol specification. This is made simple because the blocks are ordered in time and transactions are ordered within the blocks (by the miners), which makes comparing the timing of two transactions relatively straightforward. For example, if Alice starts with an empty wallet, the transaction “Alice gets 3 coins” must precede the transaction “Alice gives 3 coins,” otherwise the latter is invalid.
How is the Tangle Different?
Just like a blockchain, a necessary condition for a transaction to be valid is that the payer has received the coin before making the payment. However, since the Tangle is a Directed Acyclic Graph (DAG), the question of time-ordering becomes non-trivial. The DAG structure makes it hard, or sometimes impossible, to resolve whether a payer has received coins before spending them.
Consider two transactions, A and B. In a blockchain, there are only two options regarding their order: either A happened before B, or B happened before A. In a DAG, however, there is a third option: the two transactions may be incomparable (i.e., we cannot tell which happened first).
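This difference can be made concrete with a toy example. In the sketch below (illustrative Python, not IRI code), a transaction’s past cone is whatever it can reach by following approvals; A and B are incomparable because neither can reach the other:

```python
# Toy DAG: each transaction lists the transactions it directly approves.
approves = {
    "genesis": [],
    "A": ["genesis"],
    "B": ["genesis"],
    "C": ["A", "B"],
}

def reaches(start, target):
    """True if `target` is in the past cone of `start`."""
    stack, seen = [start], set()
    while stack:
        tx = stack.pop()
        if tx == target:
            return True
        if tx in seen:
            continue
        seen.add(tx)
        stack.extend(approves[tx])
    return False

print(reaches("A", "B"), reaches("B", "A"))  # False False: A, B incomparable
print(reaches("C", "A"))                     # True: A is in C's past cone
```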
To continue our discussion, let us agree on a few preliminaries. First, we treat validity as a state of a transaction, examined in the context of the current state of the Tangle. That is, any function that decides if a transaction is valid or not, takes into account both the transaction itself and the Tangle it lives in.
Second, the only thing a transaction is accountable for is its cone of past transactions — that is, all of the transactions that it approves directly or indirectly.
When a user chooses which transactions to approve, they check the validity of the transactions by analysing their cones (the part of the Tangle which these transactions reference), and can ignore the rest of the Tangle.
Every transaction in a cone implies the movement of coins from one account to another. To assess validity, we apply all of these movements to the Genesis state to generate an updated state of wallets and their balances. We will call this state acceptable if all of the balances are non-negative (i.e., address balances cannot contain negative values, only zero or positive) and the sum of the balances is equal to the number of coins created in the Genesis.
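The acceptability check can be sketched in a few lines of Python. The transaction tuples, names, and the `GENESIS_SUPPLY` constant below are illustrative assumptions for the example, not part of the actual protocol:

```python
GENESIS_SUPPLY = 10  # total coins created in the Genesis (example value)

def apply_cone(genesis_balances, movements):
    """Replay a cone's coin movements (payer, payee, amount) on top of the
    genesis state, oldest-first, and return the resulting balances."""
    balances = dict(genesis_balances)
    for payer, payee, amount in movements:
        balances[payer] = balances.get(payer, 0) - amount
        balances[payee] = balances.get(payee, 0) + amount
    return balances

def is_acceptable(balances):
    """Acceptable: no negative balances, and the total supply is conserved."""
    return (all(v >= 0 for v in balances.values())
            and sum(balances.values()) == GENESIS_SUPPLY)

good = apply_cone({"genesis": GENESIS_SUPPLY},
                  [("genesis", "alice", 3), ("alice", "bob", 3)])
print(is_acceptable(good))  # True

# "Alice gives 3 coins" without "Alice gets 3 coins" first:
bad = apply_cone({"genesis": GENESIS_SUPPLY}, [("alice", "bob", 3)])
print(is_acceptable(bad))   # False: alice's balance goes negative
```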
Now let’s dive deeper into what properties a transaction and its cone should have, for the transaction to be considered valid. There are many options for such properties. Here, we consider three of those:
1. The minimum property
The moment that an honest (even if selfish) user sends a transaction, they should own the funds they are spending. That is, in a healthy system, we expect users to have the money “in their pocket” before trying to give it to someone else.
Having sufficient funds is the most basic condition. For a transaction to be considered valid, we expect that after all of its cone transactions are applied to the Genesis state, we get an acceptable state. This property is minimal in the sense that any other reasonable property we come up with will satisfy this property.
2. The “good random walk” property
This property comes up rather naturally when implementing the Tangle. As we know, to choose two transactions, the user performs two random walks starting from the Genesis until they get to a tip that they consider as “valid.” It would be a pain to go all the way from the Genesis to a tip just to discover that it is no good.
To save time, and broken hearts over bad tips, a user can check that every transaction that the random walk steps on has the “minimum property” stated above. This means for each transaction on the way, after applying the cone of this transaction to the Genesis state, the state we get is acceptable.
3. The maximum property
The strictest property we can demand from a transaction is that every transaction in its cone has the minimum property. In addition to getting an acceptable state for the end transaction, or for the transactions we saw on the walk, we also check that we get an acceptable state for every transaction that we approve, directly or indirectly.
So if users approve a transaction, they must take some responsibility over the funds in that transaction, even if those funds are not involved in the new transaction they are adding. This property is maximal in the sense that any reasonable property we can think about cannot ask more from the cone than what is stated above.
So which property is good, which is bad, and which is ugly? Well, the one thing we can clearly say is that the “good random walk” property is bad, even if it is appealing from an implementation point of view. The problem is the following: if all users agree on this property, two users might still disagree on the validity of a given transaction, because the randomness of their walks took them along two different paths. As we would like honest users to agree with each other as much as possible, this property is definitely bad.
As for the other two — numerous hours and gallons of e-ink went into the discussion of which is the good one (and which is the ugly one). We were not able to find an attack that the minimum property allows or a use case that the maximum property prevents, which leaves a lot of room for discussion without a clear tiebreaker. I have my ideas, but I encourage you to think and share yours and maybe we’ll come up with the one property to rule them all.
David Sønstebø: Florian Doebler is an Economist and Entrepreneur with a focus on behavioral science and international development. He joins the IOTA Foundation as Social Impact & Donor Relations Coordinator at our headquarters in Berlin. During his studies in Munich and Vienna, his academic interest centered on the sustainable governance of commons throughout economic history, and on the intersection of innovation, cultures, and institutions.
Florian is a co-founder of a social start-up that aims to bring transparency to ecological accounting while incentivizing reductions in resource use. He is a specialist in the behavioral consequences of digitalization and, as an employee at a global consultancy, contributed to the introduction of experimental approaches in governments and private enterprises.
After graduating in January, Florian supported the establishment of the new Blockchain Lab at the German Development Agency (GIZ), furthering the maturity of Distributed Ledger Technologies with a view to the United Nations Sustainable Development Goals. Tapping his interdisciplinary skill set and passion for life on earth, he will help to harness the potential of IOTA for positive impact on people, communities, and ecosystems.
When I first learned about IOTA, it immediately sparked my imagination as a key enabler of the sharing economy. In my eyes, the answer to the challenges of today can only be the more sensible use of our resources and increased cooperation on a global scale. IOTA promises to pave the way towards these goals and opens up new avenues to build a more inclusive and equal society.
I could not be more thrilled to support the development of the future economy, in such an ambitious and passionate team.
Florian’s profound understanding of economics, technology, and human behavior will complement our Social Impact & Regulatory Affairs Team, and help to anchor IOTA as the backbone of the sharing economy. We are very happy that he is part of our journey and welcome him warmly to the IOTA Foundation!
Mark Nixon joins IOTA to head up the Smart City Program. He brings a wealth of experience gained across the TMT sector, having held senior commercial and operational roles at 3Com, Verizon, Nokia, O2 and, most recently, Huawei, where he led the Business Consulting Practice in MENA.
Mark started his career in telecommunications 30 years ago with the British Army “Royal Corps of Signals” as a Systems Engineer, working on some of the earliest mobile data communications networks deployed globally to support British military operations. Early successful forays into near-real-time intelligence systems, working on personality tracking and vehicle identification/tracking systems, led to his leaving the Royal Signals and joining the TCP/IP switching company 3Com. There he held several technical and product marketing roles, leading key projects with the UK academic community on the development of “JANET” and “SuperJANET”, a secure IP network supporting research and academic institutions.
Throughout his career he has looked at how technology can be applied innovatively to real-world business and consumer lifestyles, aiming to solve problems that make a difference to our lives. He launched the first Blackberry service outside the US with O2, and quickly followed by leading the launch of the first Windows mobile data solution, the O2 XDA, successfully introducing secure, scalable, data-centric solutions to the UK market.
Mark led the O2 platform solutions business, securing breakthrough contracts with Camelot (the UK National Lottery) and Pop Idol (for cross-carrier interactive voting services, which have revolutionized how we interact with our TV viewing). His team pioneered the early M2M market, creating an ecosystem of development and innovation for the mobility security market with the introduction of a smart car tracker service in the UK and Europe. He played an integral role in the establishment of a mobile data application development ecosystem (O2 Litmus), enabling innovation and collaboration in the creation of data applications for 3–4G carrier networks.
At Nokia, Mark drove the successful launch of the Ovi Store, offering customers content that was compatible with their mobile devices and relevant to their tastes and locations. Ovi introduced the first carrier billing capability to the Nokia partner network of operators, seamlessly simplifying how consumers access and pay for relevant content.
Most recently, Mark has been leading Huawei Business Consulting Practices in Europe and the Middle East. He was responsible for driving new innovative engagements with the TMT sector, enabling the digital transformation of traditional connectivity-centric mobile network operators into viable digital services businesses. As a TM Forum Ambassador, Mark is at the heart of introducing industry standards and frameworks that look to accelerate transformation by utilising new open, transparent technologies: for example the Open API Program, the Digital Maturity Model and Frameworx, the structured building blocks for carriers to address the transition from legacy processes, skills and operating systems to new agile, experience-based business models.
I’m so excited to be joining the IOTA team, where I will lead the Smart City Program. I look forward to continuing the great work that IOTA has done to date in engaging with the Smart City ecosystem, including government, academia and industry. We hope to continue to leverage the IOTA Foundation to partner in the creation of real-world DLT solutions that enhance the lives of citizens globally. By utilising the IOTA Tangle, R&D teams and partner network in the development of PoCs and use cases, we can drive open-source, permissionless development.
Mark Nixon brings a massive wealth of experience and a significant network of contacts, to lead the Smart City Program at the IOTA Foundation. Give him a warm welcome!
People often ask me what Blockchain or Distributed Ledger Technology is. As a technical analyst, it is part of my job to explain it to them. However, some technologies are extremely difficult to explain in a simple fashion.
DLT or Distributed Ledger Technology is one of those cases. Even for people with a software engineering or computer science background, it is sometimes difficult to understand the technology itself, and more so the implications for current and future business models.
Sometimes it really helps to show how it all fits together…
Inspired by Volkswagen@CEBIT
…something like Volkswagen’s miniature world showcase at CEBIT this year, which demonstrated how IOTA may be used in different automotive use cases, such as:
Vehicle identification. Allowing a vehicle to identify itself to parking garages, smart home systems, other vehicles, etc.
Over-the-air updates. Allowing verification that the software running in your car is correct and up-to-date.
Function on demand. Allowing payments to upgrade your car with features such as higher maximum speed or extended range.
Autonomous vehicles that earn money. Allowing autonomous vehicles to act as fee-earning taxis when not used by their owners.
Payment for tolls and EV charging. Allowing automatic, transparent and frictionless payment.
When I first saw the Volkswagen showcase I was amazed, as were many others. It actually made people understand the implications of IOTA’s machine-to-machine economy vision.
We saw that physical demonstrations help people to understand the vision. So how could we as a foundation model our vision? With Lego!
Many people have childhood memories of building their dreams with Lego, so we wanted to see if we too could build our dreams with it. Lego EV3 is a computing platform with which you can build programmable robots. Since its release in 2013, people have implemented some sophisticated applications on it.
In this project we installed ev3dev, a Debian-based operating system, on our EV3 and ran a simple shell script. This script checks whether the balance on a specific IOTA address has increased, indicating that someone paid the truck. If the funds on the address increase, then the truck starts its engine and drives for a set amount of time: 1 Ki equals one second of driving. Below you can see the truck start driving when money is sent to the right address.
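The actual demo used a shell script on ev3dev, but the polling logic can be sketched in Python. In this sketch, `get_balance` and `drive` are hypothetical placeholders for the IOTA node query and the EV3 motor control, not real API calls:

```python
import time

POLL_INTERVAL = 5  # seconds between balance checks

def seconds_to_drive(paid_iota):
    """1 Ki (1000 iota) buys one second of driving."""
    return paid_iota // 1000

def get_balance(address):
    """Hypothetical helper: ask a node for the address balance, in iota."""
    raise NotImplementedError

def drive(seconds):
    """Hypothetical helper: spin the EV3 motors for the given time."""
    print(f"driving for {seconds}s")

def run(address):
    last_balance = get_balance(address)
    while True:
        balance = get_balance(address)
        seconds = seconds_to_drive(balance - last_balance)
        if seconds > 0:          # someone paid the truck: start the engine
            drive(seconds)
            last_balance = balance
        time.sleep(POLL_INTERVAL)
```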
To summarize: We built a toy truck that drives only when it gets paid and stops when it ‘runs out of money’. That is the most basic use case but it already has real world relevance. With this simple example it becomes clear that IOTA:
Decreases complexity. There are no middle men involved. Only you and the truck.
Has no transaction fees. What you send is what the truck receives.
Simplifies cross-country payments. No need to deal with foreign currencies any more.
Can enforce sanctions in case of default. The truck will only drive as long as you are paying for it.
Even today, these features could help to reduce fraud, risks and costs.
We believe that by bringing together IOTA and such elegant and familiar modelling tools, we can make our new technology more approachable, and help people to understand its possibilities and implications.
The next step could be towards supply chain tracking, e.g. by putting a Bosch XDK (https://xdk.bosch-connectivity.com/) into a toy shipping container and using the recently published xdk2mam (https://xdk2mam.io/) code to stream environmental data to the Tangle. Furthermore, one could let the shipping container pay the truck to start driving, which would bring this miniature world showcase even closer to IOTA’s vision of a machine-to-machine economy.
There is a whole world yet to be built from these components. So that everyone can see what the future will look like!
Michele Nati, PhD, has 15 years of experience in the fields of data and IoT, working on research and development, and more recently on innovation, in a number of roles spanning academia, SMEs and government organisations. During his career he has worked on many of the technologies and aspects that helped to create the connected world we live in today.
In 2007, Michele obtained a PhD in Computer Science from the University of Rome La Sapienza. He researched, designed and deployed novel cross-layer communication protocols for wireless sensor networks and other resource-constrained embedded devices, dividing his time between Rome and Boston, where he was a visiting researcher at Northeastern University. After his PhD, he worked in academia and for a number of SMEs as a research scientist in the area of remote sensing and monitoring.
In 2010 Michele moved to the UK to work at the University of Surrey, where he was a Senior Researcher at the 5G Innovation Centre. As part of the first large-scale European smart city project, SmartSantander, he led the development and deployment of a campus-wide Internet of Things testbed comprising thousands of sensor devices for energy monitoring applications. He led a group of PhD students investigating the impact of IoT and data in creating more sustainable buildings and campuses. He also developed proposals for, and led, a number of other European projects, researching privacy-aware communication and crowdsensing in the mobile Internet of Things.
Since 2015 Michele has worked as the Lead Technologist for Data and Trust at Digital Catapult London, driving open innovation in the area of data and trust. He worked on a number of projects and initiatives to increase transparency and individual control on how personal data are collected and shared. He championed the introduction of a standardized Personal Data Receipt to increase transparency, track personal data sharing transactions and provide General Data Protection Regulation compliance.
In his research into trust and transparency, Michele was exposed to blockchain and distributed ledger technologies. He researched DLTs as a means to build an infrastructure for sharing data with trust, and to create innovation in the digital manufacturing and creative industries.
Currently he is an active member of the MyData community, liaising with academia, SMEs, large organisations and governments to identify open innovation activities in the field of data and IoT.
Beyond technology Michele is a keen trail runner and meetup organiser.
The IOTA technology and vision bring together all the aspects that have filled my technical and research interests for the last 15 years: IoT (from its early stages), data and trust. I believe these are the same three pillars on top of which innovation should now be created. I like the way the IOTA Foundation is working and establishing its presence in different vertical domains, through open innovation, demonstrators and partnerships, rather than just capitalising on the value associated with a cryptocurrency.
I am very excited to use my experience in managing multi-stakeholder initiatives to find real-world problems, and to help with the adoption of IOTA as a trust infrastructure for the creation of new data-sharing ecosystems.
Michele has deep technical knowledge and multi-sector exposure, and has participated in different stages of research and innovation activities. These qualities, together with his innate ability to manage multiple stakeholders, will be of great value for the adoption and evolution of the IOTA protocol.
Michele is based in London, where he works as the Lead Technical Analyst to support innovation in Global Trade and Supply Chains. In light of his deep domain knowledge and network of contacts, Michele also serves as the Personal Data Lead, working with the Business Development team to define a strategy for the Personal Data arena. Give him a warm welcome!
Over the last few months, the IOTA network has seen a significant increase in activity as more and more developers start to implement solutions based on the Tangle. While this is a very promising development, reflecting the increasing adoption of IOTA, it also results in an increase in database size, which may be problematic for nodes with limited hardware resources.
The IOTA Foundation has been performing global snapshots on a regular basis, whereby the transaction history is pruned, and the resulting balances are consolidated into a new genesis state that allows nodes to start over, with an empty database. However, this way of dealing with a growing ledger size is becoming more and more impractical as it requires us to:
Temporarily stop the coordinator.
Generate the snapshot state.
Give the community time to verify the generated files.
Finally, restart the coordinator.
The Solution — Local Snapshots
To solve this problem, we have been working on implementing a feature called Local Snapshots. This has always been a central part of IOTA’s Roadmap. The initial implementation is now being tested internally, and we will keep you up to date with next steps, but first we must review all the implemented changes and gather sufficient metrics about the behaviour of this new feature.
What does this mean for node operators?
Before we dive into the technical aspects of Local Snapshots, we want to give a short summary of the changes that this new feature brings for node operators:
When spinning up a new node, it is possible to sync based on a small local snapshot file, which allows nodes to be fully synced within minutes (rather than bootstrapping the node with a large database file).
The disk requirements for nodes are massively reduced — in fact we already have nodes running with just a few hundred megabytes of hard disk space.
Since there will no longer be a need for global snapshots, nodes could theoretically run for years without maintenance.
Nodes should be in a position to handle thousands of transactions per second, without the database size ever becoming a problem.
How does it work?
To understand the way Local Snapshots work, we first need to clarify a few things about the way the Tangle works:
The Tangle is a data structure that has a lot of uncertainty at its tips but gains certainty as time progresses.
Consequently, the further you go back in time the less likely it is for an unconfirmed transaction to suddenly become part of consensus. This is the reason why it is necessary to “reattach” transactions if they have been pending for too long.
To verify transactions and take part in IOTA’s consensus, it is only necessary to know the recent history of pending transactions and the current state (balances) of the ledger.
The basic principle behind Local Snapshots is relatively easy to understand and can be divided into different aspects:
Pruning of old transactions and persisting the balances
We first choose a confirmed transaction that is sufficiently old and use this transaction as an “anchor” for the local snapshot.
We then prune all transactions that are directly or indirectly referenced by this transaction and clean up the database accordingly.
Before we remove the old transactions, we check which balances were affected by them and persist the resulting state of the ledger in a local snapshot file, which is subsequently used by IRI as a new starting point.
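These three steps can be sketched as follows. The `StubDB` class and its methods are illustrative stand-ins for the node database, not actual IRI code:

```python
import json

class StubDB:
    """Minimal in-memory stand-in for the node database (illustrative only)."""
    def __init__(self, cones, states):
        self.txs = set(h for cone in cones.values() for h in cone) | set(cones)
        self._cones, self._states = cones, states
    def past_cone(self, h):
        return self._cones[h]          # all txs the anchor references
    def ledger_state_at(self, h):
        return self._states[h]         # balances after applying that cone
    def delete(self, tx):
        self.txs.discard(tx)

def take_local_snapshot(db, anchor_hash):
    """Persist the balances at the anchor, then prune its past cone."""
    snapshot = {"anchor": anchor_hash,
                "balances": db.ledger_state_at(anchor_hash)}
    for tx in db.past_cone(anchor_hash):
        db.delete(tx)                  # clean up the now-redundant history
    return json.dumps(snapshot)        # in IRI this would go to a snapshot file

db = StubDB(cones={"anchor": ["tx1", "tx2"]},
            states={"anchor": {"alice": 3, "bob": 7}})
snap = take_local_snapshot(db, "anchor")
print(snap)
print("tx1" in db.txs)  # False: the old transaction has been pruned
```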
Solid Entry Points (fast sync for new nodes)
While pruning old transactions is no problem for nodes that are already fully synced with the network, it poses a problem for new nodes that try to enter the network, since they are no longer able to easily retrieve the complete transaction history dating back to the last global snapshot.
Even if we assume that they are able to retrieve the full history by asking permanodes for the missing transactions, it would still take a very long time to catch up with the latest state of the ledger. This problem is not new, and it is one of the reasons why a lot of node operators bootstrap their nodes with a copy of the database from another synchronized node.
To solve this problem, we use the local snapshot files not just as a way to persist the state of the node, but also to allow new nodes to start their operations based on the exact same file (which can be shared by the community and the IF at regular intervals).
To be able to bootstrap a new node with a local snapshot file we need to store a few more details than just the balances:
First of all, a new syncing node needs to know at which point it can stop solidifying a chain of transactions and simply consider the subtangle solid. To do so, we determine which of the deleted transactions had approvers that did not become orphaned, and store their hashes in a list of “solid entry points”.
Once a node reaches one of those hashes it stops asking for its approvees and marks the transaction as solid (like the 999999….99 transaction after a global snapshot).
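A minimal sketch of this cut-off, assuming a simple in-memory map from known transactions to their approvees (illustrative only, not IRI code):

```python
def solidify(tx_hash, known, solid_entry_points, request):
    """Walk backwards through approvees; a transaction is solid once its whole
    past is either available locally or cut off by a solid entry point."""
    stack = [tx_hash]
    while stack:
        h = stack.pop()
        if h in solid_entry_points:
            continue                  # stop here: this hash lies behind the snapshot
        if h not in known:
            request(h)                # ask neighbours for the missing transaction
            return False              # not solid yet; retry after it arrives
        stack.extend(known[h])        # keep walking towards the past
    return True

known = {"tip": ["m1"], "m1": ["entry"]}
print(solidify("tip", known, {"entry"}, request=lambda h: None))  # True
```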
This enables us to use local snapshot files as a bootstrap mechanism to get a new node synced very quickly (within minutes), which at the same time is much easier to provide and retrieve than a copy of the whole database.
Seen Milestones (even faster sync)
While solid entry points allow us to stop the solidification process as soon as possible, it can still take a while to learn about all subsequent milestones that happened after our chosen cut-off point.
Since we want the local snapshot files to be a viable and efficient way of bootstrapping a new node we also save all subsequent milestone hashes in the local snapshot files, so that new nodes can immediately ask for missing milestones, without having to passively wait for their neighbours to broadcast them.
Since the pruning of data will be controlled by a simple configuration setting, it will now be possible to run permanodes that keep the full history of transactions, which has so far been impossible because global snapshots were a network-wide event.
The whole procedure of taking local snapshots is then automatically repeated to allow nodes to run with a relatively constant space requirement without unnecessary maintenance.
The upcoming feature of Local Snapshots will not just solve the space problems that arise with the growing adoption of IOTA, but will also simplify the setup of new nodes and allow organisations and community members to operate permanodes.
We will be opening this up for beta testing in the coming weeks. More information will be posted in the #snapshots channel on the IOTA Discord.