@adlrocha - Beyond Bitswap (I)

Exchanging blocks the IPFS way!


Disclaimer: All the images of this publication were extracted from this awesome slide deck by Dirk at Protocol Labs.

When we think about exchanging content in a p2p network like IPFS, the first thing that comes to mind is the distributed hash table (DHT). In IPFS, content is addressed by hashing the content itself to create a content identifier (CID). Intuition tells us that if we want to retrieve a block with the CID `QmPAwR…`, our peer has to go to its DHT and trigger a lookup operation to find the specific peer storing the piece of information we want. In IPFS, this is correct as long as Bitswap has already failed in its duty of finding blocks (more about this in a moment).

What we all initially learn about IPFS is that it uses three types of key-value pairings mapped using the DHT:

  • Provider Records that map a data identifier (i.e. multihash CID) to a peer that has advertised that it has the content and is willing to provide it (this is the mapping we mentioned above, and the one we are going to focus on throughout this publication).

  • Peer Records that map a peerID to a set of multiaddresses at which the peer may be reached. It is our way of discovering nodes in the network and starting connections with them.

  • IPNS Records which map an IPNS key (the hash of a public key) to an IPNS record.

From this we would expect that to find any content in the network we just have to perform a lookup on the provider records to find a peer that can serve us the content. However, before doing that, IPFS nodes implement an exchange interface to try to speed up this process before having to resort to the DHT. If you want to dig deeper into DHTs and how IPFS leverages them, I highly recommend Adin's deep-dive into the DHT over on the IPFS blog.


Content storage in IPFS

Before delving into how content is requested, we are going to make a quick detour to understand how information is stored in an IPFS network. IPFS uses a Merkle DAG structure to represent data. When you ask the network to store a file, your node splits the file into blocks. Blocks are linked to each other, and each of them is identified with a unique CID (the hash of its content). Splitting files into blocks means that different blocks may be stored by different nodes, so when you request the file, its parts can come from different sources and be authenticated quickly.
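
To make this concrete, here is a minimal sketch of the idea in Python. Everything here is illustrative: real IPFS uses multihash-based CIDs, content-aware chunking and IPLD encodings, while this toy version just hashes fixed-size chunks and links them from a root block:

```python
import hashlib

def cid(data: bytes) -> str:
    # Stand-in for a real CID: just the SHA-256 hex digest of the block.
    return hashlib.sha256(data).hexdigest()

def build_dag(content: bytes, block_size: int = 4):
    # Split the content into fixed-size leaf blocks.
    leaves = [content[i:i + block_size] for i in range(0, len(content), block_size)]
    blocks = {cid(b): b for b in leaves}
    # The root block simply lists the CIDs of its children, in order.
    root_data = "\n".join(cid(b) for b in leaves).encode()
    root_cid = cid(root_data)
    blocks[root_cid] = root_data
    return root_cid, blocks

root, blocks = build_dag(b"hello ipfs world")
# Every block can be authenticated by re-hashing it and comparing to its CID.
assert all(cid(data) == c for c, data in blocks.items())
```

Because each identifier is the hash of the block it names, a block received from any node can be verified on arrival, which is what makes fetching parts of a file from many untrusted sources safe. It also shows the deduplication point: two similar files produce some identical leaf blocks with identical CIDs.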

The use of Merkle DAGs to store content also tackles the issue of redundancy of content in the network. If you are storing two similar files in the network, they will share parts of the Merkle DAGs, i.e. they share blocks that won’t necessarily need to be stored twice. But this separation of data in blocks also means that when we want to access specific content consisting of several blocks, we need to find all the nodes that store these blocks, and request their transmission. This is the reason for the existence of Bitswap in IPFS.

Say hello to Bitswap

Bitswap is the data trading module of IPFS. It manages requesting and sending blocks to and from peers. Bitswap has two main jobs:

  • Acquire blocks requested by our client from the network.

  • Send other nodes blocks that they have requested.

So how exactly does Bitswap work? Imagine that we are searching for a file consisting of these three blocks: {CID1, CID2, CID3}, whose root block (i.e. the root of the DAG that will allow us to find the rest of the blocks of the file) is CID1. To get this file we would trigger a “get CID1” operation in our node. This is the moment Bitswap comes into play. Bitswap starts its operation by sending a WANT message to all its connected peers requesting the block for CID1. If there is no response to this request, Bitswap’s work is done. Our node will go to the DHT to find a node that stores the CID, and end of story (no speed-up over standard DHT content discovery achieved).

But what happens if some of our connected nodes answer this request saying that they have the block, and they send it to us? We then remember these nodes and add them to a session for CID1. The fact that these nodes had the block for CID1 means that they may also have the rest of the blocks of the file (CID2 and CID3). The reason we want to remember the nodes in a session is that in the next round, instead of blindly asking all our connected peers for the next CID of the DAG and flooding the network with our requests, we can narrow down the search a bit.
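
A toy model of this session behavior might look like the following (Python; `Peer` and `Session` are made-up names for illustration, not the actual go-bitswap API):

```python
class Peer:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = blocks  # CID -> data this peer stores

    def want(self, cid):
        # Returns the block if this peer has it, else None.
        return self.blocks.get(cid)

class Session:
    def __init__(self, connected_peers):
        self.connected = connected_peers
        self.members = []  # peers that have answered a WANT before

    def get(self, cid):
        # Ask session members first; broadcast to everyone only if the
        # session is still empty.
        candidates = self.members or self.connected
        for peer in candidates:
            block = peer.want(cid)
            if block is not None:
                if peer not in self.members:
                    self.members.append(peer)
                return block
        return None  # at this point we would fall back to the DHT

p1 = Peer("p1", {})
p2 = Peer("p2", {"CID1": b"root", "CID2": b"leaf-a", "CID3": b"leaf-b"})
session = Session([p1, p2])
assert session.get("CID1") == b"root"   # first round probes all connected peers
assert session.members == [p2]          # p2 answered, so it joins the session
assert session.get("CID2") == b"leaf-a" # later rounds ask session members first
```

The `None` return is exactly the "Bitswap's work is done" case from above: the node gives up on its connected peers and resorts to a DHT lookup.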

Now imagine that instead of getting a specific file where you don’t know the CIDs of the rest of the blocks, you want to recover a section of a Merkle DAG for which you know the CIDs in advance. In this case, instead of sending a single WANT from your node, Bitswap will send a Wantlist, i.e. a list of WANTs for CIDs. Each node remembers the Wantlists received from other peers until they are cancelled, so that if a peer sees one of the requested blocks before the requesting node has obtained it, it can forward the block to the requester. Thus, receivers of a Wantlist will forward back any of the requested blocks as soon as they have them.

Finally, the reception of a block from a Wantlist triggers a CANCEL message from the requesting node to the rest of the nodes the Wantlist was sent to, signalling that the block has already been received and is not needed anymore.
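
The Wantlist and CANCEL mechanics described above can be sketched roughly like this (again an illustrative model, not the real wire protocol):

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.remembered_wants = {}  # CID -> set of nodes that want it

    def receive_wantlist(self, requester, wantlist):
        # Serve what we already have; remember the rest until cancelled.
        for cid in wantlist:
            if cid in self.blocks:
                requester.receive_block(cid, self.blocks[cid])
            else:
                self.remembered_wants.setdefault(cid, set()).add(requester)

    def receive_block(self, cid, data):
        self.blocks[cid] = data
        # Forward to anyone whose remembered Wantlist includes this CID.
        for waiter in self.remembered_wants.pop(cid, set()):
            waiter.receive_block(cid, data)

    def receive_cancel(self, requester, cid):
        self.remembered_wants.get(cid, set()).discard(requester)

alice, bob, carol = Node("alice"), Node("bob"), Node("carol")

# Alice sends a Wantlist for CID2 to Bob and Carol; neither has it yet.
for peer in (bob, carol):
    peer.receive_wantlist(alice, ["CID2"])

# Bob later sees the block and forwards it to Alice, per his remembered Wantlist.
bob.receive_block("CID2", b"data")
assert alice.blocks["CID2"] == b"data"

# Alice cancels her now-satisfied want at Carol, who drops it.
carol.receive_cancel(alice, "CID2")
assert alice not in carol.remembered_wants.get("CID2", set())
```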

Extensions to Bitswap

For me, some of Bitswap’s most pressing limitations are its inefficient use of bandwidth and the fact that the discovery of blocks is performed blindly (in the sense that Bitswap only talks to its directly connected peers). Fortunately, to work around these problems, some extensions to Bitswap have already been proposed and implemented:

  • The use of additional WANT-HAVE, WANT-BLOCK, and HAVE messages. Flatly requesting blocks from every peer could lead to receiving duplicate copies of the same block from many of our connected peers. Instead, we can use these new Bitswap messages to express our desire for a block before actually requesting its transmission. This may increase the number of RTTs required to get a block, but it reduces the bandwidth requirements of the protocol.
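
A rough sketch of this two-phase idea, with hypothetical names (the real protocol folds these into Bitswap's message format):

```python
class Peer:
    def __init__(self, blocks):
        self.blocks = blocks
        self.blocks_sent = 0  # count transmissions to show the bandwidth saving

    def has(self, cid):
        # WANT-HAVE probe: answered with HAVE, no payload transferred.
        return cid in self.blocks

    def send_block(self, cid):
        # WANT-BLOCK: the actual bytes travel only on this call.
        self.blocks_sent += 1
        return self.blocks[cid]

def fetch(cid, peers):
    # Round 1 (one extra RTT): cheap WANT-HAVE probes to everyone.
    havers = [p for p in peers if p.has(cid)]
    if not havers:
        return None
    # Round 2: request the payload from a single peer, instead of
    # receiving duplicate copies from every peer that has it.
    return havers[0].send_block(cid)

peers = [Peer({"CID1": b"x"}), Peer({"CID1": b"x"}), Peer({})]
assert fetch("CID1", peers) == b"x"
# Only one peer transmitted the block, even though two had it.
assert sum(p.blocks_sent for p in peers) == 1
```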

  • Intelligent peer selection in sessions. When a node tells us that it has the block we were requesting, we add it to a session. Subsequent requests for a specific CID are exchanged with all the nodes in the corresponding session, but this exchange is blindly broadcast to all of them. To make the protocol more efficient, some proposals have been made to intelligently select the peers in a session to which requests will be broadcast, in order to reduce the number of messages exchanged in the network. Thus, these requests will only be sent to the nodes with the “highest score” in the session (i.e. the nodes with the highest probability of having the block we are requesting).
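
One simple way to picture such a scoring scheme, purely illustrative (actual proposals may score peers very differently), is to rank session peers by how many requests they have served and only send requests to the top few:

```python
from collections import defaultdict

class Peer:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = blocks

    def want(self, cid):
        return self.blocks.get(cid)

class ScoredSession:
    def __init__(self, peers, fanout=2):
        self.peers = peers
        self.fanout = fanout
        self.hits = defaultdict(int)  # peer -> requests successfully served

    def best_peers(self):
        # Highest-scoring peers first; only `fanout` of them get the request.
        ranked = sorted(self.peers, key=lambda p: self.hits[p], reverse=True)
        return ranked[:self.fanout]

    def get(self, cid):
        for peer in self.best_peers():
            block = peer.want(cid)
            if block is not None:
                self.hits[peer] += 1
                return block
        return None

p_good = Peer("good", {"CID1": b"a", "CID2": b"b"})
p_bad = Peer("bad", {})
session = ScoredSession([p_bad, p_good], fanout=2)
assert session.get("CID1") == b"a"
# After a successful answer, the good peer is ranked first for later requests.
assert session.best_peers()[0] is p_good
```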

In Bitswap we can only request a block (or a group of blocks in a Wantlist) through its CID. We cannot request a full branch or a full subgraph of a DAG. To solve this, another content exchange protocol similar to Bitswap has been implemented in the IPFS ecosystem: Graphsync. Graphsync leverages the IPLD data model (a standard representation of Merkle DAG structures) and IPLD selectors (a way of performing queries over these structures). Instead of sending flat WANT requests to peers, an IPLD selector is included in the request. This selector specifies a query over a DAG structure, so instead of asking for specific blocks it is like asking others: “hey, if anyone has any of the blocks fulfilling this query over the DAG, please send them to me”. Smart, right? To me, Graphsync is like a "batch Bitswap request".
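
A very crude approximation of what a selector-driven request enables: instead of one WANT per CID, a single query walks a whole subgraph. Here the "selector" is hard-coded to mean "everything under this root" (real IPLD selectors are far more expressive and run on the serving peer):

```python
def select_subgraph(store, root):
    # Walk the DAG from `root`, collecting every reachable block.
    # `store` maps CID -> (block data, list of child CIDs).
    found, stack = {}, [root]
    while stack:
        cid = stack.pop()
        if cid in found or cid not in store:
            continue
        data, links = store[cid]
        found[cid] = data
        stack.extend(links)
    return found

store = {
    "root": (b"r", ["a", "b"]),
    "a": (b"A", []),
    "b": (b"B", ["c"]),
    "c": (b"C", []),
}
blocks = select_subgraph(store, "root")
# One request yields the whole subgraph, instead of four WANT round-trips.
assert set(blocks) == {"root", "a", "b", "c"}
```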

Beyond Bitswap

The current implementation of Bitswap has already shown impressive results speeding up the discovery and exchange of content compared to “traditional” DHT content lookups, or even to centralized infrastructures (I highly recommend reading this post, which describes how the IPFS and Netflix teams leveraged IPFS as a peer-to-peer CDN to let nodes inside Netflix’s infrastructure collaborate and seed common pieces to neighboring nodes, helping make container distribution faster. Incredible, right? And one of the components responsible for these improvements is Bitswap).

All of this doesn’t mean that Bitswap cannot be further improved. My feeling is that one of the first things we need to do in order to improve on Bitswap’s current outstanding implementation is to understand the overheads of the protocol, and what is preventing us from exchanging content faster in IPFS. Is it the discovery of blocks or their actual transmission? Could we devise better ways to store data in the network to ease content discovery? Even more, we use IPFS for a wide gamut of use cases that may range from the exchange of several small files to the download of large datasets. Does Bitswap perform equally under these scenarios, or should we consider the implementation of a “use case-aware” content exchange algorithm?

To this end, in the next few weeks I am planning to build a test environment to help us answer these questions, in order to understand whether, considering the results of this analysis, we can together take Bitswap and file-sharing in P2P networks to their theoretical limit.

In short, a lot of exciting things are going on around Bitswap. I could keep writing about this for a few hours more, but I guess it makes more sense to keep this the first part of a series, and periodically share some updates and knowledge pills. Stay tuned!

@adlrocha - I am joining Protocol Labs!

Thank you, Telefónica, for two great years of impactful work.


Today I want to share some personal news I’ve been wanting to announce for a while: I am becoming a Labber! After following (and using) for years the amazing work being done at the core of Protocol Labs with projects such as IPFS, libp2p, Filecoin (and many more), I am becoming one of them. Specifically, I am joining their ResNetLab as a Research Engineer. Those of you who know me personally will understand how cool it is for me to be able to work on the hard problems tackled by this group, and how lucky I am to be doing so with the amazing team I am going to work with. I’ll definitely do my best to learn as much as possible from them while contributing all that I can from my personal experience and my work. I am beyond excited about the opportunity!

Heading towards my definition of professional success

For those of you who don’t know me personally, let me try to further explain my excitement about this opportunity. As Hamming stated in his “You and Your Research” talk at the Bell Communications Research Colloquia Series (which I have already talked about in this newsletter), the best way to succeed professionally (according to my definition of success, of course, which I think is a really personal concept) is to “work on the right problem in your field”, “work on problems that can become mighty oak trees”, and “work on things you won’t look back on and regret on your deathbed”. These are the three quotes that currently govern my professional decisions. And now I am fortunate enough to be joining a team and a work environment where I will be able to fulfill all of these requirements (and many more that make up my current definition of professional success), heading with them towards my view of success.

The mission of the Resilient Networks Lab (ResNetLab) is to build resilient distributed systems by creating and operating a platform where researchers can collaborate openly and asynchronously on deep technical work, while Protocol Labs’ mission is to “drive breakthroughs in computing to push humanity forward”. The group’s mission, the company’s mission, and my role in the company have all the keywords I would use to search for my “dream job”: [“research”, “engineering”, “computing”, “resilient distributed networks”, “push humanity”], and I would add a few more keywords that I feel are implicit in the aforementioned missions: [“fixing the Internet”, “open source software”]. Now you start getting my excitement even if you don’t know me, right?

To finish putting you in context and help you understand why this is so cool for me: the other day I was having dinner with a really good friend of mine. We’ve been best friends since high school, so you can imagine that he has endured long hours hearing me talk about my crazy ideas about how the Internet is broken, the urgent need to recover our privacy, the importance of science in society, the benefits of decentralization, and my obsession with self-organizing networks (he is an economist specialized in debt, so he knows nothing about computing, technology, etc.). When I told him the good news he, obviously, didn’t know who Protocol Labs were. I briefly told him about what they do, what their focus is, and what my contribution to the company will be, and his answer was: “wtf! but that is the kind of company you have been looking to start since high school”. Again, he knows nothing about technology; he came to this conclusion after years of me spamming him on these topics.

Some of the problems ResNetLab are working on

Now I will take the liberty of copy-pasting directly from the ResNetLab site some of the problems they are working on, to see if I am able to transmit to you part of my excitement about them:

The lab’s genesis comes from a need present in the IPFS and libp2p projects to amp their research efforts to tackle the critical challenges of scaling up networks to planet scale and beyond. The Lab is designed to take ownership of the earlier stages on the research pipeline, from ideas to specs and to code.

  • Preserve users’ privacy when providing and fetching content: How to ensure that the users of the IPFS network can collect and provide information while maintaining their full anonymity. My feeling is that a consistent solution to this problem could be huge not only for IPFS, but for the overall architecture of the Internet.

  • Mutable data (naming, real-time, guarantees): Enabling a multitude of different patterns of interaction between users, machines, and both. In other words, what are the essential primitives that must be provided for dynamic applications to exist, and what are the guarantees they require (consistency, availability, persistence, authenticity, etc.) from the underlying layer in order to create powerful and complete applications in the Distributed Web? Real-time guarantees in distributed systems are something I have been looking to explore for a while.

  • Human-readable naming: You can only have two of three properties for a name: human-meaningful, secure, decentralized. This is Zooko’s Trilemma. Can we have all 3, or even more? Can context related to some data help solve this problem?

  • Enhanced bitswap/graphsync with more network smarts: Bitswap is a simple protocol and it generally works. However, we feel that its performance can be substantially improved. One of the main factors holding performance back is the fact that a node cannot request a subgraph of the DAG, which results in many round-trips in order to “walk down” the DAG. The current operation of bitswap also very often leads to duplicate transmission and receipt of content, which overloads both the end nodes and the network.

  • Routing at scale (1M, 10M, 100M, 1B.. nodes): Content-addressable networks face the challenge of routing scalability, as the number of addressable elements in the network rises by several orders of magnitude compared to the host-addressable Internet of today. Remember when I mentioned self-organizing networks above? This problem and the next one are quite linked to my crazy ideas around this type of network, so I can’t wait to see what I face on this front.

  • PubSub at scale (1M, 10M, 100M, 1B.. nodes): As the IPFS system is evolving and growing, communicating new entries to the IPNS is becoming an issue due to the increased network and node load requirements. The expected growth of the system to multiple millions of nodes is going to create significant performance issues, which might render the system unusable. Despite the significant amount of related literature on the topic of pub/sub, very few systems have been tested to that level of scalability, while those that have been are mostly cloud-based, managed and structured infrastructures.

Source: https://research.protocol.ai/groups/resnetlab/

Exciting, right?

The legacy I leave in Telefónica


Source: https://trustos.readthedocs.io/en/latest/

This all looks like the perfect story, but I am not going to lie to you: this journey is being quite bittersweet. I am sad to leave my current job at Telefónica and the amazing team of talented individuals I leave there. Don’t get me wrong, at Telefónica I had one of the closest jobs I could have to my “dream job” in Spain, and in these two years I’ve had a blast and I haven’t stopped learning. It is not that my job there didn’t have all the ingredients for me to achieve professional success; it is just that at Protocol Labs I will be able to focus on my goals in the way I am currently looking for, to speed up this quest. So now you also get why leaving my current job has not been an easy thing (quitting a job you enjoy and that you are comfortable with is never easy).

I don’t want to bore you with more personal matters; I know you come to this newsletter every Sunday probably for the tech and not for the gossip. But before I close this personal publication, I want to briefly share some of the things our amazing team at Telefónica managed to achieve in these two years:

  • It all started with the development of a tokenization platform. We wanted to abstract companies and internal units from the complexities of blockchain technology. We wanted to save them from having to worry about “where and how” to issue their tokens (Ethereum? Hyperledger Fabric? BigchainDB? why not all of them?), and from having to hire specialized talent (which was lacking at the time, and still is) to build their PoCs. With this system we managed to greatly ease the development of blockchain-based use cases and prototypes (internally and externally), and we saw successful projects (such as Karma) built using our technology.

  • With the success of the tokenization platform we became ambitious, and we came up with the concept of TrustOS, which would end up becoming Telefónica’s blockchain product to “add trust into the operations of corporations”. TrustOS was included in Telefónica’s Activation Programme, and many startups and SMEs are already using it through this programme.

  • But all the work behind TrustOS hasn’t been exclusively product development and integration; it is also the result of long hours of research and development that led to:

    • An exhaustive analysis of performance best practices in Hyperledger Fabric for production networks (the conclusions were shared in a series of publications in this newsletter, and at several conferences/meetups).

    • The filing of five different patents, protecting the IP of schemes included in the product in a broad range of fields such as: the management and recovery of blockchain identities leveraging the SIM card and the telco infrastructure; ways of increasing the trust of private blockchain networks and the execution of smart contracts logic; the deployment of federation of blockchain networks; or the development of new consensus algorithms for private blockchain networks.

    • The standardization of a new method of idea generation and collaboration for the team (this is something I haven’t shared yet in this newsletter. Expect a follow-up on this).

    • The participation in several European H2020 Innovation Projects with amazing teams from universities and corporations.

  • Moreover, as part of our work on TrustOS, we also realized the need for a common decentralized identity framework for authentication and authorization in corporate blockchain networks. It all started as a way of fixing a specific need we had in TrustOS related to identities in general-purpose Fabric networks, but it kept growing and being generalized internally up to a point where we felt it could become a full-fledged open source project from which everyone could benefit. This is how TrustID was born, and it became an open source project hosted in Hyperledger Labs (check the repo).

  • And many many more things that I may have missed in this brief recap of my last two years there (the hackathons we’ve won, the meetups we’ve given, etc.).

And I know for sure that this is just the beginning for them. I am really looking forward to seeing all the successes yet to come for this amazing team (because, let’s be honest, I was definitely the one weighing them down with my crazy ideas ;) ). Thank you for these two really fun and exciting years: @jota_ele_ene, @joobid, @diegoescalonaro, @_mtnieto, @aggcastro, @cesssRC, @carlosalca94.


On Monday I start a new exciting professional endeavor without the amazing team from the pic above, let’s see how it goes. Wish me the best, and see you next week :)

@adlrocha - The role of Hardware in Emerging Technologies

We all wish HW prototyping was as cheap as SW prototyping

Originally published in https://lastbasic.com/blog


Software prototyping is cheap. This is why every day we see a new service powered by AI, blockchain or IoT. You just need your personal computer or laptop, an internet connection, and maybe access to some of the resources offered by a public cloud provider to speed up your development, and you are ready to go. But what about hardware prototyping? When we talk about hardware, things aren’t that easy, right? You don’t have a pay-as-you-go public offering of hardware pieces to test your ideas and iterate over your designs. If every time you want to try a new implementation you need to buy a new piece of hardware or request the manufacture of an ad-hoc ASIC, by the time you are done with your product you may be broke. Especially if you are approaching a product in an innovative way, without funding or a history of past successes.

There is no SW without HW

All software is consumed through hardware; this is something we must never forget when we are building our product. It may appear that when we talk about innovation in technology like blockchain, AI or IoT we are exclusively talking about software, but then we are disregarding the role of hardware in these advancements. Blockchain is cryptography and distributed systems, but also the hardware miners need to run the consensus, the network infrastructure required to deploy a global network, and what about the implementation of new advancements such as HW VDFs (something briefly introduced in this newsletter)? AI is fancy training algorithms (see, for instance, OpenAI’s recent GPT-3, which I am really looking forward to playing with and sharing with you here) as well as all the hardware for parallel computation and data management that made this field, and the problems it tackles, accessible to anyone. And what about IoT? IoT is essentially the co-design of hardware and software.


Owning the hardware allows for delivery of the best possible experience. One of the reasons Apple products are so popular is because they deliver a great experience. They do so in large part due to Apple’s strategy of vertical integration: a strategy where Apple makes both its software and hardware in-house. Apple designs and develops iOS as well as the A11 processor running in iPads and iPhones.

But the importance of hardware is not only what separates us from stability and a good user experience; in many cases it is directly the reason why a new technology isn’t a reality yet. Take quantum computing as the perfect example. We have been hearing about the goodness and potential impact of quantum computing for ages, so why don’t we have a brand new quantum computer in our living room yet? Let me give you a hint: hardware.

Since the 80s we have been theoretically discovering and improving powerful quantum algorithms that would be able to break the encryption of the internet or find new molecules, but we don’t have the hardware to run them. Engineers all over the world are working hard to provide us with more qubits, lower error rates, greater connectivity of qubits, and the possibility of running quantum hardware in friendlier environments — did you know that the coldest places in the universe are the cryogenic chambers where we run quantum circuits? You are welcome, now you have something to brag about in your next dinner party.

UNIVAC I control station, in Museum of Science, Boston, Massachusetts, USA, by Daderot.

Fortunately, if you went back to the 40s or 50s in a time machine and asked experts about the future of their computers, few would have predicted the degree of advances that have been made in processor speed, memory size, data storage size, physical size and ease of use. If you had suggested trying to fit a room-sized mainframe in a shoebox, let alone a machine a hundred times more capable, they would have laughed at you, but that’s the scope of the challenge ahead of us for quantum computers. You see? It all comes down to better hardware.

The return of HW-SW co-design?

And what can we, as ordinary mortals without a PhD and big pockets, do to build innovative products considering the important role of hardware in technology? Fortunately, there are two things that are going to help us in our endeavor towards cheaper prototyping and better designs: hardware-software co-design and FPGAs.

The core concepts in hardware-software co-design are getting another look, nearly two decades after this approach was first introduced and failed to catch on. What’s different this time around is the growing complexity and an emphasis on architectural improvements, as well as device scaling, particularly for artificial intelligence and machine learning applications. Software is a critical component, and the more tightly integrated the software, the better the power and performance. Software also adds an element of flexibility, which is essential in many of these designs because algorithms are in a state of almost constant flux.

The initial idea behind co-design was that a single language could be used to describe hardware and software. With a single description, it would be possible to optimize the implementation, partitioning off pieces of functionality that would go into accelerators, pieces that would be implemented in custom hardware and pieces that would run as software on the processor — all at the touch of a button (well, or a compiler).

And the only piece of hardware that you need for this, apart from your laptop, is a Field-Programmable Gate Array, or FPGA. FPGAs are semiconductor ICs where a large majority of the electrical functionality inside the device can be changed: changed by the design engineer, changed during the PCB assembly process, or even changed after the equipment has been shipped to customers out in the “field”. So combine an FPGA with the field of software-hardware co-design and not only do you have a way of approaching the design of your full product bottom-up, but you also have the platform you need for cheap hardware prototyping. Forget about buying and assembling brand new pieces for every iteration of your product and adapting your software to them; with this set-up the only thing you need to test a new version of your implementation is to re-program the FPGA, just like with software.

Make HW great again!

I know hardware is not as sexy as software, but I feel it is key for the future of technology. If you’ll allow me a confession, I am an electrical engineer by training who fell in love with software along the way, so the idea of making hardware as “accessible” as software, and giving everyone the capability of building a hardware system as easily as we now build software systems, excites me greatly, and it is a field I am really looking forward to exploring. Let me know if you want to join me in this quest!

@adlrocha - Playing with GossipSub

And the release of my PubSub Playground


When one thinks about security and scalability in blockchain and distributed systems, the first thing that comes to mind is consensus algorithms. The only thing preventing us from higher transaction throughputs and more resilient blockchain systems is better consensus algorithms, right? Well, unfortunately, this is only partially true. All these mechanisms require an underlying messaging protocol to communicate with the rest of the (unstructured) network, and to orchestrate the operation of the consensus algorithm and any other overlaying scheme the system may have. Thus, in order to have a resilient and high-performance consensus algorithm, we first need a resilient and high-performance messaging protocol.

If I asked you what is the underlying messaging protocol used in Bitcoin or Ethereum, would you be able to answer? Until last week, I wouldn’t have been able to give an accurate answer either. We all have a general intuition, but many of us disregard this important component in blockchain platforms.

In case you were wondering, Bitcoin uses a flooding algorithm to spread new transactions and blocks to the rest of the network. In a flooding algorithm, nodes broadcast new messages to every other node they are connected to in the network. This ensures that messages propagate as fast as possible, at the cost of traffic redundancy. Ethereum’s current messaging protocol is a bit more “bandwidth efficient” than Bitcoin’s flooding. Instead of every node broadcasting messages to every other connected peer in the network, nodes randomly select sqrt(N) peers to broadcast the message to, somewhat reducing the traffic redundancy and bandwidth requirements of the system.
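
The difference in fanout between the two strategies can be sketched in a few lines (illustrative only; real implementations also deal with peer management, retries, message validation, etc.):

```python
import math
import random

def flood(peers, msg):
    # Bitcoin-style flooding: relay the message to every connected peer.
    return list(peers)

def sqrt_gossip(peers, msg, rng=random.Random(42)):
    # Ethereum-style: relay to roughly sqrt(N) randomly chosen peers.
    k = max(1, int(math.sqrt(len(peers))))
    return rng.sample(list(peers), k)

peers = [f"peer{i}" for i in range(64)]
assert len(flood(peers, "tx")) == 64       # every peer gets a copy
assert len(sqrt_gossip(peers, "tx")) == 8  # only sqrt(64) = 8 peers do
```

With 64 connected peers, each relay step sends 64 copies under flooding but only 8 under the sqrt(N) rule, which is exactly the bandwidth/redundancy trade-off described above.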

As practically shown for years in the Bitcoin and Ethereum networks, these protocols “kind of work fine”. But could we envision better messaging protocols for next-generation blockchain and p2p networks? Fortunately, we have a new good candidate for this, say hello to GossipSub.

GossipSub Publish/Subscribe

GossipSub is a brand new pubsub protocol proposal designed and implemented in libp2p. The aim of this new protocol is to address many of the limitations of existing pubsub and messaging protocols in unstructured networks (high bandwidth requirements, no delivery guarantees, high delivery latency, no mitigation schemes against potential attacks, etc.). GossipSub, and specifically GossipSub v1.1, has been designed to incorporate resilience against a wide spectrum of attacks while ensuring fast message delivery.

So how does GossipSub work? The main focus of GossipSub is to deliver messages to all the nodes in the network subscribed to a certain topic. To achieve this, two overlay networks are built over the underlying unstructured p2p network: (i) a full-message peering mesh, where a small number of nodes are connected using bidirectional links forming a local mesh (the number of nodes to be included in the local mesh is determined by the degree of the network; this parameter controls the trade-off between speed, reliability, resilience and efficiency of the network); (ii) and a metadata-only network made up of every peer in the network, consisting of all the network connections between peers that aren’t full-message peerings.

Diagram showing a large shaded area with scattered dots inside connected by
thick, dark lines representing full-message peerings between peers. Most of the
dots have three dark lines running from them to other dots. One of the dots has
four lines running from it and is labelled as “Peer reached upper bound”. A
different dot has only two lines running from it and is labelled “Peer reached
lower bound”.  Beneath the diagram is a legend reading “Network peering degree = 3;
Upper bound = 4; Lower bound = 2“ accompanied with small symbols showing dots
with three, four and two lines running from them

Source: Libp2p Pub/sub Documentation
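The degree bounds from the diagram can be sketched as a toy mesh-maintenance routine. This is an illustrative simplification, not the actual libp2p implementation; the constants mirror the network peering degree and its upper/lower bounds, and the peer-selection logic is deliberately naive:

```typescript
// Toy GossipSub-style mesh maintenance: keep the number of full-message
// peers between a lower and an upper bound around the target degree.

const DEGREE = 3;      // target number of full-message peers per topic
const LOWER_BOUND = 2; // below this, GRAFT new peers into the mesh
const UPPER_BOUND = 4; // above this, PRUNE peers out of the mesh

function maintainMesh(mesh: Set<string>, candidates: string[]): Set<string> {
  const next = new Set(mesh);
  // Too few full-message peers: graft candidates up to the target degree.
  if (next.size < LOWER_BOUND) {
    for (const peer of candidates) {
      if (next.size >= DEGREE) break;
      next.add(peer);
    }
  }
  // Too many full-message peers: prune back down to the target degree.
  if (next.size > UPPER_BOUND) {
    for (const peer of next) {
      if (next.size <= DEGREE) break;
      next.delete(peer);
    }
  }
  return next;
}
```

In the real protocol the peers to graft and prune are chosen more carefully (and, in v1.1, guided by peer scores), but the bounded-degree idea is the same.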

The local mesh (full-message network) is formed by nodes subscribed to the same topics (this is how nodes choose which peers to add to their local mesh). Thus, when a peer subscribes to a topic, it selects some peers to become its full-message peers for that topic. In parallel, nodes in the global mesh (metadata network) use a gossiping algorithm to share information with the rest of the peers, inside and outside their local mesh, about what is happening in the network. Gossip messages include information about the subscription and unsubscription of connected nodes, recent messages seen in the network, etc.

Diagram showing a large shaded area with scattered dots inside connected by
many thin, light lines representing metadata-only peerings between peers. The
lines between the dots are labelled “Each peering is a network connection
between two peers”.

Source: Libp2p Pub/sub Documentation

When a peer wants to publish a message, it sends a copy to all the full-message peers it is connected to; and when a peer receives a new message from another peer, it stores the message and forwards a copy to all of its other full-message peers. Peers remember a list of recently seen messages. This lets them act upon a message only the first time they see it and ignore retransmissions of already-seen messages, avoiding wasted bandwidth on duplicates.
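The seen-messages logic fits in a few lines. In this sketch the message IDs and peer names are illustrative, and the function simply returns the set of peers a message should be forwarded to:

```typescript
// Forward a message to the full-message peers only the first time it is
// seen; retransmissions of known message IDs are dropped.

const seen = new Set<string>();

function onMessage(msgId: string, fullMessagePeers: string[]): string[] {
  if (seen.has(msgId)) return []; // duplicate: ignore, forward to no one
  seen.add(msgId);
  return fullMessagePeers; // first sighting: forward to all mesh peers
}
```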

Gossip messages propagate metadata throughout the network. Gossips are emitted to a random subset of peers that may or may not be part of the mesh, and they are used, for instance, to inform other peers about messages seen and that they may have missed.

Source: GossipSub paper
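For instance, the IHAVE/IWANT exchange used for this purpose can be sketched as follows: a peer gossips the IDs of messages it has recently seen, and the receiver replies asking for the ones it is missing (the function name here is mine, not the implementation’s):

```typescript
// On receiving an IHAVE gossip advertising recent message IDs, reply with an
// IWANT listing every ID we have not seen, so the sender pushes us the
// full messages we missed.

function onIHave(advertised: string[], seenIds: Set<string>): string[] {
  return advertised.filter((id) => !seenIds.has(id));
}
```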

Additionally, GossipSub-v1.1 includes schemes to prevent a wide range of attacks on the protocol. Every node keeps a score for every other peer it interacts with. Scores are private and are never shared with other nodes (forming a local view of the network from a specific node). The score function is used as a performance-monitoring mechanism to identify and remove badly-behaving nodes from the mesh. It is computed from a set of interesting parameters observed about each counterpart, such as the time a node has been part of the mesh, its rate of message deliveries and failures, the number of invalid messages exchanged, etc.
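As a flavor of what such a function looks like, here is a hypothetical, heavily simplified score. The real v1.1 score combines several weighted components per topic; the weights and field names below are made up for illustration only:

```typescript
// Made-up simplification of a GossipSub-v1.1-style peer score: reward time
// in the mesh and useful deliveries, penalize invalid messages heavily.

interface PeerStats {
  timeInMesh: number;        // seconds this peer has been in our mesh
  messageDeliveries: number; // messages it was the first to deliver to us
  invalidMessages: number;   // messages of theirs that failed validation
}

function score(stats: PeerStats): number {
  return (
    0.5 * Math.min(stats.timeInMesh, 100) + // time in mesh, capped
    1.0 * stats.messageDeliveries -          // reward useful deliveries
    10.0 * stats.invalidMessages             // penalize invalid messages hard
  );
}

// A peer whose score drops below a threshold would be pruned from the mesh.
```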

This score is the foundation of many of the protection schemes introduced in GossipSub-v1.1, such as opportunistic grafting and adaptive gossip dissemination. For the sake of brevity, I will refer you to the recently released pre-print of GossipSub’s paper if you want a deeper understanding of its operation and design rationale. What’s more, alongside this announcement, an extensive evaluation report on the performance and security of the protocol (which I honestly had a blast reading) and the results of its security audit have also been released. In short, awesome references to imbibe all the knowledge around this awesome piece of tech.

The GossipSub Playground

And it’s time to get practical! You can find the implementation and specs of GossipSub in libp2p in case you want to start playing with it, or directly use it in your next application. But if the only thing you want to see is GossipSub in action in a simple network without having to worry about hacking anything, I built a simple tool to help you with this that I called “The PubSub Playground”. With this tool you will be able to easily deploy a local GossipSub network and watch how messages flow through the network while collecting interesting metrics.


To run the tool you just need to clone the repo and run the following commands:

$ go build  
$ ./pubsub-playground -pubs <number_publishers> -subs <number_subscribers> -log -server

With this simple command you will set up a network with your desired number of publishers and subscribers. Every publisher node periodically publishes messages to a topic with the same ID as its PeerID, while subscribers randomly subscribe to one of the available topics and read the messages sent by the topic's corresponding publisher.

Throughout the run every node will collect data from the execution of GossipSub, and trace its local view of the exchange of messages with the rest of the network. All these traces are used to generate a set of metrics that can be checked in two ways:

  • By enabling the -log flag in the execution of the tool to periodically print metrics to stdout.

  • By using the -server flag, which starts an HTTP server exposing a WebSocket so that anyone can subscribe to the socket and receive updates on the metrics collected in the network. Along with the WebSocket, a simple dashboard is served at http://localhost:3000 so you can visually evaluate the evolution of your network metrics.

Implementing this tool has been a great way to understand how GossipSub works and behaves under the hood in networks of different sizes. With the PubSub Playground we are able to draw conclusions about the behavior of the protocol such as:

  • The type of messages exchanged in a GossipSub network, and the number of them required to deliver messages to subscribers according to the size of the network.

  • How the load on the network behaves according to the number of publishers and subscribers.

  • How the average delay of message delivery behaves, and how this metric changes over time (from boot to stability).

  • See the average delay of messages from publishers to subscribers, and the rate of “useful” and “control” messages in the system.

And there are still a bunch of other things I could add to drive our analysis further like:

  • Dynamically set the number of publishers and subscribers (so we can add new nodes with the network running) and play with the network churn.

  • Enable the publication and subscription to more than one topic for peers.

  • Set additional delays and latencies in node connections to see how this affects the protocol.

  • One that I am really looking forward to exploring (but haven’t had the time to tackle): using all the data collected to draw a heat map that shows (in real time?) how messages flow and are exchanged between the different nodes in the network. This would allow us to draw conclusions about “hot links” between nodes, observe the formation of the local meshes, understand the bandwidth supported by links, etc.

  • And I would also love to start adding rogue nodes to the tool so we can set up and visualize the behaviour of the protocol when the network is under attack (as shown in GossipSub’s evaluation report).

And nothing else on my part, this was all for today. I would love to hear your thoughts about this simple project, and I hope you are as excited as I am about the work being done around GossipSub and other brand new networking protocols for the next generation of the Internet. See you next week (hopefully my next publication will be written from a beautiful Spanish beach :) )

@adlrocha - Deno, the new dino in town

Did you find the anagram? No-de, De-no

green T-rex toy on white stair

I keep a list in my notebook of potential topics to write about in this newsletter. This list already has over 30 items, but it was already Thursday afternoon and I couldn’t find the motivation (nor the time) to write about any of them, so I chose to make a Twitter call to see if any of my followers could suggest a topic I felt a bit more excited to write about in the time I had.

It was then that my good friend @koke0117 mentioned Deno. Actually, it had been a few weeks since I started reading and playing around with Deno, so I opened my text editor and started writing.

From Node to Deno

So what exactly is Deno? “Deno is a simple, modern and secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust.” Started by Node.js creator Ryan Dahl, it is an attempt to fix many of the current issues of Node.js while accommodating the new trends and advancements in JavaScript from the last ten years. Ryan’s talk at JSConf EU perfectly illustrates some of the reasons why he chose to start a new JavaScript runtime project. Some of the things he mentions that he would have done differently if he were to redesign Node are related to node_modules, Node.js’ security, and the ability to run code both in the browser and on the server.

As a result of these concerns, Deno was born. Let’s go through some of the new features introduced by Deno:

(i) Secure by default. Deno runs your code in a secure sandbox by default, so you will have to give explicit permissions to your code if you want to access files, the network, or the environment you are running on. Let’s see what happens if we want to run this simple script that fetches data from an API:

const data = await fetch("https://jsonplaceholder.typicode.com/todos/1")

If we run this using “deno run test.ts”, Deno aborts with a PermissionDenied error: our script doesn’t have permission to access the network. We need to explicitly grant it the required permissions by running “deno run --allow-net test.ts”. Pretty sweet, right?

(ii) Top-level await compatibility, and TypeScript support out of the box. Did you notice anything weird in the code I shared above? I WAS USING AN AWAIT OUTSIDE OF AN ASYNC FUNCTION. This is another of Deno’s cool features. Forget about having to explicitly resolve promises using .then() when you can’t wrap the code inside an async function; Deno has you covered. The same way it has you covered if you want to use TypeScript: in the previous example I didn’t have to compile my .ts file to run it, Deno directly knew how to do this for me.
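As a minimal self-contained illustration (the awaited promise here is just a stand-in for any async operation):

```typescript
// `await` is legal at the top level of a Deno module: no async wrapper,
// no .then() chaining needed to get at the resolved value.
const value = await Promise.resolve(42);
console.log(value); // 42
```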

(iii) And one of the features most loved by some and really hated by others: no need for NPM and the node_modules folder anymore. Deno moves from centralized package managers (npm or yarn) to a decentralized approach where dependencies are imported using ES6 imports and the URL of the source code repository (the Golang way! Actually, I am a bit biased because I love Go, so I won’t comment further on this). In previous posts I already warned about the risks of centralized package managers, but if you want to read a bit more about this, check this post.

To test this new approach to dependencies you can run this example and see how the dependency is downloaded and made available to you in your local cache:

import { serve } from "https://deno.land/std@0.50.0/http/server.ts";

(iv) Deno ships software as a single executable file, and this is really convenient, as it produces a dependency-less single file from your code. You can, for instance, run “deno bundle test.ts test.js” in the previous example to see the magic happen.

(v) Built-in utilities and (audited) standard modules. Deno ships many of the things I love about Go, such as “deno fmt” for code formatting and “deno info” for dependency inspection. Moreover, Deno has a standard library with many useful functionalities that removes the need for unknown external dependencies.

(vi) Finally, Deno is compatible with “the browser” (Deno’s API aims to be browser-compatible), and it supports the execution of WebAssembly (yay! and you know this is something I love).

Playing with Deno

If you want to start playing with Deno and try the examples shared above, the first thing you need to do is install it. There are several ways of doing this, my favorites being their provided install script:

curl -fsSL https://deno.land/x/install/install.sh | sh

Or, if you have Rust installed (and as Deno is written in Rust), installing it with cargo:

cargo install deno

Remember to export Deno’s path to use it directly from your CLI. To see if your installation has been successful, you can run the following command:

deno run https://deno.land/std/examples/welcome.ts

You’ll see how deno downloads every required dependency and runs the welcome typescript file 🦕.

A cool second example to test in Deno is to run a web server using its standard library (it is the example suggested on their official site).

import { serve } from "https://deno.land/std@0.59.0/http/server.ts";

const s = serve({ port: 8000 });
for await (const req of s) {
    req.respond({ body: "Hello World\n" });
}
With this simple piece of code you will see many of the new features of Deno in action.

I know Node.js already supports the execution of WASM binaries, but this was another thing I wanted to try in Deno. There are several tutorials out there that guide you through building your WebAssembly binary from Rust or Golang and running it in Deno. All of my past approaches to WASM have been with Rust, so this time I chose to go with Golang to see what happens.

My first impression was that the Go-WASM integration is not as straightforward as with Rust (I haven’t yet figured out how to compile a Go function with parameters to WASM and call it with arguments from JS, but we are here to talk about Deno; more about this in future posts).

This is the simple program I am going to compile into WASM:

package main

import "fmt"

func main() {
   fmt.Println("hello from WASM")
}

To compile it we just need to run the compiler with the following options:

$ GOOS=js GOARCH=wasm go build deno.go
$ mv deno deno.wasm

To run it from JS we first copy an auxiliary file used to load WASM from JS (I guess this file does the interfacing work of the automatically generated JS files you get when compiling WASM from Rust). We copy this file into our project’s root:

$ cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .

And write the JS script that will run the WASM binary:

import * as _ from "./wasm_exec.js"; // defines the Go glue class
const go = new window.Go();
const f = await Deno.open("./deno.wasm");
const buf = await Deno.readAll(f);
const inst = await WebAssembly.instantiate(buf, go.importObject);
go.run(inst.instance); // actually run the Go program (executes main)

This script loads the binary from our local environment and runs it. When running it, we have to remember to grant Deno read access to local files. Thus:

$ deno run --allow-read deno.js

And there you go, we have our WASM binary running within Deno.

Closing thoughts

I don’t think Deno will replace Node.js, and probably many won’t even switch to Deno for a single project, but for those of us more comfortable with typed programming languages, and psyched about the future of WASM, I feel Deno is a step toward the future of JavaScript, as it introduces by design many features that in Node.js require a significant overhead if you want to use modern JavaScript (I am thinking about TypeScript compilation, bundles, and top-level await support). Adding to this the improvements in security, it makes Deno a really interesting alternative to Node.js in the long run. But this is my humble and probably biased opinion; I would love to hear your thoughts about Deno.

PS: I highly recommend this post as a complementary view of the role and future of Deno in the ecosystem.
