@adlrocha - The outcomes of remote work

A thread on its benefits after one year of working from home.

The publication I was working on for this week needs a bit more work. Instead of publishing it "half baked" today, I am going to delay it to next week. I didn't have any backup article lined up for today, so I will take the opportunity to share with you all a thread collecting my thoughts after more than a year working from home. Enjoy!

(PS: This last tweet is a poll. Click on the link to participate and share your thoughts with the tweetaverse. So far, no one wants to come back to an office, not even Apple’s employees. What about you?).

I know, this is not a conventional publication, but I thought it could be fun to share a bunch of tweets in an article-like format to enhance their readability. A new experiment for this newsletter, and as always, I would love to know your opinion.

Have you been working from home and want to share your thoughts in this newsletter? Mention me in your tweet storm and I will add your thread here. Let's collect as many testimonials about remote work as possible. It may help companies rethink their talent retention and acquisition policies (or not). Have a wonderful Sunday!

@adlrocha - IPFS for storage, the blockchain for verified immutability

A case for decentralized storage in L2 solutions

It's never a good idea to store big chunks of data in a blockchain. For starters, this is simply impossible, as the amount of data that may be included in a transaction is limited. But even if you could, it would be prohibitively expensive in terms of cost and resources, as every peer in the network would have to store that piece of data you chose to store on-chain. Moreover, if your plan is to do all of this over a public blockchain network, everything you store on-chain will be visible to every peer in the network, so forget about storing sensitive information: if you are not careful you may be disclosing your darkest secrets (the kind of secrets you already disclose to Facebook).

And you may be wondering, "then, how can someone implement a decentralized application that requires storing large chunks of data if they can't be stored directly on-chain?". Well, fortunately, the web3 ecosystem already has decentralized storage solutions like IPFS to help us in this quest.

IPFS, blockchain’s best friend to deal with data

IPFS is a distributed system for storing and accessing files, websites, applications and data. It is a public network, which means that anyone can run a peer and start downloading and storing content in the network right away. IPFS is a content-addressable network, so all content stored in the network is identified by a unique identifier called a Content IDentifier, or CID. The CID of some content is derived from the hash of the content. This means that if the content changes, its CID changes, so different versions of the same content will have completely different identifiers.
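
To make the content-addressing property concrete, here is a tiny Python sketch. It uses a plain SHA-256 digest as a stand-in identifier: real CIDs are built with multihash/multibase encodings on top of a chunked DAG, so the values below are not actual CIDs, just an illustration of the "content changes, identifier changes" property.

```python
import hashlib

def toy_content_id(content: bytes) -> str:
    # Stand-in for a CID: an identifier derived purely from the content.
    # Real CIDs wrap the hash with multihash/multibase metadata.
    return hashlib.sha256(content).hexdigest()

v1 = toy_content_id(b"Hello, IPFS!")
v2 = toy_content_id(b"Hello, IPFS!!")  # a one-byte change...

print(v1)
print(v2)
print(v1 == v2)  # False: different content, completely different identifier
```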

To download content from the IPFS network we need to issue a request specifying the CID of the content we want to download. Our IPFS client will take care of the rest, leveraging the network's underlying protocols. It will find the peers in the network that are storing the content we are looking for, and download it for us. "Get" requests, as download operations are called in IPFS, are usually specified through a link that looks like this:

/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/docs.html 

In the end, downloading data from the IPFS network means providing our IPFS client with one of these links. If you want to learn more about the specifics of IPFS you can check these tutorials out, or the project's doc page.
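
If you want to try this yourself, the following sketch uses the community-maintained ipfshttpclient Python library against a local IPFS daemon (the daemon multiaddress is an assumption; adjust it to your setup). It stores a small piece of content and then "gets" it back by CID.

```python
import ipfshttpclient

# Connect to a local IPFS daemon exposing the default HTTP API port
# (adjust the multiaddress if your daemon listens elsewhere).
client = ipfshttpclient.connect("/ip4/127.0.0.1/tcp/5001/http")

# Store some content and get back its CID...
cid = client.add_bytes(b"hello from the IPFS network")
print("stored under CID:", cid)

# ...then perform a "get": ask the network for the content behind that CID.
data = client.cat(cid)
print(data)  # b'hello from the IPFS network'

client.close()
```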

And why is IPFS blockchain's best friend to deal with data? Well, the fact that content in IPFS is uniquely identified, and that if the data in any piece of content changes its CID changes with it, means that data in the IPFS network is immutable. CIDs are a few bytes long, and can be stored on-chain and used in smart contracts to point to data stored in the IPFS network. With this, we don't need to store large chunks of data on-chain anymore. The only thing stored and managed on-chain is the CID of the corresponding data. If someone needs to access the specific content (and not just the identifier), they can do so by making a request to the IPFS network for that CID. Cool, right?

But enough with the theory behind IPFS. What are some good use cases for this integration between IPFS and the blockchain? The perfect example of the need for decentralized storage in the blockchain is NFTs (Non-Fungible Tokens), of course.

NFTs are used to represent one-of-a-kind digital assets. When an NFT represents a collectible crypto-cat (like in CryptoKitties), all the specifics of the NFT can be stored directly on-chain, but what happens when what we are minting as an NFT is a digital asset like a song, an image, or a deep learning dataset? We can't store these things directly on-chain. Here is where IPFS and content-addressable decentralized storage solutions excel. We can store our digital asset in IPFS, and then use the CID (and maybe some additional metadata) to mint the NFT. Anyone would be able to validate the ownership on-chain, and access the asset in question in the IPFS network. You have an illustration of how this would work at nft.storage and in the image below.
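
A rough sketch of that flow in Python (not a production recipe): pin the asset and an ERC-721-style metadata JSON in IPFS, and hand the resulting CID to whatever mint function your NFT contract exposes. The mint_nft call below is a hypothetical placeholder for the contract interaction (e.g. via web3.py or a service like nft.storage), not a real API.

```python
import json
import ipfshttpclient

def store_asset_and_metadata(path: str, name: str, description: str) -> str:
    """Pin the asset plus an ERC-721-style metadata JSON in IPFS and
    return the metadata CID, ready to be used as the tokenURI."""
    with ipfshttpclient.connect() as client:
        asset_cid = client.add(path)["Hash"]      # CID of the raw asset
        metadata = {
            "name": name,
            "description": description,
            "image": f"ipfs://{asset_cid}",       # immutable pointer to the asset
        }
        return client.add_bytes(json.dumps(metadata).encode())

def mint_nft(token_uri: str) -> None:
    # Hypothetical stand-in: in a real DApp this would be a transaction
    # to your NFT contract passing the tokenURI along.
    print("minting NFT with tokenURI:", token_uri)

metadata_cid = store_asset_and_metadata("song.mp3", "My new song", "First single")
mint_nft(f"ipfs://{metadata_cid}")
```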

Decentralized storage as a first-class citizen in L2 solutions

Does one need to operate an Ethereum (pick your blockchain of choice) node and an IPFS node in order to implement and orchestrate these kinds of use cases and interactions in a decentralized application? Well, not really. There are several alternatives: you can use IPFS gateways for the IPFS side of things (like Pinata, Infura, or Textile), or even delegate the operation of all your nodes to someone else. What is clear is that, even with this, the operations of storing your asset and minting your token cannot be done atomically.

I was reflecting on this "atomicity of operations between decentralized storage systems and blockchain platforms" when I realized something. A few weeks ago I wrote a comparison between different Optimistic L2 solutions. One of the contenders in that comparison was Metis, an optimistic L2 rollup solution. One of the features that caught my attention from this project was their VM integration with IPFS. According to their whitepaper, they support decentralized storage "out-of-the-box" in their VM through an IPFS resolver. The idea of atomically interacting with the IPFS network while making transactions on-chain is something that really interested me. Atomic operations in IPFS and on-chain seemed like an impossible thing to me, but this may actually be possible in the L2 world. So I decided to dig deeper into Metis' technology and understand whether this kind of atomic operation is possible in Metis.

Metis includes two types of storage in their VM: the regular VM storage, responsible for storing the blocks and account states; and a special storage layer that integrates with IPFS (see figure below). Metis leverages IPFS Cluster technology. IPFS Cluster nodes are regular IPFS peers that can run private sub-networks, so the data stored in the cluster is not shared with peers in the public IPFS network (making it really convenient for the storage of sensitive information). IPFS Cluster nodes can choose to store content in the public network or restrict access to content to one of their connected sub-networks.

Content stored in IPFS may be accessed from Metis through the IPFS resolver of the VM. When a user invokes a method that needs to interact with the special storage layer, the IPFS router in the VM intercepts the corresponding operations, and sends them to the IPFS network through the IPFS Resolver. The IPFS resolver behaves as an IPFS client and is also responsible for encrypting both the data and the final CID of the content (if the information needs to be private), so it can be committed on-chain without privacy and security worries. 
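
Metis' resolver internals are not something I can quote here, but conceptually the flow looks like the sketch below: encrypt the document, store the ciphertext in IPFS (an IPFS Cluster peer in Metis' case), and encrypt the resulting CID before committing it on-chain. The Fernet key handling and the commit_on_chain stub are illustrative assumptions, not Metis APIs.

```python
import ipfshttpclient
from cryptography.fernet import Fernet

def store_private(document: bytes, key: bytes) -> bytes:
    """Encrypt a document, store the ciphertext in IPFS, and return the
    encrypted CID, ready to be committed on-chain."""
    f = Fernet(key)
    ciphertext = f.encrypt(document)           # only key holders can read the content
    with ipfshttpclient.connect() as client:   # a cluster peer in Metis' case
        cid = client.add_bytes(ciphertext)
    return f.encrypt(cid.encode())             # hide even the pointer from public view

def commit_on_chain(encrypted_cid: bytes) -> None:
    # Hypothetical stand-in for the L2 transaction that records the pointer.
    print("committing encrypted CID:", encrypted_cid.decode()[:32] + "...")

key = Fernet.generate_key()                    # in Metis, derived from DAC credentials
commit_on_chain(store_private(b"delivery note #1234", key))
```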

To illustrate how all this integration works, let's use an example. Imagine that you want to mint an NFT for your new song in the Ethereum network using Metis. If the NFT factory smart contract is already deployed and in place, the only thing that you need to worry about is triggering the right operations to store the song in IPFS and mint the NFT. The Metis VM will be responsible for intercepting the IPFS operation, encrypting the data (if necessary), and interacting with the IPFS network to store the song. The result of this operation is the CID of the song, which is then used in the L2 transaction sent to mint the NFT. This L2 transaction is then rolled up to L1, and eventually persisted in the Ethereum network. In this way, the Metis node manages all the interactions necessary to atomically store data in the IPFS network and persist the result in the blockchain.

Another interesting part of this integration, and one that is specific to Metis, is that DACs (Decentralized Autonomous Companies) can use these IPFS layers to store sensitive information for the DAC in a decentralized way, without having to rely on centralized storage systems. The data is conveniently encrypted with the corresponding DAC credentials. Even more, when a DAC is created for the first time, a new "charter" is also created to determine the rules of the DAC. In the charter, the DAC creator can include the access rights and operation permissions for this IPFS and sensitive-data storage integration.

Let's imagine that a big retailer is using Metis to track the entire lifecycle of its products: from their production, to their distribution, to the sale to its customers. There are already companies using blockchain technology for this purpose (Carrefour, Costco, Maersk, etc.). A smart contract in a blockchain network is used by every party involved in the lifecycle of the product. Every status update in the life of the product is conveniently registered on-chain. These updates can include information such as the time when a specific entity in the supply chain handled the product, how, and what the next step (or owner) in the chain is. All of this can be done today with any blockchain network with support for smart contracts. Unfortunately, in real life all of these interactions are governed through legal contracts, and acknowledged by "real-life documents" such as delivery notes.

One of the added values of having a blockchain network orchestrating these interactions is that all entities share a common information system that stores all the supply chain information. But what happens with the documents related to the actions performed on the blockchain? They need to be stored somewhere else. This is where solutions like Metis' work like a charm. These documents may include sensitive information, so they can't be stored in the clear in a public network. Moreover, presumably not every document should be accessible by every party.

Through Metis' IPFS integration, every DAC involved in this supply chain use case is able to perform the transaction that triggers an update to the state of a product in the blockchain, while storing the corresponding document in the IPFS network. As described above, these documents would be conveniently encrypted with a set of keys that gives access to the document exclusively to the right entities. The status update in the smart contract would add a pointer to the document's CID in case anyone wants to check the "real-life document" associated with the product status update. In this process, DACs are able to determine which other DACs or entities have access to these documents. And with this, our companies are able to share a common information system consistent with the state in the blockchain, without having to worry about implementing additional schemes or maintaining an independent system for document storage.

L2 is more than just scalability

By now the importance of decentralized storage systems for the success of Web3 is clear to everyone, but something people don't realize when thinking about L2 solutions is that they are not exclusively about scalability. They are actually way more than that. L2 solutions can become complete enhancements over L1, in terms of scalability, but also in terms of features. In this publication we've seen a clear example of this (integrated decentralized storage in L2).

Decentralized applications are increasingly in need of storing large amounts of data in an immutable way leveraging blockchain technologies and decentralized storage systems (take NFTs as an example), and L2 solutions can take this opportunity to inject additional features into L1 networks, just as Metis has done with their IPFS integration. I can't wait to see what is yet to come in the L2 ecosystem. Are you aware of any other cool projects with innovative L2 ideas? Do not hesitate to ping me :)

@adlrocha - PARPA (Private-DARPA)

Building better organizations for disruptive innovation

I accidentally came across this piece at the beginning of the week. I had another topic planned for this week, but the content of the piece was too powerful and full of ideas to miss the chance to write about it, and share these ideas with you now that they are fresh. Basically, the article explores how to build an organization that systematically enables “more science fiction to become reality”. Just like organizations such as DARPA and Bell Labs did decades ago with the development of the Internet, semiconductors, mobile communications, solar cells, and a long list of new discoveries that changed the world as we knew it. 

Instead of doing "yet another write-up" about the piece, I am going to do what I've done before with other publications: I will share my raw notes and reflections on the piece as a reminder for my "future me", hoping that you also get some value from reading them. (BTW, if you feel extremely lazy and don't want to even read this publication, but you want to know what PARPA is all about, you can read this two-pager the author made available summarizing the core ideas).

What is the current landscape of the innovation ecosystem?

“Different institutions enable certain sets of activities that we associate with innovation: Academia is good at generating novel ideas; startups are great at pushing high-potential products into new markets; and corporate R&D improves existing product lines. Together, these institutions comprise an ‘innovation ecosystem’".

Each of these structures has strengths and constraints that make them great at what they do, but not much more. Academics are good at generating ideas, but they only have resources to focus on narrow fields and open problems; startups can push an idea to the market blazing fast, but also have limited resources, and need to focus on short-term profitability in the process in order to survive; corporate R&D units have deeper pockets, and can attract multidisciplinary talent, but they will only focus on research endeavors related to their products, or that can represent an alternative source of income for their company. Generally speaking, big corporations are quite risk averse.

All of them are valuable for the innovation pipeline, but there is a gap that needs to be filled before this ecosystem has a complete pipeline able to transform science fiction and theory into reality. What allowed golden-age research orgs to produce such transformative technology?

Most significantly, these [type of] organizations simultaneously:

  • Promoted work on general purpose technologies before they became specialized.

  • Enabled "targeted piddling around," especially with equipment and resources that would not otherwise be available.

  • Fostered collaborations among diverse individuals with useful specialized knowledge.

  • Shepherded smooth transitions of technologies between different readiness levels, combining manufacturing with research to create novelty and scale.

And this is where PARPA-like organizations can operate sustainably and support the innovation pipeline. 

Innovation with structure, a system, and a roadmap

The goal of a PARPA-like company is to transform science fiction and theory into reality. This means moving from ideas, to prototypes, to products that people can use. These ideas and prototypes may require expertise and research from multiple disciplines, and the final outcome of some of these projects may be the realization that the idea is not feasible. This is why the projects and research programs of a PARPA-like company need to be smoothly de-risked through the pipeline, to make efficient use of resources and researchers' time. Research on hard problems can be slow, so PARPA-like companies need to focus on building a foundation that enables them to be sustainable for decades (what would have happened to Bell Labs if they didn't have the funding to survive more than a decade?). This is achieved by systematizing the innovation/research process within the org:

1. Create and stress-test unintuitive research programs in a systematic (and therefore repeatable) way.

2. Use that credibility to run a handful of research programs and produce results that wouldn't happen otherwise.

3. Use that credibility to run more research programs and help them "graduate" to effective next steps.

4. Make the entire cycle eventually-autocatalyzing by plowing windfalls into an endowment.

We can even come up with a clear structure and a set of detailed, fine-grained steps to achieve this. Even more, my personal feeling is that structuring a project with a system like this would not only be useful for PARPA-like companies focused on innovation, but also for more traditional organizations approaching projects with a certain level of uncertainty (with its obvious differences). Forget about Scrum and embrace the PARPA-like project management approach 🤓

  1. It's possible to design programs in a systematic (and therefore repeatable) way.

    1. It's possible to find people who want to do pieces of work that would not happen otherwise.

    2. There exist several areas that could possibly yield results (defined loosely) in 3–5 years (given steady work).

    3. For each potential program, it's possible to come up with a set of small experiments that could further confirm or deny its feasibility within 12–24 months and ~$1M scale.

    4. It's possible to get people to undertake seedling experiments.

    5. Accomplishing 1a→1d will involve a combination of one-on-one conversations driven by reading papers, finding gaps, reaching out to people, and holding small, intensive workshops.

    6. It's possible to find people who are excited to be PMs and actually do 1a→1e.

    7. It's possible to systematize this process.

  2. It's possible to execute on these programs in a way that relaxes academic/startup constraints.

    1. It will be possible to coordinate several research projects toward a coherent goal.

    2. The early pieces of work will be done by some combination of academic labs, contract R&D orgs, independent researchers, and possibly people within corporate R&D who are able to do work for grants.

    3. People will be willing to shift their organizational affiliations to an exploratory program organization as the program goes on.

  3. It's possible to "graduate" these programs in a way that gives them a life of their own.

    1. During the programs themselves, it will be possible to do some of the pre-work to figure out the best way to graduate technology in order to maximize its beneficial impact on the world.

    2. At least some of the people who have been working on the program will want to carry it forward in some form.

    3. Some of the technology will make sense as a company, some of it will make sense as a nonprofit, some of it will make sense as a licensed technology, and some of it will make sense as open-source.

  4. It's possible for this entire cycle to eventually become autocatalytic.

    1. An early (non-monetary) component of autocatalysis is a community that generates good inbound programs.

    2. "Pseudo-autocatalysis" is a state where the organization is getting enough consistent revenue through some combination of donations, spin-offs, and other sources, like licensing and consulting, that it can continue to run multiple programs assuming those revenue sources continue.

    3. Full autocatalysis can only happen by building up an endowment.

And how to do all of this without worrying about money?

Research for the sake of research is not profitable. I always say that "a bad result is a great result, because it lets others know a path that is not worth following". Unfortunately, bad results won't fund your research, and the reason why many great researchers end up leaving research is that they get extremely burnt out from having to worry not only about their research, but also about finding the funding to keep it alive. The goal of PARPA-like companies should be to build programs where researchers can safely focus on the research without having to worry about making ends meet. But the money has to come from somewhere, even for PARPA-like companies. Money does not fall from the sky, even if central banks insist on making us believe it does. So how can this kind of research organization fund itself efficiently (and effectively) to be sustainable for several decades while building general-purpose science fiction?

“The nature of PARPA’s work means that while it will (hopefully!) create a lot of value, it likely won’t be able to capture enough of that value to be net profitable and absolutely would not be able to compete with startups and the stock market on a time-adjusted ROI basis. However, commercialization and startups are powerful dispersion mechanisms for certain technologies. If PARPA does its job right, it could shepherd industry-defining technologies in the same way that PARC or Bell Labs did in the past. It’s a reasonable bet that a portfolio of programs that become companies would have an investable return. A purely Nonprofit organization funded by donations would leave support for these programs on the table. To that end, PARPA will use a hybrid for-and-non-profit structure. The non-profit will run the programs and ’drive’ to make sure that we work on programs based on potential impact, not profit.”

I've been thinking a lot about the incentive systems of research and innovation and ways to improve them. I once came up with an organizational structure similar to the one presented in the article to fund research so that researchers can keep doing research without having to worry about becoming engineers, or even sales representatives, for their project if it ends up being a success. Don't look at me like that, this has happened before in R&D corporate units, and I've experienced it personally.

Roughly, this is how I feel the funding stages of an organization like this could look:

  • At stage 0, there needs to be some kind of initial seed money in the form of a grant, a donation, or an initial investment in the company (risk capital) to fund the first few research projects/programs. It helps if the company has a good first idea of something that could generate short-term value to attract future money.

  • From there on, additional projects would have to be funded with:

    • Program-specific investments from organizations and individuals interested in the research and development of certain fields.

    • General donations and research grants.

  • When the results from these initial projects start becoming a reality because they've been sufficiently de-risked, and the general-purpose technology can start being realized into actual products, a spin-off company is created to generate value from the results of the research project. The parent company takes equity in the spin-off, and uses the value generated by this company to fund future programs and research. At this point, the parent company can welcome new investors, or liquidate part of its stake, according to its financial health and how much it wants to boost research across its active research programs, or the ones in the pipeline.

  • After this initial stage, the company can go two ways: the spin-off process of the initial programs is a success and it can start planning for its decade-long sustainability replicating the aforementioned process; or the results of the initial spin-offs are not as successful as expected and the company needs to return to stage 0, find external investment and bet on a small number of new research projects to generate the added value required to recover its financial health. 

  • Additionally, many research groups and R&D departments will be willing to work with the parent company. Collaboration is key for innovation. These companies will also behave as innovation hubs where researchers and companies gather to work on the hard problems of a field. PARPA-like companies would benefit from investing part of their funds to become patrons of great researchers and teams to improve research in a field (see below).

This was a toy example of how I imagine the funding of this type of company going. The article goes into way more detail, and it is a must-read to understand the economics of research and innovation.

Industrial labs enabled high-collaboration research work among larger and more diverse groups of people than in academia or startups.

The team, collaboration, and researching in the open

In addition to giving projects longer time scales, less existential risk, and larger budgets than most startups, there are several specific ways that industrial labs helped technologies find good niches that startups and academia don't provide.

When you're still trying to figure a technology out, it's not clear which skill sets you want in the room. Industrial labs facilitated people floating between different projects loosely creating and breaking collaborations. Bell Labs was particularly good at enabling these free radicals:

"The Solar cell just sort of happened," he [Cal Fuller] said. It was not "team research" in the traditional sense, but it was made possible "because the Labs policy did not require us to get the permission of our bosses to cooperate—at the Laboratories one could go directly to the person who could help."

If I got you hooked this far, you definitely need to read "The Idea Factory". It is a must if you are a researcher looking to work at, or build, a company like the one that allowed Shannon to give birth to Information Theory, or Shockley et al. to design their transistor.

I can't reiterate enough the importance of collaboration for innovation. Being exposed to a constant flow of ideas boosts researchers' creativity (again, I highly recommend reading The Idea Factory and understanding the power of hallway walks). For PARPA-like orgs to be successful in their research endeavors, they should dedicate part of their resources to promoting this. They can do so by:

  • Giving grants to top researchers to figure out the blind spots and missing pieces of a problem in order to de-risk certain programs.

  • Promoting open research. The goal is to transform theory into reality, and the flow of open ideas and collaboration can take theory into reality faster. These companies are building general-purpose technologies that will lead to the specific technologies behind new products, so promoting open research doesn't put their stake in the outcome at risk. If the result of the research is something like the Internet, the amount of new products that can be built enables everyone to get a piece of the pie. The aim of the research in PARPA-like companies is not to capture a piece of the pie with their technology (as generally happens in large corporations' R&D), but to make the pie substantially larger for everyone.

  • Finally, sharing common roadmaps with other research groups and organizations can certainly help align parallel streams of research toward a common goal. PARPA-like orgs can become the thought leaders orchestrating these efforts between different entities.

A must read

This was a collection of ideas and reflections triggered by reading this piece. I've chosen these either because I've been reflecting on them before, or because they really resonated with me. In any case, if you are into innovation and building a company that can sustainably focus on doing research, you should definitely read the article.

One final thought. While writing this article I was trying to come up with existing companies that fit the description of a PARPA-like org. I could only come up with one. What about you? Can you think of existing companies that approach innovation in a similar way to the one described above? If you can, please ping me, I would love to learn more about them and how they operate internally.

@adlrocha - The Optimistic Layer 2 Wars

Fighting Ethereum's limitations

In the past few months, the price of ETH has surged, and the usage of the Ethereum network has significantly increased. The main culprits for this trend have been the renewed interest in NFTs and the consolidation of DeFi applications, along with the outstanding growth of the cryptocurrency market. This has resulted in a number of "not so pleasant" consequences for DApp developers in the Ethereum ecosystem: mainly, the network's inability to accommodate the increase in usage, leading to high gas costs (even more if you want your transaction to be validated in the next few blocks). At the time of writing, the average gas fee on the Ethereum mainnet is approximately $15.

This new scenario in the Ethereum ecosystem has shifted layer 2 improvements overnight from a "nice to have" feature to an absolute requirement for DApps to be able to operate sustainably in terms of performance and cost. Fortunately, we already have several solid layer 2 platforms and protocols to help us in this quest. One of the most promising foundational constructions for building layer 2 solutions is optimistic rollups. Many projects are built upon them, but how can we choose the layer 2 solution based on optimistic rollups that best fulfills our needs? This publication is an attempt to answer that question by comparing three promising layer 2 solutions based on optimistic rollups. Let's jump right into it!

What are rollups? And why are they optimistic?

In order to be able to compare different layer 2 solutions based on rollups, we first need to make a quick detour to understand what optimistic rollups are. Rollups are solutions that bundle (or “roll up”) sidechain or off-chain transactions into a single transaction that is then committed to L1. To secure all of these bundled transactions, and to make them individually verifiable, a cryptographic proof is generated from the bundle.

A requirement for rollups to work is to have some kind of Ethereum-compatible independent blockchain, with a reduced number of nodes or with additional features for high performance, that is responsible for handling signature verification, contract execution, etc. This makes the independent blockchain able to verify the validity of the transactions that are afterwards bundled for their commitment to the main Ethereum chain. L2 rollup sidechains are responsible for verification and contract execution, while the L1 exclusively stores immutable transaction data.

In optimistic rollups, participants are "optimistic" about the validity of the transactions being performed in the sidechain. There is no need for additional computation by aggregators to commit sidechain transactions into the main chain. And how can we be sure that sidechain transactions are actually valid? Optimistic rollups use fraud proofs to ensure that all transactions are legitimate. If someone notices a fraudulent transaction from an aggregator, the rollup can be challenged by sending a fraud proof to run the transaction's computation and verify its validity. This means that instead of performing a verification for every single transaction, as in other rollup solutions such as ZK-rollups, we only perform the proof computation if we suspect that a transaction is fraudulent. This significantly reduces gas costs compared to ZK-rollups, and opens the door to 10x-100x improvements in transaction throughput. After an invalid block has been committed and a fraud proof is finalized, the chain in layer 2 can be rolled back and resumed from the previous non-fraudulent block.
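
To make the "optimistic plus fraud proof" idea concrete, here is a toy, self-contained simulation of the control flow (my own sketch, not real rollup code): batches are accepted optimistically, a challenger re-executes them from the transaction data, and a mismatching state root triggers a rollback.

```python
import hashlib
from dataclasses import dataclass, field

def state_root(balances: dict) -> str:
    # Toy "state root": a hash of the sorted account balances.
    return hashlib.sha256(repr(sorted(balances.items())).encode()).hexdigest()

def apply_batch(balances: dict, batch: list) -> dict:
    new = dict(balances)
    for sender, receiver, amount in batch:
        new[sender] = new.get(sender, 0) - amount
        new[receiver] = new.get(receiver, 0) + amount
    return new

@dataclass
class L1RollupContract:
    # What L1 stores: the batched transaction data plus the claimed state roots.
    commitments: list = field(default_factory=list)

    def commit(self, batch, claimed_root):
        # Accepted optimistically: no verification happens at commit time.
        self.commitments.append((batch, claimed_root))

    def challenge(self, genesis_balances):
        # Fraud proof: re-execute every batch and look for the first wrong root.
        balances = dict(genesis_balances)
        for i, (batch, claimed_root) in enumerate(self.commitments):
            balances = apply_batch(balances, batch)
            if state_root(balances) != claimed_root:
                self.commitments = self.commitments[:i]  # roll back from the fraudulent batch
                return f"fraud proven at batch {i}, chain rolled back"
        return "no fraud found"

genesis = {"alice": 100, "bob": 0}
l1 = L1RollupContract()

# Honest batch: the claimed root matches the re-executed state.
batch1 = [("alice", "bob", 10)]
l1.commit(batch1, state_root(apply_batch(genesis, batch1)))

# Fraudulent batch: the sequencer claims a root for a state that never happened.
batch2 = [("alice", "bob", 5)]
l1.commit(batch2, state_root({"alice": 0, "bob": 100}))

print(l1.challenge(genesis))  # fraud proven at batch 1, chain rolled back
```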

Introducing the comparison contenders

After this brief introduction to rollups, we have all the foundations we need to tackle our layer 2 comparison. For this comparison I selected three of the layer 2 solutions that, in my opinion, have a more interesting set of features for DApp developers (i.e. the ones I would personally consider to deploy my own applications).

All of them share (more or less) the same building blocks: an Ethereum-compatible VM to run users' Solidity contracts in L2; sequencers/aggregators responsible for batching L2 transactions into bundles that are then committed to L1; a set of L1 smart contracts to orchestrate the interaction and commit the data from L2; the use of different fraud proofs so that peers are able to refute invalid or forged transactions committed by aggregators; and the use of a stake to orchestrate the incentives and economics of the L2 system.

Despite having building blocks in common, the three solutions differ significantly in the way they implement the rollup protocol. Let’s have a look at each of them in detail to get up to speed for our comparison.

Optimism:

Optimism leverages all the existing tooling in the Ethereum ecosystem, and modifies it to implement their optimistic protocol and layer 2 solution.

  • VM: Their L2 VM is the Optimism VM (OVM), a modification of the Ethereum VM (EVM) that replaces context-dependent EVM opcodes with new opcodes suitable for L2 contract execution. The VM behaves as a sandboxed environment which guarantees deterministic smart contract execution and state transitions between L1 and L2.

  • Client: Optimism also modifies the widespread Ethereum client, Geth, so it can be used as a client for the L2 chain. This client modifies messages so that they are understood by other L2 clients, and it includes all the processes required for the sequencing and batching of transactions in order to build the rollup.

  • Rollup construction: For their rollup construction, Optimism uses the Geth client as a single sequencer. In Optimism, transaction data is compressed and then sent to the Sequencer Entrypoint contract on L2. The sequencer is responsible for "rolling up" these transactions into a "batch" and publishing the data on Ethereum, providing data availability so that even if the sequencer disappears, a new sequencer can be launched to continue from where things were left off. Anyone can send new transactions to L1, and these transactions are added to an L1 contract that behaves as an "append-only log" for every L2 transaction.

  • Verification: For each transaction published by the sequencer, a verifier is responsible for downloading that transaction and applying it against their local state. If everything matches, they do nothing, but if there’s a mismatch, the verifier needs to submit on-chain all the valid previous transactions, and re-execute any state root published to show that the published state root was actually wrong. If the fraud verification succeeds, the wrong states and batches are pruned from L1.

  • Economic model: The sequencer of batches at every epoch needs to be marked as collateralized by a smart contract called the bond manager. To become a collateralized sequencer, a fixed amount of ETH needs to be staked in the contract. This stake is slashed every time fraud is detected for a sequencer. Sequencers can recover their stake 7 days after depositing, at which point their batches can be considered final, as no verification and slashing is possible anymore. If fraud is successfully proven, a percentage (X%) of the proposer's bond gets burned and the remaining (1-X)% gets distributed proportionally to every user that provided data for the fraud proof. This economic model prevents sequencers from going rogue, but doesn't address the potential case where verifiers send lots of fraud proofs for a large number of different batches in an attempt to spam the chain (forcing a lot of L1 computation).

Arbitrum:

  • VM and client: Arbitrum implements the Arbitrum Virtual Machine (AVM). The AVM is responsible for running L2 contracts and keeping their state. The state of the VM is organized as a Merkle tree, and execution produces state transitions over this Merkle tree. Arbitrum also implements its own custom L2 client.

  • Rollup construction: Arbitrum uses a single on-chain contract to orchestrate its rollup protocol. At any point in the protocol, there is some state of the VM that is fully confirmed and final, i.e. its hash is stored on-chain. New transactions in L2 trigger an update of the state of this Merkle tree that stores every state in the chain. To validate the stored states, participants of the protocol can make what Arbitrum calls a Disputable Assertion (DA), attesting that, starting from some state hash, the VM is able to execute a specified number of steps of computation resulting in a specified new state hash (with its corresponding contract execution, payments and event emission). The DA may end up being valid (i.e. the computation is successful) or invalid. If the DA is valid, the system will enter a new state, with a new state hash in the tree, and its corresponding side effects (payments and logs) specified in the DA. If the DA is invalid, the branch is rejected and the state is unchanged. Each state can have at most one DA following from it. If a DA has no following state, then anybody can create a DA that follows it, creating a new branch point. The result will be a tree of possible futures. So we can see that while Optimism uses several L1 smart contracts to commit the state and execution of the L2, the L1 construction of Arbitrum's rollup is based on storing on L1 a history of state roots that commits to the state of the L2 chain.

  • Verification: Once a DA's staking deadline has passed, and all of the timely stakes that remain (those placed before the staking deadline) are on the same branch from that DA, the system can confirm the result of that DA. The DA is either accepted or rejected, and the current state moves to the appropriate branch following the DA. If the DA is confirmed as valid, its side effects, such as payments, are effectuated on-chain. This is how the state of the VM moves forward. The protocol is completely trustless, as any participant is entitled to verify the state of the VM by staking on the branch they think is right.

Metis:

  • VM and client: Metis uses an EVM-compatible virtual machine, the Metis VM (MVM). The MVM differs significantly in functionality and features from the VMs of the projects above. In the MVM, computing and storage at L2 are completely decoupled. Metis introduces the concept of Decentralized Autonomous Companies (DACs). DACs are independent entities in the system that can represent, for instance, large-scale enterprises that perform many of their day-to-day operations over the platform. DACs are key for the operation of Metis. When a new DAC is instantiated in the system, a new storage layer is specifically created for it. Thus, DACs have their own storage with a view of their chain interactions.

Metis' L2 computing layer (i.e. block mining, consensus, cross-layer communications, etc.), on the other hand, is shared by all the DACs in the network, but it includes an interesting feature: the fact that all of the computing processes are implemented as individual services (following a microservice approach) allows the computation layer to be scaled up and down according to the overall network's needs and throughput. Furthermore, the MVM introduces the role of providers that can sign up and contribute computing power to make the layer 2 construct truly decentralized (these providers can be seen as the sequencers of the Optimism platform). Providers are incentivized based on the blocks they produce. Finally, a really powerful feature included in the MVM and the Metis client, which other L2 platforms lack, is support not only for contract execution, but also for decentralized storage linked to the computation of smart contracts. Thus, Metis integrates with the IPFS network through an IPFS resolver in the MVM, which allows contracts to point at immutable data stored in IPFS. This can be used, for instance, to point to confidential data stored in the IPFS network.

  • Rollup construction: In Metis, the sequencing and batching of L2 transactions is not done by a single sequencer but by a pool. Sequencers from the pool are randomly selected to roll up the state roots and submit the transactions to L1. At L1, Metis deploys a set of contracts that orchestrate the commitment of L2 batches to L1.

  • Economic model: Each sequencer needs to stake a number of Metis Tokens to be qualified. The fact that the Metis ecosystem has strong, real economic connections, with transaction values that can be in the billions, requires the use of a Dynamic Bond Threshold (DBT), so that the risk and reward of malicious behavior is linked to the real economic value managed by the DACs involved in the transactions. The DBT is calculated using as a base the maximum economic capacity of the DAC assigned to a sequencer, and the economic capacity of a DAC is computed according to its total balance. Thus, if the number of staked Metis Tokens (MT) of a particular sequencer is below the DBT of the DAC it is assigned to, it won't be able to batch transactions for that DAC. A DAC's sequencing is blocked until an eligible sequencer is found in the sequencer pool. New deposits or withdrawals of funds from the DAC's balance trigger automatic updates to its DBT: withdrawals from the DAC's balance reduce the required DBT of sequencers, and new deposits increase it. This ensures that the required sequencing collateral always follows the real economic value of a DAC.
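
Metis' whitepaper doesn't give a formula I can reproduce here, but the eligibility logic can be sketched roughly as below; the collateral ratio and the linear scaling are my own assumptions, chosen only to illustrate how the threshold tracks a DAC's balance.

```python
def dynamic_bond_threshold(dac_balance: float, collateral_ratio: float = 0.1) -> float:
    # Assumed shape: the required bond scales with the DAC's economic capacity.
    return dac_balance * collateral_ratio

def eligible_sequencers(sequencer_stakes: dict, dac_balance: float) -> list:
    threshold = dynamic_bond_threshold(dac_balance)
    return [seq for seq, stake in sequencer_stakes.items() if stake >= threshold]

stakes = {"seq-a": 5_000, "seq-b": 50_000, "seq-c": 200_000}  # staked MT per sequencer

print(eligible_sequencers(stakes, dac_balance=400_000))    # ['seq-b', 'seq-c']
# A large deposit raises the DBT, shrinking the pool of eligible sequencers
# until a sufficiently collateralized one is found.
print(eligible_sequencers(stakes, dac_balance=1_500_000))  # ['seq-c']
```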

  • Verification: For verification purposes, the Metis platform introduces the concept of L2 Rangers in the MVM. L2 Rangers are members of a special DAC that are responsible for sampling a range of blocks and validating the state roots according to the transactions periodically assigned to them from a random DAC. Rangers not only validate sequenced transitions for other DACs, they also do it for their own DAC (they oversee themselves). Each completed validation from a Ranger is rewarded with some Metis Tokens (MT). A successful challenge (i.e. fraud proof) to a state of the chain awards the validator a portion of the "malicious" sequencer's bond. On the other hand, a failed challenge will cause the Ranger validator to lose its bond and eventually lose access to MVM_RANGERS.

This verification method, where both sequencers and verifiers need to be collateralized, addresses one of the key issues we identified in the verification process of the Optimism platform: verifiers have no stake at risk when generating fake fraud proofs. The well-oiled coordination of collateralized sequencers and verifiers (i.e. L2 Rangers) also shortens the proving window, enhancing the network's efficiency. In protocols such as the one proposed by Optimism, transactions can't be considered final until the verification window has passed and verifiers have had enough time to send all their proofs. This is a direct consequence of verifiers not being collateralized: while there is an incentive for detecting invalid state updates, there is no large penalty for misbehaving as a verifier. Thus, in order to accommodate potential misbehavior, the finality window is increased to "allow everyone to speak". In Metis this is not needed because verifiers are collateralized, and misbehavior on their side translates into a loss of funds. Verifiers as well as sequencers have "skin in the game", which enables a reduction of the finality window, and this is the reason Metis is able to finalize transactions in hours instead of the 7 days other protocols such as Optimism need.

Ready to compare!

So without further ado, let’s put all of our contenders side-by-side for a last general view of the situation:

As depicted in the table (and as described in our explanations above), the three platforms are perfect fits to deploy your DApp on a performant L2 solution backed by the Ethereum mainnet as an L1. The specific decision will be largely determined by your performance, scalability, flexibility, and feature requirements. Metis is the most feature-rich platform of the three we've described: it supports decentralized storage by default, and includes additional performance and security schemes. The decoupling of storage, the use of DACs, and the DBT scheme make it a perfect fit for corporations (large or small). Optimism is a great option for Ethereum maximalists, as it uses every tool from the Ethereum ecosystem (no need for new concepts). Finally, Arbitrum's permissionless staking for state history verification makes it a really efficient and interesting proposal that allows faster verification times than standard rollup constructions while preventing delay attacks (although still a bit slower than Metis due to the flat architecture it uses).

In conclusion, there is no single right answer, but a consistent roster of optimistic L2 platforms from which to choose. I hope this comparison helps you make a more informed decision about which L2 to choose if you are planning to deploy a new DApp, or to migrate from L1 to L2.

@adlrocha - Traffic Management: Focusing on the elephant, ignoring the mice

Improving my math intuition series - Part II

I took more time than expected to finish this next paper from the algorithms course I am taking. I wasn't expecting it to be that long, and I had less time to read than initially planned, so I had to skip last week's publication to focus on reading. But I am back with my notes on the paper. This paper was an oldie but goodie from 2003: New Directions in Traffic Measurement and Accounting: Focusing on the Elephant and Ignoring the Mice.

📊 Class 2: New Directions in Traffic Measurement

The problem

Accurate network traffic measurement has always been something networking researchers and professionals have been interested in. We need these measurements for accounting purposes, bandwidth provisioning, or detecting DoS attacks. We may naively think that measuring traffic is pretty straightforward, “just count the packets from a specific flow that are passing through your network”. Unfortunately, this may be practically intractable. Keeping a counter for each flow is either too expensive (in SRAM), or too slow (in DRAM), and more so in 2003, when our hardware wasn’t as capable as now, and we didn’t have fast RAMs of hundreds of GBs available.

The state of the art at the time to circumvent this problem was to simply sample a subset of packets instead of counting every single one. This made traffic measurements tractable with the hardware available, albeit quite inaccurate. And accuracy is key for certain use cases, especially if there's money involved (like in accounting use cases), or when missing a large flow can be catastrophic.

For this paper, the authors build upon experimental studies that have shown how a small number of flows is generally responsible for a large share of a link’s capacity. They realized the following, “what if instead of measuring every single flow with the drawback of investing resources on measuring small flows, and harming the accuracy of large flow detection and measurement, we only measure large flows accurately and disregard small ones? In the end, large flows are the ones responsible for the bulk of the link’s traffic.” So they propose two new algorithms (and a bunch of optional improvements to them) to improve the resource requirements and performance of the state-of-the-art random sampling algorithms.

The algorithms and the math behind them

The basic ideas behind the proposed algorithms are, in my opinion, quite simple but extremely elegant. However, I think the key strength of the paper is not the design of the algorithms alone, but the theoretical analysis and comparison with the state-of-the-art random sampling algorithm; and their experimental evaluation to check the accuracy of their theoretical work.

Algorithm 1: Sample and Hold.

The simplest way to identify large flows is through random sampling of packets. If we see a lot of packets from the same flow when we sample, it potentially means that this flow is consuming a lot of traffic. But what if we are unlucky and always end up sampling packets from small flows?

In the sample and hold algorithm, we also start by randomly sampling packets. If a sampled packet belongs to a flow F1, we create an in-memory entry for that flow and start counting every single packet that belongs to F1, instead of continuing to sample it. If F1 is a large flow, we will be counting every single packet, and thus accurately measuring the flow. If, on the other hand, F1 is a small flow, we will eventually evict it from our flow memory when we run out of space (as we need to create space for new sampled flows), disregarding any previous measurement of F1.
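
Here is a minimal sketch of sample and hold in Python (my own simplification of the paper's idea: the eviction policy below is naive, while the paper dimensions the flow memory and preserves entries across intervals instead):

```python
import random

def sample_and_hold(packets, p, memory_size):
    """packets: iterable of (flow_id, size_in_bytes) pairs.
    p: per-byte sampling probability.
    Returns the per-flow byte counts held in flow memory."""
    flow_memory = {}
    for flow_id, size in packets:
        if flow_id in flow_memory:
            # Once a flow is in memory, every one of its packets is counted.
            flow_memory[flow_id] += size
            continue
        # Sampling each byte with probability p means the packet itself is
        # sampled with probability 1 - (1 - p)^size.
        if random.random() < 1 - (1 - p) ** size:
            if len(flow_memory) >= memory_size:
                # Naive eviction: drop the smallest entry (presumably a mouse).
                del flow_memory[min(flow_memory, key=flow_memory.get)]
            flow_memory[flow_id] = size
    return flow_memory

# One elephant flow hidden among many mice.
traffic = [("elephant", 1500)] * 5_000 + [(f"mouse-{i}", 100) for i in range(20_000)]
random.shuffle(traffic)

counts = sample_and_hold(traffic, p=1e-4, memory_size=50)
print(sorted(counts.items(), key=lambda kv: -kv[1])[:3])  # the elephant dominates
```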

The theoretical analysis showing that sample and hold is more accurate while requiring less memory than random sampling can be tackled with many of the things we learned in the Bloom filters paper from last time. Briefly, if p is the probability with which we sample a byte, the sampling probability for a packet of size s is p_s = 1 - (1-p)^s, which again we can approximate using the approximation we learned in the previous paper: p_s ≈ 1 - e^(-p·s).

If we wish to sample each byte with probability p such that the average number of samples is 10,000, and if C bytes can be transmitted in the measurement interval, then p = 10,000/C. For the error analysis, consider a flow F that takes 1% of the traffic (i.e. the threshold T above which a flow is considered large is 1% of the total capacity of the channel). Thus F sends more than C/100 bytes. Since we are randomly sampling each byte with probability 10,000/C, the probability that F will not be in the flow memory at the end of the measurement interval (a false negative) is (1 − 10,000/C)^(C/100), which (again, by approximation) is very close to e^(-100), i.e. vanishingly small.
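
Plugging in a concrete capacity makes the approximation tangible (the value of C below is just an example; the point is that (1 - p)^n ≈ e^(-p·n) for small p):

```python
import math

C = 10**9          # example: 1 GB transferred in the measurement interval
p = 10_000 / C     # per-byte sampling probability (10,000 expected samples)
n = C // 100       # a 1%-of-capacity flow sends at least C/100 bytes

miss_probability = (1 - p) ** n     # exact probability of missing the flow entirely
approximation = math.exp(-p * n)    # e^-100: vanishingly small

print(miss_probability, approximation)
```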

So once again, simple statistics can get us pretty far. The paper includes a much deeper theoretical analysis covering the error bounds, the memory required when oversampling to avoid false negatives, and other interesting improvements. For the error, a geometric probability distribution is used to model the number of bytes that go by before the first packet of a flow is sampled. With this, we can approximate the error bound, since once the first packet is sampled, every subsequent packet of the flow is counted (so no further error is possible, as we are measuring exactly).

For the memory usage, the size of the flow memory is determined by the number of flows identified. The actual number of sampled packets is an upper bound on the number of entries needed in the flow memory, as new entries are created only for sampled packets. Assuming that the link is constantly busy, by the linearity of expectation, the expected number of sampled bytes is p · C = O · C/T, where O is the oversampling factor, p = O/T (i.e. the additional sampling done to ensure a low false negative rate).

Algorithm 2: Multistage Filters

The multistage filters algorithm is an attempt to further reduce the probability of false positives (small flows identified as large flows) and false negatives (large flows that circumvent our measurements).

The idea behind multistage filters will remind you a lot of the counting Bloom filters from the last paper. The basic building block of a multistage filter is a set of hash stages that operate in parallel. Each stage has a table of counters indexed by a hash computed on the packet's flow id. All counters are initialized to zero at the beginning of the measurement period. When a packet for a flow F comes in, F's id is hashed, and the counter at the resulting index is increased by the size of the packet. We do this with a different hash function in every stage (so in each stage this maps to an increase in a different counter). Since all packets from a flow F will increase the same counters, when the counters of all stages for a flow reach our "large flow" threshold T, the flow is considered large and we add an entry for it to our flow memory. From there on, we count every single packet for the flow without going through the multistage filter, in order to measure it accurately.

The number of counters in our multistage filters will be smaller than the number of flows we may potentially encounter, so many flows will map to the same counters (as also happens for the members of a set in a Bloom filter). This can cause false positives in two ways: first, small flows can map to counters that hold large flows and get added to flow memory; second, several small flows can hash to the same counter and add up to a number larger than the threshold. We can circumvent this in several ways: one option is to increase the number of stages to reduce the probability of false positives. Effectively, the multiple stages attenuate the probability of false positives exponentially in the number of stages. The paper describes other ways in which this can be minimized, like using serial instead of parallel stages, or the use of shielding, but I will leave that for the interested reader.
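
A compact sketch of a parallel multistage filter (again a simplification of mine: real implementations use fixed-size counter arrays in fast SRAM and hardware-friendly hashes, and the paper's conservative-update and shielding optimizations are omitted):

```python
import hashlib

class MultistageFilter:
    def __init__(self, stages=4, buckets=1_000, threshold=1_000_000):
        self.threshold = threshold
        self.buckets = buckets
        self.counters = [[0] * buckets for _ in range(stages)]
        self.flow_memory = {}  # flows already identified as large, counted exactly

    def _bucket(self, stage, flow_id):
        # One independent hash per stage (salted SHA-1 as a software stand-in).
        digest = hashlib.sha1(f"{stage}:{flow_id}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.buckets

    def update(self, flow_id, size):
        if flow_id in self.flow_memory:
            self.flow_memory[flow_id] += size  # measured exactly from now on
            return
        passed = 0
        for stage, counters in enumerate(self.counters):
            b = self._bucket(stage, flow_id)
            counters[b] += size
            if counters[b] >= self.threshold:
                passed += 1
        if passed == len(self.counters):
            # Every stage crossed the threshold: treat the flow as an elephant.
            self.flow_memory[flow_id] = size

f = MultistageFilter(stages=4, buckets=1_000, threshold=100_000)
for _ in range(200):              # an elephant sending 200 packets of 1,500 bytes
    f.update("elephant", 1500)
for i in range(5_000):            # plenty of mice sending a single small packet
    f.update(f"mouse-{i}", 100)
print(f.flow_memory)              # the elephant passes all stages; mice almost never do
```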

The theoretical analysis for the multistage filter is a bit more complex (I had to read it several times, and have a look at the proofs of the appendix to grasp every detail). Fortunately, the preliminary analysis is pretty straightforward:

"Assume a 100 Mbytes/s link, with 100,000 flows and we want to identify the flows above 1% of the link during a one second measurement interval. Assume each stage has 1,000 buckets and a threshold of 1 Mbyte. [...] For this flow to pass one stage, the other flows need to add up to 1 Mbyte − 100 Kbytes = 900 Kbytes. There are at most 99,900/900 = 111 such buckets out of the 1,000 at each stage. Therefore, the probability of passing one stage is at most 11.1%. With 4 independent stages, the probability that a certain flow no larger than 100 Kbytes passes all 4 stages is the product of the individual stage probabilities, which is at most 1.52 * 10^−4. Based on this analysis, we can dimension the flow memory so that it is large enough to accommodate all flows that pass the filter. The expected number of flows below 100 Kbytes passing the filter is at most 100,000 * 1.52 * 10^−4 < 16. There can be at most 999 flows above 100 Kbytes, so the number of entries we expect to accommodate all flows is at most 1,015."
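
The arithmetic in that quote is easy to re-check (the numbers are the paper's, not mine):

```python
stages = 4
buckets = 1_000
stage_threshold = 1_000_000       # bytes: the 1% threshold for a 100 Mbyte interval
flow_size = 100_000               # a small flow we do NOT want to misclassify
total_traffic = 100_000_000       # bytes seen during the one second interval

# Other flows must contribute at least 900 Kbytes to a bucket for it to pass a stage.
full_buckets = (total_traffic - flow_size) // (stage_threshold - flow_size)  # 111
p_stage = full_buckets / buckets                                             # 0.111
p_all_stages = p_stage ** stages                                             # ~1.52e-4

expected_false_positives = 100_000 * p_all_stages        # ~15.2, i.e. < 16
max_large_flows = total_traffic // flow_size - 1         # 999

print(full_buckets, p_stage, p_all_stages)
print(expected_false_positives, max_large_flows + expected_false_positives)  # < 1,015
```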

Section 4 introduces a more rigorous analysis that proves a stronger bound for any distribution of flow sizes. The above analysis makes no assumption about the distribution of flow sizes, but if we consider, for instance, that the flow sizes follow a Zipf distribution, the resulting bound is lower than the conservative one from the proof above. If you are not familiar with the Zipf distribution, it is all over the Internet and Complexity Theory, so it is a good thing to add to your tool belt. Finally, to understand every lemma and theorem in the paper I highly recommend checking out the first appendix, it is really instructive.

Measurement results

Something I found interesting about the paper is how, after the thorough theoretical analysis of the algorithms, the experimental measurements show results which are lower than the ones from the theoretical analysis. At first I thought it made sense because in the theoretical analysis we were approximating upper and lower bounds, but actually some of the comparisons use the average. Honestly, this is something I missed in this first pass over the paper, and something I am hoping to dive deeper into on my next pass.

Where can these algorithms be used?

In section 1.2, the paper motivates how these algorithms that only measure large flows accurately and disregard the “mice” flows can be useful. I think this motivation is key in order to understand the design decisions and the applicability of the paper:

  • Scalable Threshold Accounting: "The two poles of pricing for network traffic are usage based (e.g., a price per byte for each flow) or duration based (e.g., a fixed price based on duration) [...] We suggest, instead, a scheme where we measure all aggregates that are above z% of the link; such traffic is subject to usage based pricing, while the remaining traffic is subject to duration based pricing". Cool, right? This is a great workaround for when you don't have a practical way of accurately accounting for all usage of your service. This idea was the one that made me consider including this section in the publication.

  • Real-time Traffic Monitoring: "Many ISPs monitor backbones for hotspots in order to identify large traffic aggregates that can be rerouted to reduce congestion." Makes sense, right? Why worry about small flows when what we are interested in is detecting congestion?

  • Scalable Queue Management: "At a smaller time scale, scheduling mechanisms seeking to approximate max-min fairness need to detect and penalize flows sending above their fair rate". If you took any theoretical networking course at college, you have probably learned to love and hate max-min fairness as much as I have.

  • Finally, the authors note how measurement problems (data volume, high speeds) in networking are similar to the measurement problems faced by other areas such as data mining, architecture, and even compilers. Thus the techniques in this paper may be useful in other areas (especially if you have measurement constraints) - read Section 9.

Tasks for next week 📰

Overall, this is a great paper to read if you find the time. We've encountered a few more mathematical concepts which can come in handy in the future; we went a bit deeper into how to theoretically (and thoroughly) analyze an algorithm's performance; and we now know why disregarding the measurement of small flows may be fine. There is much more in the paper, like implementation issues, or a discussion on why random sampling may still be a better option than the proposed algorithms in certain cases. I will leave these things out of the scope of this publication for the sake of "brevity".

Next week, I have an interesting task ahead. The paper to read is An Improved Data Stream Summary: The Count-Min Sketch and its Applications, but this time my self-assigned task is not only to read the paper, but also to answer the following question:

"Compare and contrast the mice/elephants paper and the count-min sketch paper. How do they describe and define the underlying problem(s) they are considering? How do they formalize their solution(s)? How do they compare?"

So it's not only reading time but also thinking+discussion time. Do you want to join the discussion? Do not hesitate to ping me.
