@adlrocha - The State of DEXes

Decentralizing the gateways to crypto

Decentralized Exchanges (DEX) are a key foundation for the DeFi ecosystem. They give you the ability to trade and swap one cryptocurrency for another peer-to-peer, without the need for third parties such as a centralized exchange or traditional financial institutions.

Still, we have to ask: Why worry about implementing decentralized exchanges if we already have their centralized counterparts? How do DEXes actually work, and more importantly, why are they important for the DeFi space? 

Decentralizing exchanges

If you are familiar with centralized exchanges (CEX), you will have no trouble understanding how decentralized exchanges work. In a centralized exchange, a central entity or corporation (see Coinbase) facilitates the trades between their users through a centralized order book which tracks every order in the platform. CEXes are responsible for aggregating these orders, matching them, and executing the actual buy and sell transactions on behalf of users. 

The fact that users need to delegate the execution of transactions to the exchange translates into them not having full ownership of their keys, as CEXes need to be able to send transactions on traders’ behalf to execute their orders. In practice, what CEXes do is pool users’ cryptocurrencies in a number of “hot” wallets controlled by the exchange, which are used to execute the actual orders. In many cases, if an order can be matched between users on the platform, the exchange doesn’t need to execute a transaction on the blockchain at all. 

What they do instead is update the balance allowances of the corresponding cryptocurrency for the users involved in the exchange on their centralized platform’s database. In the end, CEXes can be seen as traditional stock exchange brokers, but for cryptocurrencies. They serve as gateways between users and the underlying assets, offering an interface to interact with them. This makes them really convenient to use, especially for newcomers. But they also have their drawbacks and risks, as we’ll describe in a moment.

Decentralized exchanges, on the other hand, do not rely on any centralized platform or third party to execute user orders. DEXes are able to perform the core operations of a centralized exchange and leverage a set of smart contracts to do it in a decentralized way. DEXes and their underlying infrastructure are also responsible for: receiving user orders, keeping the order book updated, matching orders, and executing them. In the case of DEXes, token exchanges are not done through a third party. Instead they are performed 1-to-1 on-chain between individuals, i.e. a full peer-to-peer exchange. Transactions are triggered by users in the corresponding blockchain, so they remain in full control of their credentials at all times.

CEX vs. DEX, fight!

Both CEXes and DEXes have their own advantages and drawbacks that you should be aware of before choosing one or the other.

  • Ownership of credentials: This was briefly introduced above, and is the most obvious difference. While in CEXes the exchange is the custodian of your keys and your funds, in DEXes every order and every transaction is done directly by the user. Consequently, DEXes support the use of hardware wallets, and give you full control and responsibility over your keys. But with great power comes great responsibility, which means that if you lose control of your keys or your seed phrase, you immediately lose access to your funds without anyone being able to recover them.   

  • Liquidity: Traditionally, CEXes have been more liquid than DEXes, although this is gradually changing. For now though, CEXes have more users than DEXes, which translates into them having more liquidity on their platforms. Also, when trading on a CEX you’re only allowed to trade the tokens listed by the platform, which makes it easier to match orders, and for the corporation behind the exchange to provision the system with additional liquidity if needed by adding funds to their hot wallets. 

    That said, the gap between CEX and DEX liquidity is narrowing, because increased interest in DeFi has resulted in a spike in DEX users. Also, many DEXes are becoming what we call “Automated Market Makers”, which make use of liquidity pools, making liquidity less of a problem.

  • Token pairs: DEXes enable true peer-to-peer exchanges between their users. As long as two users are willing to exchange one asset for another, these token pairs will be supported by the exchange. On a CEX, however, tokens need to be explicitly listed for users to be able to trade them. CEXes need to implement the pair exchange to support these token trades, limiting the tokens that can be traded on them.

  • Ease of use: CEXes are full-fledged trading platforms. They are an interface between the different blockchains and their users. The fact that there is a single entity operating the platform and orchestrating orders and transactions with the market means that they can build features that are hard to code in a decentralized manner, using smart contracts (such as limit orders, stop losses, and other cool features from traditional financial markets).

    Also, through their centralized platforms CEXes usually offer the direct purchase of tokens using fiat money. This is why they are so convenient for users looking to make their first crypto investment. In spite of there being ways to achieve this, DEXes in general do not support exchanges between crypto and fiat, which implies that someone looking to use a DEX needs to already own some crypto.

  • Security: This is a huge win for DEXes. It’s no secret that centralized exchanges have been hacked multiple times. Delegating the custody of your keys to the exchange lightens the burden of key management from you, but increases the risks (and the rewards) in the centralized exchange. While with DEXes an attacker needs to compromise the keys of every user to gain access to their funds, in centralized exchanges an attacker can gain access to all the funds in the exchange just by being able to hack the platform and compromise the keys of the hot wallets used to manage and guard users' funds.

  • Privacy and KYC: Regulators have an easier time regulating CEXes when compared to DEXes, because as their name implies they’re run by a single central entity. Regulators in almost every country in the world force CEXes to implement KYC (Know Your Customer) protocols for users to prevent money laundering and other illegal activities. Thus, on CEXes you are not trading privately anymore: the exchange knows each and every transaction you make, and may even need to inform the state of all your transactions. This is not the case for DEXes, where all you need to start trading is an identity in the blockchain and some tokens to exchange. 

  • Fees: CEXes are significantly more expensive than DEXes. For DEXes, the blockchain is their main infrastructure, while CEXes need to operate their own systems, which are also responsible for keeping your keys safe. These services need to be paid for in some way, which drives up the fees they charge.

How to build a DEX

You now have a clear view of the advantages and disadvantages of DEXes and CEXes. But there are also slight differences between different DEX platforms, depending on how they’re implemented. There are mainly three different approaches for implementing DEXes:

  • Using an on-chain order book. In this design, every transaction is written in the blockchain. Not just the actual purchase or exchange between user balances, but also user orders, i.e. user requests to buy or sell. It is the ultimate decentralization of exchange platforms. However, every operation needs to be completed on-chain, with its corresponding high cost and scalability limitations. Some examples of DEXes that use an on-chain order book are Bitshares and StellarTerm.

  • An alternative to this is to use off-chain order books. In this case, user orders are collected and matched off-chain, while the final transaction is settled on-chain. Since orders aren’t stored on-chain, this method can run into some of the security risks of centralized exchanges, but it doesn’t have the limitations of on-chain order books. In this approach, we trade some decentralization for performance and cost (often a huge dilemma in the blockchain and crypto space). Examples of DEXes that use off-chain order books are Binance and EtherDelta.

  • Finally, we have the new sheriffs in town, Automated Market Makers (AMM), which forgo order books altogether. With order books, if someone wants to exchange token A for token B, there needs to be someone willing to trade B for A at an agreed-upon price. Without enough volume in the exchange this can be extremely hard. AMMs remove the need for counterparties for orders to be matched, and introduce algorithms to set the price, letting you trade A for B regardless of whether there’s someone on the other end of the trade. This is facilitated through the liquidity pools we mentioned above. Briefly, platforms that use liquidity pools pay their users interest in exchange for keeping their funds in the smart contract that operates the exchange, so they can be tapped for trades. In this approach, individual users play the role of financial institutions in traditional markets, ensuring that the market stays liquid at all times. 

    Cool, right? In practice, this is implemented in a smart contract that maintains user pools, pays them an interest rate for their funds, receives the trade orders from users, and automatically executes them against the pool if it has the required funds. AMMs also require every transaction to be performed on-chain in order to interact with the platform. This has an impact in terms of performance and cost, as transactions need to be made on L1, which means slower throughput and higher fees. Some examples of AMMs on the Ethereum blockchain are Uniswap and Sushiswap (a minimal sketch of how such a pool can price trades follows below).
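As referenced above, here is a minimal sketch of the constant-product rule (x · y = k) popularized by Uniswap v2, which is one common way AMMs price trades against a liquidity pool. The pool sizes, the fee, and the token names are illustrative assumptions, not real contract code.

```python
# Minimal constant-product AMM sketch (x * y = k), ignoring slippage protection,
# integer rounding, and everything else a real contract has to handle.

class ConstantProductPool:
    def __init__(self, reserve_a: float, reserve_b: float, fee: float = 0.003):
        self.reserve_a = reserve_a  # liquidity deposited by LPs in token A
        self.reserve_b = reserve_b  # liquidity deposited by LPs in token B
        self.fee = fee              # swap fee that accrues to liquidity providers

    def get_amount_out(self, amount_in_a: float) -> float:
        """How much of token B a trader receives for amount_in_a of token A."""
        amount_in_after_fee = amount_in_a * (1 - self.fee)
        k = self.reserve_a * self.reserve_b           # invariant before the trade
        new_reserve_a = self.reserve_a + amount_in_after_fee
        new_reserve_b = k / new_reserve_a             # keep x * y = k
        return self.reserve_b - new_reserve_b

    def swap_a_for_b(self, amount_in_a: float) -> float:
        amount_out_b = self.get_amount_out(amount_in_a)
        self.reserve_a += amount_in_a
        self.reserve_b -= amount_out_b
        return amount_out_b

# Example: a pool with 1,000 A and 500 B. The larger the trade relative to the
# pool, the worse the effective price (price impact), which is why deep
# liquidity matters so much for AMMs.
pool = ConstantProductPool(1_000, 500)
print(pool.swap_a_for_b(10))   # ~4.94 B for 10 A
print(pool.swap_a_for_b(100))  # noticeably worse rate for a bigger trade
```

Note that no counterparty appears anywhere: the price is a pure function of the pool’s reserves, which is exactly what lets an AMM quote a trade even when nobody is on the other side.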

DEXes in action!

A DEX is easy to understand in principle, but it is not the easiest DeFi service to use. Let’s illustrate this with a quick walk-through of how to perform a token exchange with Uniswap. The first thing that needs to be done is to get our hands on some Ether. Even if we don’t want to exchange Ether, we’ll need some to pay for the transactions required to place the order and execute the trade in the system.

Once we have our wallet full of Ether, we can go to Uniswap and connect our wallet, for instance Metamask.

With our wallet connected, we can start swapping (i.e. trading or exchanging) tokens right away. Uniswap will find the most efficient swap to go from one asset to the other according to the liquidity available in the pool. To perform any trade we’ll have to pay a few fees: the basic fee to pay for gas to execute the transaction on Layer 1, and an additional fee to pay the liquidity providers, who are staking their tokens to add liquidity to the system and make our trades possible.

We sign and submit the transaction. 

Then we just have to wait for the transaction to be executed in the smart contracts and for the swap to become effective, and we are good to go.
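For the curious, here is a rough sketch of what a swap like this boils down to under the hood when done programmatically instead of through the web UI. It assumes web3.py (v6-style calls) and a Uniswap v2-style router; every address, key, and amount below is a placeholder, and a real script would also need an ERC-20 approval for the router and a proper slippage-protected minimum output.

```python
from web3 import Web3

# Placeholders: fill in your own RPC endpoint, addresses, and key.
RPC_URL = "https://mainnet.infura.io/v3/<your-key>"
ROUTER_ADDRESS = "0x..."   # a Uniswap v2-style router contract (verify before use)
TOKEN_IN = "0x..."         # token being sold (must be approved for the router first)
TOKEN_OUT = "0x..."        # token to receive
MY_ADDRESS = "0x..."
PRIVATE_KEY = "<never hardcode this in real code>"

# Minimal ABI fragment for the single router function we call.
ROUTER_ABI = [{
    "name": "swapExactTokensForTokens",
    "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [
        {"name": "amountIn", "type": "uint256"},
        {"name": "amountOutMin", "type": "uint256"},
        {"name": "path", "type": "address[]"},
        {"name": "to", "type": "address"},
        {"name": "deadline", "type": "uint256"},
    ],
    "outputs": [{"name": "amounts", "type": "uint256[]"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
router = w3.eth.contract(address=ROUTER_ADDRESS, abi=ROUTER_ABI)

amount_in = 10 * 10**18      # 10 tokens, assuming 18 decimals
amount_out_min = 0           # in practice, set a slippage-protected minimum
deadline = w3.eth.get_block("latest")["timestamp"] + 600

# Build, sign, and submit the swap transaction.
tx = router.functions.swapExactTokensForTokens(
    amount_in, amount_out_min, [TOKEN_IN, TOKEN_OUT], MY_ADDRESS, deadline
).build_transaction({
    "from": MY_ADDRESS,
    "nonce": w3.eth.get_transaction_count(MY_ADDRESS),
})
signed = w3.eth.account.sign_transaction(tx, private_key=PRIVATE_KEY)
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
print("swap submitted:", tx_hash.hex())
```

The Uniswap web UI hides all of this: it quotes the expected output, sets the slippage bounds, and hands the transaction to our wallet (e.g. Metamask) for signing.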

But there’s still more… L2 DEX

Uniswap is built on top of Ethereum’s L1. But what if we don’t want to pay Ethereum’s high fees, or worry about whether our transactions are going through when the network is congested? To overcome this issue, different projects that build DEXes on Layer 2 are emerging. 

A good example of this type of DEX is MetisSwap. MetisSwap is a Layer 2 Decentralized Exchange application built on Metis Layer 2 Beta Testnet.

If you recall from previous publications, Metis is a Layer 2 platform based on Optimistic rollups that connects to the Ethereum mainnet and adds numerous additional features to the standard L2 projects. The team behind Metis recently released the Beta version of its testnet, which includes an implementation of a Uniswap hard fork DEX called MetisSwap. As you can see in this post, using MetisSwap is quite straightforward if you know how to use other DEXes, such as Uniswap. So what exactly does MetisSwap give us when compared to other DEXes? 

  • It’s built on Metis’s L2 platform, so instead of having to commit every swap transaction to the Ethereum mainnet and pay its corresponding fees, you can trade over L2 and pay Metis’s typical transaction fees of about 1 cent, with the enhanced performance and transaction throughput of an L2 platform. These transactions will eventually be committed on-chain through Metis’ Optimistic rollup, but this is transparent to the user, which makes it really convenient.

  • Metis has built-in support for DACs (Decentralized Autonomous Companies), which are able to seamlessly create their own tokens. So if you are looking to launch your own crypto project and you want to allow your users to exchange your tokens, you don’t have to worry about creating your own ERC20, having your token listed on an exchange, or waiting for there to be enough liquidity for your token to be exchanged. With MetisSwap, you can create your own token in a few clicks, and start exchanging it for other tokens over L2 without having to write a single line of code.

In other words, MetisSwap offers all of the built-in features of Uniswap but with the advantage of using a L2 platform, and with the added bonus of allowing you to mint and swap your own token as desired.

Closing words

DEXes have seen an increase in interest since the surge of DeFi use cases. Users want to be able to swap their tokens seamlessly without having to rely on third parties, and there are multiple ways to do so. That includes the fully decentralized approach of an on-chain order book and the innovative approach of a Layer 2 AMM, which removes much of the complexity and limitations of more traditional approaches, providing the perfect setup for DeFi users without deep technical knowledge. 

Layer 2 DEXes will be able to take DEXes much further when it comes to matching the ease of use, convenience, and feature richness of CEXes... with the added security that comes with decentralization. Stay tuned, the future looks bright!

@adlrocha - The siege of open source software?

Digressions on Github Copilot and more

Github Copilot’s beta is out! And with it, a heated debate on the use of open source software by big tech companies. For those of you who haven’t read about it yet, Github Copilot is an AI tool in the form of (at least for now) a VS Code extension that helps you write code by “giving suggestions for whole lines or entire functions inside your editor”. Github Copilot is powered by OpenAI, and is trained on billions of lines of public code to achieve its task. It uses an OpenAI engine called OpenAI Codex, which is more capable than GPT-3 at code generation (in the end it seems to be a GPT-3 trained on billions of lines of code).

So far so good, it is not as if Github Copilot was the first AI tool of its kind to help developers in their job of writing code. You are probably familiar with Tabnine, which has been around for a few years now and does basically the same as Github Copilot.

What’s all the fuss about then? Well, this may have been implicit in the case of Tabnine, but since it was a small startup no one really cared that much. In the case of Github Copilot it is blatant: these tools have been trained using the code you’ve worked so hard to produce, and they are suggesting snippets of it to other developers.

Initial reactions to Github Copilot went from the regular “oh man, we are doomed, we’ll be out of our jobs in no time” to “my productivity will skyrocket with this tool”.

Then people started thinking a bit more deeply, and reactions looked a bit more like this:

I sometimes use this joke with my non-engineer friends to explain what I do for a living (and for fun): “we developers are just a dumb interface between StackOverflow and the application/system we want to implement”. So it was only a matter of time before counter-arguments in favor of Github Copilot along the following lines arose:

But there’s a huge difference between using Github Copilot-generated code and code snippets from StackOverflow in your program: the source of the code. When you use code from a StackOverflow thread, the person answering that question is willingly sharing their code snippet with you (and others) to help you. Copilot-generated code may be inferred from pieces of code from one of your repositories that for some reason you may be quite hesitant to share: either because it is protected by a non-permissive license, or because you worked hard on it and you are too selfish to share it with others. It doesn’t matter; the thing is that you own that code, and you should be free to do whatever you want with it.

The fact that you are hosting the code in Github shouldn’t be enough reason for anyone to use it to train their AI. I don’t think we signed up for this when we created a Github account (or at least I am personally not aware of it; maybe there’s something in Github’s terms and conditions that covers this. Please let me know if this is the case).

Licenses are there for a reason

Github is full of open source code under permissive licenses that anyone can openly read and use in their own projects without having to ask permission from anyone. However, depending on the specific license used, there may be some constraints, requirements, and limitations on the use of the code. We may be able to use code from a project as long as we don’t profit from the derived work; or there are certain licenses that allow the use of the project’s code as long as every work derived from it is open source under the same license.

Github Copilot would have been a great tool if it had been trained exclusively on code under permissive licenses that don’t require acknowledging the original author of the code (or impose other license-related requirements). Or better yet, if the code used to train the model and make suggestions could be chosen according to the license of the project the developer is working on.

Developers of open source software use licenses as the communication channel to let other developers and users know what they are allowed to do or not with their work. But my feeling after reading tweets like the following is that Github didn’t pay much attention to this when implementing and training Copilot:

And don’t get me wrong, I am totally in favor of tools like this that improve our productivity and can make our lives easier, even if they need billions of lines of code to be trained, as long as this is done the right way. If you’ve contributed to open source software, or even just hosted your code on Github without a specific license, you are a small part of the reason why Github Copilot works. Have you been (or will you be) rewarded in any way for this contribution? Not at all. You’ll probably have to start paying a subscription if you want to start using Github Copilot.

Github will be profiting from your work, probably even in the case where you explicitly stated in the license of your project that no-one could profit from work derived from your work. Unfortunately, there is no easy way of enforcing this. And what would have happened if everyone started doing things like this?

Many may argue that the same way you can use Google services for free in exchange for your data (which is essentially yours and you are the only owner); Github can use your code to train their models in exchange for all that free hosting and unlimited private repositories you get. But while this is quite clear when you create a Google account, I don’t think this is that clear when creating one in Github.

Time to self-host our critical services?

All of this makes me quite sad. The fact that we rely more and more on big tech services for our day-to-day lives means that we are quite defenseless against them building other Copilot-like projects using our hard work and personal data. You want to use Github? Then you have to deal with them doing what they want with your code. Period. This is just one more example of how broken the Internet and its dynamics are these days. But what can we do to solve it? I can’t see an easy solution, apart from building an Internet substrate that enables people to escape these twisted dynamics.

Without this substrate, where everyone can own their own piece of the Internet without having to rely on others, the only escape from the influence of big tech, and from moves like Github’s, is to self-host all the critical services you rely on, as this person has done. Quoting from that website: 

“I do not agree with GitHub's unauthorized and unlicensed use of copyrighted source code as training data for their ML-powered GitHub Copilot product. This product injects source code derived from copyrighted sources into the software of their customers without informing them of the license of the original source code. This significantly eases unauthorized and unlicensed use of a copyright holder's work.

I consider this a severe attack on the rights of copyright holders so therefore I cannot continue to rely on GitHub's services”

But this is not the panacea. Self-hosting every online critical service we depend on in our day to day is a lot of work. We have to worry about hosting the infrastructure for the service, maintenance, upgrades, security risks, etc. Of course it depends on the level of control you want over your services, but you most probably won’t be able to achieve the SLA of big tech services. 

If in spite of all of these inconveniences you still want to start hosting some of these services yourself, this repo is a great start. It walks you through how to deploy a list of super useful services: your own VPN, web hosting, cloud storage, calendar, chat server, and a long list of other self-hosted open source alternatives.

Can open source software be closed?

Unfortunately, Github Copilot is not an isolated example of how big tech is laying siege to open source software. Visual Studio Code is another interesting case I recently learned about. You may be thinking that when you install Visual Studio Code on your machine, what you are using is a build of the open source code hosted in this repo. Well, apparently this is not the case, and you would be better off downloading the code and building it yourself.

“Microsoft’s vscode source code is open source (MIT-licensed), but the product available for download (Visual Studio Code) is licensed under this not-FLOSS license and contains telemetry/tracking. According to this comment from a Visual Studio Code maintainer:

When we [Microsoft] build Visual Studio Code, we do exactly this. We clone the vscode repository, we lay down a customized product.json that has Microsoft specific functionality (telemetry, gallery, logo, etc.), and then produce a build that we release under our license.

When you clone and build from the vscode repo, none of these endpoints are configured in the default product.json. Therefore, you generate a “clean” build, without the Microsoft customizations, which is by default licensed under the MIT license.”

This is why projects like VSCodium, a free/libre open source software binary of VSCode, have to exist. Apparently every time we use VSCode we are sending data to Microsoft. Some people may be comfortable with and aware of these practices, but others may think this is outrageous. Why aren’t these companies more transparent about what they do with their users’ data, so that at least people can make an informed decision about whether to use them or not? Is it because they know what the answer would be? 

This is a weaker example of the siege of open source software than Github Copilot, but still one worth being aware of. I personally don’t expect these kinds of cases to stop any time soon.

Elastic is another example of a company that has made a related move in this siege of open source software, by changing the license of some of its projects (which millions of people probably depend on) to a more restrictive one to increase its profit. Again, I am not against companies profiting from their work and the projects they create; that is legitimate and awesome, and I personally would do the same. What I am against is “changing the rules of the game midway”. 

I haven’t talked to any contributor to Elasticsearch, for instance, but I am really curious to know how they felt when they learned that the open source project they had worked so hard on, which they thought was protected under a specific license, eventually moved to a more restrictive one. They probably shared the values of the project they were contributing to, and overnight, because someone unilaterally chose to, one of the key foundations of the project they had voluntarily worked hard on changed. 

Developers should be more aware of the licenses of the projects they contribute to, and of their consequences; meanwhile, the companies behind that software should be more respectful of their licenses and of their contributors. It all comes down to rewarding everyone fairly for their hard work, because open source software may seem free by design, but behind it there is a lot of hard work, and ethics should prevail. Even if this reward is just sticking to the initial values of the project out of respect for its contributors. Ask anyone: open source software is almost never about the money.

Can we fix it?

But coming back to potential solutions to the problem at hand: what if you want to self-host your own services without the overhead of maintaining them, and without the risk that a poor SLA makes them unusable on a daily basis? Here is where a new substrate for the Internet is needed. A substrate where we can be in control of our data and our services. Regular readers of this newsletter know what is coming next: we need to fix the Internet, and decentralizing it to minimize our reliance on big tech is the first step towards this goal. 

Filecoin and IPFS are good examples of how decentralization and web3 protocols can help us regain control and build self-hosted services with redundancy and a great SLA, without the nightmare of having to maintain the infrastructure ourselves. With these protocols we maintain the infrastructure collaboratively, sharing the burden between all the participants of the system. It is not every man and woman for themselves, nor delegating everything to big tech giants; it is something in between.

I am really optimistic about the future of the Internet and Web3. We are getting to the point where all the foundations are there; we now have to make it better than Web2, not only for the people behind Web3, but for the users of Web2, i.e. everyone else. Do you want to join this exciting endeavor? Ping me and let’s have a chat! For the rest, see you next week.

@adlrocha - 2.0

Upgrading the newsletter for subscribers

Until now, this newsletter has been completely free. There was no incentive to pay a subscription. In spite of this, a dozen people chose to pay a subscription as a way of rewarding and promoting all of the work I was doing every week for free. I wasn’t giving any additional value to them: they were getting the same publications and the same attention free subscribers were receiving. Heroes.

One of the goals I set for the newsletter in 2021 was to start monetizing, in some way, all of the work I was putting into it. With my current availability, it is getting increasingly harder to write high-quality publications every week. I was afraid that without the right motivation I would stop writing. Maybe, if I started making some additional money with the newsletter, I would be encouraged to be there every week for my subscribers. What better external motivation than a few additional bucks (or crypto, of course I always accept crypto as payment) at the end of the month? This was the rationale behind this goal.

I’ve been thinking a lot lately about the best way of monetizing the newsletter without depriving it of its essence, and I realized that the best way to achieve this is by giving additional value to paying subscribers. Why would someone be encouraged to promote my work, if they can get it for free? But writing exclusively for my paying subscribers was not an option. I also enjoy writing for a broader audience, actively interacting with it, and having insightful discussions in the process (this is why I started writing in the first place: to build a community and long-lasting connections). So what could I do? 

After some thought, I feel I’ve found the perfect compromise. Welcome @adlrocha newsletter v2.0 (if this works, in the next major release, I should change the name of the newsletter, as it won’t be @adlrocha’s exclusively, but of everyone else supporting it).

Release v2.0 🚀

I am really excited to announce this new release of the newsletter. This release includes big changes for paying subscribers, some minor changes for free subscribers, and some constraints for non-subscribers.

Content now expires! 😱

The big change for all readers of my newsletter is that, from now on, publications will expire for non-subscribers. Everyone on the Internet will be able to read my articles for the first seven days after publication. After these seven days, only subscribers of the newsletter will have access to them, i.e. the full archive will only be available for subscribers.

I usually share my publications on social networks and HackerNews. It is a great source of new readers and subscribers. Unfortunately, they usually end up being “one time readers”. If I want to build a community around my newsletter, readers should feel part of this project, and have some incentive to subscribe and become part of this community.

My rationale behind this feature is that readers who enjoy my work will subscribe to keep content from expiring, so they can read it on their own terms. Once subscribed, if they feel like it they will join the discussions and interact with the community; if not, at least they’ll have the content available any time in their inbox. You’ll come for the content and (if this works) stay for the discussions, the ideas, and the relationships.

New perks for paying subscribers 💸

Paying subscribers have received a significant upgrade in this new release:

  • They can now influence my backlog by adding new topics to it. Is there anything you’ve been hoping for me to write about? Now is your time to make it happen. Suggesting a new topic is simple: the only thing you need to do is fill in this form with your subscriber email address and a brief description of what you want me to write about. Easy peasy. My subscribers’ backlog is a FIFO queue. As topics arrive, I will write about them sequentially until the queue empties, at which point I’ll come back to my own personal backlog. I am considering making the subscribers’ backlog public so that subscribers can vote on topics, promoting them to the front of the queue, but I’ll leave this for the next minor release (2.1). I will wait for some additional feedback about this feature before jumping into new things.

  • I am also adding a new feature that I call the “Monthly Ask Me Anything Webinar”. Every last week of the month I will share a new form with all paying subscribers asking about their availability and willingness to hold an AMA webinar. In this form, I will suggest different formats for the monthly session, including:

    • Traditional AMA and open discussions about any topic of interest to the audience. 

    • Live presentations and demos about any new technology or topic I may know about that is of interest to the audience.

    • “Reading parties” where I will share a list of papers, then read and present the most voted one. 

    • Any additional session/format subscribers may come up with. The subscribers form and my DMs are open for anyone to suggest and give feedback.

  • And of course, paying subscribers have access to my full archive. For them, content never expires.

A new class of subscriber: The sponsor 🤴

I honestly don’t expect anyone to become this new class of subscriber (at least for now), but I wanted to start experimenting with the idea of having a top class of subscribers.

In this release, sponsors are paying subscribers with the ability to book an hour of my time for a 1:1. Is there anything you think I can help you with? Book that hour. Do you think I can help you design your next decentralized system? Use the hour. Do you want my opinion on some matter in the crypto space I may know about? Do you want to share ideas, or for me to share with you some of my crazy ideas to see if we can build a company together? That also works for this hour.

Sponsors will be earning additional benefits in future releases, but I need to think a bit more deeply about it. Something I am considering is making sponsors part-owners of the newsletter, sharing a stake of the profits with them, but this is still under design. This is why, instead of having a fixed price for sponsor subscriptions, new sponsors are allowed to name their price in this release (this will be revised in future releases).

Sponsored publications 🗣️

Last but not least, I am quite transparent about the metrics of my newsletter. Someone looking to sponsor their project, product, technology, open positions (you name it) is able to know the reach a sponsored publication on my newsletter would have, and the kind of audience it would reach. This is why I’ve decided to leave a space in every publication for anyone to sponsor whatever they want in it. Do you want to give it a try? Fill in this form.

A brand new design 🖌️

Do you see anything different? The newsletter has undergone a slight redesign.

The first release of (hopefully) many more

This is the first major upgrade of the newsletter in two years. From now on this newsletter will follow development and release cycles analogous to those of software products. These past two years I’ve been focused on building an audience and a way to unleash my passion for learning and writing. During this time I’ve learned a lot, had insightful discussions, and met a ton of incredible people… but I want more for this newsletter.

I don’t want an audience, I want a community of learners, creators, and restless minds. Will we make it? We’ll see. I will track your engagement throughout the next few months and decide whether to make new releases or downgrade to v1.0. Whatever you like the most. In either case, see you next week!

@adlrocha - Polygon: L2 or not L2?

Learn from the concepts, but never marry the project.

It’s time for another L2 comparison! The other day I came across a project I wasn’t aware of: Polygon. Polygon is advertised on its official site as “Ethereum’s Internet of Blockchains”. What does this mean?

Polygon seems to be tackling all of Ethereum’s current limitations at the same time: its current low throughput (which hopefully will be improved with Ethereum 2.0); the poor UX provided for applications as a result of gas fees and delayed PoW finality; what they call “no sovereignty”, which translates into the lack of composability of the Ethereum stack; and its governance dependence, which limits the influence decentralized applications can have over the underlying blockchain substrate. They aim to solve all of this by building “a protocol and a framework for building and connecting Ethereum-compatible blockchain networks.”

If you’ve been reading my publications lately, you may already be aware of how several projects in the community are trying to mitigate some (or all) of the aforementioned limitations by building Ethereum-compatible blockchains. They’re building completely new blockchain protocols like Polkadot which are EVM-compatible, or implementing Layer 2 solutions built on top of Ethereum’s mainnet, such as Metis, Optimism, or Arbitrum.

Polygon is attempting to resolve multiple challenges at the same time, offering one-click deployment of preset blockchain networks (Polygon side-chains); a set of modules to develop custom networks (like what you can do with Parity’s Substrate); an interoperability protocol for exchanging arbitrary messages with Ethereum and other blockchain networks; and adaptor modules to achieve interoperability for existing blockchains (similar to Polkadot’s bridges).

Source: https://polygon.technology/

When I read this list of promises from Polygon, I thought: “Wow! They are basically trying to do everything!” But the more I read, the more I wondered if they were trying to bite off more than they could chew. To convince myself that all of this was possible and could be implemented by the same team, I had to go deep into the tech. Let’s jump into it.

The tech behind Polygon

Polygon is not a single blockchain, but an ecosystem of tools to deploy your own blockchain network, and host your blockchain applications. In certain ways it may be close to a L2 solution, but currently it is more an interoperability project and a blockchain framework (you’ll see in a moment why). 

Polygon’s PoS “main chain”

With Polygon you are able to deploy your own blockchain network, interact with other EVM- or Polygon-compatible blockchains, and give an additional level of security and trust to your network leveraging the Ethereum main chain, Polygon’s main proof of stake chain, or a “security as a service” feature. 

Polygon’s main chain, also known as Matic POS Chain, is an Ethereum commit-chain with a proof-of-stake consensus. The relationship between Matic POS chain and the Ethereum main chain is depicted in the following figure:

Block producers in the Matic network create blocks at a fast pace. In order to commit these blocks to Ethereum’s main chain, the Matic chain uses a proof-of-stake consensus. Every few blocks on the block layer, a proposer is chosen among the stakeholders to propose a checkpoint on the Ethereum main chain. These checkpoints are created by the proposer after validating all the blocks on the block layer of the Matic Network and creating a Merkle tree of the block hashes since the last checkpoint. 

The root of this block commitment is then broadcast to all the stakers in the network. In order for a checkpoint to be accepted as valid, at least ⅔ of all stakers in the network need to accept it. With all the signatures collected, the checkpoint is committed to the Ethereum chain. From there on, anyone in the Ethereum network can challenge the proposed checkpoint for a period of time. If no one challenges it, the checkpoint is considered final and included on the main chain. All of this is orchestrated by a set of Polygon smart contracts deployed in Ethereum.
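To make the checkpointing idea more concrete, here is a toy sketch of building a Merkle root over a batch of block hashes. It is purely illustrative: the real implementation uses Keccak-256 and Polygon’s exact tree layout, which this sketch does not reproduce.

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Toy Merkle root over a list of block hashes (SHA-256 for illustration)."""
    if not leaves:
        raise ValueError("no leaves")
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# e.g. a checkpoint committing to the block hashes produced since the last one
block_hashes = [f"block-{i}".encode() for i in range(1, 9)]
print(merkle_root(block_hashes).hex())
```

The point of committing only the root is compression: a single 32-byte value on L1 commits to an arbitrarily large batch of L2 blocks, and individual blocks (or transactions) can later be proven against it with a short Merkle proof.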

The Matic POS Chain has a utility token, Matic, which is used for staking and governance purposes, as well as of course to pay for transactions in the chain. In order for a node in the Matic POS chain to become a proposer or a staker, it needs to stake a certain amount of Matic. 

For those of you who are regular readers of my publications, this scheme may have reminded you of an optimistic rollup, with its sequencing of transactions. This is one of the aspects where Polygon resembles a Layer 2 solution. However, the security and trust guarantees of Matic chains checkpointing are weaker than the ones for optimistic rollups. We’ll dive further into this issue after introducing a few more Polygon concepts.

Bridges

Polygon also supports interoperability between chains through what it calls bridges. These bridges can be used for arbitrary message passing and asset exchange between chains. With these bridges, developers are able to migrate tokens, or make smart contract calls from one chain to another. Technically, a bridge is a set of contracts on both chains that orchestrates this migration of assets from the root chain to the child chain. In a nutshell, and disregarding the bridge’s specific implementation, an asset exchange between two chains using bridges has the following stages (a toy sketch of this bookkeeping follows the list):

  • A user deposits funds into the bridge in the source (or parent) chain, and a representation of the assets is issued in the destination (or child) chain.

  • The bridge is notified about the new account balances and enables the withdrawal process.

  • From here on, the user can withdraw their assets on the child chain. When the assets are moved back, the representation on the child chain is burnt and the user’s balance on the parent chain is released.
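As mentioned above, here is a toy sketch of the lock-and-mint / burn-and-release bookkeeping that this kind of bridge performs. It is a conceptual illustration of the flow, not Polygon’s actual contract logic; the names and amounts are made up.

```python
class ToyBridge:
    """Toy lock-and-mint bridge bookkeeping between a parent and a child chain."""

    def __init__(self):
        self.locked_on_parent = {}  # user -> amount locked in the bridge contract
        self.minted_on_child = {}   # user -> representation minted on the child chain

    def deposit(self, user: str, amount: int) -> None:
        # 1. The user deposits funds into the bridge on the parent chain ...
        self.locked_on_parent[user] = self.locked_on_parent.get(user, 0) + amount
        # 2. ... and a representation of the assets is issued on the child chain.
        self.minted_on_child[user] = self.minted_on_child.get(user, 0) + amount

    def withdraw_to_parent(self, user: str, amount: int) -> None:
        # Moving back: burn the child-chain representation and release the
        # corresponding locked balance on the parent chain.
        assert self.minted_on_child.get(user, 0) >= amount, "not enough child balance"
        self.minted_on_child[user] -= amount
        self.locked_on_parent[user] -= amount

bridge = ToyBridge()
bridge.deposit("alice", 100)
bridge.withdraw_to_parent("alice", 40)
print(bridge.locked_on_parent["alice"], bridge.minted_on_child["alice"])  # 60 60
```

What differs between real bridge designs is not this accounting, but who is trusted to attest that the deposit and the burn actually happened on the other chain, which is exactly where the PoS and Plasma bridges below diverge.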

Polygon currently supports two different implementations of bridges for interoperability between chains:

  • A PoS bridge, which is faster and more flexible, but less secure than the second bridge currently supported by Polygon, the Plasma bridge. In the PoS bridge, developers need to map the addresses of the source and destination contracts, block the assets, and run the exchange. The PoS bridge uses Matic’s state sync mechanism, which is the scheme used by the Matic PoS chain to read from Ethereum. These exchanges take 10 to 30 minutes.

  • The Plasma bridge provides increased security guarantees, but with a 7-day withdrawal period associated with all withdrawals from Matic to Ethereum. As an example, the Plasma bridge can be used to migrate an NFT between chains; you can follow this link for further details. 

Polygon SDK

The reason why I mentioned that Polygon is more a framework than just a blockchain is the Polygon SDK. The Polygon SDK is a modular and extensible framework for building Ethereum-compatible blockchain networks. You can think of it as an alternative to Parity’s Substrate but written in Golang, instead of Rust, and exclusively for EVM-compatible chains.

The Polygon SDK provides the following layers that you can configure and modify to implement your own chain: 

  • The blockchain layer is the core of the SDK. It implements everything related to blocks and the state of the chain: it manages the logic that happens when a new block is included in the blockchain, and defines the state behavior. The state represents the state transition object; it deals with the state changes when a new block is added to the chain, handles the execution of transactions, runs the EVM, and updates the state Merkle tries of the blockchain according to the transactions being performed.

  • The consensus layer provides an interface for different consensus algorithms. It lets you plug in or implement any consensus algorithm you want into your blockchain. The only consensus currently supported by the Polygon SDK is the Istanbul Byzantine Fault Tolerant (IBFT). But according to Polygon’s documentation, the company is also working on implementations for Clique, Ethash, and PoW.

  • The TxPool module is what you would expect: it represents the transaction pool implementation, where transactions are added from different parts of the system for subsequent processing by the consensus and blockchain layers. 

  • Finally, the SDK includes a p2p networking layer for communication between peers, implemented over libp2p, and gRPC and JSON-RPC APIs to interact with the peer.
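To illustrate what a “pluggable consensus layer” like the one described above means in practice, here is a conceptual sketch of such an interface. The real Polygon SDK defines this in Go; the Python below only illustrates the pattern, and the toy IBFT-style check is invented for the example.

```python
from abc import ABC, abstractmethod

class Consensus(ABC):
    """Conceptual pluggable-consensus interface (the real SDK defines this in Go)."""

    @abstractmethod
    def verify_header(self, header: dict) -> bool:
        """Check that a block header satisfies the consensus rules."""

    @abstractmethod
    def seal(self, block: dict) -> dict:
        """Produce/sign a new block according to the consensus algorithm."""

class ToyIBFT(Consensus):
    def __init__(self, validators: set[str]):
        self.validators = validators

    def verify_header(self, header: dict) -> bool:
        # Toy rule: a header is valid if more than 2/3 of validators signed it.
        signers = set(header.get("signatures", [])) & self.validators
        return len(signers) * 3 > len(self.validators) * 2

    def seal(self, block: dict) -> dict:
        block["signatures"] = sorted(self.validators)  # toy: everyone signs
        return block

# The blockchain layer only talks to the Consensus interface, so swapping this
# engine for another one does not touch the rest of the node.
```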

What is yet to come

We’ve gone through all of the solutions that are currently implemented through Polygon. But if you remember from my brief introduction to the project, and the figure with the list of solutions, Polygon has way more than this in its roadmap in order to fulfill its vision. Mainly:

  • Polygon chains: Polygon is planning to support two major types of Ethereum-compatible networks: standalone networks and networks that leverage “security as a service”. 

    • Standalone chains are fully independent blockchain networks in charge of their own security. They enjoy full independence, but the level of security of the chain depends on the number of nodes participating in the network, and the specific consensus being used. This can be a great fit for big enterprises or established projects and communities. 

    • On the other hand, secured chains are blockchain networks that use Polygon’s security layer instead of establishing their own independent validator pool like standalone chains. This security as a service can come in the form of a delegated pool of validators, or a combination of other verification schemes like rollups and fraud proofs.

  • Security as a service: Polygon offers a specialized, non-mandatory layer to provide “validators as a service”. These validators periodically check the validity of any Polygon chain for a fee. This runs in parallel to the Ethereum chain, and is fully abstracted, so it can have multiple instances. This looks a lot like Metis’ pool of sequencers (but more about this in the next section).

  • Rollups: Polygon has in its roadmap the implementation of ZK and optimistic rollups. I couldn’t find additional information, but I personally expect these schemes to become part of the aforementioned security layer, as an additional security mechanism that standalone chains can request. 

How does Polygon differ from a L2 solution?

I wouldn’t call Polygon an L2 solution, at least not yet. For me, it is currently more of an Ethereum commit-chain, an interoperability solution, and a blockchain framework that offers flexibility to DApp developers. If you take a true L2 project like Metis, which you may well be familiar with by now because it has been featured in a few of my previous publications, you see that:

  • While Metis has a clear proposal to tackle Ethereum’s scalability limitations, a user of Polygon needs to navigate through all of its solutions to understand the one that suits their needs: the Matic PoS chain to overcome Ethereum’s scalability limitations, bridges if interoperability of networks is the key issue, or the Polygon SDK to build a brand new EVM-compatible standalone chain.

  • Matic PoS Chain’s checkpoint scheme is quite similar to an optimistic rollup. But if you then look at Polygon’s roadmap, the company is also thinking about implementing optimistic rollups. So how can this be? My take is that Polygon’s rollup implementation will belong to the pluggable security layer. Metis and other L2 solutions, on the other hand, already come with optimistic rollups “by design” as part of their protocol; it is not an optional layer that needs to be configured ad hoc into your chain. The chain just has it. I was trying to find some numbers on the performance of the Matic PoS chain in order to compare them with the numbers we saw in this comparison between Metis and other optimistic rollup solutions, but couldn’t find anything. However, at a glance, Metis’ sequencing pool, and the use of rangers as verifiers with a stake in the network, seem more robust than the mechanisms in place for the Matic PoS chain. Matic’s PoS chain more closely resembles the optimistic rollup approach of projects like Arbitrum.

  • Companies looking to deploy blockchain applications may choose Polygon to deploy their standalone chains. This may make sense for certain use cases, as many L2 solutions are not focused on corporate use cases, and they lack the isolation and security guarantees required by this kind of organization. However, this won't be the case for solutions like Metis. Metis supports DACs (Decentralized Autonomous Companies) from scratch, which gives organizations access rights and other permissioning schemes without requiring the deployment and maintenance of a complete independent chain (with the overhead and the burden that this may entail). 

  • Polygon standalone chains can also be a great way of horizontally scaling use cases. However, L2 solutions like Metis also allow this horizontal scale by decoupling the state and the execution of transactions without requiring the deployment of a brand new chain.

  • When I was reading about Polygon’s security layer and “validators as a service,” it also reminded me of how Metis already has “by default” rangers and sequencers with a stake securing the L2 and committing blocks to the mainnet. However, in Polygon’s security layer validators are “rented” for specific standalone networks, while with Metis, rangers and sequencers rotate through every DAC, enforcing that everything is working as it is supposed to in all of them (not only in a small number of selected ones).

  • Something that many L2 solutions lack and where Polygon excels is interoperability between chains. I would be curious to see how Polygon bridges operate with L2 solutions like Metis. Metis, like almost every L2 solution, is EVM-compatible, which means that it would be theoretically possible to use Polygon bridges to perform token exchanges between Metis and other Polygon-compatible chains. Can you imagine how powerful a decentralized application could be by leveraging Metis Layer 2 capabilities (with its IPFS integration) and Polygon’s interoperability features?

  • Finally, let’s come back to something I briefly brought up at the beginning of the article: the security and trust guarantees of Polygon bridges and its interaction with L1 are weaker than those of L2 solutions based on rollups like Metis. This is no small item. We’ve seen many times how so-so security has led to disastrous results for blockchain projects. Since Metis is a fully decentralized platform running on top of the secure Ethereum network, it’s a more secure option than a centralized commit-chain like Polygon.

It all boils down to who you choose to trust

Polygon’s PoS bridge is secured by a set of external verifiers. The security of this chain is guaranteed by the verifiers’ stake in the system, and the penalty for misbehaving. Nothing new on this front: we place our trust in the economic incentives of the consensus. The more verifiers and the more stake in the network, the more we can trust it. Something similar happens for optimistic rollups: due to the economics of the scheme, the more sequencers and verifiers involved in the network, the more we can trust the security of the L2 solution. This is why I mentioned in my comparison of L2 solutions that having an incentive system where not only sequencers but also verifiers have “skin in the game” (as is the case with Metis) can really benefit the performance and security of optimistic rollup implementations. But what about Polygon bridges?

Polygon PoS doesn’t have a single custodian, and also tries to follow a decentralized crypto-economic approach. However, the bridge contract, where user assets are deposited and which is responsible for their withdrawal and exchange, retains admin authority and is controlled by a multi-signature wallet via a proxy. This multi-signature wallet started as a 2-of-3 scheme and has since been upgraded to a 5-of-8 scheme. Among the eight signatories, four are Polygon co-founders, and the other four are key members from other Polygon DeFi projects.

As stated by an independent security analysis of Polygon’s PoS bridge: “Through our examination of the contract code, the owner of the contract can upgrade and replace the contract at any time (without a delay period), which means that the owner can withdraw all user assets in the contract at any time, which is certainly a potential security risk. Therefore, the assets transferred to the Polygon chain through the PoS Bridge are not trustless at this stage.”

The security guarantees of Polygon’s Plasma bridge are a bit stronger but still far from perfect. If you recall from our brief description of the Plasma bridge, a challenge period of seven days is required when withdrawing funds, because of the use of fraud proofs. The important shortcoming of this approach has to do with data availability. While in optimistic rollups, all the data you need to verify a rollup is on L1 and there is no need to interact with the L2 commit-chain, this is not the case for Polygon. In Polygon, a user needs the Merkle root generated in the Matic PoS chain’s checkpoint in order to verify the proof. This translates to a user being unable to identify a malicious actor without interacting with the Matic PoS chain. 
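To see why data availability matters here, consider a toy Merkle inclusion proof check: the verification itself is cheap, but it can only run if you can obtain the checkpoint root and the proof, which in Polygon’s case means talking to the Matic PoS chain. The hashing and proof format below are simplified illustrations, not Polygon’s actual scheme.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, proof: list, root: bytes) -> bool:
    """Walk from a leaf up to the root using (sibling_hash, side) steps."""
    node = sha256(leaf)
    for sibling, side in proof:
        node = sha256(sibling + node) if side == "left" else sha256(node + sibling)
    return node == root

# Tiny example with two leaves: root = H(H(a) + H(b))
a, b = b"withdrawal-1", b"withdrawal-2"
root = sha256(sha256(a) + sha256(b))
print(verify_merkle_proof(a, [(sha256(b), "right")], root))  # True

# The proof and the checkpoint root have to come from somewhere: if the only
# place holding them is the commit-chain itself, you cannot verify a withdrawal
# without trusting (and talking to) that chain.
```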

This tweet thread describes perfectly, in six tweets, why optimistic rollups (and optimistic rollup-based bridge solutions) are a much better approach in terms of security. It all comes down to who you trust: a set of validators and the data on L1, or a small number of validators and some data stored in a commit-chain. 

As a final note, this post has a great overview of different implementations of bridges (including Polygon’s bridge) and a brief comparison with L2 protocols.

Closing words

And that is all that I have (at least for now). Today we had the opportunity to learn about another interesting project in the Ethereum ecosystem, Polygon. I approached this project thinking that it was yet another L2 solution, and what I ended up encountering was an interoperability project and blockchain framework with a vision to improve Ethereum. Lots of interesting concepts and new developments ahead. Let’s see where it goes!

@adlrocha - The outcomes of remote work

A thread on its benefits after one year of working from home.

The publication I was working on for this week needs a bit more work. Instead of publishing it "half baked" today, I am going to delay it to next week. I didn’t have any backup article lined up for today, so I will take the opportunity to share with you all a thread collecting my thoughts after more than a year of working from home. Enjoy!

(PS: This last tweet is a poll. Click on the link to participate and share your thoughts with the tweetaverse. So far, no one wants to come back to an office, not even Apple’s employees. What about you?).

I know, this is not a conventional publication, but I thought it could be fun to share a bunch of tweets in an article-like format to enhance their readability. A new experiment for this newsletter, and as always, I would love to know your opinion.

Have you been working from home and want to share your thoughts in this newsletter? Mention me in your tweet storm and I will add your thread here. Let’s collect as many testimonials about remote work as possible. It may help companies rethink their talent retention and acquisition policies (or not). Have a wonderful Sunday!
