@adlrocha - DAOs: The operating system of communities

Decentralized Autonomous Organizations

Everyone is excited about NFTs and the market cap of cryptocurrencies these days, but you know something I think people will be increasingly excited about (again) in the next few years? Decentralized Autonomous Organizations (DAOs). It seems like no one talks about DAOs anymore, as if they were only a crypto trend that ended up passing us by. But the truth is that DAOs are alive and well, and many teams are already leveraging them to govern their blockchain projects, with many more to come. We should expect DAOs to become first-class citizens for blockchain’s future and, hopefully, for the broader future of work.

A gentle introduction to the problem and the concept

So what exactly is a Decentralized Autonomous Organization, or DAO? Governance at scale has always been a really hard problem for the blockchain industry and society as a whole, especially when it comes to fostering fairness for all participants. Traditionally, we have relied on a set of centralized entities to manage the resources of a community and make decisions on behalf of all the participants of a system. The reason for this has been the inability to scale a governing system so that it allows every willing member to participate in decisions, fairly resolves conflicts, and distributes influence and power among members.

From governments to corporations, we have historically relied on centralized governance. Many of the richest countries in the world have a representative democracy, where citizens are allowed to vote every four to six years for whichever party or candidate they want to delegate decision-making to for that period of time. A representative democracy is not a pure democracy, because citizens don’t have the ability to get involved in every single decision the government makes. Nonetheless, representative democracy has always been regarded as the fairest way to make the system scale.

Something similar happens in corporations, where usually shareholders and the board have final say on the decisions that impact the company, while users and employees also have huge stakes in the corporation. Ideally, all of them should have a say according to their power and involvement in the organization. The problem is that such a scenario would lead to conflicts and a really inefficient system. So we again delegate governance to a central entity. 

And what happens when we introduce a mix of external participants that may not be trusted but still want to participate in the system? Then not only do we have the problem of building scalable governing systems which are fair, we also introduce the problem of “fair exchange,” which says that a fair exchange between untrusted parties is impossible without a trusted third-party. So how can we build a scalable governing system that enables fair exchange and interaction between different participants that may not trust each other? This is where DAOs come into play.

Simply put, DAOs provide an operating system for open collaboration. This operating system allows individuals and institutions to collaborate without having to know or trust each other. They leverage the blockchain and a set of smart contracts as their “trusted third party” that enables the fair exchange and interaction between the different participants within the system. 

DAOs tackle a problem in economics called the principal-agent dilemma. It happens when a person or entity (the “agent”) has the ability to make decisions and take actions on behalf of another person or entity (the “principal”). If the agent is motivated to act in its own self-interest, it may disregard the interests of the principal. This situation allows the agent to take risk on behalf of the principal. What deepens the problem is that there might also be information asymmetry between the principal and the agent. The principal might never know that it is being taken advantage of and has no way to make sure that the agent is acting in its best interest. Doesn’t this remind you of how governments and big corporations work these days?

Thus, DAOs specify all of the rules of operation for an organization or community in a smart contract. This smart contract is the third party through which we are delegating the implementation of the governing rules of the group. Every member can interact with this smart contract to issue a vote, make a proposal, or simply delegate a decision to someone else. 

The rules and transaction records of a DAO are stored transparently on the blockchain. Rules are generally decided by stakeholders’ votes. Typically, decisions within a DAO are made through proposals. If a proposal is approved by a majority of stakeholders (or fulfills some other rule set in the network consensus rules), it is then implemented.
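The proposal-and-vote flow described above can be sketched in a few lines. This is a deliberately minimal, hypothetical model (names and rules are mine, not any real DAO's): voting power equals governance token balance, and a proposal passes by simple token-weighted majority.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    votes_for: int = 0       # token-weighted votes in favor
    votes_against: int = 0   # token-weighted votes against

class ToyDAO:
    """Minimal sketch of DAO governance rules: token-weighted
    voting with a simple-majority threshold."""

    def __init__(self):
        self.balances = {}   # member -> governance token balance
        self.proposals = []

    def join(self, member, tokens):
        self.balances[member] = self.balances.get(member, 0) + tokens

    def propose(self, description):
        self.proposals.append(Proposal(description))
        return len(self.proposals) - 1   # proposal id

    def vote(self, member, proposal_id, support):
        weight = self.balances.get(member, 0)  # voting power = balance
        p = self.proposals[proposal_id]
        if support:
            p.votes_for += weight
        else:
            p.votes_against += weight

    def passes(self, proposal_id):
        p = self.proposals[proposal_id]
        return p.votes_for > p.votes_against   # simple-majority rule
```

In a real DAO this logic lives in a smart contract, so no single party can change the rules or miscount the votes; the Python version only illustrates the shape of the rules.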

DAOs as an operating system

Let’s say that by now I’ve convinced you of the theoretical need for DAOs, but you are wondering: “How does this work in practice, and what are they currently used for? Because governance, autonomy, etc. are quite abstract concepts.” Fair enough, let’s jump into some examples of projects implementing DAOs, or at least DAO-like governance schemes.

Governance in DeFi

An excellent example of DAO-like decentralized governance can be found in big DeFi projects like Uniswap or MakerDAO. These projects have their own governance token that may be used to make proposals for the platform, and vote on existing ones. With this, these platforms ensure that the final call for the development and integrity of the platform is in the hands of its users. Both these projects implement a similar governing system based on this governance token.

MakerDAO’s governance token, for instance, is the ERC-20 MKR token. MKR can be used by its holders to make decisions on the operation and development of the project. It can be used to execute changes in the protocol parameters, determine the Debt Ceilings, or elect the role of different individuals in the project. The voting power of someone in the system is proportional to their MKR balance. The tokens are created and destroyed under different circumstances.

MKR is destroyed when the Maker Protocol’s system surplus exceeds a minimum threshold, resulting in excess Dai being auctioned for MKR that is then destroyed. Inversely, when the Maker Protocol is running a deficit and the system debt exceeds a maximum threshold, MKR is created and auctioned for Dai in order to recapitalize the system.
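The mint-and-burn mechanism above can be sketched as a small state transition. Note that the thresholds and prices here are made-up illustrative numbers, not Maker's real protocol parameters, and real auctions are far more involved:

```python
SURPLUS_THRESHOLD = 1_000   # hypothetical figure, not Maker's real parameter
DEBT_THRESHOLD = 1_000      # hypothetical figure

def rebalance(surplus_dai, system_debt, mkr_supply, mkr_price_in_dai):
    """Illustrative sketch of the logic described above: excess surplus
    buys and burns MKR; excess debt mints MKR and sells it for Dai.
    Returns the new (surplus, debt, MKR supply) triple."""
    if surplus_dai > SURPLUS_THRESHOLD:
        excess = surplus_dai - SURPLUS_THRESHOLD
        burned = excess / mkr_price_in_dai       # MKR bought at auction, then destroyed
        return surplus_dai - excess, system_debt, mkr_supply - burned
    if system_debt > DEBT_THRESHOLD:
        shortfall = system_debt - DEBT_THRESHOLD
        minted = shortfall / mkr_price_in_dai    # new MKR auctioned for Dai
        return surplus_dai, system_debt - shortfall, mkr_supply + minted
    return surplus_dai, system_debt, mkr_supply
```

The point of the mechanism is that MKR holders absorb both the upside (supply shrinks when the system is healthy) and the downside (supply dilutes when the system is undercapitalized), which aligns their incentives with prudent governance.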

Uniswap uses a similar approach to governance. It has the UNI token that holders can use to vote and influence the development of the platform. The governance framework defines a specific process to get proposals from its creation to its execution. This is a great example of the kind of rules that can be implemented in a DAO to orchestrate a system.

What these governance systems and frameworks are enabling are publicly-owned and self-sustainable infrastructures that continue to carefully protect their indestructible and autonomous qualities. The blockchain and governance smart contracts are acting as the required third party, allowing rules to be defined and evolved by the actual participants with a stake in the system, not by self-interested parties.

Full-fledged DAOs

We just saw two examples of how governing frameworks are enabling self-sustainable infrastructures and platforms governed by their users. But there are also full-fledged DAOs operating as organizations that direct their funds and efforts towards the desires and targets of their users and stakeholders. A good example of this is MolochDAO.

MolochDAO is a Decentralized Autonomous Organization focused on managing a fund to promote technical research in the Ethereum ecosystem. Participants can buy MolochDAO tokens with ETH and become stakeholders of the DAO. This gives them voting power over the grants and projects that will be funded by the DAO. Finding funding for hard research problems in the Ethereum ecosystem, where many companies don't have a clear business case or sustainable funding, can be challenging. But thanks to the DAO structure, different investors and individuals can collaborate to invest in the future of Ethereum.

This same concept may also be used to fund work in open-source projects that are not in the blockchain space. Millions of developers are contributing daily to open-source projects without being economically rewarded for it. Even more, many of them can’t make writing open-source projects their full-time job, because there aren't that many companies willing to hire people to contribute to open-source projects. This means a lack of funding for developers to become full-time contributors. Fortunately, blockchains and DAOs again come to the rescue. 

Gitcoin is a good example of this phenomenon. The platform connects open-source projects with funding and developers willing to contribute to them. Gitcoin builds a community that gives developers the opportunity to make a living with open-source projects, and for open-source projects to find the manpower to develop their vision. Of course, Gitcoin is trying to govern the platform as a DAO.

Gitcoin solves the problem of funding open-source projects. But open-source projects themselves are also decentralized communities that require autonomous governance. Even big players in the industry like GitHub have identified the need for Minimum Viable Governance in open-source ventures. If open-source projects could be governed and funded using a DAO, that would open a whole new world of possibilities.

DACs: Way more than governance

So we’ve seen the value of DAOs for governance in blockchain-related projects. This governance is implemented at the core of the protocol and is part of how the system operates. But DAOs can also be used to orchestrate other decentralized communities outside of the blockchain, such as open-source projects. Unfortunately, these projects can’t focus on building a core protocol to help them govern the system, as doing so would require a lot of additional effort and resources that they lack. Fortunately, a few projects offer a framework to start operating DAOs in no time. One of those projects is Metis.

Metis introduces the concept of DACs (already mentioned in my previous publications). DACs (or Decentralized Autonomous Companies) are a type of DAO that not only solve the governance problem, but also tackle management and incentives of autonomous and decentralized communities. You can see why they are a perfect fit for communities like the ones organized around open-source projects.

Metis DACs work like the DAOs we’ve been discussing throughout this article. Namely, holders of the DAC token are entitled to participate in the decisions and day-to-day operations of the organization, with the slight difference that in order to join the DAC, members need to also stake some of their own tokens in advance. This prevents “destructive members” from joining the organization. Right now, anyone could buy some UNI and try to harm Uniswap by voting for harmful proposals. To prevent this from happening, members willing to join a DAC must have some stake in the DAC. 

Staking in a DAC is not only used to ensure that members have something to lose if they harm the community. It also offers a way to earn reputation points and increase a member’s voting power in the system. Uniswap users that have recently joined the system have the same voting power as loyal, long-standing ones; there is no incentive for loyalty. DACs tackle this problem by improving a member’s reputation according to the time they’ve been staked in a DAC.
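The stake-to-join and loyalty-based reputation ideas above can be illustrated with a toy model. The minimum stake and the reputation formula below are my own made-up placeholders, not Metis' actual parameters:

```python
class ToyDAC:
    """Hypothetical sketch of DAC-style membership: joining requires a
    minimum stake, and voting power grows the longer the stake is held."""

    MIN_STAKE = 100          # illustrative minimum, not a real Metis value
    YEAR = 365 * 24 * 3600   # seconds per year

    def __init__(self):
        self.stakes = {}     # member -> (amount, join_timestamp)

    def join(self, member, stake, now):
        if stake < self.MIN_STAKE:
            # "destructive members" with nothing at risk are kept out
            raise ValueError("insufficient stake to join the DAC")
        self.stakes[member] = (stake, now)

    def voting_power(self, member, now):
        amount, joined = self.stakes[member]
        # toy loyalty rule: +100% voting power per full year staked
        loyalty = 1 + (now - joined) / self.YEAR
        return amount * loyalty
```

Under this rule a member who has been staked for a year outvotes a newcomer with the same balance, which is exactly the loyalty incentive the UNI example lacks.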

Metis’ goal with DACs is to build a fair, transparent, and universal mechanism to empower community members to collaborate without worries. From funds management to governance and incentives, everything can be handled with a DAC in an autonomous way. Metis already offers a framework to easily deploy a DAC, mint the DAC’s token, and start onboarding participants and orchestrating the day-to-day operations of the community.

The future belongs to DAOs

DAOs are an incredible innovation for blockchain platforms. From my point of view, they’re the killer app of blockchain, and we should expect more and more companies, communities, and organizations to start adopting the DAO framework. As shown in the case of DeFi, DAOs are currently focused on governance. But projects like Metis are already introducing new dimensions to DAOs: the DAC framework can handle much more than governance, with incentives and management being just two of the biggest tasks it can artfully take on. If you want to read more about how DACs work and what they can do, check out this article.

What can we expect the impact of DAOs to be in society? Well, I may be optimistic, but thanks to DAOs I envision a future where “employees will be able to 'try-before-they-buy' with prospective employers; employees will have a much greater voice in company decision-making; identity-based voting mechanisms will play a formal role in governance; and full-time, monogamous work at one company may well become the exception, not the rule.”

And you know the funny thing? All of this can be built today...and more and more projects are already doing exactly that.

@adlrocha - Blockchain Middleware

How we are paving the path to the next blockchain unicorn

Blockchain technology and decentralized finance (DeFi) have been all over the media in the past year. DeFi is completely changing the way we do finance, and there are more and more companies, big and small, exploring what DeFi can do for them. 

Still, one of the big challenges that new DeFi or blockchain companies face is the lack of engineering capacity with the wide range of expertise required to tackle a broad range of projects.

Developing a decentralized application involves a number of steps: writing the smart contract that handles the decentralized logic of the application; deploying all the infrastructure required by end users to interact with the blockchain and the smart contract; building a fancy UI to make interacting with your application easy and appealing for users; and optionally, building tooling to allow others to build upon all your hard work. In short, a long list of things to worry about, requiring several different engineering backgrounds. This means that if you come up with a great idea for a decentralized application, you’d typically need to raise a ton of money to hire the wide range of developers required to build it.

Fortunately, this type of problem is becoming less common, thanks to a new innovation in the blockchain space: blockchain middleware. 

Introducing blockchain middleware

If you come from the IT world, you probably know all about middleware software. For those who are new to the concept, we find variations of the following definition: “Middleware is software that provides common services and capabilities to applications outside of what’s offered by the operating system. Data management, application services, messaging, authentication, and API management are all commonly handled by middleware. Middleware helps developers build applications more efficiently. It acts like the connective tissue between applications, data, and users.”

If we think of blockchain as the operating system of DeFi and the decentralized web, then the definition of blockchain middleware is quite straightforward: it’s all that software that binds everything together (including software for communication, execution, and smart contract deployments) to help developers build applications and interfaces faster, while leveraging blockchain in a flexible, safe, and effective manner.

If we depict the network stack of a decentralized application, blockchain middleware software sits between the applications’ interfaces, and the Layer 1 and/or Layer 2 of a blockchain network.

A good way to start grasping the role middleware protocols can have for the blockchain ecosystem is to think about what HTTP did for the Internet. HTTP can be thought of as a middleware protocol, as it abstracts away the low-level complexity of the underlying transport protocols (mainly TCP and IP) while offering an expressive, standard API on which to build any Internet application interface; this is part of what enabled companies like Google, Amazon, and Facebook to thrive.

Of course, these companies would probably have been able to succeed without HTTP, and they could have built their services directly on top of IP or TCP. But imagine if Jeff Bezos had been forced to invest a huge chunk of his resources into building expressive protocols over TCP for his online bookstore, instead of focusing on building his core business model. Amazon might have run out of gas before becoming the juggernaut that it is today. Without middleware protocols, an API change might have required far more than the 30 minutes or so it takes with them.

This is why I think blockchain middleware can be huge for the ecosystem. It can lower the barrier to entry of new players into the market, speeding up exploration cycles and accelerating innovation. The same way you can easily build a website without knowing anything about IP or TCP, blockchain middleware can enable developers to build new decentralized applications without knowing anything about how to create a new blockchain transaction.

Different levels of middleware

Blockchain middleware sits between the low-level blockchain protocols (either L1 or L2), and the application interfaces. But among middleware software, we also see different types according to the level of abstraction they offer. We can divide blockchain middleware into:

  • Upper Middleware: It abstracts end-users and developers away from all the low-level details of the blockchain. This category comprises (i) smart contract development tools like Truffle or Hardhat, which smooth the smart contract development process so you don’t have to worry about manually compiling or deploying contracts; (ii) interaction APIs such as TrustOS, which offer easy-to-use web APIs to deploy ERC20 tokens, send transactions to the blockchain, or interact with smart contracts without having to manually tailor transactions; and (iii) DeFi wallets such as Metamask. 

  • Lower Middleware: This type of middleware handles all infrastructure issues, so you don’t have to worry about deploying blockchain nodes, keeping them in sync, and enforcing their security. In this category of middleware, we find services like Infura, which deploys a pool of Ethereum nodes and offers a simple web API to interact with them. Infura is responsible for the maintenance and SLA of your blockchain nodes, and enables you to easily interact with the blockchain (in the case of Infura, the Ethereum mainnet) by sending calls to a web API.

  • Protocol Middleware: This is the next level of blockchain middleware. This category comprises different decentralized protocols built on top of L1 and L2 to enhance the core functionalities of the blockchain (my personal favorite). A good example of protocol middleware is The Graph Network. The Graph Network is an open network that is continuously indexing data stored in different decentralized networks (like Ethereum and IPFS) to make it queryable by external applications. It offers an interface to query data stored on-chain in these networks. This is a great example of the kinds of benefits middleware provides for developers: if a decentralized application needs to query data on-chain, using The Graph means that a development team won’t have to worry about building a system to query and index on-chain data. It can instead directly leverage The Graph, and thus focus on solving core problems instead. 

If you look at the basic architecture of The Graph below, you see that it sits on top of L1 networks, and offers an API for consumers and applications (i.e. between the network layer and the application interface). This is pure middleware!
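The core idea behind an indexing middleware like The Graph can be sketched in miniature: ingest raw on-chain events once, then answer queries from the index instead of rescanning the chain. This is a hypothetical toy, not The Graph's actual architecture (which uses GraphQL and subgraph manifests):

```python
from collections import defaultdict

class ToyIndexer:
    """Minimal sketch of protocol middleware for indexing: events are
    ingested once and grouped by contract, so queries never touch the
    underlying chain."""

    def __init__(self):
        self.by_contract = defaultdict(list)

    def ingest(self, event):
        # event is a dict like {"contract": "0xabc", "name": "Transfer", ...}
        self.by_contract[event["contract"]].append(event)

    def query(self, contract, name=None):
        events = self.by_contract[contract]
        if name is None:
            return events
        return [e for e in events if e["name"] == name]
```

An application that needs, say, all Transfer events for a token can call `query` instead of walking every block itself, which is precisely the effort the middleware saves its developers.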

Deep dive into a middleware platform

Now we know what blockchain middleware is, and we’ve reviewed some examples of middleware services according to the abstractions and problems they solve. In this section, I want to dive deeper into a recently released middleware platform called Metis Polis to wrap up my illustration of how blockchain middleware could be huge for the future of this technology.

Metis Polis is a middleware platform to manage your deployed smart contracts. The reason I call Polis a middleware platform is that it aggregates several of the middleware services that every developer has been using to build smart contracts. It offers everything you need to ease the management and maintenance of, and interaction with, smart contracts. No more worrying about tailoring low-level transactions to your smart contracts, or having to host your own node to interact with the network.

Polis middleware can be divided into these different services:

Smart Contract Domain Service

The address of a smart contract deployed over the Ethereum network is a long string of numbers and letters that looks something like this: 0xd76b5c2a23ef78368d8e34288b5b65d616b746ae. You need to remember this address to interact with the contract's logic. Even more, if your decentralized application leverages several smart contracts for its operation, this means remembering more than one of these unintelligible strings. In the Web 2.0 world, this would be the equivalent of having to remember by heart the IPs of the servers an application interacts with. Polis’s domain service solves this problem by building a “DNS for smart contracts”. It allows developers to create a domain to associate with a smart contract address, so that anyone who wants to interact with the smart contract can do so without having to know its address. With this feature, companies like Twitter could deploy their own smart contract and make it available to the general public through, say, twitter.metis instead of 0xd76b5c2a23ef78368d8e34288b5b65d616b746ae.

Polis’s smart contract domain service is a protocol middleware that can be seen as an alternative to the Ethereum Name Service (ENS). It supports domain updates so that the domain owner can point to a different URL. Domains are managed by a smart contract, enabling domains to point to different URLs and allowing smart contracts to call methods based on the domain as well. It also enables a domain marketplace where people can trade domains. (So go get your own Metis domain name before someone else takes it… actually I should do this myself, right now!)
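A "DNS for smart contracts" boils down to an owner-controlled mapping from human-readable names to addresses. The sketch below is a hypothetical toy registry (all names and rules are made up, and real services like Polis or ENS implement this in smart contracts, with auctions, expiry, and more):

```python
class ToyDomainRegistry:
    """Sketch of a smart contract domain service: map readable domains
    to contract addresses, with owner-only updates."""

    def __init__(self):
        self.records = {}   # domain -> (owner, address)

    def register(self, domain, owner, address):
        if domain in self.records:
            raise ValueError("domain already taken")
        self.records[domain] = (owner, address)

    def update(self, domain, caller, new_address):
        owner, _ = self.records[domain]
        if caller != owner:
            raise PermissionError("only the owner can update a domain")
        self.records[domain] = (owner, new_address)

    def resolve(self, domain):
        # callers look up "twitter.metis" instead of a 40-hex-char address
        return self.records[domain][1]
```

The owner check in `update` is the important part: because resolution is mutable, only the registrant may repoint a name, exactly as with DNS zone ownership.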

Application Management

One of the big barriers for the adoption of decentralized applications is identity management. Wallets are improving their UX, and DApps are becoming increasingly easier to use. Still, the fact that users are responsible for their keys and transactions need to be signed using these keys is a burden for many users -- particularly people who are new to blockchain. 

Polis’s solution to this issue is an upper-level middleware application manager that offers an authentication service to help app developers manage user access without worrying about wallet integration. Web 2.0 users and Web 3.0 newbies are typically not comfortable using a pair of cryptographic keys, and are more used to using passwords and authentication tokens. With Polis middleware, developers are able to generate new users for their applications, authenticated with traditional schemes, so they don’t have to worry about managing a cryptographic key pair. 

Smart Contract API Service

If you recall from my description of upper-level middleware, one of its common forms is the abstraction API, which lets you interact with the blockchain and send transactions to it without manually tailoring low-level transactions or learning low-level Web3 API magic. Polis’s smart contract API service solves this exact problem. It provides a web API to seamlessly authenticate and send transactions to any smart contract using HTTP, without having to worry about using a blockchain client to send these transactions. Web 2.0 developers are used to using REST APIs, so in order to bring more Web 2.0 innovators to the blockchain space, we need to speak their language. This will also enable companies to kick off their blockchain projects without having blockchain experts in their teams -- saving weeks of recruitment time and hundreds of thousands of dollars (or more!) in hiring budgets.
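To make the abstraction concrete, here is a toy sketch of what such an API service does under the hood: translate a friendly JSON request into the raw call a blockchain client expects. Every field name below is hypothetical (Polis's actual request and transaction formats are not documented here):

```python
import json

def rest_to_transaction(payload):
    """Illustrative sketch of an upper-middleware API service: turn a
    REST-style JSON body into a raw transaction dict. Field names are
    made up for illustration."""
    body = json.loads(payload)
    return {
        "to": body["contract"],          # target contract (address or domain)
        "method": body["method"],        # e.g. "transfer"; ABI-encoded later
        "args": body.get("args", []),    # method arguments
        "gas_limit": body.get("gas", 100_000),  # arbitrary default
    }
```

The developer only ever writes the JSON on the left-hand side; signing, gas estimation, nonce management, and broadcasting are the middleware's problem.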

Transaction Management and Monitoring

Polis provides a built-in explorer for your smart contracts. There is no need to use any of the public explorers; instead, the Polis dashboard enables you to track all of the token transfers and transactions handled by your smart contracts. This is a great example of how projects are trying to aggregate all the tools needed by users and developers in one place, making these everyday tools easier to use. Right now, the only tools developers have to track the activity of their smart contracts are public explorers. If they want to track the activity of all the smart contracts within their application, they need to build their own tool or make dedicated queries for each smart contract. With Polis’s transaction manager, all the transactions for an application can be checked in the same place, even if it comprises several smart contracts.

But Polis not only displays the information about transactions shown by classic explorers. It also includes a monitoring and alerting middleware that allows developers to track additional information about the activity of their smart contracts, such as:

  • Total number of transactions

  • Number of transactions per application

  • Transaction trend

  • Geographical area where the user triggered the transaction

  • Total tokens transferred

  • Total tokens transferred per application

Collectively, this adds up to everything needed to gather metrics from the use of the application at a smart contract level. Measuring a decentralized application is not easy, because there is no central infrastructure orchestrating the whole operation of the system -- and we can’t improve what we can’t measure. Polis makes it significantly easier to gather metrics about our decentralized applications, so that we can improve them.

I chose Polis to illustrate the power of blockchain middleware because it aggregates services from the different categories described above: upper-level middleware like the smart contract API service and the application manager, and protocol middleware such as the smart contract domain service. If you want to try all of these services yourself, check out this tutorial, which walks you through the use of different Polis services over testnet (no need for real Ether to perform transactions). 

Lowering the barriers for innovation and adoption

To summarize, I believe that blockchain middleware will lower the barriers for innovation and adoption in the blockchain world. By abstracting away the low-level complexities of blockchain technology, DApps will become more user-friendly, while engineers with different backgrounds and from other fields will be able to build decentralized services themselves, with little to no blockchain expertise. And all this while we invest in making the Internet more decentralized and open for everyone.

I can’t wait to see the great ideas that people from other fields and without blockchain expertise come up with once they start building their services by leveraging middleware software. Is the next Google or Amazon coming from the blockchain space? We’ll see. What is clear is that the future looks bright!

@adlrocha - The State of DEXes

Decentralizing the gateways to crypto

Decentralized Exchanges (DEXes) are a key foundation for the DeFi ecosystem. They give you the ability to trade and swap one cryptocurrency for another peer-to-peer, without the need for third parties such as a centralized exchange or traditional financial institutions.

Still, we have to ask: Why worry about implementing decentralized exchanges if we already have their centralized counterparts? How do DEXes actually work, and more importantly, why are they important for the DeFi space? 

Decentralizing exchanges

If you are familiar with centralized exchanges (CEXes), you will have no trouble understanding how decentralized exchanges work. In a centralized exchange, a central entity or corporation (e.g. Coinbase) facilitates the trades between their users through a centralized order book which tracks every order on the platform. CEXes are responsible for aggregating these orders, matching them, and executing the actual buy and sell transactions on behalf of users. 

The fact that users need to delegate the execution of transactions to the exchange translates into them not having full ownership of their keys, as CEXes need to be able to send transactions on traders’ behalf to execute their orders. In practice, what CEXes do is pool users’ cryptocurrencies in a number of “hot” wallets controlled by the exchange, which are used to execute the actual orders. In many cases, if the order can be matched between users on the platform, the exchange doesn’t need to execute a transaction on the blockchain at all. 

What they do instead is update the balance allowances of the corresponding cryptocurrency for the users involved in the exchange on their centralized platform’s database. In the end, CEXes can be seen as traditional stock exchange brokers, but for cryptocurrencies. They serve as gateways between users and the underlying asset, offering them an interface to interact with them. This makes them really convenient to use, especially for newcomers. But they also have their drawbacks and risks, as we’ll describe in a moment.

Decentralized exchanges, on the other hand, do not rely on any centralized platform or third party to execute user orders. DEXes are able to perform the core operations of a centralized exchange and leverage a set of smart contracts to do it in a decentralized way. DEXes and their underlying infrastructure are also responsible for: receiving user orders, keeping the order book updated, matching orders, and executing them. In the case of DEXes, token exchanges are not done through a third party. Instead they are performed 1-to-1 on-chain between individuals, i.e. a full peer-to-peer exchange. Transactions are triggered by users in the corresponding blockchain, so they remain in full control of their credentials at all times.
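The order matching that a CEX runs off-chain, and an order-book DEX encodes in smart contracts, can be sketched with a simple price-time crossing rule. This is a deliberately simplified toy (real matching engines handle time priority, partial fills across many levels, and more):

```python
def match_orders(buys, sells):
    """Toy order book matcher: cross the best bid against the best ask
    while the bid price covers the ask price. Orders are dicts with
    'price' and 'qty'; trades execute at the ask price."""
    buys = sorted(buys, key=lambda o: -o["price"])    # best bid first
    sells = sorted(sells, key=lambda o: o["price"])   # best ask first
    trades = []
    while buys and sells and buys[0]["price"] >= sells[0]["price"]:
        buy, sell = buys[0], sells[0]
        qty = min(buy["qty"], sell["qty"])            # fill what both sides allow
        trades.append({"price": sell["price"], "qty": qty})
        buy["qty"] -= qty
        sell["qty"] -= qty
        if buy["qty"] == 0:
            buys.pop(0)
        if sell["qty"] == 0:
            sells.pop(0)
    return trades
```

The difference between the two models is not this logic but who runs it: a CEX runs it on its own servers against pooled hot wallets, while a DEX runs the equivalent on-chain, with users signing their own transactions.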

CEX vs. DEX, fight!

Both CEXes and DEXes have their own advantages and drawbacks that you should be aware of before choosing one or the other.

  • Ownership of credentials: This was briefly introduced above, and is the most obvious difference. While in CEXes the exchange is the custodian of your keys and your funds, in DEXes every order and every transaction is done directly by the user. Consequently, DEXes support the use of hardware wallets, and give you full control and responsibility over your keys. But with great power comes great responsibility, which means that if you lose control of your keys or your seed phrase, you immediately lose access to your funds without anyone being able to recover them.   

  • Liquidity: Traditionally, CEXes have been more liquid than DEXes, although this is gradually changing. For now though, CEXes have more users than DEXes, which translates into them having more liquidity in their platforms. Also, when trading on a CEX you’re only allowed to trade the tokens listed by the platform, which makes it easier to match orders, and for the corporation behind the exchange to provision the system with additional liquidity if needed by adding funds for their hot wallets. 

    That said, the gap between CEX and DEX liquidity is narrowing, because increased interest in DeFi has resulted in a spike in DEX users. Also, many DEXes are becoming what we call “Automated Market Makers” (AMMs), which make use of liquidity pools, making liquidity less of a problem.

  • Token pairs: DEXes enable true peer-to-peer exchanges between their users. As long as two users are willing to exchange one asset for another, these token pairs will be supported by the exchange. On a CEX, however, tokens need to be explicitly listed for users to be able to trade them. CEXes need to implement the pair exchange to support these token trades, limiting the tokens that can be traded on them.

  • Ease of use: CEXes are full-fledged trading platforms. They are an interface between the different blockchains and their users. The fact that there is a single entity operating the platform and orchestrating orders and transactions with the market means that they can build features that are hard to code in a decentralized manner using smart contracts (such as limit orders, stop losses, and other cool features from traditional financial markets).

    Also, through their centralized platforms CEXes usually offer the direct purchase of tokens using fiat money. This is why they are so convenient for users looking to make their first crypto investment. Although there are workarounds, DEXes in general do not support exchanges between crypto and fiat, which means that someone looking to use a DEX needs to already own some crypto.

  • Security: This is a huge win for DEXes. It’s no secret that centralized exchanges have been hacked multiple times. Delegating the custody of your keys to the exchange lightens the burden of key management for you, but concentrates the risk (and the reward for attackers) in the centralized exchange. While with DEXes an attacker needs to compromise the keys of every single user to gain access to their funds, in centralized exchanges an attacker can gain access to all the funds in the exchange just by hacking the platform and compromising the keys of the hot wallets used to manage and guard users’ funds.

  • Privacy and KYC: Regulators have an easier time regulating CEXes than DEXes because, as their name implies, they’re run by a single central entity. Regulators in almost every country in the world force CEXes to implement KYC (Know Your Customer) protocols for users to prevent money laundering and other illegal activities. Thus, on a CEX you are not trading privately anymore: the exchange knows each and every transaction you make, and may even need to inform the state of all your transactions. This is not the case for DEXes, where all you need to start trading is an identity on the blockchain and some tokens to exchange.

  • Fees: CEXes are significantly more expensive than DEXes. For DEXes the blockchain is their main infrastructure, while CEXes need to operate their own systems, which are also responsible for keeping your keys safe. These services need to be paid for in some way, which drives up the fees they charge.

How to build a DEX

You now have a clear view of the advantages and disadvantages of DEXes and CEXes. But there are also slight differences between different DEX platforms, depending on how they’re implemented. There are mainly three different approaches for implementing DEXes:

  • Using an on-chain order book. In this design, every transaction is written in the blockchain. Not just the actual purchase or exchange between user balances, but also user orders, i.e. user requests to buy or sell. It is the ultimate decentralization of exchange platforms. However, every operation needs to be completed on-chain, with its corresponding high cost and scalability limitations. Some examples of DEXes that use an on-chain order book are Bitshares and StellarTerm.

  • An alternative to this is to use off-chain order books. In this case, user orders are collected and matched off-chain, while the final transaction is settled on-chain. Since orders aren’t stored on-chain, this method can run into some of the security risks of centralized exchanges, but it doesn’t have the limitations of on-chain order books. In this approach, we trade some decentralization for performance and lower cost (often a huge dilemma in the blockchain and crypto space). Examples of DEXes that use off-chain order books are Binance DEX and EtherDelta.
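To make the trade-off concrete, here is a minimal, hypothetical sketch of the off-chain half of such a design (names and structures are illustrative, not any real exchange’s code): orders are collected and matched in a regular program, and only the resulting fills would be settled on-chain.

```python
# Hypothetical sketch of an off-chain order book: orders are matched
# off-chain, and only the resulting fills would be settled on-chain.
from dataclasses import dataclass, field

@dataclass
class Order:
    trader: str
    side: str      # "buy" or "sell"
    price: float   # price of token A denominated in token B
    amount: float  # amount of token A

@dataclass
class OrderBook:
    sells: list = field(default_factory=list)  # resting sell orders

    def match_buy(self, buy: Order):
        """Match a buy order off-chain; return the fills to settle on-chain."""
        fills = []
        # Cheapest sells first, so the buyer always gets the best price.
        self.sells.sort(key=lambda o: o.price)
        for sell in self.sells:
            if buy.amount == 0 or sell.price > buy.price:
                break
            traded = min(buy.amount, sell.amount)
            fills.append((buy.trader, sell.trader, traded, sell.price))
            buy.amount -= traded
            sell.amount -= traded
        self.sells = [o for o in self.sells if o.amount > 0]
        return fills  # only these settlements are written on-chain

book = OrderBook(sells=[Order("alice", "sell", 101.0, 2.0),
                        Order("bob", "sell", 99.0, 1.0)])
fills = book.match_buy(Order("carol", "buy", 100.0, 2.0))
# Carol fills 1.0 from bob at 99; alice's ask (101) exceeds carol's limit (100).
```

Note how only the final `fills` would hit the chain: all the sorting and matching happens off-chain, which is exactly where the performance gain (and the centralization risk) comes from.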

  • Finally, we have the new sheriffs in town, Automated Market Makers (AMMs), which forgo order books altogether. With order books, if someone wants to exchange token A for token B, there needs to be someone with B who is willing to trade it for A at an agreed-upon price. Without enough volume in the exchange this can be extremely hard. AMMs remove the need for counterparties to match orders, and introduce algorithms to set the price, letting you trade A for B regardless of whether there’s someone on the other end of the trade. This is facilitated through the liquidity pools we mentioned above. Briefly, platforms that use liquidity pools pay their users interest in exchange for keeping their funds in the smart contract that operates the exchange, so those funds can be tapped for trades. In this approach, individual users play the role financial institutions have in traditional markets, ensuring that the market stays liquid at all times.

    Cool, right? In practice, this is implemented in a smart contract that maintains user pools, pays an interest rate for their funds, receives trade orders from users, and automatically executes them against the pool if it has the required funds. AMMs also require every transaction to be performed on-chain in order to interact with the platform. This has an impact in terms of performance and cost, as transactions need to be made on L1, which means slower throughput and higher fees. Some examples of AMMs on the Ethereum blockchain are Uniswap and Sushiswap.
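For the curious, the pricing rule at the heart of most of these AMMs is the constant-product invariant (x · y = k) popularized by Uniswap. Here is a minimal Python sketch of the idea; the class name and the 0.3% fee are illustrative, not any specific platform’s implementation:

```python
# Minimal sketch of a constant-product AMM pool (the x * y = k rule).
# The class name and the 0.3% fee are illustrative assumptions.
class ConstantProductPool:
    def __init__(self, reserve_a: float, reserve_b: float, fee: float = 0.003):
        self.reserve_a = reserve_a  # token A deposited by liquidity providers
        self.reserve_b = reserve_b  # token B deposited by liquidity providers
        self.fee = fee              # trade fee that rewards liquidity providers

    def get_amount_out(self, amount_in_a: float) -> float:
        """How much token B a trade of `amount_in_a` token A yields."""
        amount_in_after_fee = amount_in_a * (1 - self.fee)
        k = self.reserve_a * self.reserve_b          # the invariant
        new_reserve_a = self.reserve_a + amount_in_after_fee
        new_reserve_b = k / new_reserve_a            # keep x * y = k
        return self.reserve_b - new_reserve_b

    def swap_a_for_b(self, amount_in_a: float) -> float:
        amount_out = self.get_amount_out(amount_in_a)
        self.reserve_a += amount_in_a
        self.reserve_b -= amount_out
        return amount_out

pool = ConstantProductPool(reserve_a=1000.0, reserve_b=1000.0)
out = pool.swap_a_for_b(10.0)  # slightly less than 10 B: fee plus price impact
```

This is why no counterparty is needed: the pool itself quotes a price from its reserves, and the deeper the pool, the less each trade moves that price.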

DEXes in action!

The first premise of a DEX is that it’s easy to understand, though it is not the easiest DeFi service to use. Let’s illustrate this with a quick walk-through of how to perform a token exchange with Uniswap. The first thing that needs to be done is to get our hands on some Ether. Even if we don’t want to exchange Ether, we’ll need some to pay for the transactions required to place the order and execute the trade in the system.

Once we have our wallet full of Ether, we can go to Uniswap and connect our wallet, for instance Metamask.

With our wallet connected, we can start swapping (i.e. trading or exchanging) tokens right away. Uniswap will find the most efficient swap to go from one asset to the other according to the liquidity available in the pool. To perform any trade we’ll have to pay a few fees: the basic fee to pay for gas to execute the transaction on Layer 1, and an additional fee to pay the liquidity providers, who are staking their tokens to add liquidity to the system and make our trades possible.

We sign and submit the transaction: 

Then we just have to wait for the transaction to be executed in the smart contracts and for the swap to become effective, and we are good to go.

But there’s still more… L2 DEX

Uniswap is built on top of Ethereum’s L1. But what if we don’t want to pay Ethereum’s high fees, or worry about whether our transactions are going through when the network is congested? To overcome this issue, different projects building DEXes on Layer 2 are emerging.

A good example of this type of DEX is MetisSwap. MetisSwap is a Layer 2 Decentralized Exchange application built on Metis Layer 2 Beta Testnet.

If you recall from previous publications, Metis is a Layer 2 platform based on Optimistic rollups that connects to the Ethereum mainnet and adds numerous additional features to the standard L2 projects. The team behind Metis recently released the Beta version of its testnet, which includes an implementation of a Uniswap hard fork DEX called MetisSwap. As you can see in this post, using MetisSwap is quite straightforward if you know how to use other DEXes, such as Uniswap. So what exactly does MetisSwap give us when compared to other DEXes? 

  • It’s built on Metis’s L2 platform, so instead of having to commit every swap transaction to the Ethereum mainnet and pay its corresponding fees, you can trade over L2 and pay Metis’s typical transaction fees of about 1 cent, with the enhanced performance and transaction throughput of an L2 platform. These transactions will eventually be committed on-chain through Metis’s Optimistic rollup, but this is transparent to us, which makes it really convenient.

  • Metis has built-in support for DACs (Decentralized Autonomous Companies), which are able to seamlessly create their own tokens. So if you are looking to launch your own crypto project and you want to allow your users to exchange your tokens, you don’t have to worry about creating your own ERC20, having your token listed on an exchange, or waiting for there to be enough liquidity for your token to be exchanged. With MetisSwap, you can create your own token in a few clicks, and start exchanging it for other tokens over L2 without having to write a single line of code.

In other words, MetisSwap offers all of the built-in features of Uniswap but with the advantage of using a L2 platform, and with the added bonus of allowing you to mint and swap your own token as desired.

Closing words

DEXes have seen an increase in interest since the surge of DeFi use cases. Users want to be able to swap their tokens seamlessly without having to rely on third parties, and there are multiple ways to do so. These range from the fully decentralized approach of an on-chain order book to the innovative approach of a Layer 2 AMM, which removes much of the complexity and limitations of more traditional L1 approaches, providing the perfect setup for DeFi users without deep technical knowledge.

Layer 2 DEXes will be able to take DEXes much further when it comes to matching the ease of use, convenience, and feature richness of CEXes... with the added security that comes with decentralization. Stay tuned, the future looks bright!

@adlrocha - The siege of open source software?

Digressions on Github Copilot and more

Github Copilot’s beta is out! And with it, a heated debate on the use of open source software by big tech companies. For those of you who haven’t read about it yet, Github Copilot is an AI tool in the form of (at least for now) a VS Code extension that helps you write code by “giving suggestions for whole lines or entire functions inside your editor”. Github Copilot is powered by OpenAI, and is trained on billions of lines of public code to achieve its task. It uses an OpenAI engine called Codex, which is more capable than GPT-3 at code generation (in the end it seems to be a GPT-3 trained on billions of lines of code).

So far so good, it is not as if Github Copilot was the first AI tool of its kind to help developers in their job of writing code. You are probably familiar with Tabnine, which has been around for a few years now and does basically the same as Github Copilot.

What’s all the fuss about then? Well, it may have been implicit in the case of Tabnine, but as it was a small startup no one really cared that much. In the case of Github Copilot it is blatant: these tools have been trained using the code you’ve worked so hard to produce, and they are suggesting snippets of it to other developers.

Initial reactions to Github Copilot went from the regular “oh man, we are doomed, we’ll be out of our jobs in no time” to “my productivity will skyrocket with this tool”.

Then people started thinking a bit more deeply, and reactions looked a bit more like this:

I sometimes use this joke with my non-engineer friends to explain what I do for a living (and for fun): “we developers are just a dumb interface between StackOverflow and the application/system we want to implement”. So it was only a matter of time before counter-arguments in favor of Github Copilot along the following lines arose:

But there’s a huge difference between using Github Copilot-generated code and code snippets from StackOverflow in your program: the source of the code. When you use code from a StackOverflow thread, the person answering that question is willingly sharing their code snippet with you (and others) to help you. Copilot-generated code may be inferred from pieces of code from one of your repositories that for some reason you may be quite hesitant to share: either because it is protected by a non-permissive license, or because you worked hard on it and you are too selfish to share it with others. It doesn’t matter; the point is that you own that code, and you should be free to do whatever you want with it.

The fact that you are hosting the code on Github shouldn’t be enough reason for anyone to use it to train their AI. I don’t think we signed up for this when we created a Github account (or at least I am personally not aware of it, but maybe there’s something in Github’s terms and conditions I’ve missed. Please let me know if this is the case).

Licenses are there for a reason

Github is full of open source code under permissive licenses that anyone can openly read and use in their own projects without having to ask permission from anyone. However, depending on the specific license used, there may be constraints, requirements, and limitations on the use of the code. We may be able to use code from a project as long as we don’t profit from the derived work; or certain licenses allow the use of the project’s code as long as every work derived from it is open source under the same license.

Github Copilot would have been a great tool if it had been trained exclusively using code under permissive licenses that didn’t require acknowledging the original author of the code (or other license-related requirements). Or better yet, if, according to the license of the project a developer is working on, the type of code used to train the model or make suggestions could be chosen accordingly.
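To make that idea concrete, here is a hedged sketch of what filtering a training corpus by license could look like. The license lists and the repo metadata format are entirely hypothetical, not Github’s actual pipeline:

```python
# Hypothetical sketch: keep only repos whose declared license permits
# unattributed reuse before adding them to a model's training corpus.
# The license set and metadata format are illustrative assumptions.
PERMISSIVE_NO_ATTRIBUTION = {"CC0-1.0", "Unlicense", "0BSD"}

def select_training_repos(repos, allowed=PERMISSIVE_NO_ATTRIBUTION):
    """Return the repos whose license is in the allowed set."""
    return [r for r in repos if r.get("license") in allowed]

repos = [
    {"name": "liberal-lib", "license": "0BSD"},
    {"name": "copyleft-tool", "license": "GPL-3.0"},  # excluded: copyleft terms
    {"name": "unlicensed-code", "license": None},     # excluded: all rights reserved
]
training_set = select_training_repos(repos)
# Only "liberal-lib" survives the filter.
```

Even a crude filter like this would honor the clearest cases: copyleft projects and projects with no license at all (which default to all rights reserved) would simply stay out of the corpus.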

Developers of open source software use licenses as the communication channel to let other developers and users know what they are allowed to do or not with their work. But my feeling after reading tweets like the following is that Github didn’t pay much attention to this when implementing and training Copilot:

And don’t get me wrong, I am totally in favor of tools like this that improve our productivity and can make our lives easier, even if they need billions of lines of code to be trained, as long as this is done the right way. If you’ve contributed to open source software, or even just hosted your code on Github without a specific license, you are a small part of the reason why Github Copilot works. Have you been (or will you be) rewarded in any way for this contribution? Not at all. You’ll probably have to start paying a subscription if you want to start using Github Copilot.

Github will be profiting from your work, probably even in cases where you explicitly stated in your project’s license that no one could profit from work derived from it. Unfortunately, there is no easy way of enforcing this. And what would happen if everyone started doing things like this?

Many may argue that the same way you can use Google services for free in exchange for your data (which is essentially yours, and yours alone), Github can use your code to train their models in exchange for all that free hosting and the unlimited private repositories you get. But while this is quite clear when you create a Google account, I don’t think it is that clear when creating one on Github.

Time to self-host our critical services?

All of this makes me quite sad. The fact that we rely more and more on big tech services for our day-to-day lives means that we are quite defenseless against them launching other Copilot-like projects using our hard work and personal data. You want to use Github? Then you have to deal with them doing what they want with your code. Period. This is just one more example of how broken the Internet and its dynamics are these days. But what can we do to solve it? I can’t see an easy solution, apart from building an Internet substrate that enables people to escape these twisted dynamics.

Without this substrate, where everyone can own their own piece of the Internet without having to rely on others, the only escape from the influence of big tech, and from moves like Github’s, is to self-host all the critical services you rely on, like this person has done. Quoting from that website:

“I do not agree with GitHub's unauthorized and unlicensed use of copyrighted source code as training data for their ML-powered GitHub Copilot product. This product injects source code derived from copyrighted sources into the software of their customers without informing them of the license of the original source code. This significantly eases unauthorized and unlicensed use of a copyright holder's work.

I consider this a severe attack on the rights of copyright holders so therefore I cannot continue to rely on GitHub's services”

But this is not a panacea. Self-hosting every critical online service we depend on day to day is a lot of work. We have to worry about hosting the infrastructure for the service, maintenance, upgrades, security risks, etc. Of course it depends on the level of control you want over your services, but you most probably won’t be able to achieve the SLA of big tech services.

If in spite of all of these inconveniences you still want to start hosting some of these services yourself, this repo is a great start. It walks you through how to deploy a list of super useful services: your own VPN, web hosting, cloud storage, calendar, chat server, and a long list of other self-hosted open source alternatives.

Can open source software be closed?

Unfortunately, Github Copilot is not an isolated example of how big tech is besieging open source software. Visual Studio Code is another interesting case I recently learned about. You may be thinking that when you install Visual Studio Code on your machine, what you are using is a build of the open source code hosted in this repo. Well, apparently this is not the case, and you would be better off downloading the code and building it yourself.

“Microsoft’s vscode source code is open source (MIT-licensed), but the product available for download (Visual Studio Code) is licensed under this not-FLOSS license and contains telemetry/tracking. According to this comment from a Visual Studio Code maintainer:

When we [Microsoft] build Visual Studio Code, we do exactly this. We clone the vscode repository, we lay down a customized product.json that has Microsoft specific functionality (telemetry, gallery, logo, etc.), and then produce a build that we release under our license.

When you clone and build from the vscode repo, none of these endpoints are configured in the default product.json. Therefore, you generate a “clean” build, without the Microsoft customizations, which is by default licensed under the MIT license.”

This is why projects like VSCodium, a free/libre open source binary distribution of VSCode, have to exist. Apparently every time we use VSCode we are sending data to Microsoft. Some people may be comfortable with and aware of these practices, but others may think this is outrageous. Why aren’t these companies more transparent about what they do with their users’ data, so that users can at least make an informed decision about whether to use them or not? Is it because they know what the answer would be?

This is a weaker example of the siege of open source software than Github Copilot, but still one worth being aware of. I personally don’t expect cases like this to stop any time soon.

Elastic is another example of a company that has made a related move in this direction of besieging open source software, by changing the license of some of its projects (which millions of people probably depend on) to a more restrictive one in order to increase its profit. Again, I am not against companies profiting from their work and the projects they create; that is legit and awesome, and I personally would do the same. What I am against is “changing the rules of the game midway”.

I haven’t talked to any contributor to Elasticsearch, for instance, but I am really curious to know how they felt when they learned that the license protecting all of their hard work on an open source project eventually changed into a more restrictive one. They probably shared the values of the project they were contributing to, and overnight, because someone unilaterally chose so, one of the key foundations of the project they had voluntarily worked hard on changed.

Developers should be more aware of the licenses of the projects they contribute to, and of their consequences; and the companies behind that software should be more respectful of their licenses and of their contributors. It all comes down to fairly rewarding everyone for their hard work, because open source software may seem free by design, but behind it there is a lot of hard work, and ethics should prevail. Even if that reward is just sticking to the initial values of the project out of respect for its contributors. Ask anyone: open source software is almost never about the money.

Can we fix it?

But coming back to potential solutions to the problem at hand: what if we want to self-host our own services without having to worry about the overhead of maintenance, and of ensuring an SLA good enough to keep the service usable on a daily basis? Here is where a new substrate for the Internet is needed. A substrate where we can be in control of our data and our services. Regular readers of this newsletter know what is coming next: we need to fix the Internet, and decentralizing it to minimize our reliance on big tech is the first step towards this goal.

Filecoin and IPFS are good examples of how decentralization and web3 protocols can help us regain control and build self-hosted services with redundancy and a great SLA, without the nightmare of having to maintain the infrastructure. With these protocols we maintain the infrastructure collaboratively; we share the burden between all the participants of the system. It is not every man and woman for themselves, nor delegating everything to the big tech giants. It is something in between.

I am really optimistic about the future of the Internet and Web3. We are getting to the point where all the foundations are there; we now have to make it better than Web2, not only for the people behind Web3, but for the users of Web2, i.e. everyone else. Do you want to join this exciting endeavor? Ping me and let’s have a chat! For the rest, see you next week.

@adlrocha - 2.0

Upgrading the newsletter for subscribers

Until now, this newsletter has been completely free. There was no incentive to pay a subscription. In spite of this, a dozen people chose to pay a subscription as a way of rewarding and promoting all of the work I was doing every week for free. I wasn’t giving any additional value to them: they were getting the same publications and the same attention free subscribers were receiving. Heroes.

One of the goals I set for the newsletter in 2021 was to start monetizing in some way all of the work I was putting into it. With my current availability, it is getting increasingly harder to write high-quality publications every week. I was afraid that without the right motivation I would stop writing. Maybe, if I started making some additional money with the newsletter, I would be encouraged to be there every week for my subscribers. What better external motivation than a few additional bucks (or crypto, of course I always accept crypto as payment) at the end of the month? This was the rationale behind this goal.

I’ve been thinking a lot lately about the best way of monetizing the newsletter without depriving it of its essence, and I realized that the best way to achieve this is by giving additional value to paying subscribers. Why would someone be encouraged to promote my work, if they can get it for free? But writing exclusively for my paying subscribers was not an option. I also enjoy writing for a broader audience, actively interacting with it, and having insightful discussions in the process (this is why I started writing in the first place: to build a community and long-lasting connections). So what could I do? 

After some thought, I feel I’ve found the perfect compromise. Welcome to @adlrocha newsletter v2.0 (if this works, in the next major release I should change the name of the newsletter, as it won’t be exclusively @adlrocha’s, but everyone’s who supports it).

Release v2.0 🚀

I am really excited to announce this new release of the newsletter. This release includes big changes for paying subscribers, some minor changes for free subscribers, and some constraints for non-subscribers.

Content now expires! 😱

The big change for all readers of my newsletter is that, from now on, publications will expire for non-subscribers. Everyone on the Internet will be able to read my articles for the first seven days after their publication. After these seven days, only subscribers of the newsletter will have access to them, i.e. the full archive will only be available to subscribers.

I usually share my publications on social networks and HackerNews. They are a great source of new readers and subscribers. Unfortunately, they usually end up being “one-time readers”. If I want to build a community around my newsletter, readers should feel part of this project, and have some incentive to subscribe and become part of this community.

My rationale behind this feature is that readers who enjoy my work will subscribe to prevent content from expiring, so they can read it on their own terms. Once subscribed, if they feel like it they will join the discussions and interact with the community; if not, at least they’ll have the content available any time in their inbox. You’ll come for the content and (if this works) stay for the discussions, the ideas, and the relationships.

New perks for paying subscribers 💸

Paying subscribers have received a significant upgrade in this new release:

  • They can now influence my backlog by adding new topics to it. Is there anything you’ve been hoping for me to write about? Now is your time to make it happen. Suggesting a new topic is simple: the only thing you need to do is fill in this form with your subscriber email address and a brief description of what you want me to write about. Easy peasy. My subscribers’ backlog is a FIFO queue. As topics arrive I will write about them sequentially until the queue empties, at which point I’ll come back to my own personal backlog. I am considering making the subscribers’ backlog public so that subscribers can vote on its topics, promoting them to the front of the queue, but I’ll leave this for the next minor release (2.1). I will wait for some additional feedback about this feature before jumping into new things.

  • I am also adding a new feature that I call the “Monthly Ask Me Anything Webinar”. Every last week of the month I will share a new form with all paying subscribers asking about their availability and willingness to hold an AMA webinar. In this form, I will suggest different formats for the monthly session. The formats will include:

    • Traditional AMAs and open discussions about any topic of interest to the audience.

    • Live presentations and demos about any new technology, or something I may know about that is of interest to the audience.

    • “Reading parties”, where I will share a list of papers, then read and present the most voted one.

    • Any additional session/format subscribers may come up with. The subscribers form and my DMs are open for anyone to suggest and give feedback.

  • And of course, paying subscribers have access to my full archive. For them, content never expires.

A new class of subscriber: The sponsor 🤴

I honestly don’t expect anyone to become this new class of subscriber (at least for now), but I wanted to start experimenting with the idea of having a top class of subscribers.

In this release, sponsors are paying subscribers with the ability to book an hour of my time for a 1:1. Is there anything you think I can help you with? Book that hour. Do you think I can help you design your next decentralized system? Use the hour. Do you want my opinion on some matter in the crypto space I may know about? Do you want to share ideas, or for me to share with you some of my crazy ideas to see if we can build a company together? That also works for this hour.

Sponsors will be earning additional benefits in future releases, but I need to think a bit more deeply about it. Something I am considering is making sponsors part-owners of the newsletter, sharing a stake of the profits with them, but this is still under design. This is why, instead of having a fixed price for sponsor subscriptions, new sponsors are allowed to name their price in this release (this will be revised in future releases).

Sponsored publications 🗣️

Last but not least, I am quite transparent about the metrics of my newsletter. Someone looking to sponsor their project, product, technology, open positions (you name it) is able to know the reach a sponsored publication on my newsletter would have, and the kind of audience it would reach. This is why I’ve decided to leave a space in every publication for anyone to sponsor whatever they want in it. Do you want to give it a try? Fill in this form.

A brand new design 🖌️

Do you see anything different? The newsletter has undergone a slight redesign.

The first release of (hopefully) many more

This is the first major upgrade of the newsletter in two years. From now on this newsletter will follow development and release cycles analogous to those of software products. These past two years I’ve been focused on building an audience and a way to unleash my passion for learning and writing. During this time I’ve learned a lot, had insightful discussions, and met a ton of incredible people… but I want more for this newsletter.

I don’t want an audience, I want a community of learners, creators, and the restless. Will we make it? We’ll see. I will track your engagement throughout the next few months and decide whether to make new releases or downgrade to v1.0. Whatever you like the most. In either case, see you next week!
