@adlrocha - I am joining Protocol Labs!
Thank you, Telefónica, for two great years of impactful work.
Today I want to share some personal news I’ve been wanting to announce for a while: I am becoming a Labber! After years of following (and using) the amazing work being done at the core of Protocol Labs with projects such as IPFS, libp2p, Filecoin (and many more), I am becoming one of them. Specifically, I am joining their ResNetLab as a Research Engineer. Those of you who know me personally will clearly understand how cool it is for me to be able to work on the hard problems tackled by this group, and how lucky I am to be doing so with the amazing team I am going to work with. I’ll definitely do my best to learn as much as possible from them while contributing all that I can from my personal experience and my work. I am beyond excited about the opportunity!
Heading towards my definition of professional success
For those of you who don’t know me personally, let me try to further explain my excitement about this opportunity. As Hamming stated in his “You and Your Research” talk at the Bell Communications Research Colloquia Series (which I have already talked about in this newsletter), the best way to succeed professionally (according to my definition of success, of course, which I think is a really personal concept) is to “work on the right problem in your field”, “work on problems that can become mighty oak trees”, and “work on things you won’t look back and regret on your death bed”. These are the three quotes that currently govern my professional decisions. And now I am fortunate enough to be joining a team and a work environment where I will be able to fulfill all of these requirements (and many more that make up my current definition of professional success), heading with them towards my view of success.
The mission of the Resilient Networks Lab (ResNetLab) is to build resilient distributed systems by creating and operating a platform where researchers can collaborate openly and asynchronously on deep technical work, while Protocol Labs’ mission is to “drive breakthroughs in computing to push humanity forward”. The group’s mission, the company’s mission, and my role in the company have all the keywords I would use to search for my “dream job”: [“research”, “engineering”, “computing”, “resilient distributed networks”, “push humanity”], and I would add a few more keywords that I feel are implicit in the aforementioned missions: [“fixing the Internet”, “open source software”]. Now you’re starting to get my excitement even if you don’t know me, right?
To finish putting you in context and to help you understand why this is so cool for me: the other day I was having dinner with a really good friend of mine. We’ve been best friends since high school, so you can imagine that he has endured long hours hearing me talk about my crazy ideas about how the Internet is broken, the urgent need to recover our privacy, the importance of science in society, the benefits of decentralization, and my obsession with self-organizing networks (he is an economist specialized in debt, so he knows nothing about computing, technology, etc.). When I told him the good news he, obviously, didn’t know who Protocol Labs was. I briefly told him about what they do, what their focus is, and what my contribution to the company will be, and his answer was: “wtf! but that is the kind of company you have been looking to start since high school”. Again, he knows nothing about technology; he came to this conclusion after years of me spamming him on these topics.
Some of the problems ResNetLab are working on
Now I will take the liberty of copy-pasting directly from ResNetLab’s site some of the problems they are working on, to see if I can convey to you part of my excitement about them:
The lab’s genesis comes from a need present in the IPFS and libp2p projects to amp their research efforts to tackle the critical challenges of scaling up networks to planet scale and beyond. The Lab is designed to take ownership of the earlier stages of the research pipeline, from ideas to specs and to code.
Preserve users’ privacy when providing and fetching content: How to ensure that the users of the IPFS network can collect and provide information while maintaining their full anonymity. My feeling is that a consistent solution to this problem could be huge not only for IPFS, but for the overall architecture of the Internet.
Mutable data (naming, real-time, guarantees): Enabling a multitude of different patterns of interaction between users, machines, and both. In other words, what are the essential primitives that must be provided for dynamic applications to exist, and what are the guarantees they require (consistency, availability, persistence, authenticity, etc.) from the underlying layer in order to create powerful and complete applications in the Distributed Web? Real-time guarantees in distributed systems are something I have been looking to explore for a while.
Human-readable naming: You can only have two of three properties for a name: human-meaningful, secure, decentralized. This is Zooko’s Trilemma. Can we have all 3, or even more? Can context related to some data help solve this problem?
Enhanced bitswap/graphsync with more network smarts: Bitswap is a simple protocol and it generally works. However, we feel that its performance can be substantially improved. One of the main factors holding performance back is that a node cannot request a subgraph of the DAG, which results in many round-trips in order to “walk down” the DAG. The current operation of bitswap also very often leads to duplicate transmission and receipt of content, which overloads both the end nodes and the network.
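To give a feel for the round-trip problem described above, here is a minimal Python sketch (a toy in-memory store and a hypothetical `fetch_block` function, not the real Bitswap API): because a client only discovers a block’s child links after the parent block arrives, a naive walk pays one round-trip per block, whereas a single subgraph request could fetch the whole DAG at once.

```python
# Toy content-addressed store: cid -> (data, child cids).
# This stands in for the network; every fetch_block call models one round-trip.
STORE = {
    "root": ("dir", ["a", "b"]),
    "a": ("file-a", []),
    "b": ("dir-b", ["c"]),
    "c": ("file-c", []),
}

def fetch_block(cid):
    """Hypothetical network request for a single block (one round-trip)."""
    return STORE[cid]

def walk_dag(root):
    """Breadth-first DAG walk, counting the round-trips a naive client pays.

    Child links are only known after the parent block is received, so the
    client cannot ask for the whole subgraph up front.
    """
    round_trips = 0
    frontier = [root]
    blocks = {}
    while frontier:
        next_frontier = []
        for cid in frontier:
            data, links = fetch_block(cid)
            round_trips += 1
            blocks[cid] = data
            next_frontier.extend(links)
        frontier = next_frontier
    return blocks, round_trips

blocks, trips = walk_dag("root")
print(trips)  # 4 blocks -> 4 round-trips; a subgraph request could need just 1
```

In a real deployment each `fetch_block` crosses the network, so latency grows with DAG depth and size; letting a node express “give me this whole subgraph” in one request (the graphsync direction) is what collapses those round-trips.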
Routing at scale (1M, 10M, 100M, 1B.. nodes): Content-addressable networks face the challenge of routing scalability, as the number of addressable elements in the network rises by several orders of magnitude compared to the host-addressable Internet of today. Remember when I mentioned self-organizing networks above? This problem and the next one are quite linked to my crazy ideas around this type of network, so I can’t wait to see what I face on this front.
PubSub at scale (1M, 10M, 100M, 1B.. nodes): As the IPFS system is evolving and growing, communicating new entries to the IPNS is becoming an issue due to the increased network and node load requirements. The expected growth of the system to multiple millions of nodes is going to create significant performance issues, which might render the system unusable. Despite the significant amount of related literature on the topic of pub/sub, very few systems have been tested to that level of scalability, while those that have been are mostly cloud-based, managed and structured infrastructures.
The legacy I leave in Telefónica
This all looks like the perfect story, but I am not going to lie to you: this journey has been quite bittersweet. I am sad to leave my current job at Telefónica and the amazing team of talented individuals I leave there. Don’t get me wrong, at Telefónica I had one of the closest jobs to my “dream job” I could have had in Spain, and in these two years I’ve had a blast and I haven’t stopped learning. It is not that my job there didn’t have all the ingredients for me to achieve professional success; it’s just that at Protocol Labs I will be able to have the focus on my goals that I am currently looking for, speeding up this quest. So now you also get why leaving my current job has not been easy (quitting a job you enjoy and are comfortable with never is).
I don’t want to bore you with more personal matters, I know you come every Sunday to this newsletter probably for the tech and not for the gossip. But before I close this personal publication, I want to briefly share some of the things our amazing team in Telefónica managed to achieve in these two years:
It all started with the development of a tokenization platform. We wanted to abstract companies and internal units from the complexities of blockchain technology. We wanted to save them from having to worry about “where and how” to issue their tokens (Ethereum? Hyperledger Fabric? BigchainDB? why not all of them?), and from having to hire specialized talent (which was lacking at the time, and still is) to build their PoCs. With this system we managed to significantly ease the development of blockchain-based use cases and prototypes (internally and externally), and we saw successful projects (such as Karma) built on our technology.
With the success of the tokenization platform we became ambitious, and we came up with the concept of TrustOS, which would end up becoming Telefónica’s blockchain product to “add trust into the operations of corporations”. TrustOS was included in Telefónica’s Activation Programme, and many startups and SMEs are already using it through this programme.
But all the work behind TrustOS hasn’t been exclusively product development and integration; it is also the result of long hours of research and development that led to:
An exhaustive analysis of performance best practices in Hyperledger Fabric for production networks (the conclusions were shared in a series of publications in this newsletter, and at several conferences/meetups).
The filing of five different patents, protecting the IP of schemes included in the product in a broad range of fields, such as: the management and recovery of blockchain identities leveraging the SIM card and the telco infrastructure; ways of increasing the trust of private blockchain networks and the execution of smart contract logic; the deployment of federations of blockchain networks; and the development of new consensus algorithms for private blockchain networks.
The standardization of a new method of idea generation and collaboration for the team (this is something I haven’t shared yet in this newsletter. Expect a follow-up on this).
The participation in several European H2020 Innovation Projects with amazing teams from universities and corporations.
Moreover, as part of our work on TrustOS, we also realized the need for a common decentralized identity framework for authentication and authorization in corporate blockchain networks. It all started as a way of fixing a specific need we had in TrustOS related to identities in general-purpose Fabric networks, but it kept growing and being generalized internally up to a point where we felt it could become a full-fledged open source project from which everyone could benefit. This is how TrustID was born, and it became an open source project hosted in Hyperledger Labs (check the repo).
And many many more things that I may have missed in this brief recap of my last two years there (the hackathons we’ve won, the meetups we’ve given, etc.).
And I know for sure that this is just the beginning for them; I am really looking forward to seeing all the successes yet to come for this amazing team (because, let’s be honest, I was definitely the one weighing them down with my crazy ideas ;) ). Thank you for these two really fun and exciting years: @jota_ele_ene, @joobid, @diegoescalonaro, @_mtnieto, @aggcastro, @cesssRC, @carlosalca94.
On Monday I start an exciting new professional endeavor without the amazing team from the pic above; let’s see how it goes. Wish me the best, and see you next week :)