On the 7th of November, Bitcoin SV lead developer Daniel Connolly joined Bitstocks’ CTO, David Arakelian, in the studio to discuss the February 2020 Genesis upgrade and unleashing Bitcoin for mass adoption.

In this rather technical session, the pair aim to answer some of the Bitcoin SV community’s burning questions about the upgrade that will restore the BSV protocol nearly entirely to the original Bitcoin protocol.

Spoiler alert: In this podcast, Daniel revealed that the Genesis upgrade is ready to be rolled out, even though its release is scheduled for the 4th of February, 2020.

We invite you to read the edited transcript below or watch the video. If you’re short on time, the video index will allow you to jump straight to the topics that interest you most.

Watch: Quick Reference with Timestamps

00:38 - Daniel’s background, history, current projects and motivation for working on Bitcoin SV.

02:04 - A day in the life of Bitcoin SV’s lead developer.

04:31 - The tipping point that triggered the block size debate.

09:22 - The features of the original Bitcoin that Genesis will unlock.

13:19 - Enabling more transaction types and layers going forward.

15:28 - The explosion of OP_RETURN applications and fall-out.

18:15 - The misconception of decentralisation through non-mining node operators.

23:29 - Alternative business models for Bitcoin miners.

25:22 - The point of the Nakamoto Consensus mechanism.

28:57 - Mining-related upgrades in 2019.

32:08 - New software releases going live with Genesis.

37:25 - How the assumption of a 1MB block size affects software development.

38:04 - The sunsetting of P2SH (Pay To Script Hash). 

40:30 - SegWit - Hacking the Bitcoin system against its design.

43:02 - The plight of hobbyist network validators. 

46:39 - The role that Bitcoin Cash is playing in the market.

50:19 - Genesis upgrade - Are we ready?

 

Read: Bitcoin SV Genesis Upgrade - Sooner than you Thought?

David: Hello, everyone and welcome to yet another podcast brought to you by Bitstocks. Today we have a very special guest, Daniel Connolly. The topic of this discussion will be Genesis and unleashing Bitcoin to the masses. It will be a little bit technical, but I hope we can cover all the questions about what Bitcoin should be. And who better to answer these questions than Daniel who is working on the project continuously. 

Daniel, will you introduce yourself and tell us a little bit about your background, your history, what you're working on, and how come you are working on Bitcoin right now?

Daniel: Okay, so my name is Daniel. I've been in IT my whole life; I'm an IT professional. I used to work at the United Nations. I got into Bitcoin around 2011, and it fascinated me from a technical point of view. Later on, all these other things were added on, like the economics of the system and the value of the coin. But what interested me, in the beginning, was the technology.

 

The History of the Bitcoin Block Debate

In 2017, with the big block debate, I was a big supporter of Bitcoin Cash and I contributed to open source projects for it. In the course of doing that, I met Steve Shadders, who was also working on open source software, and he pulled me into nChain. He suggested I look into it, and I thought it was a great opportunity to come and work in the space, so I took up the offer. I've been with nChain for over a year and a half now.

David: So what would you say is your current role at nChain? What are you doing on a day-to-day basis?

Daniel: Well, it started out as developing software and leading teams developing software. And now I'm moving more into the strategy about the infrastructure layer like the Bitcoin SV node implementation, and other projects we have going on. So I'm helping the teams understand Bitcoin. Most of them are up to speed now, and they don't need that so much anymore. And then there’s setting the technical direction for the future.

 

Understanding Bitcoin in a Holistic Way

David: I see. I think you're right in that explaining to people what Bitcoin is, is a very difficult task on its own. Some people just take Bitcoin as a speculative asset while other people, like yourself, are looking at it from a technical angle. But it's very, very difficult to genuinely understand what Bitcoin is and how you can use Bitcoin. For example, for me, it’s only been the last two or three years that I’ve understood Bitcoin as a true incentivised system. It's not just a program which runs, you know; it's very harmonious. It works within an ecosystem where everybody is incentivised to do something.

That aspect is the most important for me when it comes to understanding Bitcoin and what it can do. So this is why I'm really, really excited about the Genesis upgrade. If you think about the Genesis upgrade, it's about unleashing what Bitcoin should be and how it can start growing. 

I think you did mention the 2017 block debate, but that debate started much earlier, though it wasn’t mainstream yet. The developers and some people were still talking about the size that blocks should be. And I think it was a topic that had always been out there. What do you think was the tipping point that started the genuine discussion about block size? What was the point where people realised that the block size was a serious problem for Bitcoin that we needed to solve? What would you say was the timeline for that?

Daniel: Yeah, that started much earlier than 2017, and there were big divisions in the Bitcoin community about it. I think the interesting part, as you indicate, is that it was mostly a technical discussion. What's come out more recently is the economic side of it, and the shift in perception from thinking of Bitcoin as a technical system that does this and that, to understanding that it is a bigger system - an economic system. The code that implements it is just a tool to realise that system.

When we talk about removing the block limitation entirely with Genesis, we see some of the discussion that's come up about how miners will organise amongst themselves, what the maximum acceptable size is, and how they're going to do this. These are really interesting conversations, and I don't think those things should be left to technology. They're complex and can't be solved by a simple formula inside a piece of source code.

David: Correct. So I think the point that you're mentioning is very important, because the current developers just look at it one way: what is the perfect technical solution for the problem? But they ignore what surrounds that problem. That problem might derive from something completely different than the technical issues. So for me, when I started looking at the different sections of Bitcoin - the incentives for miners to continue securing the chain, or for users to use the chain - it became obvious that the block size itself plays a major role in maintaining that sustainable level of harmony. Around 2016 or 2017, I realised that the block size shouldn’t be limited at all. As you know, there are even some people who are suggesting reducing the block size to 300 kilobytes, to keep it decentralised. I'll just put it bluntly: I don't understand this suggestion.

 

 

Tiny Bitcoin Blocks = Huge Transaction Fees

Daniel: Yeah, you know, I've been in this industry (IT) for a long time. I've seen things progress. I used to use FidoNet, which was a pre-internet thing. Since then, internet bandwidth has grown tremendously, pushed by companies like Netflix. I watch Netflix at home all the time; I don't have any other TV right now. If you compare the bandwidth that takes on my home network to 10 years ago, it’s huge. But now it's trivial; it’s nothing. 300KB block sizes… what are you supposed to do with that? The one thing I can see happening is that it would increase transaction fees, but is that what people want? Do they really want to go back to the $70 transactions?

David: This is something we mentioned in a previous podcast, and I think it's important to say it again.

People who used to pay such high fees have already started looking for alternatives. They don't keep using a system that’s extortionate. They’ll keep using the system until they find an alternative, but once they find one, they will stop using it. So this is why we’ve seen a decline in Bitcoin usage, from merchant points and from normal users. Because the only people who would be willing to pay that amount of money are perhaps companies or people who have to transfer huge amounts of money.

We’ve experienced such cases in our company, but we don't want to pay that much anymore. It doesn't make sense because we don’t get the value we expect in return.

 

Final Steps to Unleashing Genesis

So, what I wanted to talk about in this podcast is mainly the things we need to unlock to produce the Genesis upgrade and to make sure that we can unleash Bitcoin to its full potential. Firstly, there’s the block size. As both of us agree, limiting the block size is not a good idea; raising the block size limit and eventually eradicating it completely is the way forward. As you say, the bandwidth we use at home has increased massively. But what else would you say is required to achieve the Genesis release? What else do you guys think needs to be done apart from the block size?

Unleashing Non-Standard Transaction Types

Daniel: So, we’ve had so-called standard transactions on Bitcoin for a long time already. Standard transactions are the ones that propagate across the network, and they entail someone paying something to somebody. They've been extended slightly with OP_RETURN. But what I'm really excited about is that we'll start propagating non-standard transactions - we’ll start propagating all transactions. 

When I first looked into Bitcoin, the technology of blocks and sharing transactions was fascinating. The standard transaction types that you could do were also interesting, like transferring value from one person to another, or one address to another. But the really interesting thing is the Bitcoin script language, and the sort of things you can do with that, or could do with that. It’s going to be fascinating to see what people come up with. By putting in that limitation to not propagate non-standard transactions, they killed the potential. So unless you were a miner or had a deal with a miner, you couldn't get your non-standard transactions mined. But now, when we remove that limitation, all valid transactions will propagate around the network and will be mined. I'm just waiting to see what will come from that and what people will do with this full scripting language.
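
To make the idea of a non-standard transaction concrete, here is a minimal sketch - purely illustrative, not taken from the node's source - of one classic example: a "hash puzzle" locking script that anyone who knows the SHA-256 preimage can spend. Before Genesis, a script like this would not have been relayed by default; once all valid transactions propagate, it will.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch only: a "hash puzzle" locking script, one classic example of a
// non-standard transaction type. Anyone who can supply data whose SHA-256 hash equals
// `puzzleHash` can spend the output. The opcode byte values are from the Bitcoin protocol;
// the helper itself is hypothetical and not taken from the node's source.
static const uint8_t OP_SHA256 = 0xa8;
static const uint8_t OP_EQUAL  = 0x87;

std::vector<uint8_t> BuildHashPuzzleScript(const std::vector<uint8_t>& puzzleHash) {
    std::vector<uint8_t> script;
    script.push_back(OP_SHA256);                                 // hash the data supplied by the spender
    script.push_back(static_cast<uint8_t>(puzzleHash.size()));   // direct push of the 32-byte hash
    script.insert(script.end(), puzzleHash.begin(), puzzleHash.end());
    script.push_back(OP_EQUAL);                                  // succeed if the hashes match
    return script;
}
```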

David: It will spark new business ideas, I think. There will be new ideas around how you can transfer or build value. If my memory is correct, all this functionality was baked in when Bitcoin was first released. 

Daniel: Yes. 

David: They had more opcodes and more transaction types, as you are saying. The script language wasn't robust, but it had more functionality, I would say.

Daniel: Yeah.

David: So, the reason why the developers of Bitcoin Core decided to disable some of the opcodes and remove some functionality was because they were afraid they hadn't had enough time to test it and that it might break Bitcoin. Coming back to the economic incentives again, they were looking at it from one angle. They were saying, “this might break Bitcoin”, but they didn't see the possibilities of what it could offer, like the economic incentives it could bring and the new ways of using the system it could spark. It's very important, which is why I'm mentioning it again, not to see it from just one angle. With this functionality coming back, we are already seeing loads of different transaction types in OP_RETURN. People are starting to build business ideas around it. Now that they’ve been given the space in the block they needed, they don't need to worry about the protocol and the data that they're pushing through. They know that it will just work. So standardising those types of transactions is the way forward. Is that what you're trying to say?

Daniel: Yes, and enabling people to use those transactions. The OP_RETURN explosion was really interesting. I didn't predict there would be such enthusiasm for it, but there was. What people are developing is really amazing, and I hope that Genesis will be another step towards more use cases. I think it might take a little while because the script language, for example, is quite complicated and requires a different way of thinking about Bitcoin transactions. But if people start to explore it, I don’t know what they’ll come up with.
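
For readers who haven't looked at these data transactions up close, the sketch below (a hypothetical helper, not any application's actual code) shows the shape of a typical OP_RETURN data-carrier output: an unspendable output whose script simply embeds an application payload. The OP_FALSE prefix is the convention commonly recommended on BSV to keep the output provably unspendable.

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative sketch, not code from the node or any wallet: builds a data-carrier output
// script of the form OP_FALSE OP_RETURN <data>. For simplicity it handles payloads up to
// 75 bytes (a single direct push); larger payloads need OP_PUSHDATA1/2/4 prefixes.
std::vector<uint8_t> BuildDataCarrierScript(const std::string& payload) {
    if (payload.size() > 75) {
        throw std::runtime_error("payload too large for a direct push in this sketch");
    }
    std::vector<uint8_t> script;
    script.push_back(0x00);                                   // OP_FALSE
    script.push_back(0x6a);                                   // OP_RETURN
    script.push_back(static_cast<uint8_t>(payload.size()));   // direct push of the payload
    script.insert(script.end(), payload.begin(), payload.end());
    return script;
}
```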

David: I agree that the script language is complicated, but at the same time it's very simple, because it’s doing very small bits of operation at any given time. 

Daniel: Yes.

David: So, the way I see the script language is like... we have Assembler which does instructions, and we have the script language which does instructions. But I don’t see anyone building in Assembler because they don't need to know how things work underneath the hood. 

Daniel: Yep. 

 

The Necessity for IDEs, or Layers on top of Bitcoin

David: What they have is layers on top, which allow them to build more complex applications by using a more descriptive language. One thing I think is lacking right now is well thought-out IDEs or layers on top of Bitcoin’s script language. Once we have that, it will be easier for people to experiment and to build things without diving into how Bitcoin’s script language works. And, obviously, in the beginning there will be trial and error, because you never create a language that works perfectly out of the box, but I think it will be a step forward. Once we have those layers, it will unleash other functionality to build more quickly on Bitcoin.

You mentioned the explosion of OP_RETURNs. I saw some drawbacks from that, because the explosion happened very quickly and the OP_RETURNs weren’t standardised, so we had the issues of pruning, how long the data needs to be kept in OP_RETURNs, and how valuable the data is… So how do you think that will play out? Will the miners keep the data or not? What do you think will happen?

Daniel: So, a couple of the miners I know are already running so-called pruned nodes - they don't keep the entire blockchain. I would expect that to increase. I would expect data archive services to start to pop up. And in a way, a block explorer is almost a prototype of that. A block explorer is a separate system that enables you to look at any transaction on the chain. They already have all of the data, and I would have thought it a natural progression for them to start indexing parts of it and serving specific needs.

 

Blockchain Data Archiving Services

You could imagine an archive server serving up Twetches. Initially, we'll probably see people offering this service for free. Maybe Unwriter’s tools already do that; I'm not sure. But then, eventually, when the volume starts getting big and these systems start costing a lot to run, they'll start moving to a paid model. Whether they do a subscription or a payment channel is not the point. There are lots of options there. And in fact, I think I've started seeing that already on Ethereum a bit. Wasn't there one there where they had a publicly accessible API, and they started locking it down and requiring you to have an account? I don't think they were charging for it yet, but that was definitely the model they seemed to be heading towards. I expect the same sort of feature set to come here. If Unwriter started charging for his services, I imagine a lot of people would be fine with that.

David: Of course. If there’s an economic incentive for someone to use your service, and they find it at a reasonable cost, then why shouldn’t they pay you? And vice versa: if you think there's demand for it, you as a producer might just set a price and see if there's enough competition for you to find an equilibrium.

Daniel: Yep.

 

The Misconception of Running Your Own Node

David: One other thing I wanted to talk about is the big misconception around people who run their own nodes, and also around people who run their own Bitcoin mining operation. As you said previously, if you want to broadcast a non-standard transaction, you should talk to a miner and make an agreement so they can include your transaction in a block, and then send that block to everyone who wants the blockchain data. 

Now there's a misconception that the people who are running their nodes for validating such transactions are actually in charge of the network. I think that's a big mistake because at the end of the day they are just getting the data and accepting the data. The miners are the people who create the data - they are not the same people as those who are just listening for the data in their own home with a Raspberry Pi, or something like that.

So if we look at that stance, and you look at the importance of the miners’ role in that system, people say that the network will not be decentralised but centralised, because you’ll end up having perhaps one or two big miners. 

How can we come out of this situation and still maintain a decentralised network? Because, for me, the most important part is that the miners produce the data, so they’re in charge of accepting the transactions, putting them in a block, and sending them out. Whoever wants to listen to the data and archive the data is free to do so. Nobody prevents you from doing that, but at some stage, you will probably stop doing it because you won’t be incentivised to store data for no reason. You’ll be burning money to store that data. So I think it will be the other way around, where whoever is running their own node will just stop doing that. But according to the person who wants 300KB blocks, people should burn money just to run nodes and help keep the network secure, without having any incentives. This is a crazy idea...

So, how would you explain the situation? How would you explain that we will still not be centralised but will have a decentralised network, and that it's enough to be secure?

 

50 Shades of Decentralisation 

Daniel: Yes. 

So, centralisation and decentralisation are not black vs white - there’s a scale. A centralised system is one where you've got one, and only one, miner. I believe that with global use of Bitcoin SV, there’ll be enough people mining. And, to mine, you do need to validate every single block, otherwise you're taking a huge risk. You risk that a block you've spent tens of thousands of dollars mining is invalid, and people won't accept it. There's a huge risk there for the miner - so they must validate the blocks.

There will be enough miners - whether that's going to be a hundred, a thousand, or ten thousand, I'm not sure. But I think that once there's global use of the chain, there will be - I'd put it around the 10,000 mark - companies that are mining and fully validating the chain.

 

Miner Incentives - Now and in Future

David: Yeah. I also think there's all this discussion about the miners being unfair or greedy, but people misunderstand that that's their business model. As a miner, I have an economic incentive to produce value, and the way I create value is by mining coins. And when I’ve produced those coins, I'm going to sell them to recover the cost and maybe make some profit. So that's my whole business right now.

Daniel: Yes.

David: I’m not greedy; that's just my business model. And in return, I maintain the Bitcoin network’s security so you can operate on the network. I can accept your transaction, I can create the block, and I can do everything that you want as an end-user.

So I am very keen to understand why certain people talk about miners in a bad manner. They talk about miners as lone wolves who run to the more profitable chain for this or that reason. This perception doesn’t take into account all the money miners had to pour in to start their business, incentivised by the returns that they can accumulate over time. That's the only thing that drives them.

So if we talk about the miners from a longer-term perspective, I think miners will find other avenues for creating revenue streams. This will be to offer services, as you said. Perhaps somebody wants to retrieve data for certain types of transactions, and then miners can provide that. I don't see that happening with normal users, because they won’t be able to cope with that type of volume, either to store it or to provide that kind of service. So that will be a specialised job for the miners, and it will give them different revenue streams, but it will only be possible once you have big enough blocks. If you’ve got a small data set, it doesn't make sense for someone to offer this type of service, because everyone could look up that data themselves.

Daniel: Yes. If you’ve got small enough blocks, then anyone can provide it to anyone else for free, right? There will be no point in developing that service. But once the blocks get really large, and that cost becomes too much for hobbyists to bear, then we'll start to see these services pop up. And they can differentiate themselves on different aspects of it, like reliability and performance.

A company that's making money out of this can afford to have a completely redundant infrastructure, perhaps with a content delivery network across different parts of the world. If it's just a hobbyist, on the other hand, the costs will get too extreme, so they couldn’t do that. Big blocks unlock all these incentives and the economic side of allowing companies to compete with each other.

I think that the whole point of the Nakamoto Consensus was that miners compete with each other. They’re trying to find a block by a fellow miner which is invalid. If they find one, it’s great for them because they've knocked that miner’s block out, they've taken that revenue from the competitor miner and they can produce their own block.

David: But it’s also a very important mechanism to validate where the network is, and where the demand is. If you propagated a 1TB block right now, it might not be possible, because certain miners, nodes, or exchanges would not be able to cope with it and would drop out of the ecosystem.

As a miner, you have the best incentives to find a good threshold where the network will accept it. So it's a self-regulating mechanism. You always push the boundaries until you arrive at the level you are comfortable with, and you set a new mark. You now have a mechanism of self-equilibrium. As you said, it might push some of the small nodes out of the system, but it might also bring new people in to compete against you which will push your bar higher. We’re back at the economic point of view where people and companies are competing to generate the most value.

 

Competition to Drive Improvement of Node Software

Daniel: Yeah, I'd love to see that in the software too; other node implementations competing. I think there's a market out there. If someone came up with a solution - software that was reliable, could mine big sets of data, and outperformed the free version - they could probably sell it to miners. That’s another market opportunity.

David: Correct. I think the main reason we don't have a direct competitor for the node software right now is that it does everything at once. This makes it very difficult for someone new to come on board and offer you an alternative to the whole package. The good companies out there all specialise in a single thing, and they do it very well - that's what they improve upon and what they offer. We’re already seeing the splintering of the software into different focus areas - there are wallets, there are people offering payment systems, there are payment merchants, and they’re all completely disconnected from the node implementation.

But previously, as in the example of Bitcoin Core, it offered everything out of the box - it's your wallet, your mining operation, and your address generator. It's your everything, which doesn't make sense. So I think it would be difficult for a company to tackle that issue and offer a better node version. It's a little bit unrealistic, because they’d need to learn and understand loads of things. But if we start decoupling the different sections of what a node can offer - like a wallet, password recovery, transaction monitoring, or mining - there will be specialised companies which can compete and offer the best solution for that type of service. In future, we’ll see more competition in Bitcoin SV, because we've already started seeing it happen with wallets, for example. We have dozens of them. It will be the same for mining.

 

Bitcoin API Upgrades

So speaking about mining, my next point is about the Genesis upgrade. We are also trying to make a better API for mining. Is that something planned for this upgrade?

Daniel: We included a new API for mining in the July upgrade… It might even have been earlier than that. The problem with the old one was that when you requested a block from the node - what we call a candidate block - the node would return the entire block. So, 1MB of data was returned to the miner, the miner would start the mining equipment hashing away at it, and when it found a solution it would send back the entire block.

David: So, transfer times are increasing if the block size increases, obviously.

Daniel: Yes, exactly. I mean, that's fine with a 1MB block. You can do that. But, actually, the hashing equipment only needs 80 bytes… or is it 81 bytes? It only needs the header; it doesn't need all the other stuff. The new API was a collaboration with Bitcoin Unlimited that started a long time ago, and it only returns the header, which is a fixed size regardless of how big the block is. So the mining equipment takes just that small amount of data, mines on it, and then submits back to the node to say, I found a solution and here it is. It results in tiny amounts of data, no matter how big the block is. That is an approach we're following throughout the node software - we're trying to decouple the size of the block from everything else. If the block gets really big, you still want your software to use only a certain amount of memory.
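
The reason only the header is needed is that Bitcoin's proof-of-work is computed over the 80-byte block header alone; the transactions are committed to via the Merkle root field inside it. Below is a minimal sketch of that hashing step, using OpenSSL for SHA-256 - the layout and helper names are illustrative, not the node's actual code or its mining RPC interface.

```cpp
#include <array>
#include <cstdint>
#include <openssl/sha.h>

// Illustrative sketch (not the node's code): proof-of-work only ever touches the 80-byte
// block header, so mining equipment never needs the full block. The transactions are
// committed to through the 32-byte Merkle root inside the header.
struct BlockHeader {              // 80 bytes when serialised
    int32_t  nVersion;            //  4 bytes
    uint8_t  hashPrevBlock[32];   // 32 bytes
    uint8_t  hashMerkleRoot[32];  // 32 bytes
    uint32_t nTime;               //  4 bytes
    uint32_t nBits;               //  4 bytes  (compact difficulty target)
    uint32_t nNonce;              //  4 bytes  (the field the miner iterates over)
};

// Double SHA-256 over the serialised header: this is the hash that must fall below the target.
std::array<uint8_t, 32> HashHeader(const uint8_t serialised[80]) {
    std::array<uint8_t, 32> once{}, twice{};
    SHA256(serialised, 80, once.data());
    SHA256(once.data(), once.size(), twice.data());
    return twice;
}
```

Because the header is fixed-size, the data exchanged with the hashing equipment stays constant no matter how large the block is.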

You want to decouple the requirements of the software from the size of the block - and that was in our recent release. The blocks are stored on disk, of course, and the way the software used to work was that it would load the entire block into memory, do some processing on it, and then send out the data that was requested. What it does now is read just a small part of the block from disk, send it out, then read the next part, send it out, read the next part… It doesn't load the entire block into memory anymore. This is an approach called streaming, which decouples the size of the block from the memory requirements at the node. Now you only need a tiny amount of memory to stream a really large block.
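
A minimal sketch of that streaming pattern - a hypothetical function, not the node's implementation: the block is read from disk in fixed-size chunks, and each chunk is handed to the sender before the next one is read, so memory use stays constant regardless of block size.

```cpp
#include <fstream>
#include <functional>
#include <stdexcept>
#include <string>
#include <vector>

// Minimal sketch of the streaming approach described above (not the node's actual code):
// instead of loading a whole block into memory, read it from disk in fixed-size chunks
// and hand each chunk to a consumer (e.g. a network send routine).
void StreamBlockFromDisk(const std::string& blockFilePath,
                         const std::function<void(const char*, std::size_t)>& sendChunk,
                         std::size_t chunkSize = 64 * 1024) {
    std::ifstream file(blockFilePath, std::ios::binary);
    if (!file) {
        throw std::runtime_error("cannot open block file: " + blockFilePath);
    }
    std::vector<char> buffer(chunkSize);
    while (file) {
        file.read(buffer.data(), static_cast<std::streamsize>(buffer.size()));
        const std::size_t bytesRead = static_cast<std::size_t>(file.gcount());
        if (bytesRead == 0) break;
        sendChunk(buffer.data(), bytesRead);  // forward this chunk, then reuse the buffer
    }
}
```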

 

From Synchronous to Parallel Processing (CS Main)

David: Okay, so speaking about processing in general, when I was looking at the Bitcoin client - the node itself - there were loads of actions which were synchronous and blocking operations. If you wanted to do certain things with your wallet, it would block certain other aspects of the client to allow you to do those operations. That's very expensive if you have a high volume of API calls. Firstly, it’s very slow because it doesn't scale, and secondly, it blocks the whole application, so you can't do anything apart from that specific operation that you've requested.

So, other aspects that are going to be released and are still in development are the parallel processing of certain sections, such as transaction validation and block building. I know that switching from synchronous to asynchronous operations when you're programming requires a different mindset, and it's complicated. Even if you're doing parallel operations, it's a different way of thinking. So how did you guys approach that? Did you experience any challenges or difficulties?

Daniel: What you're referring to there, I think, is the one lock that's taken by almost everything throughout the code - it's called CS Main. We've slowly been removing the dependence on that. What we do is isolate a part of the code and then give it its own lock. When you need to access that part of the data structure, you acquire that small lock specifically for that data structure and not for the entire system.

We've also introduced shared locks, which means that several threads can read under the lock at the same time. But when one of them wants to update the data, it has to take the lock exclusively, and then the others can’t read it anymore.
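
In C++ terms, the shared-lock pattern Daniel describes maps naturally onto std::shared_mutex. The sketch below is purely illustrative (the class and its fields are hypothetical, not structures from the node): many threads can read under a shared lock at once, while a writer takes the lock exclusively.

```cpp
#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>

// Sketch of the shared-lock pattern described above, using C++17 primitives; the names are
// hypothetical, not taken from the node's source. Many threads may hold the lock for reading
// at once; a writer takes it exclusively, which blocks readers until it is done.
class MempoolIndex {
public:
    bool Contains(const std::string& txid) const {
        std::shared_lock<std::shared_mutex> readLock(mutex_);   // shared: many readers in parallel
        return entries_.count(txid) != 0;
    }

    void Add(const std::string& txid, std::size_t feeRate) {
        std::unique_lock<std::shared_mutex> writeLock(mutex_);  // exclusive: readers must wait
        entries_[txid] = feeRate;
    }

private:
    mutable std::shared_mutex mutex_;   // one small, localised lock for this structure only
    std::map<std::string, std::size_t> entries_;
};
```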

We're slowly reducing the coverage of CS Main and introducing other smaller, more localised and focussed locks, but it's a big job. CS Main is still there, but it's slowly shrinking down, enabling us to parallelise much more. If you took the code we started with in about September last year, loaded it heavily, and ran it on a server with 8 cores or something like that, you’d see that it only ever uses 100% of a CPU. A server with 8 cores can go up to 800% - a hundred percent per core.

In the release version it would only ever use 100%. With the new release we did in late November last year, or December, that figure went up to 125% and it’s slowly going up all the time as we enable it to parallelise more and more. With the parallel transaction validation that we released just recently, we're seeing it go way up towards the 400% mark - so it’s using 4 cores in parallel to completely load out the machine. If you have a 4 core machine and you're running an old version of the software and it only uses one core…

David: Then you’ve paid extra money for no reason…

Daniel: There's no point putting a bigger machine there because it won't use it. So the point about parallelisation is to enable the software to use all of the resources that the server provides. You could put a 64-core machine down and it will load it heavily.

David: I think it's about time. I would say we are very late, but it's time to bring the software up to date with the hardware.

Daniel:  Yes, when I first started diving into the code in-depth it was shocking to see the CS Main lock.

David: I'm going to play Devil's Advocate a little bit, but if you step back and you start building a system, the most secure way to ensure that you don't have memory leaks or something like that is just to lock it globally. Do your stuff and release it, right? But you forget that if you don't lay the foundations right from the beginning, you don't allow yourself any room to expand your code. So you are very dependent on that thing that you've taken for granted, because you didn't want to solve it at the time as it wasn't very important. But over time, it will make your life more and more difficult. As the source code grows, dependencies grow, and then suddenly you get used to putting global locks in place, thinking you’ll sort it out later. The years pass by and then suddenly you realise, hold on a second, we have a massive bottleneck, but solving just that one bottleneck is going to affect the whole system.

Daniel: Yeah, so I've referred before to this idea of a base assumption. And there's a base assumption among many developers that the block size is 1MB. When you have that base assumption, it affects how you write code and how you design your system. The software was developed under that base assumption, which is why it does things a certain way. But if you change that assumption to say that the block size can be unlimited, then it impacts almost everything. And those are the things we are steadily changing in the Bitcoin SV Node.

 

Sunsetting of P2SH

David: Another change I wanted to discuss was P2SH. If I remember correctly, P2SH was done by Gavin Andresen? 

Daniel: I think so, yeah.

David: When I read how P2SH was initially implemented I felt that it was done in a very quick and klutzy way. So what's the reason for sunsetting P2SH?

Daniel: There are several reasons for it. In the code itself, it's an exception. In general, if a script looks a particular way, then you do this - but P2SH is an exception. It does not follow the standard method of interpreting the script language, and removing it makes the code much cleaner and more efficient to execute.
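
For context: a P2SH output is recognised purely by its byte pattern - OP_HASH160 <20-byte hash> OP_EQUAL - and when a spend of such an output is validated, the interpreter must take the last item pushed by the unlocking script, deserialise it, and execute it as a second script (the redeem script). That second execution is the special case Daniel refers to. Below is a simplified version of the pattern check, illustrative rather than the node's exact code.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch of why P2SH is an exception in the script interpreter. A P2SH output
// is identified purely by this fixed byte pattern; when spending it, the interpreter has to
// run an extra, second script evaluation (the deserialised "redeem script") - a code path
// that exists for no other output type.
bool IsPayToScriptHash(const std::vector<uint8_t>& scriptPubKey) {
    return scriptPubKey.size() == 23 &&
           scriptPubKey[0]  == 0xa9 &&   // OP_HASH160
           scriptPubKey[1]  == 0x14 &&   // direct push of 20 bytes
           scriptPubKey[22] == 0x87;     // OP_EQUAL
}
```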

David: I think what you refer to here - I'm having flashbacks - is something I've noticed. When that code was introduced, they had to iterate several times to fix other parts of the system, because they completely forgot that those other parts of the system depended on that new functionality. So P2SH suddenly became like an extension rather than a functional addition. I think that's what you're referring to, because it wasn't the standardised way of doing it. They did it in a way that didn't conform to how Bitcoin works. It was modifying the code to accommodate that functionality rather than the other way around.

Daniel: People say that it transfers the cost of the transaction to the receiver rather than the spender. So when you're submitting a transaction to buy something, you still have a small transaction, so you don't pay that much in fees. But I think there are other ways to solve that problem. I don't think that in itself is enough justification for P2SH. There’s also the 160-bit hash, which is not really secure - it's not enough bits to be secure. When I look at it from a design point of view, it's a hack. It's a case of wanting to achieve this function and then twisting the system a bit to enable it to do that. And that's also the way I see SegWit.

 

On System Hacks and SegWit

I'm a bit of a purist when it comes to language or system design, so the SegWit thing of changing the ‘anyone can spend’ type of transaction into meaning something special was a kind of hack. It's a hack in the sense of persuading a system to do something it's not designed to do. It might be fun, and that kind of activity of making systems do what they're not supposed to do was very popular when I was growing up, but it's really not how a corporate IT system should be built. It's not world-class engineering. If you wanted to do that kind of function, you should design the system properly to enable it and not introduce this little hack.

David: I agree with you in terms of how SegWit was done. It was very clever, but the intention was to modify the system in such a way that it accommodates your needs, not the other way around. So it comes back to the same problem as CS Main, because now you're introducing things which might create dependencies that you have to deal with in the future. Because you’ve twisted the system in such a way that you don't have a streamlined thought process, you end up having conditional things that you always have to be aware of when introducing new functionality.

That's something that I've seen numerous times in Bitcoin Core’s source code. Even smaller things, such as whether the system might be running on a Raspberry Pi or some outdated operating system used by perhaps 0.01% of users. Do we really need that? Do we need to worry about that? What's the cost of that? Are we trying to support that small percentage of nodes, or people who are just hobbyists?

Daniel: I do sympathise with the hobbyist who wants to validate every single transaction. I sympathise with their situation of not being able to continue their hobby without spending a huge amount of money when the block size gets big. I sympathise, but I don't think it's enough reason to block the whole network from scaling. This is supposed to be a Global Financial system. The fact that you can't validate it all on a Raspberry Pi is okay. I'm sorry, but that's just the way it is. If you want to keep doing that you need to invest in the hardware to do it. It's holding the network back, to support the smallest use case. The network has to grow and has to get bigger. You can keep up, sure, anyone can validate. That’s the whole point of Bitcoin. But you will need the equipment to do so.

David: Correct. I think the whole concept of ‘be your own bank’, or ‘just validate your own transactions’, involves very strong words with different meanings. But at the same time, not everyone wants to be their own bank.

Daniel: No, definitely not.

David: People rely on third-party services to provide a good service - a service that’s always operational - and they might also like the human interaction of the accompanying customer support. Being your own bank is good. You have your own financial sovereignty, but at the same time, if you lose your money, who are you going to ask for help?

Daniel: Maybe we’ll start to see this as BTC’s use case; it's for people who want to validate their own transactions and who want to validate the entire blockchain. Okay, fine, go and do it.

David: Do you personally believe that people who are following Bitcoin BTC are really that niche, or that they do believe it will be available for everyone? The way I see it, the plan for Bitcoin is still that it should be for everyone. It's not advertised as, this is the niche sector that we are covering and this is what you need to do. Their plan is still to scale for everyone, right? That's why they're building the Lightning Network and trying different off chain scaling approaches. So, I still don't understand. 

Regardless of how we look at this thing, there still need to be bigger blocks in place. Right now, even the Lightning Network would never work with 1MB blocks. It won’t work, because even just settling those channel transactions is impossible in a 1MB block. And the cost for someone to open a channel and put down some money up front... It's a different approach. If I want to transfer money or pay someone, I don't leave them a hundred pounds and say, now, take five but just keep this and give me 95 in two months’ time. What I do is, hey, that's just a fiver, okay, here’s a fiver for you. It’s a very different way of thinking and operating, but that’s a different topic of discussion.

Daniel: Yeah, we could talk about the Lightning Network for a long time.

 

The Use Case for Bitcoin Cash

David: What I also want to talk about with you, as the last point, is the disconnection of Bitcoin SV from Bitcoin Cash when Bitcoin SV was born. What would you say is the role of Bitcoin Cash then? My impression is that they're still trying to limit the block size somewhat. Obviously they are not limiting it to something very low - they still have a higher threshold (than BTC) - but is this really enough? Can you say 8MB or 10 or 20 is enough to cover all the use cases for people to transact?

Daniel: I don't really keep up with what’s happening in Bitcoin Cash. I've seen them have a bit of that debate. I'm blocked by loads of people on Twitter, so I don't see what they say. They seem to be concentrating on payments. The way we've branched into doing OP_RETURN data and other use cases of the blockchain is something that they don't seem to be pursuing.

During the Bitcoin Cash days, there were a couple of opcodes that were restored. Those were driven by us - we did that. I did that. Working for nChain, we restored those opcodes by submitting code to the ABC repository to get it done. We were the drivers behind that. The rest of the Bitcoin Cash group didn't seem that interested, and now they seem to have stopped doing that altogether. I don't see any progress in the area of restoring the Bitcoin script language at all. But then again, maybe I just don't see it. Maybe it's happening, I don’t know. They seem to be focusing on payments and not the bigger use cases that we're following.

As for the block size debate, there seems to be an assumption that you can just decide to increase the block size and it's done. There's a lot of hard work that goes into that. This was one of the main drivers of why we changed to the Open BSV License - because we were putting all this effort into scaling the node, and we didn't want a competing cryptocurrency to come in and copy that work into their own code, particularly when the codebases are so similar. There's a lot of work that goes into scaling, and you can't do it just like that. So, if they're not putting in the effort now, it’s going to be very difficult for them to catch up.

David: Yeah, right now, we don't have the volume of data that prevents us from making changes. The impact of changing certain things is still very low because it's still a niche. Bitcoin is still very small compared to other big financial institutions or companies out there. I know I'm comparing it to a company when it's not really a company. What I'm trying to say is that Amazon, for instance, transacts so much data on its own, and Bitcoin is nowhere near that volume of data. So the impact of Amazon being down for one hour is greater than maybe Bitcoin having issues for a day.

Daniel: Yeah, I suppose so. Yes, yes.

 

Genesis Upgrade - Are we Ready?

David: But if we unleash and remove all the limits when it comes to block size and the amount of data that we can store in OP_RETURN, do you think there will be certain things which will be dangerous as well?

Daniel: Are people going to put huge amounts of data onto it... I'm sure at the beginning they will...

David: At the beginning they will, because of the low cost compared to what it might be in the future when there's higher demand which will allow the miners to charge more. But are you worried about any aspects of unleashing what Bitcoin truly was supposed to be?

Daniel: Not really. We've worked hard on parallelising transaction validation, and we're working hard on the parts of the node that support the economics of it. So, parallel block validation is another one that will be coming for Genesis. It’s required for Genesis, because if someone mines a really huge block, one that’s too big, your node will be processing other competing blocks while it’s processing that block. And whichever one finishes first is the one that it will take. That's a mechanism whereby miners have an incentive not to make the blocks too large, because another block might come in and compete with it.
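
A conceptual sketch of that first-to-finish behaviour - hypothetical types and names, not the node's implementation: competing blocks at the same height are validated on separate threads, and the first one that proves valid becomes the new tip, so an oversized block risks losing the race.

```cpp
#include <atomic>
#include <functional>
#include <string>
#include <thread>
#include <vector>

// Conceptual sketch only (hypothetical names, not the node's code) of the "whichever block
// finishes validating first wins" idea: competing blocks are validated on separate threads,
// and the first one found valid is connected as the new chain tip.
struct Block { std::string hash; /* header, transactions, ... */ };

void ValidateCompetingBlocks(const std::vector<Block>& candidates,
                             const std::function<bool(const Block&)>& validate,
                             const std::function<void(const Block&)>& connectTip) {
    std::atomic<bool> tipTaken{false};
    std::vector<std::thread> workers;
    for (const Block& block : candidates) {
        workers.emplace_back([&block, &tipTaken, &validate, &connectTip]() {
            if (validate(block) &&              // heavy validation work runs in parallel
                !tipTaken.exchange(true)) {     // the first valid block wins the race
                connectTip(block);
            }
        });
    }
    for (std::thread& worker : workers) worker.join();
}
```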

David: I think the equilibrium will be reached at some point.

Daniel: Yes, it will reach an equilibrium. So these are things we have to put into the software in time for the Genesis upgrade. On the technical side, no, I don't see any dangers. We've got enough security measures in place within the node software, so I don’t see a problem.

David: Sounds good. Well on that note Daniel, thank you very much for coming. It was great having you around. I'm hoping that the Genesis upgrade will go well and we'll unleash what Bitcoin was supposed to be, come February. You mentioned there's a small possibility to do it earlier than February, maybe in January or December. People need to be aware that Genesis is ready so they can start upgrading. 

Daniel: Yes, that's right. 

David: Yeah, so hopefully we'll get that out sooner than February, which is even more exciting. Do you have any other last thoughts or things that you would like to say?

Daniel: No, not really. I mean, I'm really looking forward to Genesis, and I can't wait to see what happens afterwards.

David: Great. Thank you very much.
