
Why I’m bullish on Zilliqa (long read)

Edit: TL;DR added in the comments
 
Hey all, I've been researching coins since 2017 and have gone through hundreds of them in the last three years. I got introduced to blockchain via Bitcoin, of course, analyzed Ethereum thereafter, and from that moment on I've had a keen interest in smart contract platforms. I’m passionate about Ethereum, but I find Zilliqa to have a better risk-reward ratio, especially because Zilliqa has, in my opinion, found an elegant balance between being secure, decentralized, and scalable.
 
Below I post my analysis of why, of all the coins I went through, I’m most bullish on Zilliqa (yes, I went through Tezos, EOS, NEO, VeChain, Harmony, Algorand, Cardano, etc.). Note that this is not investment advice, and although it's a thorough analysis there is obviously some bias involved. Looking forward to what you all think!
 
Fun fact: the name Zilliqa is a play on ‘silica’ (silicon dioxide), standing for “silicon for the high-throughput consensus computer.”
 
This post is divided into (i) Technology, (ii) Business & Partnerships, and (iii) Marketing & Community. I’ve tried to make the technology part readable for a broad audience. If you’ve ever tried understanding the inner workings of Bitcoin and Ethereum you should be able to grasp most parts. Otherwise, just skim through, and once you start zoning out, head to the next part.
 
Technology and some more:
 
Introduction
 
The technology is one of the main reasons why I’m so bullish on Zilliqa. The first thing you see on their website is: “Zilliqa is a high-performance, high-security blockchain platform for enterprises and next-generation applications.” These are some bold statements.
 
Before we deep dive into the technology let’s take a step back in time first as they have quite the history. The initial research paper from which Zilliqa originated dates back to August 2016: Elastico: A Secure Sharding Protocol For Open Blockchains where Loi Luu (Kyber Network) is one of the co-authors. Other ideas that led to the development of what Zilliqa has become today are: Bitcoin-NG, collective signing CoSi, ByzCoin and Omniledger.
 
The technical white paper was made public in August 2017, and since then they have achieved everything stated in it, and along the way created Scilla, their own open-source, intermediate-level smart contract language (a functional programming language similar to OCaml).
 
Mainnet has been live since the end of January 2019, with daily transaction rates growing continuously. About a week ago mainnet reached 5 million transactions and 500,000+ addresses in total, along with 2400 nodes keeping the network decentralized and secure. Circulating supply is nearing 11 billion, and currently only mining rewards are left to be issued. The maximum supply is 21 billion; annual inflation is currently 7.13% and will only decrease over time.
 
Zilliqa realized early on that the usage of public cryptocurrencies and smart contracts was increasing, but that decentralized, secure, and scalable alternatives were lacking in the crypto space. They proposed applying sharding to a public smart contract blockchain, where the transaction rate increases almost linearly with the number of nodes. More nodes = higher transaction throughput and increased decentralization. Sharding comes in many forms, and Zilliqa uses network, transaction, and computational sharding. Network sharding opens up the possibility of layering transaction and computational sharding on top. Zilliqa does not use state sharding for now. We’ll come back to this later.
 
Before we continue dissecting how Zilliqa achieves this from a technological standpoint, it’s good to keep in mind that making a blockchain decentralised, secure, and scalable at the same time is still one of the main hurdles preventing widespread usage of decentralised networks. In my opinion this needs to be solved before blockchains can get to the point where they create and add large-scale value. So I invite you to read the next section to grasp the underlying fundamentals. After all, these premises need to be true, otherwise there isn’t a fundamental case to be bullish on Zilliqa, right?
 
Down the rabbit hole
 
How have they achieved this? Let’s define the basics first: the key players on Zilliqa are users and miners. A user is anybody who uses the blockchain to transfer funds or run smart contracts. Miners are the (shard) nodes in the network who run the consensus protocol and get rewarded for their service in Zillings (ZIL). The mining network is divided into several smaller networks called shards, which is referred to as ‘network sharding’. Miners are subsequently assigned at random to a shard by another set of miners called DS (Directory Service) nodes. The regular shards process transactions, and the outputs of these shards are eventually combined by the DS shard as it reaches consensus on the final state. More on how the DS shard reaches consensus (via pBFT) later on.
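To make the division of labour a bit more concrete, here is a toy sketch of network sharding in Python. It is purely illustrative: in the real protocol the randomness comes out of the DS committee's PoW round, not a plain seeded PRNG, and all the names are made up.

```python
import random

def assign_to_shards(miners, num_shards, seed):
    """Toy model of network sharding: randomly partition miners into shards."""
    rng = random.Random(seed)      # stand-in for the DS committee's randomness
    shuffled = miners[:]
    rng.shuffle(shuffled)
    # Deal the shuffled miners round-robin into num_shards buckets.
    shards = [[] for _ in range(num_shards)]
    for i, miner in enumerate(shuffled):
        shards[i % num_shards].append(miner)
    return shards

# Example: 1800 shard miners divided over 3 shards of 600 each.
shards = assign_to_shards([f"miner-{i}" for i in range(1800)], 3, seed=42)
print([len(s) for s in shards])  # [600, 600, 600]
```

The random assignment is the point: because an adversary cannot choose which shard they land in, it is much harder to concentrate malicious nodes in a single shard.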
 
The Zilliqa network produces two types of blocks: DS blocks and Tx blocks. One DS block spans 100 Tx blocks. And as previously mentioned, there are two types of nodes concerned with reaching consensus: shard nodes and DS nodes. Whether a candidate becomes a shard node or a DS node is determined by the result of a PoW cycle (Ethash) at the beginning of each DS block. All candidate mining nodes compete with each other, running the PoW (Proof-of-Work) cycle for 60 seconds, and the submissions achieving the highest difficulty are allowed onto the network. To put that in perspective: the average difficulty for one DS node is ~2 TH/s, equaling 2,000,000 MH/s, or 55 thousand+ GeForce GTX 1070 / 8 GB GPUs at 35.4 MH/s. Each DS block, 10 new DS nodes are admitted. A shard node currently needs to provide around 8.53 GH/s (around 240 GTX 1070s). Dual mining ETH/ETC and ZIL is possible and can be done via mining software such as Phoenix and Claymore. There are pools, and if you have large amounts of (Ethash) hashing power available you could mine solo.
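For the curious, those GPU equivalences can be reproduced with quick arithmetic using only the figures quoted above (a rough check, nothing official):

```python
# Back-of-the-envelope check of the difficulty figures above.
GTX_1070_MHS = 35.4                    # Ethash MH/s per GTX 1070 (figure above)

ds_node_ths = 2.0                      # ~2 TH/s average for one DS node
ds_node_mhs = ds_node_ths * 1_000_000  # 1 TH/s = 1,000,000 MH/s
print(ds_node_mhs / GTX_1070_MHS)      # ~56,497 GPUs -> "55 thousand+"

shard_node_ghs = 8.53                  # ~8.53 GH/s for one shard node
print(shard_node_ghs * 1_000 / GTX_1070_MHS)  # ~241 GPUs -> "around 240"
```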
 
The PoW cycle of 60 seconds is a peak-performance burst and acts as an entry ticket to the network. This entry ticket is a Sybil resistance mechanism: it makes it incredibly hard for adversaries to spawn lots of identities and manipulate the network with them. After every 100 Tx blocks, which corresponds to roughly 1.5 hours, this PoW process repeats. In between, no PoW needs to be done, meaning Zilliqa’s energy consumption to keep the network secure is low. For more detailed information on how mining works click here.
Okay, hats off to you. You have made it this far. Before we go any deeper down the rabbit hole, we first must understand why Zilliqa goes through all of the above technicalities, and get a better sense of what a blockchain is on a more fundamental level. Because the core of Zilliqa’s consensus protocol relies on pBFT (practical Byzantine Fault Tolerance), we need to know more about state machines and their function. Navigate to Viewblock, a Zilliqa block explorer, and then come back to this article. We will use this site to navigate through a few concepts.
 
We have established that Zilliqa is a public and distributed blockchain. Meaning that everyone with an internet connection can send ZILs, trigger smart contracts, etc. and there is no central authority who fully controls the network. Zilliqa and other public and distributed blockchains (like Bitcoin and Ethereum) can also be defined as state machines.
 
Taking the liberty of paraphrasing the examples and definitions given in Samuel Brooks’ Medium article: he defines a blockchain (like Zilliqa) as “a peer-to-peer, append-only datastore that uses consensus to synchronize cryptographically-secure data”.
 
Next, he states that "blockchains are fundamentally systems for managing valid state transitions”. For more context, I recommend reading the whole Medium article to get a better grasp of the definitions and of state machines in general. Nevertheless, let’s try to simplify and compress it into a single paragraph. Take traffic lights as an example: all their states (red, amber, and green) are predefined, all possible outcomes are known, and it doesn’t matter if you encounter the traffic light today or tomorrow - it will still behave the same. Managing the states of a traffic light can be done by triggering a sensor on the road or pushing a button, resulting in one traffic light’s state going from green to red (via amber) and another light’s from red to green.
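As a minimal sketch, the traffic light above really is just a tiny table of allowed transitions; in Python (purely illustrative):

```python
# A traffic light as a finite state machine: every state and every
# allowed transition is known in advance, which is what makes its
# behaviour predictable today, tomorrow, and forever.
TRANSITIONS = {
    ("green", "button_pressed"): "amber",
    ("amber", "timer_elapsed"): "red",
    ("red", "timer_elapsed"): "green",
}

def step(state, event):
    # Unknown events leave the state unchanged, so the machine can
    # never wander into an undefined state.
    return TRANSITIONS.get((state, event), state)

state = "green"
for event in ["button_pressed", "timer_elapsed", "timer_elapsed"]:
    state = step(state, event)
    print(state)  # amber, red, green
```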
 
With public blockchains like Zilliqa, this isn’t so straightforward and simple. It started with block #1 almost 1.5 years ago, and every 45 seconds or so a new block linked to the previous one is added, resulting in a chain of blocks with transactions that everyone can verify from block #1 to the current #647,000+ block. The state is ever-changing, and the states it can find itself in are infinite. And while a traffic light might work in tandem with various other traffic lights, that is rather insignificant compared to a public blockchain, because Zilliqa consists of 2400 nodes that need to work together to reach consensus on the latest valid state while some of those nodes may have latency or broadcast issues, drop offline, or deliberately try to attack the network.
 
Now go back to the Viewblock page, take a look at the number of transactions, the addresses, and the block and DS height, and then hit refresh. As expected, you see new, incremented values for one or all of these parameters. And how did the Zilliqa blockchain manage to transition from the previous valid state to the latest valid state? By using pBFT to reach consensus on the latest valid state.
 
After having obtained the entry ticket, miners execute pBFT to reach consensus on the ever-changing state of the blockchain. pBFT requires a series of network communications between nodes, so no GPU is involved (only CPU), which keeps the total energy consumed to keep the blockchain secure, decentralized, and scalable low.
 
pBFT stands for practical Byzantine Fault Tolerance and is an optimization on the Byzantine Fault Tolerant algorithm. To quote Blockonomi: “In the context of distributed systems, Byzantine Fault Tolerance is the ability of a distributed computer network to function as desired and correctly reach a sufficient consensus despite malicious components (nodes) of the system failing or propagating incorrect information to other peers.” Zilliqa is such a distributed computer network and depends on the honesty of the nodes (shard and DS) to reach consensus and to continuously update the state with the latest block. If pBFT is a new term for you I can highly recommend the Blockonomi article.
 
The idea of pBFT was introduced in 1999 - one of the authors even won a Turing award for it - and it is well researched and applied in various blockchains and distributed systems nowadays. If you want more advanced information than the Blockonomi link provides click here. And if you’re in between Blockonomi and the University of Singapore read the Zilliqa Design Story Part 2 dating from October 2017.
Quoting from the Zilliqa tech whitepaper: “pBFT relies upon a correct leader (which is randomly selected) to begin each phase and proceed when the sufficient majority exists. In case the leader is byzantine it can stall the entire consensus protocol. To address this challenge, pBFT offers a view change protocol to replace the byzantine leader with another one.”
 
pBFT can tolerate up to ⅓ of the nodes being dishonest (offline counts as Byzantine, i.e. dishonest) while the consensus protocol keeps functioning without stalling or hiccups. Once more than ⅓ but no more than ⅔ of the nodes are dishonest, the network stalls and a view change is triggered to elect a new DS leader. Only when more than ⅔ of the nodes are dishonest (66%+) do double-spend attacks become possible.
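To see where those fractions come from, here is the standard pBFT quorum arithmetic applied to a committee of 600 nodes (the DS committee size mentioned later in this post); a sketch, not Zilliqa’s actual code:

```python
# pBFT safety margins: to tolerate f Byzantine nodes you need
# n >= 3f + 1 nodes, and commitment requires a quorum of n - f
# signatures, i.e. strictly more than 2/3 of the committee.
n = 600                  # committee size
f = (n - 1) // 3         # max Byzantine nodes tolerated: 199
quorum = n - f           # signatures needed to commit: 401 (> 2/3 of 600)
print(f, quorum)         # 199 401

# <= f dishonest:       consensus proceeds without hiccups.
# f+1 .. 2f dishonest:  honest nodes cannot form a quorum, the round
#                       stalls, and a view change replaces the leader.
# > 2f dishonest:       a dishonest quorum can sign conflicting
#                       blocks, so double-spends become possible.
```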
 
If the network stalls no transactions can be processed and one has to wait until a new honest leader has been elected. When the mainnet was just launched and in its early phases, view changes happened regularly. As of today the last stalling of the network - and view change being triggered - was at the end of October 2019.
 
Another benefit of using pBFT for consensus, besides low energy consumption, is the immediate finality it provides. Once your transaction is included in a block and the block is added to the chain, it’s done. Lastly, take a look at this article where three types of finality are defined: probabilistic, absolute, and economic finality. Zilliqa falls under absolute finality (just like Tendermint, for example). Although this is lengthy already, we only skimmed through some of the inner workings of Zilliqa’s consensus: read the Zilliqa Design Story Part 3 and you will be close to having a complete picture of it. Enough about PoW, Sybil resistance mechanisms, pBFT, etc. One thing we haven’t looked at yet is the degree of decentralization.
 
Decentralisation
 
Currently, there are four shards, each consisting of 600 nodes: one shard with 600 so-called DS nodes (Directory Service nodes, which need to achieve a higher PoW difficulty than shard nodes) and three shards totalling 1800 shard nodes, of which 250 are shard guards (centralized nodes controlled by the team). The number of shard guards has been steadily declining, from 1200 in January 2019 to 250 as of May 2020. On the Viewblock statistics you can see that many of the nodes are located in the US, but those are only the (CPU parts of the) shard nodes that perform pBFT; there is no data on where the PoW hashpower is coming from. And when the Zilliqa blockchain starts reaching its transaction capacity limit, a network upgrade needs to be executed to lift the current cap of 2400 nodes, allowing more nodes and the formation of more shards so the network can keep scaling according to demand.
Besides shard nodes there are also seed nodes. The main role of seed nodes is to serve as direct access points (for end-users and clients) to the core Zilliqa network that validates transactions. Seed nodes consolidate transaction requests and forward these to the lookup nodes (another node type) for distribution to the shards in the network. Seed nodes also maintain the entire transaction history and the global state of the blockchain, which is needed to provide services such as block explorers. Seed nodes in the Zilliqa network are comparable to Infura on Ethereum.
 
The seed nodes were at first operated only by Zilliqa themselves, exchanges, and Viewblock. Operators of seed nodes like exchanges had no incentive to open them to the greater public, so they were centralised at first. Decentralisation at the seed node level has been steadily rolled out since March 2020 (Zilliqa Improvement Proposal 3). Currently the number of seed nodes is being increased, they are public-facing, and at the same time PoS is applied to incentivize seed node operators and make it possible for ZIL holders to stake and earn passive yields. Important distinction: seed nodes are not involved in consensus! That is still PoW as the entry ticket and pBFT for the actual consensus.
 
5% of the block rewards have been assigned to seed nodes since the beginning in 2019, and those are used to pay out ZIL stakers. The 5% of block rewards, at an annual yield of 10.03%, translate to roughly 610 MM ZIL in total that can be staked. Exchanges use the custodial variant of staking, and wallets like Moonlet will use the non-custodial version (starting in Q3 2020). Staking is done by sending ZIL to a smart contract created by Zilliqa and audited by Quantstamp.
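Back-solving from those numbers gives a feel for the sizes involved (a rough sanity check using only the figures in this post, not official data):

```python
# If ~610 MM ZIL can be staked at a 10.03% annual yield, the implied
# annual budget flowing to stakers is:
stakeable_zil = 610_000_000
annual_yield = 0.1003
print(f"{stakeable_zil * annual_yield:,.0f} ZIL per year")  # ~61,183,000

# An individual's reward is proportional to their deposit:
my_stake = 100_000
print(my_stake * annual_yield)  # ~10,030 ZIL per year, before any fees
```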
 
With a high number of DS and shard nodes, and seed nodes becoming more decentralized too, Zilliqa qualifies for the label ‘decentralized’ in my opinion.
 
Smart contracts
 
Let me start by saying I’m not a developer and my programming skills are quite limited, so I‘m taking the ELI5 route (maybe ELI12). But if you are familiar with JavaScript, Solidity, or specifically OCaml, please head straight to Scilla - read the docs to get a good initial grasp of how Zilliqa’s smart contract language Scilla works, and if you ask yourself “why another programming language?”, check this article. And if you want to play around with some sample contracts in an IDE, click here. The faucet can be found here. And more information on architecture, dapp development, and the API can be found on the Developer Portal.
If you are more into listening and watching: check this recent webinar explaining Zilliqa and Scilla. The link is time-stamped, so you’ll start right away with a platform introduction and the 2020 roadmap, followed by a proper Scilla introduction.
 
Generalized: programming languages can be divided into being ‘object-oriented’ or ‘functional’. Here is an ELI5 given by a software development academy: “all programs have two basic components, data – what the program knows – and behavior – what the program can do with that data. So object-oriented programming states that combining data and related behaviors in one place, is called “object”, which makes it easier to understand how a particular program works. On the other hand, functional programming argues that data and behavior are different things and should be separated to ensure their clarity.”
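A minimal illustration of that distinction, written in Python only because it can express both styles (Scilla itself sits firmly on the functional side):

```python
# Object-oriented style: data (balance) and behaviour (deposit) live
# together in one object, and method calls mutate the object's state.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount  # state changes in place

# Functional style: data and behaviour are separate; the function takes
# a value and returns a *new* value, never mutating its input.
def deposit(balance, amount):
    return balance + amount

acct = Account(100)
acct.deposit(50)
print(acct.balance)           # 150 -- the object was mutated

balance = 100
new_balance = deposit(balance, 50)
print(balance, new_balance)   # 100 150 -- the original is untouched
```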
 
Scilla is on the functional side and shares similarities with OCaml: OCaml is a general-purpose programming language with an emphasis on expressiveness and safety. It has an advanced type system that helps catch your mistakes without getting in your way. It's used in environments where a single mistake can cost millions and speed matters, is supported by an active community, and has a rich set of libraries and development tools. For all its power, OCaml is also pretty simple, which is one reason it's often used as a teaching language.
 
Scilla is blockchain agnostic (it can be implemented on other blockchains as well), is recognized by academics, and won a so-called ‘Distinguished Artifact Award’ at the end of last year.
 
One of the reasons the Zilliqa team decided to create their own programming language, focused on preventing smart contract vulnerabilities, is that adding logic to a blockchain means you cannot afford to make mistakes; bugs can cost real money. It’s all great and fun that blockchains are immutable, but updating your code because you found a bug isn’t the same as with a regular web application, for example. And smart contracts inherently involve cryptocurrencies in some form, and thus value.
 
Another difference with programming on a blockchain is gas. Every transaction you do on a smart contract platform like Zilliqa or Ethereum costs gas. With gas you essentially pay for computational costs. Sending ZIL from address A to address B currently costs 0.001 ZIL. Smart contracts are more complex, often involve various functions, and require more gas (if gas is a new concept, click here).
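A toy model of how such a fee comes together. Only the 0.001 ZIL figure for a simple transfer comes from this post; the gas price and unit counts below are invented placeholders chosen so the arithmetic lands on that figure:

```python
# fee = gas_used * gas_price
gas_price_zil = 0.000000001         # hypothetical ZIL per gas unit
simple_transfer_gas = 1_000_000     # hypothetical units for an A -> B send
contract_call_gas = 5_000_000       # a contract call burns more computation

print(simple_transfer_gas * gas_price_zil)  # ~0.001 ZIL, matching the post
print(contract_call_gas * gas_price_zil)    # ~0.005 ZIL for the heavier call
```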
 
So with Scilla, similar to Solidity, you need to make sure that “every function in your smart contract will run as expected without hitting gas limits. An improper resource analysis may lead to situations where funds may get stuck simply because a part of the smart contract code cannot be executed due to gas limits. Such constraints are not present in traditional software systems”. Scilla design story part 1
 
Some examples of smart contract issues you’d want to avoid are: leaking funds, ‘unexpected changes to critical state variables’ (example: someone other than you setting his or her address as the owner of the smart contract after creation) or simply killing a contract.
 
Scilla also allows for formal verification. Wikipedia to the rescue: In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.
 
Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code.
 
“Scilla is being developed hand-in-hand with formalization of its semantics and its embedding into the Coq proof assistant — a state-of-the-art tool for mechanized proofs about properties of programs.”
 
Simply put, with Scilla and its accompanying tooling, developers can mathematically prove that the smart contract they’ve written does what they intend it to do.
 
Smart contracts in a sharded environment and state sharding
 
There is one more topic I’d like to touch on: smart contract execution in a sharded environment (and the effect of state sharding). This is a complex topic, and I’m not able to explain it any better than what is posted here, but I will try to compress the post into something easy to digest.
 
Earlier we established that Zilliqa can process transactions in parallel due to network sharding; this is where the linear scalability comes from. We can define transaction types: a transfer from address A to B (Category 1), a transaction where a user interacts with one smart contract (Category 2), and the most complex ones, where triggering a transaction results in multiple smart contracts being involved (Category 3). The shards are able to process transactions on their own without interference from the other shards. With Category 1 transactions that is doable; with Category 2 transactions it sometimes works, namely when the address is in the same shard as the smart contract; but with Category 3 you definitely need communication between the shards. Solving that requires a set of communication rules the protocol must follow in order to process all transactions in a generalised fashion.
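Here is a toy classification of those categories and the locality check a shard would care about (Python; the shard assignments and names are invented for illustration, not Zilliqa’s actual data structures):

```python
def categorize(contracts_touched):
    """Category 1: plain A -> B transfer (no contracts involved).
    Category 2: exactly one smart contract involved.
    Category 3: the call fans out into multiple contracts."""
    if len(contracts_touched) == 0:
        return 1
    return 2 if len(contracts_touched) == 1 else 3

def processable_in_one_shard(sender, contracts_touched, shard_of):
    # A shard can finish a transaction alone only when every party it
    # touches lives in that same shard; otherwise cross-shard
    # communication rules kick in.
    parties = [sender] + contracts_touched
    return len({shard_of[p] for p in parties}) == 1

shard_of = {"alice": 0, "dex": 0, "token": 1}
print(categorize([]))                  # 1
print(categorize(["dex"]))             # 2
print(categorize(["dex", "token"]))    # 3
print(processable_in_one_shard("alice", ["dex"], shard_of))           # True
print(processable_in_one_shard("alice", ["dex", "token"], shard_of))  # False
```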
 
And this is where the downside of state sharding comes in, and why Zilliqa avoids it for now: all shards in Zilliqa have access to the complete state. Yes, the state size (0.1 GB at the moment) grows and all of the nodes need to store it, but it also means they don’t need to shop around for information held on other shards, which would require more communication and add complexity. Links if you want to dig further (computer science and/or developer knowledge required):
Scilla - language grammar
Scilla - Foundations for Verifiable Decentralised Computations on a Blockchain
Gas Accounting
NUS x Zilliqa: Smart contract language workshop
 
Easier-to-follow links on programming Scilla: https://learnscilla.com/home and Ivan on Tech.
 
Roadmap / Zilliqa 2.0
 
There is no strictly defined roadmap, but these are the topics being worked on. And via the Zilliqa website there is also more information on the projects they are working on.
 
Business & Partnerships
 
It’s not only in technology that Zilliqa seems to be excelling: their ecosystem has been expanding and is starting to grow rapidly. The project is on a mission to provide OpenFinance (OpFi) to the world, and Singapore is the right place to be due to its progressive regulations and futuristic thinking. Singapore has taken a proactive approach towards cryptocurrencies by introducing the Payment Services Act 2019 (PS Act). Among other things, the PS Act will regulate intermediaries dealing with certain cryptocurrencies, with a particular focus on consumer protection and anti-money laundering. It will also provide a stable regulatory licensing and operating framework for cryptocurrency entities, effectively covering all crypto businesses and exchanges based in Singapore. According to PwC, 82% of the surveyed executives in Singapore reported blockchain initiatives underway, and 13% of them have already brought those initiatives live to the market. There is also an increasing list of organizations starting to provide digital payment services. Moreover, the Singaporean blockchain programme Building Cities Beyond has recently created a $15 million innovation grant to encourage development on its ecosystem. All of this suggests that Singapore is trying to position itself as (one of) the leading blockchain hubs in the world.
 
Zilliqa seems to already be taking advantage of this and recently helped launch Hg Exchange on their platform, together with financial institutions PhillipCapital, PrimePartners, and Fundnel. Hg Exchange, which is now approved by the Monetary Authority of Singapore (MAS), uses smart contracts to represent digital assets. Through Hg Exchange, financial institutions worldwide can use Zilliqa's safe-by-design smart contracts to enable the trading of private equities. For example, think of companies such as Grab, Airbnb, and SpaceX that are not available for public trading right now. Hg Exchange will allow investors to buy shares of private companies and unicorns and capture their value before an IPO. Anquan, the main company behind Zilliqa, has also recently announced that they became a partner and shareholder in TEN31 Bank, a fully regulated bank allowing for the tokenization of assets that aims to bridge the gap between conventional banking and the blockchain world. If STOs, the tokenization of assets, and equity trading continue to increase, then Zilliqa’s public blockchain would be the ideal candidate due to its strategic positioning, partnerships, regulatory compliance, and the technology being built on top of it.
 
What is also very encouraging is their focus on banking the un(der)banked. They are launching a stablecoin basket starting with XSGD. As many of you know, stablecoins are currently mostly used for trading. However, Zilliqa is actively trying to broaden the use cases of stablecoins. I recommend everybody read this text that Amrit Kumar (one of the co-founders) wrote. These stablecoins will be integrated into the traditional markets and bridge the gap between the crypto world and the traditional world. This could potentially revolutionize and legitimise the crypto space if retailers and companies start using stablecoins for, say, payments or remittances, instead of them solely being used for trading.
 
Zilliqa also released their DeFi strategic roadmap (dating from November 2019), which seems to align well with their OpFi strategy. A non-custodial DEX is coming to Zilliqa, made by Switcheo, which allows cross-chain trading (atomic swaps) between ETH, EOS, and ZIL based tokens. They also signed a Memorandum of Understanding for a (soon to be announced) USD stablecoin, and as Zilliqa is all about regulations and compliance, I’m speculating it will be a regulated USD stablecoin. Furthermore, XSGD is already created and visible on the block explorer, and XIDR (an Indonesian stablecoin) is also coming soon via StraitsX. Here is also an overview of the Tech Stack for Financial Applications from September 2019. Further quoting Amrit Kumar on this:
 
“There are two basic building blocks in DeFi/OpFi though: 1) stablecoins, as you need a non-volatile currency to get access to this market, and 2) a DEX to be able to trade all these financial assets. The rest are built on top of these blocks.
 
So far, together with our partners and community, we have worked on developing these building blocks with XSGD as a stablecoin. We are working on bringing a USD-backed stablecoin as well. We will soon have a decentralised exchange developed by Switcheo. And with HGX going live, we are also venturing into the tokenization space. More to come in the future.”
 
Additionally, they also have the ZILHive initiative, which injects capital into projects. There have already been six waves of various teams working on infrastructure, innovation, and research, and they are global rather than from ASEAN or Singapore only: see the grantees breakdown by country. Over 60 project teams from over 20 countries have contributed to Zilliqa's ecosystem, including individuals and teams developing wallets, explorers, developer toolkits, smart contract testing frameworks, dapps, etc. As some of you may know, Unstoppable Domains (UD) blew up when they launched on Zilliqa. UD aims to replace cryptocurrency addresses with human-readable names and allows for uncensorable websites. Zilliqa will probably be the only one able to handle all these transactions on-chain due to its ability to scale and the resulting low fees, which is why the UD team launched on Zilliqa in the first place. Furthermore, Zilliqa also has a strong emphasis on security, compliance, and privacy, which is why they partnered with companies like Elliptic, ChainSecurity (part of PwC Switzerland), and Incognito. Their sister company Aqilliz (Zilliqa spelled backwards) focuses on revolutionizing the digital advertising space and is doing interesting things like using Zilliqa to track outdoor digital ads with companies like Foodpanda.
 
Zilliqa is listed on nearly all major exchanges, has several different fiat gateways, and was recently added to Binance’s margin trading and futures trading with really good volume. They also have a very impressive team with good credentials and experience. They don't just have “tech people”: they have a mix of tech people, business people, marketers, scientists, and more. Naturally, it's good to have a mix of people with different skill sets if you work in the crypto space.
 
Marketing & Community
 
Zilliqa has a very strong community. If you follow their Twitter, their engagement is much higher than you would expect for a coin with approximately 80k followers. They have also been ‘coin of the day’ on LunarCrush many times. LunarCrush tracks real-time cryptocurrency value and social data, and according to their data it seems Zilliqa has a more fundamental and deeper understanding of marketing and community engagement than almost all other coins. While almost all coins have been a bit frozen over the last months, Zilliqa seems to be on its own bull run: it was ranked somewhere in the 100s a few months ago and is currently ranked #46 on CoinGecko. Their official Telegram has over 20k people and is very active, and their community channel, which is over 7k now, is more active and larger than many other projects’ official channels. Their local communities also seem to be growing.
 
Moreover, their community started ‘Zillacracy’ together with the Zilliqa core team (see www.zillacracy.com). It’s a community-run initiative where people from all over the world are now helping with marketing and development on Zilliqa. Since its launch in February 2020 they have been doing a lot, and they will also run their own non-custodial seed node for staking. This seed node will allow them to start generating revenue and become a self-sustaining entity that could potentially scale up into a decentralized company working in parallel with the Zilliqa core team. The other smart contract platforms (e.g. Cardano, EOS, Tezos, etc.) don't seem to have started a similar initiative (correct me if I’m wrong though). This suggests, in my opinion, that these other smart contract platforms do not fully understand how to utilize the ‘power of the community’. This is something you cannot ‘buy with money’, and it gives many projects in the space a disadvantage.
 
Zilliqa also released two social products called SocialPay and Zeeves. SocialPay allows users to earn ZILs while tweeting with a specific hashtag. They have recently used it in partnership with the Singapore Red Cross for a marketing campaign after their initial pilot program. It seems like a very valuable social product with a good use case. I can see a lot of traditional companies entering the space through this product, which they seem to suggest will happen. Tokenizing hashtags with smart contracts to get network effect is a very smart and innovative idea.
 
Regarding Zeeves: this is a tipping bot for Telegram. They already have thousands of signups and plan to keep upgrading it for more and more people to use (e.g. they recently added a quiz feature). They also use it during AMAs to reward people in real time. It’s a very smart approach to growing their communities and getting people familiar with ZIL. I can see this becoming very big on Telegram. This tool suggests, again, that the Zilliqa team has a deeper understanding of what the crypto space and community need, and is good at finding the right innovative tools to grow and scale.
 
To be honest, I haven’t covered everything (I’m also reaching the character limit, haha). So many updates have happened lately that it's hard to keep up: the International Monetary Fund mentioning Zilliqa in their report, custodial and non-custodial staking, Binance margin, futures, the widget, entering the Indian market, and more. The Head of Marketing, Colin Miles, has also released this as an overview of what is coming next. And last but not least, Vitalik Buterin has mentioned Zilliqa lately, acknowledging that both projects have a lot of room to grow. There is much more info, of course, and a good part of it has been served to you on a silver platter. I invite you to continue researching by yourself :-) And if you have any comments or questions, please post them here!
submitted by haveyouheardaboutit to r/CryptoCurrency

An open apology to members of this sub, especially deadalnix, Contrarian, and Zectro

Deadalnix - I may disagree with your style (and I've said as much) and your tactics (I've said as much) but your goals, ideas, and work are greatly appreciated here. We wouldn't be here without you and your team.
In this post I heavily criticized the ABC plan -- not because of the technicals, but because of the tactics of implementation. I have updated that post with my retraction. I'm still not happy about the (mis)communications but ultimately after learning more about CTOR and reading the various objections to it (there are few and they are weak) I've concluded that CTOR + Graphene is the best path forward to reach the goal of massive onchain scaling. Scaling onchain Bitcoin is the only reason I'm here. The ABC / BU / XT plan has my support and I urge all groups to work together as quickly as possible to realize its benefits.
To Contrarian__ and a lesser extent Zectro -- you've been a real pain in my ass for the last year. Turns out, you were right all along. Please forgive my willingness to go along and see how things turn out. Some things do need to be dragged into the sunshine to die. Thanks to both of you for your work compiling and exposing the myriad lies, plagiarisms, and frauds committed by the Attacker in Chief.
When CSW showed up in the big-block camp 18 months or so ago, to me, he was a breath of fresh air. He wasn't afraid to stir up the pot. He wasn't going to "go along to get along." I even wrote this to him.
I got bamboozled. Turns out that singularity87 was right all along when he wrote:
What someone somewhere worked out, is that all you have to do to take down a community is say that you are on their side. It is an astoundingly effective form of psychological attack.
Guilty as charged.
To my other peers in the sub that I haven't named, please accept my apology for welcoming this cancer into our midst. Mea culpa.
submitted by jessquit to r/btc

A reminder of who Craig Wright is and the benefits to BCH now he has gone.

This needs to be repeated every so often on this subreddit so new people can understand the history of the fork of BCH into BCH and BSV
From Jonald Fyookball's article
https://medium.com/@jonaldfyookball/bitcoin-cash-is-finally-free-of-faketoshi-great-days-lie-ahead-bb0c833e4c5d
Craig S. Wright (CSW) leaving the Bitcoin Cash community is a wonderful thing. This self-described “tyrant” has been expunged, and now we can get back to our mission of bringing peer-to-peer electronic cash to the world.
The markets will rebound when they see the chaos is over, but regardless of the price, we will keep building. Nothing will stop the sound money movement.

Calling Out Bad Behavior
As Rick Falkvinge recently explained, there is a difference between small-minded gossiping about personalities and legitimately calling out bad behavior.
CSW’s bad behavior must be called out, because he has done tremendous damage to Bitcoin Cash (and possibly even the entire cryptocurrency sector).
The brief history is that he gained his reputation by claiming to be Bitcoin’s creator (Satoshi Nakamoto). He said he would provide “extraordinary proof” but he has never done so.
Supposedly, he did some “private signings” to a few people, and this allowed him to gain influence in the BCH community. The destruction he has been causing was not widely recognized until after a huge mess had been made.
Thanks to u/Contrarian__ for the following compilation of CSW’s misdeeds:
Some background on Craig’s claim of being Satoshi, for the uninitiated:
- He faked blog posts
- He faked PGP keys
- He faked contracts and emails
- He faked threats
- He faked a public key signing
- He has a well-documented history of fabricating things, bitcoin and non-bitcoin related
- He faked a bitcoin trust to get free money from the Australian government, but was caught and fined over a million dollars
And specifically concerning his claim to be Satoshi:
- He has provided no independently verifiable evidence
- He is not technically competent in the subject matter
- His writing style is nothing like Satoshi’s
- He called bitcoin “Bit Coin” in 2011, when Satoshi never used a space
- He actively bought and traded coins from Mt. Gox in 2013 and 2014
- He was paid millions for ‘coming out’ as Satoshi as part of the deal to sell his patents to nTrust — for those who claim he was ‘outed’ or had no motive
Caught Red Handed Plagiarizing
No respectable academic, scientist, or professional needs to stoop so low as to steal and take credit for the work of others — least of all Satoshi. Yet, CSW has already been caught at least 3 times plagiarizing.
- His paper on selfish mining has full sections copied almost verbatim from a paper written by Liu & Wang.
- His “Beyond Godel” paper, which purports to claim that Bitcoin script is Turing complete, is heavily plagiarized.
- A paper on block propagation was blatantly and intentionally plagiarized.
Can’t Even Steal Code Correctly
CSW was also caught attempting to plagiarize a “hello world” program (the simplest of all computer programs).
He apparently does not understand base58 or how Bitcoin address checksums work (both of these are common knowledge to experienced Bitcoiners), and has made other embarrassing errors.

So How Did Such an Obvious Fraud Gain So Much Power and Influence?
There are no easy answers here. It seems that as humans, we are very susceptible to manipulation and misinformation. The greatest weapon against sinister forces is a well-educated populace. This is something that can only improve over the long run.
The “Satoshi factor” is a powerful one and appeals to the glamorization of a mythical figure. Even people such as myself, who are technically astute, gave CSW all benefit of the doubt until the evidence staring us in the face could no longer be denied.
The seduction of the BCH community was also facilitated by CSW becoming a strong advocate for the on-chain/big-block scaling movement at a time when the community was dying to hear it. This message, delivered with a brazen, in-your-face style, was a sharp contrast to anything seen before.
In addition, CSW was able to find obscure topics (“2pda”), network topology, etc, that seemed to establish him as an expert with esoteric knowledge above and beyond anyone else. Basically, he was using technobabble, but it wasn’t immediately obvious except to very technical people… who were then attacked and discredited.
Eventually, as more and more of the community began to realize his technical claims were bogus, CSW banned those people from his twitter feed and slack channel, leaving only a group of untechnical “believers”, which the larger BCH community referred to as “the church” AKA the Cult-of-Craig.
Finally, if some believed that CSW possessed Satoshi’s stash of 1M BTC, they may have been gnawing to get a piece of it. But it may turn out that these are the coins that never were.

Broken Promises
If this article so far seems like an “attack piece” on CSW, remember it is important to get all the facts out in the open. We’ll get to the silver lining and bright future in a moment… but let’s continue here to “get it all out”.
One of the biggest ways that CSW has damaged the community is to make an endless series of broken promises. This caused others to wait, to waste time on his unproven ideas and solutions, and to postpone or drop their own ideas and initiatives.
- He said he was building a mining pool to “stop SegWit”
- He said he was bringing big companies to use the BCH chain
- He said that he was providing a fungibility solution based on blind threshold signatures
- He said he was providing novel technology based on oblivious transfers
- He said he was providing a method where people could do atomic swaps without using timelocks
- He said he was going to show everyone how we can do bilinear pairings using secp256k1
- He said he was going to release source code for Nakasendo
- He said he was releasing some information that would “kill the lightning network”
- He said he was going to show everyone how the selfish mining theory is wrong
- He said he was going to show everyone how we can tokenize everything in the universe squared
- He said a few times that “big things are coming in 2 months”
How CSW Has Damaged the BCH Community
In addition to the broken promises, the BCH community was wounded due to:
- The division of the community (with classic divide-and-conquer tactics)
- Loss of focus; huge amounts of drama and distraction from building and adoption
- Investor confidence has been shaken due to uncertainty and chaos
- BCH is a laughing stock to outsiders due to CSW’s antics
- Gemini deployment of BCH and other rollouts paused
- Loss of developer talent due to his toxic and abrasive personality
- Various patent and legal threats
The Hash War Event and Split into BitcoinSV
Every 6 months, BCH has a scheduled network upgrade. This is technically a “hard fork” but a non-contentious fork does not result in a split of the chain — it is simply new network rules being activated.
Bitcoin Cash has multiple independent developer groups including Bitcoin ABC, Bitcoin Unlimited, Bitcoin XT, Bitprim, BCHD, bcash, parity, Flowee, and others.
The nChain group, led by CSW, introduced an alternate set of changes a week before the agreed cut-off date, intentionally causing a huge controversy. These changes were incompatible with the changes being discussed between the other groups.
nChain objected to the changes being proposed (canonical transaction ordering) despite specifically agreeing to it almost a year earlier. The last-minute objections were, in my opinion, an attempt at sabotage.
An emergency meeting was held in Bangkok to attempt to resolve the differences between the nChain group and the rest of the community. Not only did CSW refuse to listen to the other presentations, he walked out of the meeting after his own speech had been given. The other nChain people refused to discuss the technical issues.
After this, nChain built their own software (“BitcoinSV”) to attempt to compete for the Bitcoin Cash network. But rather than split off to follow their own set of rules, they threatened to attack Bitcoin Cash.
Their attitude was “you follow our rules or we burn it all down”.
The CSW sycophants adopted a strange interpretation of the Bitcoin whitepaper and proselytized the idea that if nChain could “out hash” everyone else, the market should be obliged to follow them.
This faulty thinking was eloquently debunked by u/CatatonicAdenosine. As it turns out, nChain was unable in any case to win at their own game.

But Here’s the Obviously Good News…
CSW is gone. It’s over.
He can do whatever he wants on the BitcoinSV chain. He will never be allowed to influence Bitcoin Cash again. And all the negative things and negative people that were a consequence of his involvement in Bitcoin Cash are gone with him.
As a community, we will redouble our efforts and get back to our mission of peer-to-peer electronic cash. We will learn to work together better than ever, and we will learn to detect and punish bad behavior sooner.
The attempted attacks with hashpower also sparked innovation and a focus on the problem of how to stop such attacks in the future. This is only making Bitcoin Cash (BCH) and the entire class of Proof-of-Work coins stronger.
Nothing will stop us.
The reason why millions of dollars were spent to attack and also to defend Bitcoin Cash is because it’s something truly worth fighting over.
It’s sound money.
It’s permissionless.
It’s what Satoshi Nakamoto wrote about in 2008. It’s Bitcoin, a Peer-to-Peer Electronic Cash System.
submitted by stewbits22 to r/btc

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get it in written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the conditional project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old but in the past a little over a year now what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes well we’ve been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but some what I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of a Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level great. If they don't not such a big problem there will obviously be bridges between the two, so, yeah I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure yeah so our release will be concentrated on the stability, right, with the first release of Bitcoin SV and that involved doing a large amount of additional testing particularly not so much at the unit test level but at the more system test so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren’t any other side effects. Because of, you know, it was quite a rush to release the first version so we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we’re not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably have heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. It was first set up by Bitcoin Unlimited with nChain’s help, and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so you'll notice, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team’s been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team – we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance-related work which, as Daniel mentioned, the results of our performance testing fed into, in terms of what tasks we were gonna start working on for the performance-related stuff. Now that work is still in progress - for some of the items that we identified the code is done and is going through the QA process, but it’s not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it’s been QA first, performance second. The performance enhancements are close and on the horizon, but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. One that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. One of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do that that some of the other developer groups weren't willing to do right now is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about - a lot of the objection to either removing the box size entirely or increasing it on a larger scale is this idea of like the infinite block attack right and that kind of came through in a lot of the questions. What are your thoughts on the “infinite block attack” and is it is it something that that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something there are probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now, so that the limit is already raised and you don’t run into it when the software improves and can handle it. Obviously we’re from the latter school of thought. As I said before, we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer-to-peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong, so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128, you know it’s kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let’s not call it the infinite block attack, let's just call it the large block attack - is that a large block takes a lot of time to validate. That we've gotten around by having parallel pipelines for blocks to come in, so you've got a block that's coming in, it's got an unknown [node] stuck on it for two hours or whatever downloading and validating it. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then, you know, the problem kind of goes away.
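(Transcriber's note: a minimal sketch of the size-sanity checks Steve describes. The cap value mirrors the 128 MB limit discussed here; the function names and structure are our own illustration, not Bitcoin SV's actual code.)

```python
MAX_BLOCK_SIZE = 128 * 1024 * 1024  # consensus cap: 128 MB

def should_download(advertised_size):
    # The P2P header declares the payload size up front, so anything
    # that could never be a valid block can be refused immediately.
    return advertised_size <= MAX_BLOCK_SIZE

def receive(chunks, advertised_size):
    received = 0
    for chunk in chunks:
        received += len(chunk)
        if received > advertised_size:
            # Peer is sending more than its header promised: drop it.
            raise ConnectionError("peer exceeded advertised message size")
        yield chunk
```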
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? There were a lot of questions around what practical block size Bitcoin SV could handle right now, and concerns around propagating those blocks across the whole network.
Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases - almost all cases. The concern here that I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall, and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall, as they've given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the block size that rate can sustain is pretty large. So we're looking at a couple of options. It may well be that the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically have a kind of TX vacuum on either side of the firewall that collects transactions up and sends them off every one or two seconds as a single big chunk, to eliminate some of that chattiness. The other is that we're looking at building a multiplexer that will take stuff off the peer-to-peer network on one side, send it over multiple links via splitters, and reassemble it on the other side, so we can transit the Great Firewall without too much trouble. But getting back to the core of your question - yes, there is a theoretical limit to block size given propagation time, and that's kind of where Moore's Law comes in: put in faster links and you kick that can further down the road, and you just keep on putting in faster links. I don't think 128 MB blocks are going to be an issue though, with the speed of the internet that we have nowadays.
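A minimal sketch of that "TX vacuum" batching idea, assuming a callable that moves bytes across the link; the class name, framing, and flush interval are illustrative, not the actual tool:

```python
import threading
import time
import queue

class TxVacuum:
    """Buffer every transaction seen on one side of a chatty link and
    flush them across as a single framed chunk every couple of seconds,
    instead of one small P2P message per transaction."""

    def __init__(self, send_chunk, flush_interval=2.0):
        self.pending = queue.Queue()
        self.send_chunk = send_chunk          # callable that moves bytes across the link
        self.flush_interval = flush_interval
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def on_transaction(self, raw_tx: bytes):
        self.pending.put(raw_tx)

    def _flush_loop(self):
        while True:
            time.sleep(self.flush_interval)
            batch = []
            while not self.pending.empty():
                batch.append(self.pending.get())
            if batch:
                # One big length-prefixed message instead of len(batch) small ones.
                payload = b"".join(len(tx).to_bytes(4, "big") + tx for tx in batch)
                self.send_chunk(payload)
```

The design trade-off is latency for chattiness: each transaction waits up to the flush interval, but the number of round trips across the poor link collapses.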
Connor: 0:21:34.99,0:22:17.84
One of the other changes you guys are introducing is increasing the max script size - I think right now it's going from 201 to 500 [opcodes]. A few of the questions we got were, #1, why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and #2, specifically, how certain are you that there are no remaining n-squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's an interesting decision - we were initially planning on removing that cap altogether, along with the next cap that comes into play after it (the next effective cap is a 10,000 byte limit on the size of the script). We took a more conservative route and decided to wind that back to 500. It's interesting that we got some criticism for that, when the primary criticism levelled against us was that it's dangerous to increase the limit to unlimited - we wound it back precisely because we're being conservative. We did some research into these n-squared bugs - sorry, attacks - that people have referred to. We identified a few of them, had a hard think about it, and thought: look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything behind it in the queue has to stop and wait. So what we want to do - and this is something we've got an engineer actively working on right now - is, once that script validation code path is properly parallelized (parts of it already are), to basically assign a few threads for well-known transaction templates and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other lanes of the highway that are exclusively reserved for well-known script templates, and they'll just keep on passing through. Once you've got that in place, I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - you could always put a time limit on script execution, and that would be something that would be up to individual miners. Bitcoin SV's job, I think, is to provide the tools for the miners, and the miners can then choose how to make use of them - if they want to set time limits on script execution then that's a choice for them.
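A minimal sketch of that "lanes of the highway" idea, assuming a simple two-pool design; the template check and pool sizes are placeholders, not Bitcoin SV's actual architecture:

```python
from concurrent.futures import ThreadPoolExecutor

# Reserve one pool of workers for well-known script templates and a
# separate, smaller pool for arbitrary scripts, so one pathological
# script can only clog the non-standard lane.
standard_lane = ThreadPoolExecutor(max_workers=6)
nonstandard_lane = ThreadPoolExecutor(max_workers=2)

def is_standard_template(script: bytes) -> bool:
    # Placeholder check - a real node matches full known templates.
    # 0x76 0xa9 0x14 is OP_DUP OP_HASH160 <push 20 bytes>, the P2PKH prefix.
    return script.startswith(b"\x76\xa9\x14")

def submit_for_validation(script: bytes, validate):
    """Route a script to the lane reserved for its kind; returns a Future."""
    lane = standard_lane if is_standard_template(script) else nonstandard_lane
    return lane.submit(validate, script)
```

The property this buys is isolation: well-known payment scripts keep flowing at full speed no matter what arrives in the arbitrary-script queue.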
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that when a node receives a transaction through the peer-to-peer network, it doesn't have to accept that transaction - it can reject it. If it looks suspicious to the node, it can just say: we're not going to deal with that. Or if it takes more than five minutes to execute - or more than a minute, even - it can just abort and discard that transaction. The only time we can't do that is when it's in a block already, but then the node could decide to reject the block as well. Those are all possibilities in the software.
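Daniel's abort-and-discard point can be sketched as a policy check inside a toy interpreter loop (Python; the opcode handling is a stand-in, not real Script semantics):

```python
import time

class ScriptTimeout(Exception):
    pass

def execute_script(ops, time_limit_seconds=60.0):
    """Run a list of stack-mutating callables, aborting if the script
    exceeds a local time budget. This is node policy, not consensus:
    an unconfirmed transaction can simply be dropped."""
    deadline = time.monotonic() + time_limit_seconds
    stack = []
    for op in ops:
        if time.monotonic() > deadline:
            raise ScriptTimeout("unconfirmed tx script exceeded time budget")
        op(stack)  # each op reads and mutates the stack
    return stack
```

Because the limit is policy, each miner can pick its own budget, which is exactly the "choice for them" Steve describes above.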
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There's a lot of discussion about the re-enabled opcodes coming – OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Can you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well, one of the most significant things is that, other than two which are minor variants of DUP and MUL, they represent almost the complete set of original opcodes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why we're putting OP_MUL back in if we're planning on changing the arithmetic operations to big number operations instead of the 32-bit limit they're currently subject to. The simple answer is that we currently have all of the other arithmetic operations except for OP_MUL - we've got add, divide, subtract, modulo - and it's odd to have a script system with all the mathematical primitives except multiplication. The other answer is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine - or rather not completing it, but putting it back the way it was meant to be.
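For a sense of why multiplication matters for cryptographic use cases like the Rabin scheme Steve mentions: Rabin verification is essentially one modular squaring, i.e. one big-number multiply. A hedged Python sketch (padding and redundancy checks omitted, so this is not a complete scheme):

```python
import hashlib

def rabin_verify(n: int, signature: int, message: bytes) -> bool:
    """Simplified Rabin check: a signature is valid if squaring it
    modulo the public key n reproduces the message digest. Real
    schemes add padding/redundancy so a suitable square root exists."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big")
    return (signature * signature) % n == digest % n
```

The whole verification is `signature * signature % n`, which is why a script system with add, subtract, divide, and modulo but no multiply feels incomplete.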
Connor: 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. Daniel - I think I saw you answer this on Reddit a little while ago - the new opcodes use logical shifts, whereas Satoshi's version used arithmetic shifts. The general question that a lot of people keep bringing up, maybe in a rhetorical way, is: why not restore it back to the way Satoshi had it exactly? What are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah, there are two parts there - the big number one, and LSHIFT being a logical shift instead of arithmetic. When we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. The new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic-based shifts - I think I've posted a short script that did that - but we can't do it the other way around: you couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values - the values that represent numbers. They're little-endian, which means they're swapped around compared to many other systems - what I'd consider normal, or big-endian. If you start shifting that as a number, the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement a bitwise shift with arithmetic. So we chose to make them bitwise operators - that's what we proposed.
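Daniel's endianness point can be illustrated in Python: a bitwise shift operates on the byte string as-is, while an "arithmetic" shift has to treat the same bytes as a little-endian number (function names are illustrative, not Script semantics):

```python
def lshift_bitwise(data: bytes, n: int) -> bytes:
    """Shift a byte string left by n bits, keeping its length -
    the style of operator described for the new OP_LSHIFT."""
    width = len(data) * 8
    as_int = int.from_bytes(data, "big")
    shifted = (as_int << n) & ((1 << width) - 1)  # bits fall off the end
    return shifted.to_bytes(len(data), "big")

def lshift_arithmetic_le(num_le: bytes, n: int) -> bytes:
    """An 'arithmetic' shift interprets the bytes as a little-endian
    number (multiply by 2**n), so it touches the bytes in the
    opposite order and may grow the value."""
    value = int.from_bytes(num_le, "little")
    shifted = value << n
    length = (shifted.bit_length() + 7) // 8 or 1
    return shifted.to_bytes(length, "little")
```

The arithmetic version can be rebuilt from the bitwise one (swap to big-endian, shift, swap back), but not vice versa: once data has been interpreted as a number, the per-byte bit pattern of arbitrary data is gone.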
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision that was made in May - or rather a consequence of decisions that were made in May. In May we reintroduced OP_AND, OP_OR, and OP_XOR, and another decision, to replace three different string operators with OP_SPLIT, was also made. So that was not a decision we made unilaterally - it was made collectively with all of the BCH developers. Well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think - a single operator that multiplies (or divides) the value by two - but it was pointed out that this can very easily be achieved by just pushing two and multiplying, so we scrapped those. We took them back out because we wanted to keep the number of operators to a minimum.
Steve: 0:32:17.59,0:33:47.20
There was an appetite for keeping the operators minimal. The idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) and treat the data as big-endian data streams (well, sorry, big-endian isn't really applicable - just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have been nonsensical and very difficult for anyone to work with. It's a bit like P2SH - it wasn't part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones it gets really complicated, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number operations without seriously looking at what scripts might be out there and the impact of that change on those existing scripts. The other point is that you don't know what scripts are out there, because of P2SH - there could be scripts whose content you don't know, and you don't know what effect changing the behavior of these operators would have on them. The big number thing is tricky - I don't know what all the options are; it needs some serious thought.
Steve: 0:34:43.27,0:35:24.23
That's something we've reached out to the other implementation teams about - we'd actually really like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we're certainly willing to put a lot of resources into it, and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done, and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Along a similar vein: Bitcoin Core introduced this concept of standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for "non-standard scripts", as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or pushed back on him about doing that. So what are your thoughts about non-standard scripts and the IsStandard check in its entirety?
Steve: 0:35:58.31,0:37:35.73
I'd actually like to repurpose the concept. I mentioned before multi-threaded script validation and having some dedicated well-known script templates - when you say the words "well-known script template", there's already a check in Bitcoin that kind of tells you whether a script is well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is that not too many miners are using those options. So standard transactions as a concept is meaningful only to an arbitrary degree, I suppose. But yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek they're quite keen on making their miners accept, at least initially, a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important, but on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people who disagree with Bitcoin SV and what you're doing. A lot of the questions were around: why November? Why implement these changes in November - some think that maybe a six-month delay might avoid a split. First off, what do you think about the idea of a potential split, and what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork - I think on August 16th or 17th, something like that - and their client as well, and it included CTOR and DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - well, CTOR maybe you could reverse at a later date, but with DSV, once someone's put a P2SH transaction - or even a non-P2SH transaction - into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but as for the new opcodes themselves, Bitcoin ABC already published their spec for May and it is our spec. So in terms of urgency - should we wait? Well, the fact is that we can't. Come November, it's a bit like Segwit - once Segwit was in, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy and it's going to cause a lot of economic disruption. So that's it - we're putting our changes in because it's not going to make a difference either way: there's going to be a divergence of consensus rules whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade, we'd be pushing out a release with no changes, right? The November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been floated around, because of Ryan Charles, is the idea that it's a subsidy - that it takes a whole megabyte and crunches it down, so the computation time stays the same but the cost is lower. Do you share his view on that, and what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this? There are different ways to look at it. I'm an engineer - my specialization is software - so on the economics of it I hear different opinions; I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones who say it's a subsidy - it looks very much like one to me - but that's not my area. What I can talk about is the software. Adding DSV adds really quite a lot of complexity to the code, and it's a big change. And what are we going to do - every time someone comes up with an idea, add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like: how big is this client going to become? Is this node going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. With DSV, my main consideration at the beginning was that if you can implement it in script you should do it, because that way it keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code to a microprocessor - why would you do that if you can already implement it in the script that is there?
Steve: 0:43:36.16,0:46:09.71
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - you can achieve it with a combination of the SPLIT, SWAP, and DROP opcodes. So at the really primitive script level we've got this philosophy of keeping it minimal, and at this other (?) level the philosophy is: let's just add a new opcode for every primitive function. Daniel's right - it's a question of opening the floodgates. Where does it end? If we're going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) and not bother executing a script at all. But once we've done that, we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do; the things people feel they can't do are because of the limits that exist. If we had no opcode limit at all - if you could make a gigabyte transaction, so a gigabyte script - then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts become a lot smaller - a Rabin signature script shrinks from 100MB to a couple of hundred bytes.
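Steve's OP_SUBSTR example can be spelled out with a toy stack machine. A hedged Python sketch (simplified semantics - real Script values are minimally-encoded byte arrays, and the exact opcode sequence may differ):

```python
def op_split(stack):
    # <data> <position> OP_SPLIT -> <left> <right>
    pos = stack.pop()
    data = stack.pop()
    stack.extend([data[:pos], data[pos:]])

def op_swap(stack):
    stack[-1], stack[-2] = stack[-2], stack[-1]

def op_drop(stack):
    stack.pop()

def substr(data: bytes, start: int, length: int) -> bytes:
    """OP_SUBSTR rebuilt from primitives: split off the prefix and
    drop it, then split off the tail and drop it."""
    stack = [data, start]
    op_split(stack)   # -> prefix, rest
    op_swap(stack)
    op_drop(stack)    # -> rest
    stack.append(length)
    op_split(stack)   # -> wanted, tail
    op_drop(stack)    # -> wanted
    return stack.pop()

assert substr(b"abcdef", 2, 3) == b"cde"
```

Three generic primitives compose into the specialized operator, which is the "keep it minimal" philosophy in miniature.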
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed - was released originally - with script. It didn't have to be: instead of having transactions with script you could have accounts, and you could say, transfer so many BTC from this public key to that one. But that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly. If you really dig into what it can do, it's amazing. I'm really looking forward to seeing some very interesting applications come from that. Awemany's zero-conf script was really interesting, right - it relies on DSV, which is a problem (and there are some other things I don't like about it), but him diving in and using script to solve this problem was really cool; it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I actually asked a couple of people in our research team who have been working on the Rabin signature stuff about this just this morning, as I wasn't sure where they were up to with it. They're working on a proof of concept (which I believe is pretty close to done) for a Rabin signature script. It will use smaller signatures so that it can fit within the current limits, but it will be effectively the same algorithm (as DSV). I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini Rabin signature).
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. I was going to turn that into: with the plan that Bitcoin SV is on, do you see a potential one final release - no new opcodes ever released, like maybe five years down the road we just solidify the base protocol and move forward with that - or are you more of the mind that, with appropriate testing, new opcodes can be introduced over time?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place. In my opinion it's the cryptographic primitive functions: for example, CHECKSIG uses ECDSA with a specific elliptic curve, and HASH256 uses SHA-256. At some point in the future those are going to no longer be as secure as we would like, and we'll replace them with different hash functions and verification functions, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed, and the way I could imagine that happening is that, with the full scripting language, some solution is implemented and we discover that it's really useful; and over a period of time - measured in years, not days - we find a lot of transactions are using this feature. Then maybe we should look at introducing an opcode to optimize it. But optimizing before we even know if it's going to be useful - that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view, the question is whether optimizing a particular process reduces costs for them such that they can offer a better service to everyone else. So ultimately these are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script - but as we're already seeing, miners are starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have some really bright people on their staff who question and challenge all of the changes, study them, and produce their own reports. We've been lucky to be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc

Peter Todd's RBF (Replace-By-Fee) goes against one of the foundational principles of Bitcoin: IRREVOCABLE CASH TRANSACTIONS. RBF is the most radical, controversial change ever proposed to Bitcoin - and it is being forced on the community with no consensus, no debate and no testing. Why?

Many people are starting to raise serious questions and issues regarding Peter Todd's "Opt-In Full RBF", as summarized below:
(1) RBF violates one of the fundamental principles of the Bitcoin protocol: irrevocable cash transactions.
Interesting point!
Th[is] really is [a] drastically different vision of what Bitcoin [is] according to the core dev team...
It would be nice [if] they [wrote their] own "white paper" so we know where they are going...
Ant-n
https://www.reddit.com/btc/comments/3ujj1s/serious_gametheory_question_if_youre_a_miner_and/cxflx55
"From a usability / communications perspective, RBF is all wrong. When the main function of your technology is to PREVENT DOUBLE SPENDING, you don't add an "opt-in" feature which ENCOURAGES DOUBLE SPENDING."
BeYourOwnBank
https://www.reddit.com/bitcoinxt/comments/3uixix/from_a_usability_communications_perspective_rbf/
(2) Who even requested RBF in the first place? What urgent existing "problem" is RBF intended to solve? If you claim to be a supporter of RBF, would you be willing to go on the record and comment here on how it would personally benefit you?
Still waiting for an answer to the fundamental question: where is the demand for this "feature" coming from?
tsontar
https://www.reddit.com/btc/comments/3ujc4m/consensus_jgarzik_rbf_would_be_antisocial_on_the/
Lots of back and forth but no answer to the fundamental question: where is the demand for this "feature" coming from?
tsontar
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxfjxp7
Intentionally double-spending a zero-conf transaction for any reason other than expediting a payment to the same recipients is nothing more than attempted fraud. There needs to be a good reason for enabling this, and last time I looked the case has not been made.
People with a black and white view of the world who believe "0 conf bad, 1 conf good" simply do not understand how bitcoin works. By its random nature, bitcoin never makes final commitment to a transaction. Even with six confirmations there is still a chance the transaction will be reversed. In other words, bitcoin finality is not black and white. Instead, there is a probability distribution of confidence that a transaction will not be reversed. Software changes that make it easier to defraud people who have been reasonably accepting 0 conf transactions are of highly questionable value, as they reduce the performance (by increasing delay for a given confidence).
If transactions with appropriate fees start failing to ever confirm because of "block size" issues, then bitcoin is simply broken and, if it can not be fixed bitcoin will end up as dead as a doornail.
tl121
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxf9udt
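tl121's point above - that finality is a probability distribution, not black and white - is exactly the calculation in section 11 of the Bitcoin white paper. A direct Python transcription of that formula (attacker hashpower share q, z confirmations):

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with fraction q of the hashpower ever
    catches up from z blocks behind (Bitcoin white paper, section 11).
    Assumes q < 0.5."""
    p = 1.0 - q
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total -= poisson * (1 - (q / p) ** (z - k))
    return total

for z in (0, 1, 2, 6):
    print(f"z={z}: {attacker_success(0.10, z):.6f}")
```

With 10% attacker hashpower the reversal probability falls fast but never hits zero, which is the sense in which "even with six confirmations there is still a chance the transaction will be reversed".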
Transactions spending the same utxo were (until now) not relayed (except by XT nodes). So it wasn't as simple as just sending a double spend, because the transaction wouldn't propagate. FSS-RBF seemed like a good option to get your tx unstuck if you paid too little. Pure RBF I'm not sure what the point of it is. What problem is it solving?
peoplma
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxfdb37
When F2Pool implemented RBF at the behest of Peter Todd they were forced to retract the changes within 24 hours due to the outrage in the community over the proposed changes.
So the opposite is actually true. The community actively do not want this change. Has there been any discussion whatsoever about this major change to the protocol?
yeeha4
https://www.reddit.com/btc/comments/3uighb/on_black_friday_with_9000_transactions_backlogged/cxfbvvn
yeehaw4: "When F2Pool implemented RBF at the behest of Peter Todd they were forced to retract the changes within 24 hours due to the outrage in the community over the proposed changes." / pizzaface18: "Peter ... tried to push a change that will cripple some use cases of Bitcoin."
BeYourOwnBank
https://www.reddit.com/btc/comments/3ujm35/uyeehaw4_when_f2pool_implemented_rbf_at_the/
(3) RBF breaks zero-conf. Satoshi supported zero-conf. Were any actual merchants who have figured out pragmatic business approaches using zero-conf even consulted on this radical, controversial change?
My business accepts bitcoin and helps people with minor cash transfers and purchases. Fraud has NEVER been an issue as long as the transactions have been broadcast on the blockchain with appropriate fees. We usually send people their cash as soon as the transaction is broadcast.
Now we have to wait 10 minutes to avoid getting cheated out of hundreds of dollars, vastly increasing the service cost of accepting bitcoin. And we have to tell customers we promote bitcoin to that they are likely to be cheated if they don't wait 10 minutes while buying their bitcoin. It is such a spectacularly stupid thing to do, adding uncertainty and greater potential for fraud at every link of the transaction chain. Thanks a lot, Peter.
trevelyan22
https://www.reddit.com/btc/comments/3ujc4m/consensus_jgarzik_rbf_would_be_antisocial_on_the/cxfjn78
Jeez, we need to give this "zero-conf was never safe" meme a rest already. Cash was also "never safe", but it's widely used because it works reasonably well in the context it's used. These people would probably advocate for a cashless society as well.
imaginary_username
https://www.reddit.com/bitcoinxt/comments/3ujq69/uriplin_on_rbitcoin_inadvertently_reveals_the/cxfisut
I believe it'll be possible for a payment processing company to provide as a service the rapid distribution of transactions with good-enough checking in something like 10 seconds or less.
The network nodes only accept the first version of a transaction they receive to incorporate into the block they're trying to generate. When you broadcast a transaction, if someone else broadcasts a double-spend at the same time, it's a race to propagate to the most nodes first. If one has a slight head start, it'll geometrically spread through the network faster and get most of the nodes.
A rough back-of-the-envelope example (nodes reached per hop by the first-seen transaction vs. the double-spend):

    first-seen    double-spend
             1               0
             4               1
            16               4
            64              16
           80%             20%
So if a double-spend has to wait even a second, it has a huge disadvantage.
The payment processor has connections with many nodes. When it gets a transaction, it blasts it out, and at the same time monitors the network for double-spends. If it receives a double-spend on any of its many listening nodes, then it alerts that the transaction is bad. A double-spent transaction wouldn't get very far without one of the listeners hearing it. The double-spender would have to wait until the listening phase is over, but by then, the payment processor's broadcast has reached most nodes, or is so far ahead in propagating that the double-spender has no hope of grabbing a significant percentage of the remaining nodes.
— satoshi
https://bitcointalk.org/index.php?topic=423.msg3819#msg3819
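Satoshi's geometric-spread argument can be reproduced with a toy gossip simulation. This Python sketch uses made-up parameters (10,000 nodes, fanout 4, a one-hop head start) purely to illustrate the race:

```python
import random

def spread(owner, frontier, fanout, tag, n_nodes):
    """One gossip hop: every frontier node tells `fanout` random peers,
    and a peer keeps whichever transaction version reaches it first."""
    new_frontier = []
    for _ in frontier:
        for _ in range(fanout):
            peer = random.randrange(n_nodes)
            if peer not in owner:
                owner[peer] = tag
                new_frontier.append(peer)
    return new_frontier

def propagation_race(n_nodes=10_000, head_start_hops=1, fanout=4):
    owner = {0: "first", 1: "double"}          # node id -> which tx it saw first
    frontiers = {"first": [0], "double": [1]}
    # Give the first-seen transaction its head start.
    for _ in range(head_start_hops):
        frontiers["first"] = spread(owner, frontiers["first"], fanout, "first", n_nodes)
    while len(owner) < n_nodes and (frontiers["first"] or frontiers["double"]):
        for tag in ("first", "double"):
            frontiers[tag] = spread(owner, frontiers[tag], fanout, tag, n_nodes)
    return sum(1 for v in owner.values() if v == "first") / len(owner)

print(f"first-seen tx reached {propagation_race():.0%} of nodes")
```

Even a single-hop head start typically captures the large majority of nodes, matching the back-of-the-envelope table in the quote.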
"RBF is agaisnt Satoshi's Vision. Peter Todd and others attacking Satoshi's vision again, while Gavin Andresen upholds his original vision steadfastly."
Plive
https://www.reddit.com/btc/comments/3ukc52/rbf_is_agaisnt_satoshis_vision_peter_todd_and/
Zero conf was always dangerous, true, but the attacker is rolling a dice with a double spend. And it is detectable, because you have to put your double-spend transaction on the network within the transaction propagation time (which is measured in seconds). That means in the shop, while the attacker is buying the newspaper, the merchant can get an alert from their payment processor saying "this transaction has a double spend attempt". Wrestling them to the ground is an option. Stealing has to be done in person... No different than shoplifting. The attacker takes their chance that the stealing transaction won't be the one that is mined.
With rbf, the attacker has up to the next block time to decide to release their double spend transaction. That means the attacker can be out of the shop and ten minutes away by car before the merchant gets the double spend warning from their payment processor. Stealing is not in person and success is guaranteed by the network.
Conclusion: every merchant and every payment processor will simply refuse to accept any rbf opt in transaction. That opt in might as well be a flag that says "enable stealing from you with this transaction"... Erm no thanks.
There might be a small window while wallet software is updated, but after that this "feature" will go dark. Nobody is going to accept a cheque signed "Mickey Mouse", and nobody is going to accept a transaction marked RBF.
Strangely, that means all this fuss about it getting merged is moot. It will inevitably not be used.
kingofthejaffacakes
https://www.reddit.com/bitcoinxt/comments/3ujq69/uriplin_on_rbitcoin_inadvertently_reveals_the/cxfkkr3
(4) What new problems could RBF create?
This opens up a new kind of vandalism that will ensure that no wallets use this feature.
The way it works is that if you make a transaction, and then double spend the transaction with a higher fee, the one with the higher fee will take priority.
DeftNerd
https://www.reddit.com/btc/comments/3ujc4m/consensus_jgarzik_rbf_would_be_antisocial_on_the/cxfhd0m
RBF as released is a really, really stupid policy change that will open up Bitcoin to blackmail and wholesale theft of transactions.
Bitcoin XT can easily be better than the confused, agenda-ridden rubbish being released by Blockstream and their fellow-travellers.
laisee
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxfkeah
This is truly unprecedented. There is MAJOR MONEY and MAJOR FORCES trying to destroy Bitcoin right now. We are witnessing history here. This might completely destroy the Bitcoin experiment
scotty321
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxf53xn
I [too am] curious as to why Todd has been pushing that hard for RBF. People can double-spend if they really want to already, without any help from BS implementation.
thaolx
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxf4t8l
(5) RBF apologists such as eragmus have been trying to placate objections by repeatedly emphasizing that this version of RBF is ok, saying that this is only "Opt-In (Full) RBF". But does the "opt-in" nature of this particular implementation of RBF really mitigate its potential problems?
"opt-in" is a bit of a red-herring.
As I understand: say I'm a vendor who doesn't want to accept RBF transactions. So I don't opt-in. I'm still stuck accepting RBF transactions because the sender, not the receiver, has the control.
tsontar
https://www.reddit.com/btc/comments/3ujc4m/consensus_jgarzik_rbf_would_be_antisocial_on_the/cxflg13
bitcoin is a push system.
how do I opt-out of a transaction generated and confirmed entirely outside my control?
tsontar
https://www.reddit.com/btc/comments/3ujj1s/serious_gametheory_question_if_youre_a_miner_and/cxflhki
You are right, you cannot opt out. You will have to wait ten minutes if you have received an RBF tx.
The user experience doesn't seem to be a priority for the core dev team...
Ant-n
https://www.reddit.com/btc/comments/3ujj1s/serious_gametheory_question_if_youre_a_miner_and/cxfls9o
It's opt-in in theory, but that means everyone in the community who writes software which deals with transactions now has to develop code to deal with the ramifications.
discoltk
https://www.reddit.com/btc/comments/3uighb/on_black_friday_with_9000_transactions_backlogged/cxfec1o
Yes it is opt-in, which means I have to anticipate ... congestion beforehand to use it. This has caused me troubles recently. Normally I use low-fee mode to transact and switch mode when the network is congested. A few times either I did not know about the congestion or forgot to switch mode and my txn got stuck for 12-48h. So for me this opt-in does nothing of help. If I was conscious about the congestion I would have switch to high-fee mode, no RBF needed.
...Or I have to enable RBF for all my txns. Then there's the problem that receivers all have to upgrade their wallets after the wallet devs choose to implement it. And that's just one more major complication when considering 0-conf.
thaolx
https://www.reddit.com/btc/comments/3uighb/on_black_friday_with_9000_transactions_backlogged/cxfbbn6
What is the point of opt in rbf if it's not a good way to pay lower miner fees? According to nullc, if you guess too low then you end up paying for two transactions
specialenmity
https://www.reddit.com/bitcoinxt/comments/3ujq69/uriplin_on_rbitcoin_inadvertently_reveals_the/cxfoi99
(6) Who would benefit from RBF?
"Hopefully this will give Bitcoin payment processors a financial incentive to support Lightning Network development."
https://www.reddit.com/bitcoinxt/comments/3ujq69/uriplin_on_rbitcoin_inadvertently_reveals_the/
It seems to me like RBF is addressing a problem (delays due to too-low fees) which would not exist if we had larger blocks. It seems fishy to make this and lightning networks to solve the problem when there's a much simpler solution in plain view.
We should set the bar for deceit and mischief unusually high on this one bc there is so much at stake, an entire banking empire.
ganesha1024
https://www.reddit.com/btc/comments/3uighb/on_black_friday_with_9000_transactions_backlogged/cxfde8f
RBF seems at best to be a duct-tape solution to a problem caused by not raising the block size. in the process it kills zero conf (more or less).
rglfnt
https://www.reddit.com/btc/comments/3ujm35/uyeehaw4_when_f2pool_implemented_rbf_at_the/cxfkqoh
PT [Peter Todd] is part of a group of devs who propose to create artificial scarcity in order to drive up transaction fees.
IOW [In other words], he's a glorified central planner.
A free market moves around such engineered scarcity. See also: the music business.
tl;dr stop running core.
tsontar
https://www.reddit.com/btc/comments/3ujm35/uyeehaw4_when_f2pool_implemented_rbf_at_the/cxfljrk
This may be a needed feature if Bitcoin gets stuck with 1MB..
You might need to jack up the fee several times to get your transaction into a block in the future..
It seems that 1MB cripplecoin is really part of their vision.
Ant-n
https://www.reddit.com/btc/comments/3ujj1s/serious_gametheory_question_if_youre_a_miner_and/cxfluyt
RBF makes sense in a world where blocks are small and always full.
It creates a volatile transaction pricing market where bidders try to outbid each other for the limited space in the current block of txns.
It serves the dual goals of limiting transactions and maximizing miner revenue resulting from the artificial scarcity being imposed by the block size limit.
The unfortunate side effect is that day to day P2P transactions on the Bitcoin network will become relatively expensive and will be forced onto another layer, or coin.
tsontar
https://www.reddit.com/bitcoinxt/comments/3uixix/from_a_usability_communications_perspective_rbf/cxfksk7
RBF offers nothing in a world where there is always a little extra space in the block for the next transaction. It only makes sense in a world where blocks are full.
tsontar
https://www.reddit.com/bitcoinxt/comments/3uixix/from_a_usability_communications_perspective_rbf/cxflcn1
Unless your goal is to harm bitcoin.
Anen-o-me
https://www.reddit.com/bitcoinxt/comments/3uixix/from_a_usability_communications_perspective_rbf/cxflljw
(7) RBF violates two common-sense principles:
- "KISS" (Keep It Simple Stupid);
- "If it ain't broke, don't fix it"
To say it a bit harsher but IMO warranted: P. Todd seems to be busy inventing useless crap and making things complicated for wallet devs...
awemany
https://www.reddit.com/btc/comments/3ujc4m/consensus_jgarzik_rbf_would_be_antisocial_on_the/cxfkwvi
(8) Why is the less-safe version of RBF the one being released ("Full") rather than the "safe(r)" version (FSS - First-Seen Safe)?
Peter Todd had proposed two different versions of RBF: "Full" vs "FSS" (First-Seen Safe).
"Full" is the more dangerous version, because it allows general double-spending (I can't even believe we're even saying things like "allows general double-spending" - but that's the kind of crap Peter Todd is trying to foist on us).
"FSS" is supposedly a bit "safer", because is only allows double-spending a transaction with the same output.
What's being released now is "Opt-In Full RBF".
First-seen-safe restricts replace-by-fee to only replacing transactions with the same output (prevents double spending).
The reason this feature is being added is they see Bitcoin as a settlement network, so when there's a backlog users should be able to replace their transaction with a higher-fee one so it's included. It's to deal with the cripplingly low blocksizes.
Someone should just implement and merge first-seen-safe, since that's much more non-controversial. Keeps 0-confs safe(r) while enabling re-submitting transactions.
tytyty_
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxff3ej
I would have preferred first-seen-safe RBF, certainly. It can be a useful tool to just bump the transaction fee on an existing transaction.
coinaday
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxf5eno
Ok, so if the only benefit of RBF is to unstick stuck transactions by increasing the fee; why did you use "Full RBF" instead of "FSS RBF"? Full RBF allows the sender to increase the fee and change who the receiver is. FSS (First-Seen-Safe) RBF only allows the sender to increase the fee, but does not allow the sender to change who the receiver is.
Tldr: FSS RBF should be enough to enable your wanted benefit of being able to resend stuck transactions by increasing their fee, but you chose Full RBF anyway. Why?
todu
https://www.reddit.com/btc/comments/3uighb/on_black_friday_with_9000_transactions_backlogged/cxfm5qb
The benefit of opt-in RBF:
Now, when a transaction is not going through because fee was accidentally made too low or if there is a spam attack on the network, a user can "un-stuck" his/her transaction by re-sending it with a higher fee. No more being held to the mercy of miners maybe confirming your transaction, or not. The user gets some power back.
If this was the actual problem at hand, why not restrict the RBF to only increasing the fee, but not changing the output addresses.
RBF in its current form is nothing but a tool to facilitate double spending. That is, it lowers the bar for default nodes to assist in facilitating double spending. Which is VERY BAD for Bitcoin, imho.
Seriously, I don't know what's gotten into those devs ACK'ing this decrease in Bitcoin's trustworthiness.
Kazimir82
https://www.reddit.com/btc/comments/3uighb/on_black_friday_with_9000_transactions_backlogged/cxfn295
(9) Peter Todd has a track record of trying to break features which aren't perfect - even when real-world users find those features "good enough" to use in practice. Do you support Peter Todd's perfectionist and vandalist approach over the pragmatist "good-enough" approach, and if so, why or why not?
Destroying something just because it isn't perfect is stupid. By that logic we should even kill Bitcoin itself.
kraml
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxfcmc7
How did a troll like peter todd get in control of bitcoin? This is fucking unbelievable.
Vibr8gKiwi
https://www.reddit.com/bitcoinxt/comments/3ujq69/uriplin_on_rbitcoin_inadvertently_reveals_the/cxfk89n
(10) Could the "game theory" on RBF backfire, and end up damaging Bitcoin?
And what if some/all miners simply hold RBF-enabled transactions in a separate pool and extract maximum value per transaction, i.e. wait until senders cough up more & more...
A very dangerous change that will actively encourage miners to collaborate on extracting higher fees, or even extorting senders trying to 'fix' their transactions.
laisee
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxfkozk
Peter Todd has a history of loving game theory, but he hasn't really applied those principles to the technological changes he's unilaterally making.
I don't understand how so many people could have been driven away or access removed so now he's able to make these changes despite community outcry.
DeftNerd
https://www.reddit.com/bitcoinxt/comments/3uii16/on_black_friday_with_9000_transactions_backlogged/cxfkyok
A miner could simply separate all RBF-enabled TX into a separate list and wait for higher and higher fees to be paid. It's kind of like putting a "Take my money, Pls!!!" sign on your forehead and going shopping.
laisee
https://www.reddit.com/bitcoinxt/comments/3uixix/from_a_usability_communications_perspective_rbf/cxfkha2
opens door for collusion and possibly extortion ... sender has flagged willingness to pay more.
laisee
https://www.reddit.com/bitcoinxt/comments/3uixix/from_a_usability_communications_perspective_rbf/cxfl64y
(11) RBF is a controversial, radical change to the Bitcoin protocol. Why has Peter Todd been allowed to force this on our community with no debate, no consensus and no testing?
It's not uncontroversial. There is clearly controversy. You can say the concerns are trumped up, invalid. But if the argument against even discussing XT is that the issue is controversial, the easy ACK'ing of this major change strikes many as hypocritical.
There is not zero impact. Someone WILL be double spent as a result of this. You may blame that person for accepting a transaction they shouldn't, or using a wallet that neglected to update to notify them that their transaction was reversible. But it cannot be said that no damage will result due to this change.
And in my view most importantly, RBF is a cornerstone in supporting those who believe that we need to keep small blocks. The purpose for this is to enable a more dynamic fee market to develop. I fear this is a step in the direction of a slippery slope.
(12) How does the new RBF feature activate?
Does anyone know how RBF activates? I mean, if wallets are not upgraded this could be very dangerous for users, because even if it's opt-in this could kill zero confirmation for good.
seweso
https://www.reddit.com/btc/comments/3uighb/on_black_friday_with_9000_transactions_backlogged/cxf3ui0
(13) PT on TP: Peter Todd fulfills the toilet-paper prophecy! [comic]
raisethelimit
https://www.reddit.com/btc/comments/3ujjzn/pt_on_tp_peter_todd_fulfills_the_toiletpape
(14) RBF: A Counter-Argument - by Mike Hearn
https://medium.com/@octskyward/replace-by-fee-43edd9a1dd6d
(15) If you're against RBF, what can you do?
the solution to all this is actually rather simple. Take the power away from these people. Due to the nature of bitcoin, we've always had that power. There never was a need for an "official" or "reference" implementation of the software. For a few years it was simply the most convenient, the most efficient, and the best way to work out all the initial kinks bitcoin had. It was also a sort of restricted field, in that (obviously) there were few people in the world who truly understood it to the degree required to a) make design change proposals, and b) code for them (and note that while up until now this has been the case, it's not necessary for these 2 roles to be carried out by the same people). The last few months' debates over the blocksize limit have shown that a lot of people now truly understand what's what. And what's more, one of the original core devs (Gavin) already gave us the gift of proving in the real world that democracy in bitcoin can truly exist, via voting with the software one (or miners) runs, without meaning to.
BitcoinXT was a huge gift to the community, and it's likely to reach its objective in a few months. It seems an implementation of bitcoin UL will test the same principle far sooner than we thought.
So the potential for real democracy exists within the network. And we're already fast on our way to most of the community stopping using core as the reference client. Shit like what Peter pulled yesterday, I predict, will simply accelerate the process. So the solution is arriving, and it's a far better solution than it would be to, say, lock Peter out of the project. This will be real democracy.
I also predict in a couple of years a lot of big mining groups/companies/whatever will have their own development teams making their internal software available for everyone else to use. This will create an atmosphere of true debate of real issues and how to solve them, and it will allow people (miners) to vote with their implementations on what the "real" bitcoin should be and how it should function.
Exciting times ahead, the wheels are already in motion for this future to come true. The situation is grave, I won't deny that, but I do believe it's very, very temporary.
redlightsaber
https://www.reddit.com/btc/comments/3uighb/on_black_friday_with_9000_transactions_backlogged/cxfn6r4
Yeah I think the time has come to migrate away from "core". There's obviously fishiness going on with the censorship and lack of transparency.
loveforyouandme
https://www.reddit.com/btc/comments/3uighb/on_black_friday_with_9000_transactions_backlogged/cxf6yi8
Vote with your feet: don't run Blockstream Core.
SatoshisDaughter
https://www.reddit.com/btc/comments/3ujc4m/consensus_jgarzik_rbf_would_be_antisocial_on_the/cxfdc4h
submitted by BeYourOwnBank to Bitcoin

Bitcoin Verde v1.1.0 : Schnorr Signatures

For those unacquainted:
Bitcoin Verde is an indexing full node, written from the ground up in Java. It comes with a block explorer and a stratum mining pool. While we still consider v1.1.0 a beta release, our public node (https://bitcoinverde.org) has been stable since January.
Bitcoin Verde is the first to write a Schnorr signature verification algorithm for Java (using Pieter Wuille's specification); if other implementations or wallets need to use our implementation as a reference, it can be located here. Verde's Schnorr implementation has been tested against the same suite of tests as ABC's (test file located here). We intend to submit a pull request to Bouncy Castle sometime in the future.
Things that are new this release:
Similar to our previous release, Bitcoin Verde is very likely incompatible with Windows. Furthermore, it's an indexing node, and because of that it has higher system requirements than a traditional node, due to database indexes and the underlying database structure. Our fully synced bitcoinverde.org node is currently using 615G of disk space and 21G of memory. Bitcoin Verde can be configured to run with far less memory, down to a minimum of around 2G (disable UTXO caching, disable the TX bloom filter, and set max database memory to ~1.5G). We highly recommend running BV on an SSD or M.2 drive. Traditional HDDs are awful at random reads, and the last initial block download we attempted on an HDD took 2+ weeks to complete.
If you have any problems with your node, please reach out to us at [email protected]. We'd be happy to help and troubleshoot problems. Alternatively, you can report bugs or issues to our github.
We've been diligently working on a mobile wallet based on the Bitcoin Verde codebase. We hope to provide a more modern replacement for bitcoinj, while also supporting SLP tokens. We are also ensuring all Bitcoin Verde SPV code can be transpiled to Objective-C (with Swift bindings) for use on iOS. Finally, Bitcoin Verde is experimenting with a new message, called addrblocks, to improve the initial synchronization of SPV wallets, which will greatly improve the ability to validate SLP tokens. Additionally/alternatively, we have been looking at supporting BIP-157 for a more privacy-oriented way to achieve near-instant SPV synchronization; we look forward to sharing these thoughts in the near future.
Again, thank you to the XT team for your support. Another thank you to the ABC team for welcoming us into your conversations, and for helping us understand some of the nuanced aspects of this HF.
To install/upgrade your nodes, clone/pull master at https://github.com/softwareverde/bitcoin-verde, and reference the README under Installing/Upgrading.
submitted by FerriestaPatronum to btc

The Origins of the Blocksize Debate

On May 4, 2015, Gavin Andresen wrote on his blog:
I was planning to submit a pull request to the 0.11 release of Bitcoin Core that will allow miners to create blocks bigger than one megabyte, starting a little less than a year from now. But this process of peer review turned up a technical issue that needs to get addressed, and I don’t think it can be fixed in time for the first 0.11 release.
I will be writing a series of blog posts, each addressing one argument against raising the maximum block size, or against scheduling a raise right now... please send me an email ([email protected]) if I am missing any arguments
In other words, Gavin proposed a hard fork via a series of blog posts, bypassing all developer communication channels altogether and asking for personal, private emails from anyone interested in discussing the proposal further.
On May 5 (1 day after Gavin submitted his first blog post), Mike Hearn published The capacity cliff on his Medium page. 2 days later, he posted Crash landing. In these posts, he argued:
A common argument for letting Bitcoin blocks fill up is that the outcome won’t be so bad: just a market for fees... this is wrong. I don’t believe fees will become high and stable if Bitcoin runs out of capacity. Instead, I believe Bitcoin will crash.
...a permanent backlog would start to build up... as the backlog grows, nodes will start running out of memory and dying... as Core will accept any transaction that’s valid without any limit a node crash is eventually inevitable.
He also, in the latter article, explained that he disagreed with Satoshi's vision for how Bitcoin would mature[1][2]:
Neither me nor Gavin believe a fee market will work as a substitute for the inflation subsidy.
Gavin continued to publish the series of blog posts he had announced while Hearn made these predictions. [1][2][3][4][5][6][7]
Matt Corallo brought Gavin's proposal up on the bitcoin-dev mailing list after a few days. He wrote:
Recently there has been a flurry of posts by Gavin at http://gavinandresen.svbtle.com/ which advocate strongly for increasing the maximum block size. However, there hasn't been any discussion on this mailing list in several years as far as I can tell...
So, at the risk of starting a flamewar, I'll provide a little bait to get some responses and hope the discussion opens up into an honest comparison of the tradeoffs here. Certainly a consensus in this kind of technical community should be a basic requirement for any serious commitment to blocksize increase.
Personally, I'm rather strongly against any commitment to a block size increase in the near future. Long-term incentive compatibility requires that there be some fee pressure, and that blocks be relatively consistently full or very nearly full. What we see today are transactions enjoying next-block confirmations with nearly zero pressure to include any fee at all (though many do because it makes wallet code simpler).
This allows the well-funded Bitcoin ecosystem to continue building systems which rely on transactions moving quickly into blocks while pretending these systems scale. Thus, instead of working on technologies which bring Bitcoin's trustlessness to systems which scale beyond a blockchain's necessarily slow and (compared to updating numbers in a database) expensive settlement, the ecosystem as a whole continues to focus on building centralized platforms and advocate for changes to Bitcoin which allow them to maintain the status quo
Shortly thereafter, Corallo explained further:
The point of the hard block size limit is exactly because giving miners free rule to do anything they like with their blocks would allow them to do any number of crazy attacks. The incentives for miners to pick block sizes are nowhere near compatible with what allows the network to continue to run in a decentralized manner.
Tier Nolan considered possible extensions and modifications that might improve Gavin's proposal and argued that soft caps could be used to mitigate against the dangers of a blocksize increase. Tom Harding voiced support for Gavin's proposal.
Peter Todd mentioned that a limited blocksize provides the benefit of protecting against the "perverse incentives" behind potential block withholding attacks.
Slush didn't have a strong opinion one way or the other, and neither did Eric Lombrozo, though Eric was interested in developing hard-fork best practices and wanted to:
explore all the complexities involved with deployment of hard forks. Let’s not just do a one-off ad-hoc thing.
Matt Whitlock voiced his opinion:
I'm not so much opposed to a block size increase as I am opposed to a hard fork... I strongly fear that the hard fork itself will become an excuse to change other aspects of the system in ways that will have unintended and possibly disastrous consequences.
Bryan Bishop strongly opposed Gavin's proposal, and offered a philosophical perspective on the matter:
there has been significant public discussion... about why increasing the max block size is kicking the can down the road while possibly compromising blockchain security. There were many excellent objections that were raised that, sadly, I see are not referenced at all in the recent media blitz. Frankly I can't help but feel that if contributions, like those from #bitcoin-wizards, have been ignored in lieu of technical analysis, and the absence of discussion on this mailing list, that I feel perhaps there are other subtle and extremely important technical details that are completely absent from this--and other-- proposals.
Secured decentralization is the most important and most interesting property of bitcoin. Everything else is rather trivial and could be achieved millions of times more efficiently with conventional technology. Our technical work should be informed by the technical nature of the system we have constructed.
There's no doubt in my mind that bitcoin will always see the most extreme campaigns and the most extreme misunderstandings... for development purposes we must hold ourselves to extremely high standards before proposing changes, especially to the public, that have the potential to be unsafe and economically unsafe.
There are many potential technical solutions for aggregating millions (trillions?) of transactions into tiny bundles. As a small proof-of-concept, imagine two parties sending transactions back and forth 100 million times. Instead of recording every transaction, you could record the start state and the end state, and end up with two transactions or less. That's a 100-million-fold reduction, without modifying max block size and without potentially compromising secured decentralization.
The MIT group should listen up and get to work figuring out how to measure decentralization and its security. Getting this measurement right would be really beneficial because we would have a more academic and technical understanding to work with.
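Bishop's start-state/end-state example is the intuition that later matured into payment channels: exchange any number of payments off-chain, and settle only the net result on-chain. A minimal sketch with illustrative numbers (the balances and payment counts are made up):

```python
# Sketch of Bishop's start-state/end-state aggregation: two parties
# exchange many payments off-chain and settle only the net result,
# which is the core intuition behind payment channels.

import random

alice, bob = 50_000, 50_000          # opening balances locked in the channel
random.seed(1)

for _ in range(1_000_000):           # a million back-and-forth payments...
    amount = random.randint(1, 10)
    if random.random() < 0.5 and alice >= amount:
        alice, bob = alice - amount, bob + amount
    elif bob >= amount:
        alice, bob = alice + amount, bob - amount

# ...but only two on-chain transactions are ever needed:
print("tx 1 (open):  lock 100000 into the channel")
print(f"tx 2 (close): pay out alice={alice}, bob={bob}")
```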
Gregory Maxwell echoed and extended that perspective:
When Bitcoin is changed fundamentally, via a hard fork, to have different properties, the change can create winners or losers...
There are non-trivial number of people who hold extremes on any of these general belief patterns; Even among the core developers there is not a consensus on Bitcoin's optimal role in society and the commercial marketplace.
there is at least a two-fold concern on this particular ("Long term Mining incentives") front:
One is that the long-held argument is that security of the Bitcoin system in the long term depends on fee income funding autonomous, anonymous, decentralized miners profitably applying enough hash-power to make reorganizations infeasible.
For fees to achieve this purpose, there seemingly must be an effective scarcity of capacity.
The second is that when subsidy has fallen well below fees, the incentive to move the blockchain forward goes away. An optimal rational miner would be best off forking off the current best block in order to capture its fees, rather than moving the blockchain forward...
tools like the Lightning network proposal could well allow us to hit a greater spectrum of demands at once--including secure zero-confirmation (something that larger blocksizes reduce if anything), which is important for many applications. With the right technology I believe we can have our cake and eat it too, but there needs to be a reason to build it; the security and decentralization level of Bitcoin imposes a hard upper limit on anything that can be based on it.
Another key point here is that the small bumps in blocksize which wouldn't clearly knock the system into a largely centralized mode--small constants--are small enough that they don't quantitatively change the operation of the system; they don't open up new applications that aren't possible today
the procedure I'd prefer would be something like this: if there is a standing backlog, we-the-community of users look to indicators to gauge if the network is losing decentralization and then double the hard limit with proper controls to allow smooth adjustment without fees going to zero (see the past proposals for automatic block size controls that let miners increase up to a hard maximum over the median if they mine at quadratically harder difficulty), and we don't increase if it appears it would be at a substantial increase in centralization risk. Hardfork changes should only be made if they're almost completely uncontroversial--where virtually everyone can look at the available data and say "yea, that isn't undermining my property rights or future use of Bitcoin; it's no big deal". Unfortunately, every indicator I can think of except fee totals has been going in the wrong direction almost monotonically along with the blockchain size increase since 2012 when we started hitting full blocks and responded by increasing the default soft target. This is frustrating
many people--myself included--have been working feverishly hard behind the scenes on Bitcoin Core to increase the scalability. This work isn't small-potatoes boring software engineering stuff; I mean even my personal contributions include things like inventing a wholly new generic algebraic optimization applicable to all EC signature schemes that increases performance by 4%, and that is before getting into the R&D stuff that hasn't really borne fruit yet, like fraud proofs. Today Bitcoin Core is easily >100 times faster to synchronize and relay than when I first got involved on the same hardware, but these improvements have been swallowed by the growth. The ironic thing is that our frantic efforts to keep ahead and not lose decentralization have both not been enough (by the best measures, full node usage is the lowest it's been since 2011 even though the user base is huge now) and yet also so much that people could seriously talk about increasing the block size to something gigantic like 20MB. This sounds less reasonable when you realize that even at 1MB we'd likely have a smoking hole in the ground if not for existing enormous efforts to make scaling not come at a loss of decentralization.
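Maxwell's parenthetical about miners increasing the size "at quadratically harder difficulty" refers to the flexcap-style proposals circulating at the time. A sketch of how such a penalty curve might look (the exact curve and parameters here are assumptions for illustration, not a spec):

```python
# Sketch of a "flexcap"-style rule: a miner may exceed the median block
# size, up to a hard maximum, by mining at quadratically harder
# difficulty. The penalty curve below is an assumed example.

def difficulty_multiplier(block_size, median_size, hard_max):
    if block_size > hard_max:
        raise ValueError("block exceeds the hard maximum")
    if block_size <= median_size:
        return 1.0                       # no penalty at or below the median
    overage = (block_size - median_size) / (hard_max - median_size)
    return 1.0 + overage ** 2            # quadratic cost for the extra space

median, hard_max = 1.0, 2.0              # sizes in MB
for size in (0.8, 1.0, 1.5, 2.0):
    mult = difficulty_multiplier(size, median, hard_max)
    print(f"{size} MB block -> difficulty x{mult:.2f}")
```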
Peter Todd also summarized some academic findings on the subject:
In short, without either a fixed blocksize or fixed fee per transaction Bitcoin will not survive as there is no viable way to pay for PoW security. The latter option - fixed fee per transaction - is non-trivial to implement in a way that's actually meaningful - it's easy to give miners "kickbacks" - leaving us with a fixed blocksize.
Even a relatively small increase to 20MB will greatly reduce the number of people who can participate fully in Bitcoin, creating an environment where the next increase requires the consent of an even smaller portion of the Bitcoin ecosystem. Where does that stop? What's the proposed mechanism that'll create an incentive and social consensus to not just 'kick the can down the road'(3) and further centralize but actually scale up Bitcoin the hard way?
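Todd's argument leans on the subsidy schedule, which is not in dispute: the block reward halves every 210,000 blocks (roughly every four years) and tends to zero, so in the long run fees are the only income paying for proof-of-work security:

```python
# The arithmetic behind Todd's point: the block subsidy halves every
# 210,000 blocks and tends to zero, leaving fees as the only long-run
# income for proof-of-work security.

subsidy = 50.0                           # BTC per block at genesis (2009)
for halving in range(10):
    print(f"after halving {halving}: {subsidy:.4f} BTC/block")
    subsidy /= 2
# After ~33 halvings the subsidy rounds down to zero satoshis entirely.
```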
Some developers (e.g. Aaron Voisine) voiced support for Gavin's proposal, repeating Mike Hearn's "crash landing" arguments.
Pieter Wuille said:
I am - in general - in favor of increasing the size of blocks...
Controversial hard forks. I hope the mailing list here today already proves it is a controversial issue. Independent of personal opinions pro or against, I don't think we can do a hard fork that is controversial in nature. Either the result is effectively a fork, and pre-existing coins can be spent once on both sides (effectively failing Bitcoin's primary purpose), or the result is one side forced to upgrade to something they dislike - effectively giving a power to developers they should never have. Quoting someone: "I did not sign up to be part of a central banker's committee".
The reason for increasing is "need". If "we need more space in blocks" is the reason to do an upgrade, it won't stop after 20 MB. There is nothing fundamental possible with 20 MB blocks that isn't with 1 MB blocks.
Misrepresentation of the trade-offs. You can argue all you want that none of the effects of larger blocks are particularly damaging, so everything is fine. They will damage something (see below for details), and we should analyze these effects, and be honest about them, and present them as a trade-off we choose to make to scale the system better. If you just ask people if they want more transactions, of course you'll hear yes. If you ask people if they want to pay less taxes, I'm sure the vast majority will agree as well.
Miner centralization. There is currently, as far as I know, no technology that can relay and validate 20 MB blocks across the planet, in a manner fast enough to avoid very significant costs to mining. There is work in progress on this (including Gavin's IBLT-based relay, or Greg's block network coding), but I don't think we should be basing the future of the economics of the system on undemonstrated ideas. Without those (or even with), the result may be that miners self-limit the size of their blocks to propagate faster, but if this happens, larger, better-connected, and more centrally-located groups of miners gain a competitive advantage by being able to produce larger blocks. I would like to point out that there is nothing evil about this - a simple feedback to determine an optimal block size for an individual miner will result in larger blocks for better connected hash power. If we do not want miners to have this ability, "we" (as in: those using full nodes) should demand limitations that prevent it. One such limitation is a block size limit (whatever it is).
Ability to use a full node.
Skewed incentives for improvements... without actual pressure to work on these, I doubt much will change. Increasing the size of blocks now will simply make it cheap enough to continue business as usual for a while - while forcing a massive cost increase (and not just a monetary one) on the entire ecosystem.
Fees and long-term incentives.
I don't think 1 MB is optimal. Block size is a compromise between scalability of transactions and verifiability of the system. A system with 10 transactions per day that is verifiable by a pocket calculator is not useful, as it would only serve a few large banks' settlements. A system which can deal with every coffee bought on the planet, but requires a Google-scale data center to verify is also not useful, as it would be trivially out-competed by a VISA-like design. The usefulness lies in a balance, and there is no optimal choice for everyone. We can choose where that balance lies, but we must accept that this is done as a trade-off, and that that trade-off will have costs such as hardware costs, decreasing anonymity, less independence, smaller target audience for people able to fully validate, ...
Choose wisely.
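Wuille's miner-centralization point can be made roughly quantitative. Block discovery is approximately a Poisson process with a 600-second mean, so a block that takes t seconds to reach competing miners is orphaned with probability about 1 − e^(−t/600); the penalty falls hardest on poorly connected miners. The link speeds below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope for Wuille's miner-centralization point: a block
# that needs t seconds to propagate is orphaned with probability
# ~ 1 - exp(-t/600). Link speeds are illustrative assumptions.

import math

def orphan_probability(block_mb, mbit_per_s):
    seconds = block_mb * 8 / mbit_per_s      # naive store-and-forward delay
    return 1 - math.exp(-seconds / 600)

for size in (1, 8, 20):
    slow = orphan_probability(size, 2)       # poorly connected miner
    fast = orphan_probability(size, 100)     # well-connected data center
    print(f"{size:>2} MB block: slow link {slow:.1%}, fast link {fast:.1%} orphan risk")
```

The asymmetry is the point: the larger the block, the bigger the advantage of the well-connected miner, which is exactly the feedback loop toward centralization Wuille describes.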
Mike Hearn responded:
this list is not a good place for making progress or reaching decisions.
if Bitcoin continues on its current growth trends it will run out of capacity, almost certainly by some time next year. What we need to see right now is leadership and a plan, that fits in the available time window.
I no longer believe this community can reach consensus on anything protocol related.
When the money supply eventually dwindles I doubt it will be fee pressure that funds mining
What I don't see from you yet is a specific and credible plan that fits within the next 12 months and which allows Bitcoin to keep growing.
Peter Todd then pointed out that, contrary to Mike's claims, developer consensus had been achieved within Core plenty of times recently. Btc-drak asked Mike to "explain where the 12 months timeframe comes from?"
Jorge Timón wrote an incredibly prescient reply to Mike:
We've successfully reached consensus for several softfork proposals already. I agree with others that hardforks need to be uncontroversial and there should be consensus about them. If you have other ideas for the criteria for hardfork deployment, I'm all ears. I just hope that by "What we need to see right now is leadership" you don't mean something like "when Gavin and Mike agree it's enough to deploy a hardfork" when you go from vague to concrete.
Oh, so your answer to "bitcoin will eventually need to live on fees and we would like to know more about how it will look then" is "bitcoin is broken long term, but that's far away in the future, so let's just worry about the present". I agree that it's hard to predict that future, but having some competition for block space would actually help us get more data on a similar situation and predict that future better. What you want to avoid at all costs (the block size actually being used) is what I see as the best opportunity we have to look into the future.
this is my plan: we wait 12 months... and start having full blocks and people having to wait 2 blocks for their transactions to be confirmed sometimes. That would be the beginning of a true "fee market", something that Gavin used to say was his #1 priority not so long ago (which seems contradictory with his current efforts to prevent that from happening). Having a true fee market seems clearly an advantage. What are the supposedly disastrous negative parts of this plan that make an alternative plan (i.e. increasing the block size) so necessary and obvious? I think the advocates of the size increase are failing to explain the disadvantages of maintaining the current size. It feels like the explanations are missing because it should be somehow obvious how the sky will burn if we don't increase the block size soon. But, well, it is not obvious to me, so please elaborate on why having a fee market (instead of just a price estimator for a market that doesn't even really exist) would be a disaster.
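The "true fee market" Timón is asking for is mechanically simple: once demand exceeds block space, miners fill blocks greedily by feerate, and the lowest feerate that still makes it in sets the effective going rate. A toy illustration with made-up numbers:

```python
# Minimal sketch of a fee market: when demand exceeds block space,
# miners sort the mempool by feerate and the last transaction that
# fits sets the effective clearing price. Toy numbers throughout.

mempool = [                                  # (size in bytes, feerate in sat/byte)
    (250, 1), (250, 5), (500, 3), (400, 8), (300, 2), (250, 10),
]
BLOCK_SPACE = 1000                           # tiny block for the example

block, used = [], 0
for size, feerate in sorted(mempool, key=lambda tx: -tx[1]):
    if used + size <= BLOCK_SPACE:           # greedily take the best-paying txs
        block.append((size, feerate))
        used += size

print("included:", block)
print("clearing feerate:", min(fr for _, fr in block), "sat/byte")
```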
Some suspected Gavin/Mike were trying to rush the hard fork for personal reasons.
Mike Hearn's response was to demand a "leader" who could unilaterally steer the Bitcoin project and make decisions unchecked:
No. What I meant is that someone (theoretically Wladimir) needs to make a clear decision. If that decision is "Bitcoin Core will wait and watch the fireworks when blocks get full", that would be showing leadership
I will write more on the topic of what will happen if we hit the block size limit... I don't believe we will get any useful data out of such an event. I've seen distributed systems run out of capacity before. What will happen instead is technological failure followed by rapid user abandonment...
we need to hear something like that from Wladimir, or whoever has the final say around here.
Jorge Timón responded:
it is true that "universally uncontroversial" (which is what I think the requirement should be for hard forks) is a vague qualifier that's not formally defined anywhere. I guess we should only consider rational arguments. You cannot just nack something without further explanation. If his explanation was "I will change my mind after we increase block size", I guess the community should say "then we will just ignore your nack because it makes no sense". In the same way, when people use fallacies (purposely or not) we must expose that and say "this fallacy doesn't count as an argument". But yeah, it would probably be good to define better what constitutes a "sensible objection" or something. That doesn't seem simple though.
it seems that some people would like to see that happening before the subsidies are low (not necessarily null), while other people are fine waiting for that but don't want to ever be close to the scale limits anytime soon. I would also like to know for how long we need to prioritize short term adoption in this way. As others have said, if the answer is "forever, adoption is always the most important thing" then we will end up with an improved version of Visa. But yeah, this is progress, I'll wait for your more detailed description of the tragedies that will follow hitting the block limits, assuming for now that it will happen in 12 months. My previous answer to the nervous "we will hit the block limits in 12 months if we don't do anything" was "not sure about 12 months, but whatever, great, I'm waiting for that to observe how fees get affected". But it should have been a question "what's wrong with hitting the block limits in 12 months?"
Mike Hearn again asserted the need for a leader:
There must be a single decision maker for any given codebase.
Bryan Bishop attempted to explain why this did not make sense given git's distributed architecture.
Finally, Gavin announced his intent to merge the patch into Bitcoin XT to bypass the peer review he had received on the bitcoin-dev mailing list.
submitted by sound8bits to Bitcoin

Proposal for Making Decentralized Development Work (or at least less prone to toxicity)

The Bitcoin Cash community is now facing stress tests, not of the technological but of the human kind. Developers are NOT in harmony. Is this what our community wants? I don't think so. Our community said we wanted decentralized development, having been traumatized by Core. However, just throwing varied devs into an unstructured environment and saying "Okay... go!" is not conducive to making a complicated, unique-to-the-world, multi-billion dollar, multi-discipline open source project work; not when there are all sorts of potential communication issues (perhaps semantic/language, maybe personality), financial interests (like the 5 million euro token solution incentive, in which implementation devs are participating), and varied technical challenges and opinions. What we get is what we have: tensions building up, cliques forming, and emotions and biases influencing what should be purely technical actions. We start having attempts at gaining sway for individual approaches, as everyone is sure their way is best, for whatever reason(s).
We end up avoiding one problem (Core-like obstinacy) and creating another, which likely still leads to centralized development as people get fed up and either quit (devs are human, underpaid, and typically stressed already) or fork off. Developers don't grow on trees. Lose enough and we're left with centralized development anyway, and definitely at a loss, as the most productive/competent people can usually make their fortunes elsewhere. So we should take care of devs, and not just throw them into a gladiator pit and tell them to fend for themselves. Devs are not expected to be great negotiators. They shouldn't have to solve this on their own.
I propose a 3-part solution. I caution: adopt at least Part 1, or propose something better; but leaving this unaddressed and waiting until the next time something explodes is not a good option.
Part 1 - Lowest Common Denominator (LCD) Development
The Bitcoin white paper doesn't mention tokens, yet here we all are. Obviously, we all feel Bitcoin works well enough as described in the white paper to succeed. The idea behind LCD decentralized development is that developers start by working on the broadest areas of agreement first. What's the one thing everyone in this community agrees on? Easy: big blocks. We need not an 8MB limit, nor a 32MB limit, but to remove the limit entirely, forevermore solving at least that issue, the reason for our fork. So developers first prioritize fixing the block size, as there should be the least disagreement there. Immediately devs start working together = improvement! Working together productively can build mutual bonds of goodwill and respect, which can help with Part 2. So Part 1 is identifying areas of existing agreement and prioritizing working there first.
Part 2 - A Structure for Non-consenus Proposals
The white paper, as good as it is, didn't foresee everything. There are ways Bitcoin can work better. However, this is the most challenging area for "leaderless" development. How do we settle on one way forward? Easy. We don't, not unless everyone agrees. If our community is to stick to this dev model, we must accept that stalled progress on some issues is part of the risk that comes with the turf. Developers, if they have the time and energy, first make a convincing improvement proposal, such as a way to implement tokens. This should be recorded and logged along with any technical objections.
Time helps immensely with solving technical issues. If there are objections, clearly delineated, they can be addressed over time. Take the block size as an example. Time and time again, big-blockers gave compelling arguments for why raising the block size in various ways was safely tolerable. When Core still objected, more arguments were presented, and the risk boundaries were scaled down (e.g. going from unlimited to XT, all the way down to just 2X). It became clear Core wasn't objecting for technical but for other reasons, because the technical arguments were clearly on the big-blockers' side, as Bitcoin Cash, now 1 year old, clearly proves. In the event an improvement proposal is repeatedly objected to, but for reasons obviously not technical, the issue can be brought before the wider community for the last resort: evaluating the need to fork off. Personally, I don't think there is any reason the BCH community should ever split beyond achieving our key objective of bigger blocks. I'm just illustrating that this structure works and how it would have worked against Core's blockage.
Part 3 - Neutral Communication Area
Development teams have their own internal methods of communication. However, for implementations to remain in consensus, a broader area where devs from different implementations can converse should exist. This channel should not be under the power of anyone directly involved with an implementation, because of the possibility of intended or unintended heavy-handed moderation due to (unavoidable) bias. My proposal is that a common-area development forum be set up. I don't think it should be chat-style; a traditional forum with static, dated conversations is best for devs on different schedules. In the absence of something new being set up, we can simply use threads on /btc. This has the benefit of being uncensored and neutral, and it typically exposes technical discourse to those who may possess technical insights but don't work primarily on an implementation.
Thoughts?
submitted by cryptos4pz to btc
