This issue was moved to a discussion.

Consensus plan for next major release (1.21) #1798

Closed
patricklodder opened this issue Mar 14, 2021 · 128 comments

@patricklodder
Member

This issue describes the planned protocol features that need to be activated by consensus for the next major release (planned to be 1.21)

Summary

With the next major release, we plan to propose two protocol changes for activation at once: segwit and CSV. Both have been deployed on Bitcoin's mainnet for a long time and are considered non-contentious, so activating them together will save us all time compared to doing them sequentially. VersionBits / BIP9 will not be proposed at this time because its version encoding conflicts with AuxPow, and this would be a much more dangerous, and possibly contentious, change.

Block version 5 and protocol version 70016 will be proposed for this implementation.

Details

In scope

  • Segregated witness a.k.a. segwit
    • Functionality: Separates spending proofs (i.e. signatures) from transactions when embedded in blocks
    • Rationale: Further reduces transaction malleability.
    • Scope: BIP141, BIP143 and BIP147
    • Note: we will carefully consider P2W* addressing.
  • CheckSequenceVerify a.k.a. CSV
    • Functionality: Allows consensus-enforced locking of an output until a relative time and exposes the field to redeem scripts
    • Rationale: Allows for much more sophisticated redeem scripts, must-have feature for Lightning-like L2 networks.
    • Scope: BIP68, BIP112 and BIP113
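To make the CSV mechanics concrete, here is a minimal sketch (Python, not the actual C++ consensus code) of how a BIP68-style relative lock encoded in an input's nSequence field can be evaluated. The flag constants follow BIP68; the function name and arguments are illustrative:

```python
# Illustrative sketch of BIP68-style relative lock-time evaluation.
# The low 16 bits of nSequence encode the relative lock; bit 22 selects
# time-based units (512-second granularity); bit 31 disables the lock.

SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22      # set => units of 512 seconds
SEQUENCE_LOCKTIME_MASK = 0x0000FFFF

def relative_lock_satisfied(n_sequence: int, confirmations: int,
                            seconds_since_confirmation: int) -> bool:
    """Return True if an input with this nSequence may be spent now."""
    if n_sequence & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return True                        # relative lock not enforced
    value = n_sequence & SEQUENCE_LOCKTIME_MASK
    if n_sequence & SEQUENCE_LOCKTIME_TYPE_FLAG:
        return seconds_since_confirmation >= value * 512
    return confirmations >= value          # block-based lock

# An output locked for 10 blocks:
print(relative_lock_satisfied(10, confirmations=9, seconds_since_confirmation=0))   # False
print(relative_lock_satisfied(10, confirmations=10, seconds_since_confirmation=0))  # True
```

This is the building block that payment-channel scripts combine with OP_CHECKSEQUENCEVERIFY (BIP112) to enforce a dispute window on-chain.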

Not in scope

Deployment

Deployment is planned to be proposed through a single soft fork, identified by block version 5 (full version: 0x00620104) and a protocol update to version 70016 (0x011180) through the usual 95% SuperMajority consensus rule (full activation with 1900 blocks having v5 in a 2000 block window) similar to the BIP65 soft fork that was proposed with Dogecoin Core 1.14.
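The counting rule above can be sketched as follows (a toy Python version of the 95% check; the real IsSuperMajority logic in Core also has to strip the AuxPow bits out of the full version field, which this sketch ignores):

```python
# Sketch of the SuperMajority-style activation rule described above:
# full activation once 1900 of the last 2000 blocks signal the new version,
# as with the BIP65 soft fork proposed with Dogecoin Core 1.14.

WINDOW = 2000
THRESHOLD = 1900  # 95%

def is_super_majority(min_version: int, last_block_versions: list) -> bool:
    """True if >= THRESHOLD of the last WINDOW blocks signal min_version."""
    window = last_block_versions[-WINDOW:]
    return sum(1 for v in window if v >= min_version) >= THRESHOLD

versions = [5] * 1900 + [4] * 100   # exactly at the threshold
print(is_super_majority(5, versions))  # True
```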

Comments / questions? Let us know here.

@patricklodder patricklodder added this to the 1.21 milestone Mar 14, 2021
@patricklodder patricklodder added this to To do in Next Major - 1.21 via automation Mar 14, 2021
@shibe2
Contributor

I support the general idea of separating signatures from signed data. However, I oppose the SegWit soft fork as implemented in Bitcoin, with its block extension and "weight". I think it was done that way mostly because BTC developers decided to "freeze the base protocol" and implement any further changes as backwards-compatible extensions. That decision might make sense for BTC's investment and large-transaction use cases, but not for Dogecoin's "people's money" use cases. If Dogecoin's popularity grows significantly, we may have to technically diverge from BTC and address scaling issues in a different way. I want Dogecoin to be technically and organizationally prepared for positive changes in case they come.

I propose discussing SegWit and CSV separately and re-assessing whether they may be contentious.

@patricklodder
Member Author

Could you elaborate on your concern with the concept of block extensions? Asking because this is also a characteristic of EB and a requirement for Taproot, so you're basically challenging an entire chain of protocol extensions. Open to counter-proposals - we can discuss that here.

@shibe2
Contributor

Basically, it all can be implemented in a straightforward way. BTC does it in a more complicated way only to maintain backwards-compatibility. By following their development path, we would make it more difficult to diverge in the future, should it be needed. And I think it will likely be needed because of different economic priorities.

Transaction malleability can be mitigated in different ways. As for SegWit, it can be implemented now with little effort by using BTC code. But I see it as technical debt; it may complicate things in the future. I propose evaluating the expected benefits of these protocol changes and deciding whether they are worth implementing at all at this point.

I am a bit pessimistic about cool new features; they often go unused in Dogecoin. For example, I think that multi-signature transactions are useful in many cases, but I don't see them used as widely as they could be. And this feature was there from the beginning, I think. We can put effort into solving transaction malleability and implementing some new features now and see some benefits a couple of years later. By that time it may turn out that the way we did it was not a good way. On the other hand, when there is already a demand for some feature, its benefits are more clear.

@patricklodder
Member Author

Replying in reverse order, because it suits my narrative better, sorry:

I am a bit pessimistic about cool new features, they often go unused in Dogecoin. [..] When there is already a demand for some feature, its benefits are more clear.

Absolutely. Our developer ecosystem is very small though, and this is a problem because we don't have many people working on libraries and integrations. But then, looking at how slow the v4 soft fork went, even if it goes 2-3x faster this time, it's my opinion that we should rather think now than later about what we foresee in the future, because even if coding took only an hour, fork activation will take a very long time.

The good news is that these are not sexy features, at all.

I propose evaluating the expected benefits of these protocol changes and deciding whether they are worth implementing at all at this point.

The reason for segwit (or more specifically, witness segregation) is purely to reduce malleability. I think this is needed because the risk of not doing it is that there could - even today - be weaknesses in the implementation that are currently not found or found but not disclosed. After all, this actually happened at a time when the Bitcoin market cap was less than ours is today, but our current codebase is not much younger than that post. The security of the chain is the most important thing there is because without it, there is no value.
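To illustrate why witness segregation addresses malleability, here is a toy sketch (not real transaction serialization; the byte strings are stand-ins) showing that a legacy txid commits to the signature bytes, so a semantically equivalent re-encoding of a signature changes the txid, while a segwit-style txid does not:

```python
# Toy illustration of third-party txid malleability vs witness segregation.
import hashlib

def dsha256(b: bytes) -> bytes:
    """Double SHA-256, as used for transaction ids."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

tx_core = b"version|inputs|outputs|locktime"   # stand-in for the tx skeleton
sig_a = b"<DER sig, canonical encoding>"
sig_b = b"<same sig, mutated encoding>"        # semantically equivalent signature

legacy_txid_a = dsha256(tx_core + sig_a)       # legacy txid covers the sig bytes
legacy_txid_b = dsha256(tx_core + sig_b)
segwit_txid_a = dsha256(tx_core)               # witness excluded from the txid
segwit_txid_b = dsha256(tx_core)

print(legacy_txid_a == legacy_txid_b)  # False: mutated sig changed the txid
print(segwit_txid_a == segwit_txid_b)  # True: txid is stable
```

This is why chained, unconfirmed transactions (the payment-channel case below) need the parent's txid to be immutable by third parties.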

The reason for CSV is that it allows for bi-directional payment channels using P2SH. So this is more of an enabler than a feature. It could enable Lightning, or something more intricate that allows transparent payment channels with cross-chain settlement. As a framework, this enables a lot of L2 solutions that we can work with later so I think this is worth doing. Assuming for now that we want to keep the issuing chain at least intact at its core, we'll likely need a payment channel solution eventually, because there is no way to do high tps on a PoW chain because of blocksize vs speed-of-light constraints - nor any chain mechanic that doesn't preselect its block producer for that matter. If our transaction volume explodes, I think that we're bound to need cross-chain payment channel solutions at some point. It doesn't have to be Lightning as designed for Bitcoin, but the ability to at least set up payment channels without needing to pre-fund multisig addresses needs to be there if we'd want anything remotely like an L2 solution.

It's also important to note that a fully secure payment channel using CSV needs to have some guarantees about malleability too.

By following their development path, we would make it more difficult to diverge in the future, should it be needed. And I think it will likely be needed because of different economic priorities.

The question regarding segwit is how far we would take "standards compliance", where the standard is - like it or not - the Bitcoin Core implementation. Until now, staying compatible with Bitcoin's base structs and serialization, Litecoin's scrypt and Namecoin's AuxPow has enabled a handful of wallet/service integrations that would otherwise have been difficult and maybe would not have happened at all. Divergence will probably lead to friction with adoption, and we really are in need of adoption. Adopting the segwit extension framework from Bitcoin, or using EB, or doing non-standard structs/serialization, all have different impacts on 3rd party implementations, and we have to weigh those carefully. As a reminder, no one can possibly like how the 1.14 fee enforcement is still haunting us (this literally is 6 lines of non-consensus code), and it makes me wonder whether we can actually make our own standards and get them adopted, especially when they provide the same functionality as a Bitcoin solution, but in a bespoke jacket.

Did you have a particular implementation in mind that is better than Bitcoin's segwit extension, i.e. something that exists and is field proven?

@shibe2
Contributor

Did you have a particular implementation in mind that is better than Bitcoin's segwit extension, i.e. something that exists and is field proven?

AFAIK, BCH implemented provisions of BIP 62.

Pros: Simple, doesn't need new type of wallet, doesn't have stupid notion of "weight" which doesn't make sense for Dogecoin.

Cons: Signatures are still mixed with signed data.

Divergence will probably lead to friction with adoption, and we really are in need of adoption.

If/when BTC code no longer suits Dogecoin's needs, it will be because adoption is great and we need a new approach to allow further growth. And differences in implementation don't necessarily cause incompatibility.

@patricklodder
Member Author

AFAIK, BCH implemented provisions of BIP 62.

In general, I think it's good to enforce those rules that we currently only treat as non-standard for mempool acceptance (basically only strict DER is enforced through BIP66), but this does not solve the problem that signatures change the tx hash (point 9 of BIP62.) I think that makes this solution suboptimal at best when dealing with spending queued multisig transaction outputs (as then, multiple parties can mutate the hash), so from where I'm sitting, it's not necessarily good enough.

doesn't need new type of wallet

Is your concern that wallets need to support P2WPKH and P2WSH output types? Wallets will have to change no matter what, because we desperately need better payment channel solutions.

Or bech32 serialization? bech32 is already a much wider standard than just Bitcoin, and not only coins close to us like LTC or DGB use it... for example, all cosmos-sdk implementations use it as a standard, including BSC and Terra. Even if we were not doing segwit, bech32 is something we will have to look into (but as the note said, carefully.)

doesn't have stupid notion of "weight" which doesn't make sense for Dogecoin

So this is very convoluted, I agree, and I suspect it mostly comes from the narrative that segwit is good for increasing the block size. Which is something we should be extremely careful with on a 1-minute chain. How would you propose limiting witness size? Just keep it in check as part of the current block size limit? I'd be fine with that, and that would kill the need for "weight".
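For reference, a small sketch contrasting BIP141's weight formula (non-witness bytes count 4x, witness bytes 1x) with the simpler "count every byte against the existing limit" rule being discussed here:

```python
# BIP141's "weight" metric versus the simpler alternative of counting
# witness bytes one-for-one against the existing block size limit.

def bip141_weight(base_size: int, total_size: int) -> int:
    """base_size: tx serialized without witness; total_size: with witness.
    Non-witness bytes effectively count 4x, witness bytes 1x."""
    return base_size * 3 + total_size

def simple_cost(total_size: int) -> int:
    """The alternative discussed: every byte counts equally."""
    return total_size

# A tx of 250 base bytes plus 100 witness bytes:
print(bip141_weight(250, 350))  # 1100 weight units (vsize 275)
print(simple_cost(350))         # 350 bytes
```

Under BIP141, the block limit is 4,000,000 weight units, which is what creates the effective block size increase on Bitcoin; counting every byte equally keeps the existing limit semantics unchanged.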

Signatures are still mixed with signed data.

Signature segregation (at least from tx hash input perspective) is sitting somewhere between a "SHOULD HAVE" and a "MUST HAVE" for me, see my first note in this comment. Leaning towards the latter.

If/when BTC code no longer suits Dogecoin's needs, it will be because adoption is great and we need a new approach to allow further growth.

I don't think we have the luxury to wait for that moment to make decisions. We need to have something in our pocket that is better than centralized bridging, with the next soft fork activation. I think that of all the options we have (EB, SB or our proposal), the current proposal is the most feasible, as it only touches well-known, field-proven implementations on which scalability solutions can quickly be adapted. Looking into the implementation specifics of segwit is fine with me, with the note that the less we change, the easier the transition will be. But that doesn't mean we cannot change anything.

Just to be clear about the alternatives: for EB, there is a conceptual issue with miner opt-in, which means we weaken security on whatever touches extensions, and I'm not sure how SB could allow for cross-chain agnosticism without falling back to an EB-like optionality - and neither is really field-proven in the way we'd want to use them.

@shibe2
Contributor

I think BIP 62 could be beneficial regardless of whether we implement SegWit or not. So let's evaluate it on its own. Then, if we decide to implement it, we can evaluate what benefits SegWit could provide on top of that. I think that BIP62 would be less contentious, so I propose this order of evaluation.

but this does not solve the problem that signatures change the tx hash (point 9 of BIP62.) I think that makes this solution suboptimal at best when dealing with spending queued multisig transaction outputs (as then, multiple parties can mutate the hash), so from where I'm sitting, it's not necessarily good enough.

What protocols would suffer from that kind of malleability? Could they be amended to tolerate it? I think it would be good to fix it, but I want to evaluate real benefits.

How would you propose limiting witness size? Just keep it in check as part of the current block size limit?

Yes, that would work. Currently, the cost for the network is approximated using the total number of bytes in the block. I don't see why SegWit would need a different formula. What the total block size limit should be is a separate question. I am generally in favor of always having a limit that is higher than current demand.

I don't think we have the luxury to wait for that moment to make decisions.

I also don't think that we should wait that long. But while making decisions now, we need to consider pros and cons of each solution for different time frames.

For shorter time frame:

  • What will it take to implement and activate the change?
  • What benefits will we get soon after it's implemented?

For longer time frame:

  • Does it create technical debt?
  • How will it affect future available choices?
  • What benefits could it give in the long term?

We need to have something in our pocket that is better than centralized bridging, with the next soft fork activation.

What for?

@vovanloc0798

Hihi

@jaybny

why would dogecoin need a layer 2? no reason for lightning doge.. so probably don't need segwit either. just my two cents.

@shibe2
Contributor

why would dogecoin need a layer 2?

Faster transactions, incremental payments, interaction with other systems (bridges, sidechains, etc).

@GiverofMemory

My suggestion is to propose these things also on the dogecoindev reddit to get actual user feedback and not just dev feedback. Please see this issue: #1849, to double the block size at the same time as the introduction of segwit, which has received massive community support.

@GiverofMemory

I support the general idea of separating signatures from signed data. However, I oppose the SegWit soft fork as implemented in Bitcoin, with its block extension and "weight". I think it was done that way mostly because BTC developers decided to "freeze the base protocol" and implement any further changes as backwards-compatible extensions. That decision might make sense for BTC's investment and large-transaction use cases, but not for Dogecoin's "people's money" use cases. If Dogecoin's popularity grows significantly, we may have to technically diverge from BTC and address scaling issues in a different way. I want Dogecoin to be technically and organizationally prepared for positive changes in case they come.

I propose discussing SegWit and CSV separately and re-assessing whether they may be contentious.

I think if we doubled the block size at the same time as segwit, we would accomplish our goals of being people's money. Also, I think segwit addresses need to be optional. #1849

@GiverofMemory

Just because something is deemed "non-contentious" on BTC does not imply it is non-contentious with the dogecoin community. Please propose changes to them on the dogecoindev subreddit: https://www.reddit.com/r/dogecoindev/. BTC has weeded out everyone who wants it to scale on-chain. Dogecoin was created with on-chain scaling (10x as many blocks as BTC), so we have people in this community who support scaling on-chain.

@shibe2
Contributor

Yeah, we need to discuss that. But preferably it should be a discussion with solid arguments. For each proposed change we should prepare a list of practical benefits that we expect from it. Then we will be able to evaluate benefits versus drawbacks and required effort.

@Drael64

Drael64 commented Apr 22, 2021

Makes sense to me to float significant proposed moves with the reddit community, in as plain language as humanly possible, of course. They may not necessarily understand the full implications, and the level of argument or understanding should be weighted - i.e. great arguments for or against perhaps outweigh raw popularity - but equally, it doesn't seem very democratic to leave the userbase entirely out of the loop just because non-coder folk like me generally aren't going to jump on here. Twitter perhaps could be separately included via the dev channel.

@Drael64

Drael64 commented Apr 22, 2021

Equally, I'm not 100% on all the implications, but I do agree that getting scaling in for TPS as soon as is reasonable (when necessary optimizations or vital updates are complete), whether that's the obvious block size or something else, seems wise.

Particularly as, from what I can gather, segwit - other than security (which is important) - is primarily 'for the future', opening options rather than providing any immediate improvement to scaling. At the rate of adoption we are seeing in the retail space (both online and in person), it does seem appropriate to be considering things like TPS and fees right now, whether that means improving the efficiency of the current design or expanding block size.

Not personally weighing in on the specifics as they are beyond my technical understanding. But I support the general notion of treating TPS - whether through efficiency, block size, or some other solution - as more of a short/medium-term aim than a 'for tomorrow' aim.

I suppose the other reason I might put in here is that payment processors are currently rushing to keep cryptocurrency payments within their centralized (and heavily fee-laden) control, whether that be PayPal, Venmo, Cash App, Visa or Mastercard. This might be cheered within the cryptocurrency community as acceptance, but at best it's a bridge that actually diminishes the benefits of cryptocurrency: personal control, freedom, low fees, and the lack of middlemen or authorities (who can cut people off at will). The less mature the native settlement solution, the more a standard is set for relying on a third party for settlement. In time, much of the point of cryptocurrency is defeated if speed, fees and ease of use fall far behind those walled gardens.

The reason why they are doing this isn't support. It's to maintain their hegemony, keep getting their cut, and stay ahead of perceived competition. There's ultimately nothing supportive about it. Not trying to make this 'political' per se, just stating facts as I perceive them. Just another reason not to sleep too hard on efficiency in general. Dogecoin in particular could get ahead in that race with its current popularity, and in doing so stand up for what cryptocurrency is for.

@GiverofMemory

Yes, and I'm in no rush to double the block size or get segwit; I just think both should be done at the same time so that the same fate as BTC (never scaling on-chain) doesn't befall us as well. Let's keep both parts of our community, on-chain and off-chain, together always.

@GiverofMemory

"Just another reason not to sleep too hard on efficiency in general. Dogecoin in particular could get ahead in that race with its current popularity, and in doing so stand up for what cryptocurrency is for."

Yes, I see this more as setting a culture for on-chain scaling, since there may always be attacks from the PayPals and Venmos of the world trying to keep us from scaling on-chain so that their off-chain solutions become the only viable thing.

We can double the block size now and have zero repercussions, based on Moore's-law-like growth in disk space, bandwidth, etc. So we should, in order to set the tone for a future where we will need to scale both on- and off-chain to meet demand.

@GiverofMemory

GiverofMemory commented Apr 23, 2021

The devs here need to check themselves: are they here to fully serve the dogecoin community, or to build a resume for themselves to get accepted to higher-profile projects like BTC?

We need to put our foot in the door to say "yes, we will work on segwit and off-chain transactions, but at the same time we will work on reasonable on-chain scaling as well". Don't let the BTC/BCH drama impact this community, please. We need to take the best of both worlds, and if you are not willing to do that then please consider another coin to work on instead.

Let me turn this on its head: what is the problem you are trying to solve by implementing Segwit right now? Already we have our transactions and signatures off-chain in the sense that only the block hash makes it into the Litecoin blocks that are receiving the proof of work. With segwit, we will have 3 layers: Litecoin proof of work, dogecoin blocks, and dogecoin signatures. That seems a bit much for now.

@GiverofMemory

Sporklin talked about how segwit was not needed in dogecoin here: https://www.reddit.com/r/dogecoin/comments/be8pe8/fully_exited_all_positions_in_doge/el4v0bj/?utm_source=share&utm_medium=ios_app&utm_name=iossmf&context=3

Another post with input from langer_hans: https://www.reddit.com/r/dogecoin/comments/2hdjuj/dogecoin_is_an_independent_codebase/?utm_source=share&utm_medium=ios_app&utm_name=iossmf

@veryscience

veryscience commented May 2, 2021

I'm not sure how difficult it is to implement, but would a dynamic block size be better than just doubling the block size arbitrarily? Monero is a PoW coin that uses dynamic block sizes, and it seems to be going really well for them.

@shibe2
Contributor

AFAIK, Monero's block size algorithm allows creating larger blocks when transaction fees are high. But in practice it does not bring transaction fees down; everyone has to pay high fees just to prevent blocks from shrinking back to the minimum. This would not suit Dogecoin.

@GiverofMemory

GiverofMemory commented May 5, 2021

AFAIK, Monero's block size algorithm allows creating larger blocks when transaction fees are high. But in practice it does not bring transaction fees down; everyone has to pay high fees just to prevent blocks from shrinking back to the minimum. This would not suit Dogecoin.

Ya, doesn't sound very elegant. To me the most elegant solution is to remove the max block size and rein in blockchain growth using the min tx fee. Other than that, a commitment to consider block size increases/block time decreases every 7 years is prudent.

@gubatron
Contributor

gubatron commented May 5, 2021

If there are difficulty adjustments for hashrate, there can also be block size adjustments based on the median block size over some number N of blocks, so you don't get huge block size variability. You could put a temporary limit of, say, M megabytes.
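As a purely hypothetical sketch of this idea (nothing like this exists in Dogecoin Core; all names, window sizes, and limits below are illustrative stand-ins), a median-based retarget with a floor and a temporary hard cap could look like:

```python
# Hypothetical median-based block size retargeting, as suggested above.
# All parameters are illustrative assumptions, not proposed values.
import statistics

N_BLOCKS = 2016          # retarget window (illustrative)
HARD_CAP = 8_000_000     # temporary ceiling in bytes (the "M megabytes")
FLOOR = 1_000_000        # never shrink below the current 1 MB limit
GROWTH = 2               # allow up to 2x the recent median

def next_block_size_limit(recent_sizes: list) -> int:
    """Derive the next limit from the median of the last N_BLOCKS blocks."""
    median = statistics.median(recent_sizes[-N_BLOCKS:])
    return int(min(HARD_CAP, max(FLOOR, median * GROWTH)))

# Blocks averaging ~300 kB would keep the limit at the 1 MB floor:
print(next_block_size_limit([300_000] * N_BLOCKS))  # 1000000
```

The floor prevents the shrink-back problem described above for Monero, and the cap bounds worst-case bandwidth until it is deliberately raised.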

@tromp

everyone has to pay high fees just to prevent blocks from shrinking back to the minimum.

Obviously that's what you want. If you could keep growing the block size while reverting to minimum fees, then an attack that breaks the network with huge blocks beyond its capacity would be too affordable.

@GiverofMemory

GiverofMemory commented May 6, 2021

if there's difficulty adjustments for hashrate, there can also be blocksize adjustments based on median blocksize for an N number of blocks so you don't have huge blocksize variability. You could put a temporary limit of say M megabytes.

Yes, but you run the risk of "getting it a bit wrong" and just not scaling enough at some point. Still, this idea could perhaps reduce the number of hard forks required.

Can anyone provide an argument against "remove the max block size and rein in blockchain growth with adjustments to the min tx fee"? I think that would keep hard forks to an absolute minimum and allow for fine-tuning every minor release if necessary (which shouldn't be needed very often). Also, this fits with the original vision of Satoshi: he originally didn't have a max block size and only implemented it because of Hal Finney FUD.

Let's also always keep in mind that our blocks are currently NOT FULL and have never been full; the min tx fee is keeping our blocks safe from spam attacks currently (and at every time in the past) just fine.

@GiverofMemory

everyone has to pay high fees just to prevent blocks from shrinking back to the minimum.

Obviously that's what you want. If you can keep growing the blocksize while reverting to minimum fees, then an attack that breaks the network with huge blocks beyond its capacity would be too affordable.

But it's counterproductive, because a larger block size is supposed to reduce fees. I completely understand why it was done, but it creates a conflicting scenario and adverse incentives for mining pools to manipulate fees.

@Crybso

Crybso commented May 18, 2021

The thing that worries me is that those who argue for side-chains are also arguing against on-chain scaling.
I think the two options should be explored and would benefit each other.
Side-chains can offer some interesting functionality on top of the main chain, but this doesn't mean the main protocol should be frozen in time. Technology has given us cheaper hardware and faster internet, which means scaling can also be improved on-chain.
Also, the community supports a doubling of the block size, as can be seen in the links here: #1849

@Drael64

Drael64 commented May 18, 2021

this doesn't mean the main protocol should be frozen in time

I very much agree. Technology has certainly given us some technical advances in onchain scaling techniques as well.

In some ways this is a rehash of that blocksize discussion linked.

Onchain scaling benefits offchain scaling. And genuinely significant improvements there can be made with minimal compromise in terms of onchain scaling - in my opinion.

I think genuinely we could happily settle on something like:

  • Maybe decrease blocktime

  • Maybe increase block size

  • Definitely look at any and all methods to completely minimize any side effects as discussed here and there

  • AND also maybe do segwit, because it might be useful for layer 2/side chains and solves some other issues like malleability

As a median position.

Whether that all makes sense as a coding workload, or whether someone wants to do it - or if there aren't more elegant ways to reach that general ballpark of average thinking - is another matter. I.e., some of those things may be hard to fit with each other, and of course they'd be technically challenging.

And of course there's the issue of priority, if those things were undertaken, or supported by the developers here (who I offer my full respect to BTW for donating so much of their time to this passion project), over what time frame.

But having had this conversation in a few places here, that does feel like what someone would say if every brain typing here were squashed into one person. Whether there's wisdom in that median position, I'll leave to wiser minds.

@veryscience

I think we need to question his motives because they do not seem aligned with the vision that the community has for the project.

I see two possible reasons why his interests would differ: He either wants to secure a job as a lightning dev, or he's getting paid directly by corporations that would make money from second layer solutions.
Unless you see other reasons, but I cannot explain his behaviour otherwise.

I think we should refrain from personal attacks. It goes against everything Dogecoin stands for in its "Do Only Good Everyday" philosophy.

To be fair to @delbonis, I feel like he's given some very thought-out responses and wouldn't be commenting here if he didn't want what's best for Dogecoin. I'm not saying you have to agree with him, but there is a reason that Bitcoin forked in 2017, and that's because there's a schism in the community about which legs of the Blockchain Trilemma are most important. Choosing one over the other isn't necessarily wrong, and I think it's good to keep diverse opinions in threads like these so that we can avoid being an echo chamber and ignoring important long-term factors.

@veryscience

veryscience commented May 18, 2021

I don't want to speak for him, but from what he's posted it sounds like he's worried about bandwidth requirements with bigger blocks. It gets harder for regular people to run a node, and fewer nodes means less decentralization. You can agree or disagree with how big an issue that is, though.

As an additional note to that: the main 3 devs have been talking about how they're worried not enough nodes are being run and it's affecting the network. So avoiding further disincentivizing the running of nodes might be something we need to worry about in the short term.

But that said, I'm not against increasing the blocksize once we start filling up blocks (which we aren't even close to filling up today)

@Crybso

Crybso commented May 18, 2021

I think we should refrain from personal attacks. It goes against everything Dogecoin stands for in its "Do Only Good Everyday" philosophy.

I agree,
But @delbonis why do you think that we should not increase or even decrease capacity on layer 1?

@veryscience

But that said, I'm not against increasing the blocksize once we start filling up blocks (which we aren't even close to filling up today)

That is a good point; once they fix the minimum fees we will see more transactions, but the 1MB block per minute should handle that for a while. The question is more about the long-term vision, and whether Dogecoin will repeat the same mistakes as Bitcoin, when we thought that segwit would improve scaling but it did not.

@sruPL

I don't want to speak for him, but from what he's posted it sounds like he's worried about bandwidth requirements with bigger blocks. It gets harder for regular people to run a node, and fewer nodes means less decentralization. You can agree or disagree with how big an issue that is, though.

As an additional note to that: the main 3 devs have been talking about how they're worried not enough nodes are being run and it's affecting the network. So avoiding further disincentivizing the running of nodes might be something we need to worry about in the short term.

I'm also facing numerous issues with syncing my Dogecoin Core. At first, I had to download a bootstrap to even get it running, and it took me several attempts. Then it would be fine for some days or weeks, but it often runs into syncing issues where it just gets stuck. I literally have 15-20 connections and 500 GB of downloaded data, but the number of blocks and % progress is stuck. I'm in the process of syncing my wallet again after deleting some files from the Dogecoin Core folder; deleting the block and chainstate files seems to be the only way to un-stick it, but again it takes a long time to fully sync from scratch.

On the other hand, Cardano (ADA) is working to provide a full-blockchain light wallet (not Yoroi). Their main wallet is actually just 6 GB, I think, which is a big improvement given Cardano has been running for a few years now, and they are aiming to provide even better light-client wallets. It could be beneficial for us to research the technology they are trying to implement.

@veryscience

I think the syncing issue you're facing is very high on the devs' priority list. I've seen them talking about getting it fixed as soon as possible, but I don't have any threads handy for official discussion on it.

@delbonis

delbonis commented May 18, 2021

This was a big one, but here we go...


@Drael64

If those txn's are secured, confirmed and accepted, agreed upon, that seems like unnecessary data to me. Even banks don't store historical transaction data (they tape back up, then later destroy those). And they only keep them for disputes, tax etc.

This is a really great observation that we should discuss further here. But unfortunately blockchains do need history in order to validate the current state. If everyone could trust each other perfectly well, then we wouldn't need a blockchain, we'd just take on faith that people had the amount of money they claim and then update our own balances accordingly when we received a payment. But of course we can't do that, because we can't magically trust people with everything.

But cryptography is fun and lets us shift around how we place trust, both between different parties and through time. Let's set up a toy example. I'm listening to a podcast and I have to make a payment of 100 sats every minute to keep listening. But podcasts are pretty long and it's wasteful to put 60 transactions onto the ledger. The podcaster isn't going to stream me the podcast unless they know they can get the payment, and I don't want to pay them for the whole stream ahead of time because there's nothing stopping them from running off with the cash.

So what we can do is I make a transaction that pays them for the first minute, and instead of broadcasting it I give it to the podcaster. The podcaster knows that they can publish the transaction at any time and claim my payment, but they're expecting me to want to keep listening and I'm expecting them to keep streaming it to me. So every minute, I give them a new transaction spending the same coins that gives them more total funds. If at any time I leave, the podcaster can broadcast the final transaction to the ledger and throw away all of the older transactions. Each one of those transactions we can see as a payment that happened and was settled between me and the podcaster, but never published to the chain. We don't need to keep that history after all.

Now, this is a broken protocol because I could RBF the output I'm spending and replace the transaction the podcaster is trying to claim the funds with. The easiest way to fix this is to instead move a bunch of funds to pay for the whole podcast into a multisig and then cooperatively make the transactions off of the multisig so I can't RBF it. But this is getting farther into the weeds for a toy example than we really need to go. This is a really simple design of a payment channel.
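The toy channel described above can be sketched in a few lines. This is purely an illustrative model of the bookkeeping (the class and method names are made up, and no real transactions or signatures are involved), just to show why only the latest state ever needs to touch the chain:

```python
# Toy model of the one-way payment channel from the podcast example:
# each minute the listener hands the podcaster a new, unbroadcast state
# spending the same funding output, paying a higher cumulative amount.
# Only the latest state is ever settled on-chain.

class PaymentChannel:
    def __init__(self, funding_sats):
        self.funding = funding_sats   # locked up front (conceptually, in a multisig)
        self.paid = 0                 # cumulative amount owed to the payee
        self.version = 0              # monotonically increasing state number

    def update(self, extra_sats):
        """Replace the previous off-chain state with one paying the payee more."""
        if self.paid + extra_sats > self.funding:
            raise ValueError("channel exhausted; open a new one")
        self.paid += extra_sats
        self.version += 1
        return (self.version, self.paid)  # payee keeps only this latest state

    def settle(self):
        """Broadcast the final state: one on-chain transaction, not sixty."""
        return {"to_payee": self.paid, "change": self.funding - self.paid}

ch = PaymentChannel(funding_sats=10_000)
for _minute in range(60):
    ch.update(100)            # 100 sats per minute of podcast
print(ch.settle())            # {'to_payee': 6000, 'change': 4000}
```

Sixty off-chain payments collapse into a single settlement, which is the history-pruning point being made above.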

Storing all that past history seems storage wasteful and makes the chain harder to sync/propagate.

This is the important part here. When interacting with cryptocurrencies, the blockchain is the source of truth. All security in the system, including everything built on top of it, depends on the base layer. By keeping a robust and strong base layer we can build systems on top of it that pull the trust (which we're able to put into it because of Proof-of-whatever) into higher layers. We can act as though other users in the system are trustworthy because we're able to recover and punish them in the event that they misbehave.

But strong doesn't mean big. Because the blockchain would only ever be used in cases where fraud occurs, transactions on the base layer would only have to happen occasionally and to move coins in and out of cold storage.

But this is only possible when the underlying ledger supports a minimum set of features to handle these higher-layer protocols. It's difficult/impossible to play the justice-transaction game that Lightning relies on in the presence of transaction malleability, because a malicious counterparty could malleate a commitment transaction and break the revocation. Eltoo lets you go farther with Lightning, but you need to be able to rebind a transaction on top of a new utxo without re-signing it. That is something holding lots of ledgers back, but it will come around in time once the game theory is better understood, probably alongside a more robust covenants proposal.

@GiverofMemory

By valuable I mean that Lightning has helped Bitcoin scale in any way. Again there is No Evidence that Lightning has been valuable to bitcoin in helping it scale.

They're both speculative assets. Investors are speculating on the future value of the coin, and over long time scales (say, multiple years) the market weighs the true value of assets. Investors seem to have come to the conclusion that long term sustainability necessitates support for off-chain settlement mechanisms.

Which makes a lot of sense. You were referring to supply and demand earlier, so let's take a look at it. Suppose some ledger has a block size of B, there are N users making some average number of transactions per day per user, and fees are F on average. If we double the block size to 2B, then by supply and demand the fees would reduce to roughly F/2, give or take fixed costs. But usage is expected to grow over time, and in a few years maybe there are 2N users. By supply and demand, it's not unreasonable to expect fees to be around F again, right where we started. You can't escape that tradeoff. A few years go by and the block size has been raised to 32B. This is where the externalities of having to move all that bandwidth around start to come into play. Eventually you might increase the block size enough that it meets the needs of everyone who might ever want to use the blockchain. (And you can rephrase this around increasing the block rate and reach the same conclusion.)
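The supply-and-demand argument above can be made concrete with a deliberately crude model: treat the average fee as proportional to demand (users N) over capacity (block size B). The constant k and the linear form are illustrative assumptions, not an empirical fee model:

```python
# Crude sketch of the fee argument: fee ~ demand / capacity.
# k is an arbitrary illustrative constant, not a real-world figure.

def avg_fee(users, block_size, k=1.0):
    return k * users / block_size

F = avg_fee(users=1_000, block_size=1)    # baseline fee F
assert avg_fee(1_000, 2) == F / 2         # double B -> fees roughly halve...
assert avg_fee(2_000, 2) == F             # ...but 2N users -> back to F
assert avg_fee(32_000, 32) == F           # years later: 32x blocks, same fees
```

Under this model, block size increases only keep pace with growth; they never escape it, which is the tradeoff being described.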

That's okay because at that point miners are making a ton of money, sure, but they don't really have an incentive to let average everyday users sync a full node. Oh, and they don't actually need to keep historical blocks to mine on top of them. So who helps old nodes sync to tip? Who has an incentive to do that when the blockchain is many terabytes in size? It's difficult (or impossible) to run a store without a hard reliance on a third party. The main way to escape this is by changing the game. Which makes sense, because "scaling blockchains" isn't about reducing the fees it takes to make a transaction. It's about giving the whole system a higher throughput, which might mean that the payments a user sees in their wallet are not necessarily the "same thing" as they were before. Lowering the transaction fees that users actually experience from the outside looking in is a side effect of improving the overall throughput of the system.

The problematic aspect is that increasing the block size through a forced hard fork changes the social contract with users. They might no longer be able to participate in the ledger, especially if they have lower-end machines or a limited network connection. If you increase the requirements to sync the chain, you risk putting them in a situation where they are no longer able to run a full node (pruned or not is irrelevant to this discussion) and force them to put more trust into third parties. This robs them of the agency and liberty to fully validate the chain that they had before, and is morally unjustifiable in my eyes. Off-chain protocols are much more flexible. There's a variety of them, and different systems can be deployed to suit the needs of users. Nobody is forced into a specific kind of off-chain protocol, and while it may be more expensive for them at times, they are free not to opt in if they don't want to.

@Crybso

Wrong. Miners make profits by mining. 1.4 TB of yearly storage would contain over 3 Billion transactions!
This alone is a good enough incentive, that would be >30M $ to be shared between competing miners for 1 cent per transaction.

If you keep thinking about this it won't be hard to see why a company that (1) sells mining hardware and (2) owns a mining pool would be opposed to scaling overall blockchain throughput by moving activity off chain and would be in favor of increasing block sizes. See the last two paragraphs above.

@delbonis has not refuted any of my arguments proving that scaling is possible on layer 1 today.

I did, actually. But I don't think you've been reading what I've been writing so I'm not really surprised.

He insists on building "off-chains" instead, and even wants to argue for decreasing scaling on layer 1! Why?

Because they work better and are more sustainable for everyone involved.

The rest of your response is ridiculous, hostile accusations. I want to promote cool technology that can benefit everyone. Your aggressive response and refusal to challenge your preconceived notions doesn't seem to align with the idea of "Do Only Good Everyday" that Dogecoin promotes.

@Drael64 (again)

(like coinjoin or similar)

Coinjoins are very useful, but they're primarily for privacy, and you need to participate in several of them to get acceptable levels of anonymity. And due to the way they work, if every tx was a coinjoin tx, then the overall number of payments would stay the same but the perceived TPS of the ledger might decrease.

One of the other benefits that L2 protocols give us is the potential for much improved privacy. Since intermediate hops in a Lightning route only know the identity of the node before and after them, any onion-routed payment with more than ~3 intermediate hops has very good deniability, even with public channels. This is going to improve with taproot by using PTLCs instead of HTLCs, which avoids linking hops from different parts of the path based on the payment hash. This is exactly the same concept that protects Tor traffic.

@gubatron

I've reached the conclusion that all of these ideas are not mutually exclusive and they all help the cause of DOGE becoming 1000x better.

Good! Thank you for reconsidering your opinions, others here haven't been able to do the same.

LN can be seen as a bidirectional sidechain

Well no, LN is not a sidechain. Each party in a channel maintains "consensus" (if it makes sense to call it that) over the balances in the one channel they have between each other. Every channel is completely independent. That's one of its less widely discussed strengths. Deploying upgrades to it does not require the level of coordination that it would take with a consensus system like a sidechain or any other blockchain, and can be rolled out very incrementally and in a mixed-and-matched way.

@Crybso (again)

lightning still requires user to make on-chain transactions to fund a channel

True, but there are tricks you can play. See the link I posted on channel factories. This allows you to create n * (n - 1) channels between n parties in a single transaction. With 20 parties that's 380 channels, from a single transaction output. And there are other techniques that have been developed more recently to deal with the added cost of coordinating between all of these parties.

And like I said before, if these chains do one day support rollups (which come with a different set of tradeoffs), there's not much preventing opening Lightning channels inside a rollup.
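The channel-count arithmetic above is worth spelling out. The comment uses n * (n - 1), which counts each ordered pair of parties; counting each bidirectional channel once (unordered pairs) gives n * (n - 1) / 2. Either way, it all comes from one funding transaction:

```python
# Channel counts achievable from a single channel-factory funding output.

def ordered_pairs(n):
    # n * (n - 1): each direction counted separately, as in the comment above
    return n * (n - 1)

def distinct_channels(n):
    # unordered pairs: each bidirectional channel counted once
    return n * (n - 1) // 2

print(ordered_pairs(20))       # 380, the figure quoted above
print(distinct_channels(20))   # 190 distinct bidirectional channels
```

The quadratic growth is the point: on-chain footprint is one transaction regardless of n.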

And you need a node online 24/7 or risk losing funds.

No? Where did you hear this? Even assuming no watchtowers, you only need to sync to tip periodically. With small blocks this is very easy. And your counterparty has no way of knowing whether you are using a watchtower, so while it might be possible for them to post old state, it's an extreme risk for them to take to misbehave like that. Using a watchtower isn't the same as relying on a trusted third party; there are a number of ways to make watchtowers almost completely trustless and private.

[...]

The thing that worries me is that those that argue for side-chains are also arguing against on-chain scaling.

I am not arguing for sidechains. They have their place and can be useful for some situations, but I see them more as a temporary solution until more robust systems have been developed and the necessary dependencies on the base layer are made available.

I argue against raising the blocksize or decreasing the block time because both of those compromise

Also the community supports a doubling of blocksize as can be seen in the links here #1849

That is hardly a representative sample, especially when most of them are completely anonymous people on the internet such as yourself. Upvotes on a reddit thread are not necessarily an endorsement of the idea being proposed in the thread, and can be gamed very easily.

Also note that the second link in that thread ("Scaling: Bitcoin was supposed to implement Segwit AND double blocksize at the same time. Lets do that for scaling.") is based around a misrepresentation of events by a very obviously biased party. SegWit2X was an attempt to perform a hostile takeover on Bitcoin, in part by one of those certain companies that I previously mentioned.

@Drael64 (again)

I appreciate your attempt to find a middle ground, but sometimes the middle ground isn't the optimal position. Obviously reducing all the complexity of this discussion to a single axis is a massive oversimplification, but I am still strongly opposed to a block size increase, especially if it comes with a hard fork, which would leave many of the non-technical users in this community on the wrong chain spending to the wrong addresses.

Yes, SegWit slightly increases total block sizes, but it does this by changing the accounting procedure to weigh the important parts of blocks differently than the less important parts. This gets back to what you mentioned in an earlier comment:

All you should really need is in/out for transactions that are already secured and settled

SegWit doesn't literally do this, but it's a similar idea where we can treat deeply-buried witness data as less important than the flow of funds.
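The "accounting procedure" mentioned above is concrete in BIP141: a transaction's weight is base size (serialized without witness data) times 3 plus total size, so each witness byte costs 1 weight unit while each non-witness byte costs 4; virtual size is weight divided by 4, rounded up. The byte counts below are illustrative, not from a real transaction:

```python
# BIP141 weight accounting: witness bytes are discounted 4x relative to
# non-witness bytes.
import math

def weight(base_size, total_size):
    # base_size: serialized tx size without witness data
    # total_size: serialized tx size including witness data
    return base_size * 3 + total_size

def vsize(base_size, total_size):
    return math.ceil(weight(base_size, total_size) / 4)

# A hypothetical 250-byte tx whose last 100 bytes are witness data:
print(weight(150, 250))   # 150*3 + 250 = 700 weight units
print(vsize(150, 250))    # ceil(700 / 4) = 175 vbytes, vs. 250 raw bytes
```

This is how SegWit treats the flow of funds as "more important" than the signatures proving it, without a naive raw-size increase.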

@Drael64

Drael64 commented May 19, 2021

I don't want to speak for him but from what he's posted it sounds like he's worried about bandwidth requirements with bigger blocks. It gets harder for regular people to run a node and less nodes means less decentralization. You can agree or disagree with how big an issue that is though.

From what I gathered from tromp

"You could reveal the amounts, and use a plain schnorr signature to prove the commitment is to that amount, which reduces the "rangeproof" to about 100 bytes."

Thus potentially enabling you to increase blocksize by using a coinjoin like application, whilst effectively decreasing the size of the overall blockchain, with very little added overhead.

Of course, someone correct me if I've interpreted that incorrectly. It could make it easier to sync in some respects? It would certainly make the initial download substantially smaller and reduce storage requirements, and those alone may increase participation.

Purely for blockchain size, not for privacy.

@Drael64

Drael64 commented May 19, 2021

This is a really great observation that we should discuss further here. But unfortunately blockchains do need history in order to validate the current state.

This was in the context of mimblewimble's approach of txn history pruning, whatever that is? (I don't pretend to understand it).

Thanks for explaining how Lightning can help with malleable transactions. That does seem to give it a few useful applications, such as recurring service charges (should that ever be a thing). Setting up automatic payments is familiar to people.

Although couldn't a third party payment processor like paypal do the same thing? Maybe not ideal, but if it's for a narrow range of use cases, perhaps that also makes sense?

"Obviously reducing all the complexity of this discussion to a single axis is a massive oversimplifcation, but I am still strongly opposed to block size increase especially if it comes with a hard fork,"

For me, I think part of my own advocacy for on-chain scaling comes from, as I discussed in another issue, user friction. The more behaviors are familiar and intuitive, the more people are likely to do them. Every complexity or extra step, even a slight one, reduces adoption substantially. Lightning, until well adopted, adds extra steps, adding user friction (meaning lower adoption), which makes it a bit of a user-friction catch-22. Even then it adds one step, but one might be tolerable.

Additionally, the setting up of payment channels is a catch-22 there as well. People need to use it to make the fees profitable, and people are less likely to use it until many channels are set up (for multi-hop to function in all cases).

If users are non-technical, which you pointed out, this effect will be exaggerated. If Lightning could be employed in a way that was essentially invisible to the user, or it functioned seamlessly without setting up direct channels from day 1, I'd be all for it.

As it stands I believe that barely anyone will use it (unless for some reason they HAVE to use it, like a recurring payment, or malleable payment).

It doesn't help that there is far less incentive to do so on a coin that is already pretty fast and already cheap. Essentially you get minor improvements, in most use cases, for extra steps. I think perhaps LTC's lower usage of Lightning may reflect this premise, and if so we would expect the low level of adoption to be even more true for Dogecoin. BTC's case - high fees, slow transactions - should be the ideal scenario for its usage, but even there it's used minimally next to the main net.

This would effectively make it so that in practice, it wouldn't actually help scaling at all.

Basically my reasoning here is something like - what is easiest for users to adopt? rather than what is the best technical solution, and then how can we try to make users adopt something they naturally wouldn't left to their own devices.

Whereas user behavior would remain relatively the same with on-chain scaling. It would be scan/copy, send - no extra steps. The actual process would be familiar and identical, albeit with different addresses. No behavior change. Much of Dogecoin's adoption rate is due to simplicity, in my opinion. In some respects we might be the very worst coin to add complexity to, in terms of relying on adoption of that complexity for scaling.

.....

Presumably if we did do a hard fork, every third party wallet etc would update their apps, the old fork would be renamed 'dogecoin classic' or something, and the effect would be similar to ethereum? (assuming anyone even continues to support that network, which in itself is questionable perhaps). LTC miners and nodes would likely all switch to the new coin.

Should be possible to avoid or minimize this scenario I think? Apps should ideally be able to tell if it's a valid address too. Modern apps, especially on phones, auto-update.

Announced well in advance, 3rd parties should have time to prepare.

It's an interesting point you bring up though; perhaps other folk could chime in about that? Is it a mitigable issue, and what was the experience of other coins that did hard forks? Did this occur, was it common, and how did developers and communities deal with it?

.....

As a sidenote, I also agree with others that blocksize isn't pressing.

If anything, block time, getting to zero confirmations, might be the thing to focus on first (speed of transaction)? The faster transactions are on-chain, with no additional complexity or steps, the more usable they are in common use cases (such as point of sale, where some small retailers are presently taking dogecoin payments with no speed and no third party). I.e., even though the numbers are likely small, this would have a more immediate effect. It would be meaningful.

With a blockchain clock on each miner (proof of history), similar to Solana, issues with orphan-block fairness would in theory be resolved - you would know the order in which blocks occurred, and it would be provable. The only issue then would be how much lost work is tolerable given most miners' internet speed. If I understand that issue correctly.

Even without something to prove the time of events, as I understand it halving the block time would only minimally change the orphan rate? With it, you could probably reduce block time to 15 seconds, possibly with zero confirmations - which would expand its usability substantially.

@Crybso

Crybso commented May 19, 2021

@delbonis

If you keep thinking about this it won't be hard to see why a company that (1) sells mining hardware and (2) owns a mining pool would be opposed to scaling overall blockchain throughput by moving activity off chain and would be in favor of increasing block sizes. See the last two paragraphs above.

Yes! 100%. They, Bitmain for example (SHA-256 ASIC producer), have all incentives to increase activity on chain and make blocks bigger, to process more transactions. As long as their own chain activity can continue growing, they would not be against off-chains. So I agree with your point that miners have incentives to increase on-chain activity.

But my whole argument is that on-chain activity is good, important, needed, and should continue growing long term.
So that makes my vision compatible with the miner's own incentives.
And if your vision is to limit on-chain activity growth, then that vision is not compatible with the miner's incentives.

Also important to mention: when you are arguing against hard forks, you are also effectively arguing against any changes to the consensus protocol, thus preventing blocksize changes. So it's another way of arguing for the same thing; the effect is the same.

So if you share that vision of limiting on-chain activity, I understand that you could see the miners as your enemy, but I don't think they are. They would gladly accept your innovative layer 2s as long as you also keep them happy with a plan to avoid full blocks, with gradual blocksize increases for example. But if you decide to limit the capacity of their chain, then you will not get much support from the miners, as this goes against their incentives.

Then in the future, if the blocks start getting full, the community will have an incentive to release software that accepts larger blocks, and scrypt miners could decide to run it as this would increase their revenue. This would not create a fork unless some software doesn't include the larger blocks in its chain.

I am not arguing for sidechains.

Ok you're arguing for sidechains as a temporary measure while waiting for layer 2 dependencies. Is that correct?

I argue against raising the blocksize or decreasing the block time because both of those compromise

Seems like you didn't finish your sentence, what are the compromises when increasing the blocksize? To 2MB for example, today, then doubling it every 7 years.

@Crybso

Crybso commented May 19, 2021

@delbonis

I could RBF the output

This is an important issue that you are bringing up. RBF breaks 0-conf. RBF should be removed if Dogecoin wants to be usable as a currency.

@delbonis

@Drael64

For me I think part of my own advocacy for onchain scaling comes from, as a discussed in another issue, user friction.

I don't really buy this argument, because Bitcoin is already onboarding people onto Lightning without unnecessary fees, and like I discussed before there are projects in the works to reduce the effective chain fees that users pay even further. Dogecoin has 10x the L1 capacity that Bitcoin does, so it stands to reason that this would be even less of an issue.

The rest of the concerns you've listed there are valid but so far it hasn't really been an issue. Most users are running Lightning nodes because it's easy and fun, not because they're really trying to make a net profit in the short term.

I think perhaps LTC's lower usage of lightning may reflect this premise

That's possible, but it's also an effect of the lower usage of LTC in general. Which is a shame imo.

Is it an mitigatable issue, and what was the experience of other coins that did hard forks?

In ledgers that are expected to fork regularly it's not as much of an issue, but in systems like Dogecoin (and Bitcoin even more so), a hard fork hasn't happened in many, many years, so it may actually pose a serious problem and take a multi-year coordination period to ensure that users can upgrade.

The only issue then would be how much lost work is tolerable due to most miners internet speed. If I understand that issue correctly.

Even without something to proof the time of events, as I understand it halving blocktime would only minimally change the orphan rate? With it, you could probably reduce block time to 15 seconds, possibly with zero confirmations - which would expand it's usability substantially.

It's really hard to predict how the orphan rate will actually change if we decrease the block time, other than "it will probably go up". This is really risky to mess with since it's still a hard fork and it's difficult to go back afterwards. Ethereum had a high enough orphan rate to justify building a system to include uncles/ommers at 15 seconds.

@Crybso

Yes! 100%. They, Bitmain for example (SHA-256 ASIC producer), have all incentives to increase activity on chain and make blocks bigger, to process more transactions. As long as their own chain activity can continue growing, they would not be against off-chains. So I agree with your point that miners have incentives to increase on-chain activity.

Cool, so we agree with this point then.

In the future where we continually increase the block size by a factor of q to a total size qB, and fees still stabilize at F, miners are now making qF. It's not hard to see that a corporation that stands to miss out on future profits would have both the motivation and means to attempt to stop a change that would eat away at their potential profits.

But my whole argument is that on-chain activity is good, important, needed, and should continue growing long term.
So that makes my vision compatible with the miner's own incentives.
And if your vision is to limit on-chain activity growth, then that vision is not compatible with the miner's incentives.

In a primarily off-chain world there would still obviously be on-chain activity. People do need to use cold wallets from time to time after all. Miners in Dogecoin already make the vast majority of their profits from the 10k DOGE per block subsidy, with fees accounting for a tiny fraction of that. In Bitcoin the subsidy still dominates the rewards, and the game theory is set to shift very slowly over time which gives plenty of time to understand the dynamics.

I suggest you read this article which touches upon the question "maybe we do pay miners too much?", among other things. Miners are a useful service but they do not control the blockchain. We do, by running full nodes. If we can't run full nodes then we no longer control the blockchain.

Miners don't need the network to be decentralized in order to make money mining. This is similar to what happens in banana republics and other regimes in underdeveloped countries where most of the well-developed infrastructure is between the farms (miners) and the ports to export the goods (exchanges) because the people running the country don't have an incentive to invest into development in the rest of the country. Replace farms with oil wells (or actual mines) and the analogy still makes sense and has historical precedent.

They would gladly accept your innovative layers 2 as long as you also keep them happy with a plan to avoid full blocks with a gradual blocksize increases for example.

This is not how they see it at all. They can see that a future emphasizing L2 protocols will reduce the community's desire for larger blocks, and they would end up with less money in the long term. That provides a motivation for them to push a narrative in the community that off-chain protocols are bad and that the only salvation is through increasing capacity, without mentioning that, conveniently, they tend to make more money that way.

This is also likely why BitPay and some other corporations promoted that side of the schism back pre-UASF. In a far future where users could no longer run full nodes, they'd be dependent on corporations like BitPay to provide wallet services for them, where those corporations would be able to rent-seek and extract value from us. This is not the future we want.

if the blocks start getting full, the community will have an incentive to release software that accepts larger blocks,

Not necessarily. There would be some incentive, but there's an incentive to enact anything that will alleviate fees. There's also the strong disincentive against raising the block size because of both increasing the costs of running a node and hard forking being troublesome in networks that aren't accustomed to it in general.

Ok you're arguing for sidechains as a temporary measure while waiting for layer 2 dependencies. Is that correct?

No, I said I suppose they're probably acceptable as an intermediary solution, but I was never actually in favor of them because they usually rely on high amounts of trust in third parties. PoS variants are an improvement, but PoS is much weaker in non-global contexts.

Seems like you didn't finish your sentence, what are the compromises when increasing the blocksize?

Sorry yes, I did forget to finish that sentence. I tend to jump around when I'm writing in English because it's similar to how I write code.

You (reasonably) misinterpreted how I was going to use the word "compromise" there. Here's how I'd finish that sentence: "I argue against raising the blocksize or decreasing the block time because both of those compromise the ability of users to run sovereign nodes and the long-term sustainability of the network."

To 2MB for example, today, then doubling it every 7 years.

So actually, from a moral perspective, this is slightly better if it was the plan from the beginning. Not that I agree with it, which I don't, but in a vacuum it's interesting. Since the increases are predictable a user would be able to look out into the future and say "given the hardware I have today I will be able to run a full node until time t, at which point I won't be able to and I will be dependent on a third party for access to the network". If they're not happy with that they can decide to sell their funds and exit from the system by that future date, or decide against joining in the first place.

Because it wasn't the design from the beginning then this breaks the social contract regarding block size that was instituted, so it's still problematic in addition to not actually solving the throughput scaling problem in a sustainable way.

This is an important issue that you are bringing up. RBF breaks 0-conf. RBF should removed if Dogecoin wants to be usable as a currency.

This is some flawed reasoning. It's still possible to maliciously replace transactions even without RBF. A mining pool could go outside the protocol and provide a web interface where a user can submit a tx that the miner will include instead of another tx spending the same coins, in exchange for an increased fee. Or you could rent hashrate on Nicehash. It's not guaranteed, but over repeated tries you can win these games with a very predictable probability. Pre-consensus protocols are still not guaranteed; you just can't be sure of these things until the transaction is actually mined and buried. This is one advantage that PoS protocols can get you: blocks can actually have hard finality guarantees. Every Nakamoto consensus protocol only has probabilistic finality, not hard finality.
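The "predictable probability over repeated tries" claim can be illustrated with a toy model: if a colluding miner finds the next block with probability p, a single 0-conf double-spend attempt succeeds with probability roughly p, so over k independent attempts at least one succeeds with probability 1 - (1 - p)^k. This deliberately ignores propagation details and assumes one block decides each attempt:

```python
# Toy model: probability of at least one successful 0-conf double-spend
# over k independent attempts, if each attempt succeeds with probability p.

def at_least_one_success(p, k):
    return 1 - (1 - p) ** k

# A pool with 10% of hashrate attacking 0-conf acceptance 20 times:
print(round(at_least_one_success(0.10, 20), 3))   # 0.878
```

Even a minority miner wins eventually, which is why 0-conf is only ever a policy assumption rather than a consensus guarantee.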

If you want secure payments as fast as 0-conf then you can just use a payment channel system.

@Drael64

Drael64 commented May 20, 2021

I don't really buy this argument because Bitcoin is already onboarding people onto Lightning without unnecessary fees, and like I discussed before there are projects in the works to reduce the effective chain fees that users pay even further. Dogecoin has 10x the L1 capacity that Bitcoin does, so it stands to reason that this would be even less of an issue.

I think perhaps you misunderstood my whole argument there.

Having to make channels, whilst multihop networks are being built, means that it's more complicated to actually use than just using the mainnet. If it doesn't 'just work' in an intuitive manner, you reduce adoption massively, IMO. I don't think that's controversial; I hear many developers repeat this. Usually they say it like I said it: you get 10x fewer users for a small increase in complexity or adoption curve.

Here, 'user friction' specifically means the things users have to learn, the things they have to do differently, and the extra steps they have to take.

Here, not only do you have to dedicate money over to a main payment channel to make it work (which in itself will likely reduce adoption), but you also have to build out direct individual channels until the whole thing is adopted well enough that it runs more seamlessly. That's a user-friction catch-22: it doesn't run that great until many people adopt, and people don't wanna adopt unless it runs great.

But out of curiosity, what percentage of BTC network traffic is on Lightning, it having already been around for quite some time? (Keeping in mind BTC is slow and expensive to transact, so the motive to use it with doge will be considerably lower: shaving 1 minute to 1 sec, or 0.1 cents to 0.001 cents, is far less substantial than its effect on BTC.)

People won't use something if it has any kind of added difficulty, chunkiness or adoption curve resistance. The WAY that it's used, and how easy, natural, intuitive, smooth, and non-technical that is, is really more primary than most of the technical details. And basically any sidechain, layer 2, or similar usually isn't as easy/smooth/intuitive to use as the main net itself.

It might be technically brilliant, fast, low fees etc, but that doesn't matter if the majority of people aren't actually using it. This is bound to be especially so with the most non-technically inclined userbase of any coin in crypto. Compounded heavily by doge already easily being cheap and comparatively fast.

In the ideal world if every single person had somehow, already adopted dogecoin as currency, and lightning as payment it would be probably fine. The trouble is, it's just not something the masses are rushing in to use in BTC, or will rush to use in dogecoin, because of how it works.

I can see third party payment processors creating more seamless and well adopted experiences with crypto, offchain, than lightning. That might already be the case.

@delbonis

Having to make channels

Autopilots in development to automatically make channels with peers and when you withdraw from exchanges they can give you a starting amount of inbound liquidity.

whilst multihop networks are being built

What do you mean? Multihop is a core part of Lightning that makes it actually practical. It was there from day 1. Even having to talk about it this much is ridiculous because it's a core part of the function of the protocol.

means that it's more complicated to actually use

All the complexity can be (and in some cases, already is) hidden by wallets. The UX is just "scan a QR code, hit submit, 1 second later the payment shows up at the vendor". So it does just werk.

not only do you have to dedicate money over to a main payment channel to make it work

This is a misconception. It's not dedicated to a channel, it's funds now available on Lightning. You want to keep most of your funds on Lightning. And nontechnical users don't have to do any of this manually since it's completely managed behind the scenes. If you need to make a payment on L1 you can just splice out to the address and still keep the channel open with no interruption. But wallets can hide this from the user; there's no increase in the complexity shown to users.

But out of curiosity, what percentage of BTC network traffic is on lightning

There's currently 1,331 BTC in the 45k public channels, though we have no idea how much is in private channels. These numbers are growing at the impressive rate that I cited before. We also have no idea how many payments are being made through the network, since only the nodes that relay a payment know about them. Though anecdotal evidence from /r/lightningnetwork and other places shows that some individual nodes (even smaller ones) relay thousands of payments per day. Litecoin's numbers are much less impressive, unfortunately, with 164 LTC in 547 channels. There's not much incentive to use it there since it's also just a less active network.

As far as I see it, the main barrier to adoption is major exchanges like Coinbase not implementing it because they don't actually care about developing the ecosystem. Fortunately, Kraken adopting it later this year will be a big step up.

isn't as easy/smooth/intuitive to use as the main net itself.

It's only intuitive because that's what you're used to. If you sat down with a new user and got them set up with a Lightning wallet they'd have a perfectly easy time using it. And if you actually tried using it you'd be impressed with the UX that some of these wallets have.

Not if you proof time.

Huh?

@Drael64

Drael64 commented May 20, 2021

Huh?

If you proof time during the mining process, somewhat similar to how Solana uses 'proof of history', things can't be modified retrospectively, and whichever transaction or block was mined earlier is given priority. It should at least in part resolve things like double spends, in theory. Not sure if that entirely applies to what you were talking about though.
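For context, the "proof of history" idea can be sketched as a sequential hash chain with events folded in: reordering events means redoing all the sequential hashing after them. A toy illustration only (loosely inspired by Solana's design, not its actual implementation; all names are mine):

```python
import hashlib

def poh_chain(events, ticks_between=1000):
    """Toy proof-of-history: a sequential SHA-256 chain with events mixed in.
    The sequential hashing between events stands in for elapsed time."""
    state = hashlib.sha256(b"genesis").digest()
    records = []
    for event in events:
        for _ in range(ticks_between):   # unavoidable sequential work
            state = hashlib.sha256(state).digest()
        state = hashlib.sha256(state + event).digest()
        records.append((state.hex(), event))
    return records

# Swapping the order of events changes every subsequent hash,
# so "which came first" is baked into the chain itself.
a = poh_chain([b"tx1", b"tx2"])
b = poh_chain([b"tx2", b"tx1"])
print(a[-1][0] != b[-1][0])  # True
```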

Sounds like we are going in circles a bit here, TBH, with the Lightning stuff. If there's a way so that you don't have to set up channels at all (like, ever), and you can just transfer coins and pay anyone with a wallet exactly as with mainnet, that's great.

A simple - transfer into wallet, and use just like regular main net coins, I could live with (although even then, with already fast/cheap mainnet it's arguable it wouldn't see anywhere near majority adoption)

As it exists today, it seems, well, not good enough IMO.

The numbers you cite are well less than 1% of circulating coins (by a few decimal places, I think), and at least so far that doesn't sound terribly promising in terms of scaling (which by definition requires most people using it; I mean, to actually avoid scaling the mainnet entirely you'd probably need 99% of people using it).

Seems like the main BTC net is still congested.

If you scale the mainnet, by comparison, everyone is using it. Automatically. No theoretical notion of maybe, if this and that happens, a majority of people will use this.

This, to me, doesn't seem like an unimportant distinction or consideration: actual use.

If there's a layer 2 that will automatically get adopted like kittens (i.e. something that was, say, 30% of current network capacity, or purchases), because it's so functional and easy to use, I'd be all for it. Then the theory of adoption would be demonstrable, not a future that nobody can soundly predict.

Even if I'd probably guess offhand that it wouldn't remove scaling issues from the main chain anyway (and likely wouldn't see adoption over 30%, let alone half, no matter how good it is)

I also wonder, if a layer 2 scaling solution were actually somehow successful enough to take more than half of the transaction load off the mainnet, what impact that would have on the native chain - would it become insecure, for example? If it did somehow get to 99%, would miners be unhappy? And if it only ever got to 10% of usage, even though that's well over what you describe by the sounds of it, then what? (You'd have to scale the mainnet anyway.)

@veryscience

veryscience commented May 20, 2021

If you're interested in what an easy Lightning wallet looks like, try Blue Wallet on mobile. It's literally one button push to switch from Bitcoin to Lightning.

I could even send you some sats to test it out if you're interested

@Drael64

Drael64 commented May 20, 2021

If you're interested in what an easy Lightning wallet looks like, try Blue Wallet on mobile. It's literally one button push to switch from Bitcoin to Lightning.

I could even send you some sats to test it out if you're interested

Alright, let's do that. I can always send them back. It doesn't guarantee enough adoption to remove the need for onchain scaling (or even enough adoption to be very useful), but it could help inform this discussion anyway.

I've installed it but have no idea how to send you my lightning address using this app. It says something about an invoice when I go under Receive in the lightning wallet. There seems to be an address for the regular BTC wallet; do I need to use that instead to receive funds?

Edit: Okay, I seem to have created an invoice for 5 sats, and that may be the way to send to a lightning address?

lnbc50n1ps2v89ypp57wd5rcz5q3e62nznh77z0jmnh5tf0u74ghuqa6e9as64wynhq8hsdqqcqzpgxqyz5vqsp55vqxxwm8fz8e9epxzeaqzn78js9tka4kw5k6y3pnv8657mxvlezq9qyyssqe00xwna7hqv2t83zpem28vlzrj4vrdlhldmt9nlfjact6wjhcflh4cvd08q2aje4jgx2ycq9hxza44se8uxgl02wy6cpn93zsz86atspf3yw4e

This is the regular BTC address if for some reason that doesn't work (kinda weird how you can't just give someone your lightning address, unless I'm missing something?)

bc1qw8z0sxycxvlxhvxrjdz5tzykgrfjqjy4fauh6j
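(As an aside, the amount is actually readable right off the invoice's human-readable prefix: per BOLT 11, `lnbc50n` means 50 nano-BTC, i.e. 5 sats. A deliberately simplified parser of just that prefix, ignoring the bech32 data part and checksum entirely:)

```python
import re

# BOLT 11 amount multipliers, expressed in satoshis per unit
# (m = milli-BTC, u = micro-BTC, n = nano-BTC, p = pico-BTC).
SATS_PER_UNIT = {'m': 100_000, 'u': 100, 'n': 0.1, 'p': 0.0001, '': 100_000_000}

def invoice_amount_sats(invoice):
    """Read the amount from a BOLT 11 invoice's human-readable part.
    Returns None for zero-amount (payer-picks-the-amount) invoices."""
    m = re.match(r'^ln[a-z]+?(\d+)([munp]?)1', invoice.lower())
    if not m:
        return None
    return int(m.group(1)) * SATS_PER_UNIT[m.group(2)]

print(invoice_amount_sats('lnbc50n1ps2v89y'))  # 5.0 -- the 5-sat invoice above
print(invoice_amount_sats('lnbc1qqqsgq'))      # None -- no amount encoded
```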

@veryscience

veryscience commented May 20, 2021

I sent you the invoice :) Took a couple of seconds

I've installed it but have no idea how to send you my lightning address using this app. It says something about an invoice when I go under Receive in the lightning wallet. There seems to be an address for the regular BTC wallet; do I need to use that instead to receive funds?

I agree; prob one of my biggest issues with Lightning is that you need to make invoices before sending money. Good news is the devs are working on adding address payments like how Bitcoin works. But it can't come soon enough.

@Drael64

Drael64 commented May 20, 2021

I agree; prob one of my biggest issues with Lightning is that you need to make invoices before sending money. Good news is the devs are working on adding address payments like how Bitcoin works. But it can't come soon enough.

Yeah, like you can't do tips, donations or send spontaneous payments (like say, paying off a bill as you can afford to do so). Although if they add addresses to this, they'll need to make sure one address type doesn't work for the other wallet. I could see people getting confused having two different wallets for a coin. They probs should have named it something more intuitive like 'bitcoin fast payment', and create the wallet automatically.

Actually on that topic, for addresses in general (not just lightning), would be cool to have something like domain names, where people can register more straight forward aliases for their addresses. Things you might be able to quasi remember (not that bank accounts work like that). I guess the issue there is, so many darn people, so maybe not. I guess QR is easy enough.

@delbonis

@Drael64

It should at least in part resolve things like double spend, in theory.

It seems much more complex to me (at least proportional to the amount of benefit you get out of it) to introduce an entire extra proof-of-history mechanism into the consensus system to resolve the RBF transaction-replacement issue than it would be to either have people put up with waiting or use an L2 that gives users instant payments in addition to benefits at scale.

A simple - transfer into wallet, and use just like regular main net coins, I could live with (although even then, with already fast/cheap mainnet it's arguable it wouldn't see anywhere near majority adoption)

This is what all the wallets with autopilots are planning around.

Here's a blog post from a while back discussing a lot of the thought and effort going into the design of these, and that was already from a while ago.

you can't do tips, donations or send spontaneous payments

It's called keysend, which lets you request an invoice in-protocol from a node's address and pay it there; it's been around for a while now in LND and with a plugin in c-lightning. Here's an easy tool for website owners to generate an invoice on-demand in a website. The reason it's designed like this is because payments rely on the payee providing the payer with a hash whose preimage the payee then reveals to claim the payment once it's made its way through the network. Most actual payments (at web checkouts, point-of-sale systems, etc.) involve generating a user-specific address to send a payment to anyways, so Lightning's invoices fit into that workflow completely naturally. There's even a spec laid out (not finalized) for reusable invoices that provide multiple hashes for multiple payments.
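That hash mechanism, where the payee commits to a hash and later reveals its preimage to claim the payment, is at its core a SHA-256 hash lock. A bare-bones sketch (function names are mine; real HTLCs wrap this in scripts and timeouts):

```python
import hashlib, os

def new_invoice_secret():
    """Payee side: pick a random preimage, advertise its hash in the invoice."""
    preimage = os.urandom(32)
    payment_hash = hashlib.sha256(preimage).digest()
    return preimage, payment_hash

def claim_is_valid(payment_hash, claimed_preimage):
    """Each hop releases funds only to whoever presents the matching preimage."""
    return hashlib.sha256(claimed_preimage).digest() == payment_hash

preimage, payment_hash = new_invoice_secret()
print(claim_is_valid(payment_hash, preimage))        # True -- payment claimed
print(claim_is_valid(payment_hash, os.urandom(32)))  # False -- wrong secret
```

Because every hop on the route uses the same hash, revealing the preimage at the destination unlocks the whole chain of payments back to the sender.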

Someone who knows better should correct me if I'm wrong, but my intuition tells me that with PTLCs it should be possible to generate an invoice that can be reused an infinite number of times by using a derivation scheme like how HD wallets work.

they'll need to make sure one address type doesn't work for the other wallet

I'm not sure what you mean by this. The wallet can prompt the user for whatever information it needs.

where people can register more straight forward aliases for their addresses.

You're in luck, because node operators can already set aliases on their nodes. There's no enforcement that these are unique, but the alias is secondary to the public key that's used as the primary identifier. The Zap wallet already has a scheme for generating a unique avatar from your pubkey, which is a similar concept to the randomart used to make node identities visually unique.

I also wonder if a layer 2 scaling was actually somehow successful enough to take more than half of the transaction load of the mainnet

Be realistic here, that's a tremendous amount of activity to shift around. This chart shows ~3M BTC of daily volume, but that figure counts coins that are spent multiple times and counts change being sent back to the wallet, so it's tricky to say what it actually means. Someone should write an analysis tool that doesn't count coins if they're from a tx that was already counted for that 24-hr period.
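Such an analysis tool is essentially a small dedup pass over the day's transactions. A sketch of the logic (the tuple shape for transactions here is hypothetical, just enough to show the filter):

```python
def deduped_volume(txs):
    """Sum output value, skipping any tx funded by a tx already counted
    in this window, so chained re-spends aren't tallied twice.

    Each tx is (txid, [input_txids], [output_values]) -- a made-up shape
    standing in for real block data.
    """
    counted = set()
    total = 0.0
    for txid, input_txids, output_values in txs:
        counted.add(txid)
        if any(src in counted for src in input_txids):
            continue  # funded by coins already tallied this period
        total += sum(output_values)
    return total

txs = [
    ("a", ["coinbase"], [50.0]),
    ("b", ["a"], [49.0, 1.0]),  # re-spend of "a" within the window: skipped
]
print(deduped_volume(txs))  # 50.0 instead of a naive 100.0
```

It still can't distinguish change from real payments without address-clustering heuristics, which is why any on-chain volume figure stays fuzzy.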

Anything growing at 11%/mo is pretty impressive, but what's more important is Metcalfe's Law: on top of rising interest making people seek alternatives to L1 payments, that number is only going to increase. By the end of the year I wouldn't be surprised if the network reaches 30k nodes and 3k BTC in channels. I'm willing to bet on it; if I'm wrong then I'll pay you 1000 sats over Lightning.
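That end-of-year guess is just compound growth applied to the figures already cited: 1,331 BTC growing ~11%/month for the ~7 months remaining in the year. Back-of-envelope:

```python
def compound(value, monthly_rate, months):
    """Simple compound-growth projection."""
    return value * (1 + monthly_rate) ** months

# 1,331 BTC in channels (May 2021), ~11%/month, ~7 months to year-end:
print(round(compound(1331, 0.11, 7)))  # 2763 -- in the ballpark of the 3k guess
```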


Lightning devs are very self-aware and self-critical in their work, and there are a lot of people doing wildly different things with it. Not to be antagonistic, but for pretty much every issue you've brought up, someone has put effort into smoothing out the rough edges or proposing changes to eliminate it. Yeah, some of it is dependent on soft forks, but that makes the soft forks all the better.

Let me know if you want some more sats to play with and I'll send you 100 to go spend on satoshis.place or Yalls or something.

@delbonis

Also, to extend an earlier point, when you give someone a lightning node address, you can use a literal domain name instead of an IP address or onion. I'm not sure if there's a spec set up for connecting to an IP without a pubkey due to how the handshake works, but it should be possible to connect, ask for a pubkey on the fly to finish the handshake, then check it against the DHT later to verify identity properly.

@Drael64

Drael64 commented May 21, 2021

It seems much more complex (at least proportional to the amount of benefit you get out of it) to me to introduce an entire extra proof-of-history mechanism to the consensus system to resolve RBF transaction replacement issue than it would be to either have people put up with waiting or use an L2 that gives users instant payments in addition to benefits with scale.

Yeah the benefit there is you don't need adoption. People are automatically using it. There's no 'if'. Also makes mining fairer.

"This is what all the wallets with autopilots are planning around."

I think that would be ideal. If you are trying to change user behavior, it should be minimal. Preferably essentially work the same as the mainnet.

"Most actual payments (at web checkouts, point-of-sale systems, etc.) involve generating a user-specfic address to send a payment to anyways"

You sort of need all types of payments though if you're going to try to avoid all onchain scaling. People can and do make payments that aren't set up this way in fiat. It's good that someone is working on a way to leave channels open to pay any amount into, so this should be solved.

"I'm not sure what you mean by this. The wallet can prompt the user for whatever information"

I mean having two different types of wallet address is going to confuse some people. Basically, assume the users are very drunk, and imagine what might go wrong. Ideally the lightning wallets can determine if it's a normal mainnet address and vice versa, because people will try sending to the wrong types of addresses if they have two wallets. That's all I meant. If you have two differently shaped slots and two differently shaped blocks, assume both blocks will be shoved into both slots.

"Be realistic here, that's a tremendous amount of activity to shift around."

That's the idea though, right? Like, in order for offchain scaling to avoid any and all onchain scaling, indefinitely, it will need to take the vast majority of all network traffic. It can't just be some. You need to change most (and here 'most' really means not just more than half, but more like 90-99% of) users' behavior, otherwise you'll just have to scale onchain anyway.

That's the rub when it comes down to it. Nobody knows to what degree l2 will be adopted. So it contains a degree of speculation. An unknown - the actual future. If planning for all scenarios, and all probabilities, you have to scale onchain anyway.

A pragmatically realist view might be that neither can be counted on, alone, to do the job.

Even in the shorter term, adoption of an l2 would need to be fairly high. You can go from under capacity, to over capacity in a network in a small and hard to predict scale of time. It's one thing to like a solution, it's another to bet the farm on it.
