<rucknium> MRL meeting in this room in about two hours.
<rucknium> Meeting time! monero-project/meta #1307
<rucknium> 1. Greetings
<spackle> Hello
<boog900> Hi
<emsczkp:matrix.org> Hi
<vtnerd> Hi
<kayabanerve:matrix.org> 👋
<jberman> waves
<rucknium> 2. Updates. What is everyone working on?
<rucknium> me: Starting to use Markov Decision Process to analyze selfish mining countermeasures.
<vtnerd> me: completed subaddress lookahead in lwsf, just needs a few tests. otherwise been tracking down bug reports in lws, several of which have been solved
<jberman> stressnet, identified a cause of disconnected stressnet nodes when the pool exceeds max weight and submitted a PR for it (this was a bug in the fcmp++ integration code, not an existing issue), continuing to v1.5 stressnet release
<emsczkp:matrix.org> me: refining the BP* CCS proposal to introduce potential application scenarios in Monero with the help of @kayabanerve:matrix.org
<articmine> Hi
<articmine> Sorry I am late
<gingeropolous> me: making adaptive blocksize simulation websites, gearing up to get working on monerosim again
<articmine> I have updated my scaling proposal by incorporating Tevador's concept as a sanity cap
<rucknium> 3. Bulletproofs* (more efficient membership and range proofs) (repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/626).
<emsczkp:matrix.org> Hi and thank you for this time slot
<rucknium> I liked "BulletStar" more :)
<rucknium> Bulletproofs* will be harder to search for in a search engine.
<emsczkp:matrix.org> Discussing with @kayabanerve we've identified several application scenarios for folding in Monero, including (but not limited to) the following:
<emsczkp:matrix.org> “Chain proving”. Suppose Alice and Bob each include their own membership proof in a single transaction without revealing their witnesses to each other: Alice creates her proof, passes it to Bob, and Bob adds his proof on top. In this case, we aggregate single-input transactions into one many-input transaction and create a single folded proof.
<emsczkp:matrix.org> “Stream proving”. This would enable zkRollups, where many statements are collected across transactions and a folded proof is generated for a batch of transactions.
<emsczkp:matrix.org> “Transaction uniformity”. In this scenario, all transactions have a fixed number of inputs and outputs so that all transactions look the same, improving privacy. We currently lack such a feature because larger transactions are too expensive for it to be worth the benefit to privacy. Cheaper standalone multi-input transactions through folding would address this issue.
<emsczkp:matrix.org> [... more lines follow, see mrelay.p2pool.observer/e/ob7D9c4KUm5QR0Qy ]
<rucknium> Any prediction of the computational cost of these applications?
<articmine> There may be a case to defer FCMP++ until we obtain these proofs
<jberman> No there is no such case
<articmine> Especially given the concerns over the state of the code
<gingeropolous> these are optimizations right?
<kayabanerve:matrix.org> Ones requiring hard forks, such as MLSAG -> CLSAG.
<jberman> That design goal sounds like it would apply to all 3 of those applications ya?
<emsczkp:matrix.org> @rucknium: the most promising benefit in terms of computational cost is the reduction of memory consumption during membership proof verification. Theoretically and asymptotically this would reduce to logarithmic
<kayabanerve:matrix.org> The obvious use-case is within a transaction, across inputs (or even within a single input), assuming the overhead from folding is sufficiently small.
<kayabanerve:matrix.org> That inherently leads to uniformity being worth the performance hit due to the performance being so improved.
<kayabanerve:matrix.org> Larger efforts, folding across transactions, are theoretically enabled by this but would require much more work on integration.
<kayabanerve:matrix.org> (presumably, block builders would need to form a meta-proof, but I did once propose a way for a pool of proofs which anyone could fold onto)
<kayabanerve:matrix.org> That's my view of it, at least
<emsczkp:matrix.org> @jberman: Yes, and I believe independently of the top-level design.
<jberman> That design goal sounds fine to me. Imo "stream proving" is by far the most useful followed by "tx uniformity" followed by "chain proving". I hear how implementing within a single tx would be much simpler to integrate. I think we can cross that bridge when we get to it though once the foundational research is more firmly established
<rucknium> IMHO, this sounds like a worthy CCS. Good applications and a reasonable budget.
<jberman> small nit: "stream proving" sounds like it's interactive at time of tx construction, but the goal is to be able to fold already constructed proofs into 1 proof AFAIU i.e. you can separate folder from provers
<kayabanerve:matrix.org> Not quite, AFAIK, @jberman:monero.social:
<kayabanerve:matrix.org> In general, folding requires creating the proof on top of prior proofs. Folding proven proofs would require proving a meta-proof and folding that.
<kayabanerve:matrix.org> (To my understanding, obviously @emsczkp:matrix.org: should be deferred to here for their work)
<jeffro256> @jberman: If true, then this wouldn't be a nit, it would be a complete overhaul in the privacy model expected in Monero which turns the privacy battlefield to the network level. I don't think that it would ever be acceptable to require interaction from the folder IMO
<kayabanerve:matrix.org> While I'm unsure if the results will be applicable, and applicable in a timely fashion given other potential developments (quantum computers), I find the work reasonable and the concept of folding universally relevant to Monero's future. I also find the rate incredibly reasonable and see no reason we shouldn't be happy to have @emsczkp:matrix.org: working on Monero as a researcher.
<kayabanerve:matrix.org> Sorry, I responded to the wrong thing. I was thinking of "chain proving". Durr.
<emsczkp:matrix.org> I outlined "stream proving" as a different case of "chain proving". The first should be done by a separate "folder" that aggregates statements (such as a sequencer in zkRollups) and it may have knowledge of the witness. Chain proving does not
<kayabanerve:matrix.org> For "stream proving", yes, I believe the pitch would be for the block builder to perform the aggregation without a loss in privacy
<kayabanerve:matrix.org> 🤔 I'll be quiet and leave it to @emsczkp:matrix.org:
<jberman> @kayabanerve:matrix.org: this was my thought as well. My nit is just highlighting how the name "stream proving" sounds like it could require interaction from tx provers. "stream folding" may be better here
<jberman> or just "rollups"
<jberman> in any case, I'll reiterate I'm a strong +1 on the proposal too :)
<jeffro256> @emsczkp:matrix.org: "Stream proving" may have knowledge of the witness or must have knowledge of the witness to do its job? For the record, I'm supportive of the CCS proposal either way.
<kayabanerve:matrix.org> I think the next question here is "what's the witness"? Is the witness the opening of the membership proofs, or is the witness the original proofs which were folded into this succinct proof?
<kayabanerve:matrix.org> Because the folder of proofs into a theoretical meta-proof, across the entire chain, produces a proof whose witness is the original proofs. Therefore, they know the witness to their proof (the meta-proof attesting such proofs originally existed) but there's no loss of privacy.
<rucknium> More discussion on this item?
<kayabanerve:matrix.org> @rucknium: jeffro and I asked for clarifications on theoretical applicability to folding proofs across transactions, but seems the CCS itself is well-liked :)
<rucknium> 4. P2Pool consolidation fees after FCMP hard fork. Coinbase Consolidation Tx Type (monero-project/research-lab #108).
<rucknium> @datahoarder:monero.social: More on this?
<rucknium> Thank you @emsczkp:matrix.org
<datahoarder> Nothing currently.
<datahoarder> As said last week, working on it and building a schema for it. No updates until then, I'll bring it up once it's ready
<rucknium> @datahoarder:monero.social: Do you want me to take it off the agenda until you say to put it back on?
<datahoarder> Maybe a minor update: a different derivation for coinbase outputs, considered internal to P2Pool, would be ephemeral (not needing to be proved in a future turnstile) to allow efficient multisig per block ahead of time
<datahoarder> @rucknium: Let's do that
<emsczkp:matrix.org> @jeffro256: I believe it must, as the sequencer also has to be trusted, but I need further investigation of the current design of rollups
<rucknium> Regarding the next agenda item:
<rucknium> Others' views on decision making processes are welcome. IMHO, trying to get "loose consensus" in MRL is a good goal, in general for two reasons. And on this specific topic for one practical reason.
<rucknium> 1. Compared to majority vote, seeking consensus can help prevent people from being entrenched in their positions. It can encourage creative compromise.
<rucknium> 2. Defining a voting body is not easy in MRL. With majority voting, you would have to say who can and cannot vote.
<rucknium> 3. ArticMine seems to have a small minority position, but he is in the Monero Core Team. I don't know if Core would approve a Monero node release with a scaling algorithm that ArticMine strongly opposes.
<rucknium> 5. Transaction volume scaling parameters after FCMP hard fork (github.com/ArticMine/Monero-Documen…/master/MoneroScaling2025-12-01.pdf). Revisit FCMP++ transaction weight function (seraphis-migration/monero #44).
<kayabanerve:matrix.org> Does this also include my proposal or not yet?
<articmine> No
<articmine> It includes Tevador's
<rucknium> @kayabanerve:matrix.org: No. Your proposal is next on the agenda.
<kayabanerve:matrix.org> @articmine:monero.social: I was asking @rucknium:monero.social: about the agenda item, not about you about your proposal.
<boog900> I don't agree with coding in exponential growth of the sanity cap with no extra usage.
<kayabanerve:matrix.org> Thank you for clarifying @rucknium:monero.social:
<spackle> I think supporters of this proposal owe it to the community to confirm that the daemon can handle a steady stream of 10 to 16 MB blocks.
<spackle> If supporters can make their case from evidence, are willing to pin their reputations to that claim, and there are not objections from others preventing consensus on the matter, then I have nothing to add.
<kayabanerve:matrix.org> I agree with @boog900:monero.social: with disliking how the sanity cap may become progressively insane
<jberman> @emsczkp:matrix.org: this sounds like "stream proving" actually would be the more accurate term in that case, since it sounds like there would be some privacy loss if the sequencer is independent. I think the design goal of an untrusted sequencer would be ideal, but the applications described would still be useful even if not
<sgp_> fwiw, progress was made by ArticMine adding a ~40% max growth per year cap. This cap increases without any commensurate increase in block demand, but there may be more restrictive caps (e.g. the long term median) at any given moment. One dispute seems to be whether scaling should permit "catching up" to this cap with a time peri [... too long, see mrelay.p2pool.observer/e/9Z3i9s4KRzFnek1F ]
<rucknium> Polynomial growth instead of exponential?
<ofrnxmr> From discussion in mrlounge, id agree with a kayaba-style "or" condition that limits the sanity cap to the higher of exponential cap vs packet size limit (or serialization limit)
<articmine> Catching up is critical
<articmine> This is now the essence of the disagreement
<boog900> I also disagree with the rest of the changes to the scaling that have undergone very little discussion
<sgp_> as you can see ArticMine is in strong favor of it, whereas others (including me; make your voices heard everyone else) prefer not to have this catch-up
<ofrnxmr> Since we cant actually sync blocks > 100mb under any circumstances, 40% would keep us under 100mb for 6 more years. The last hf was 3yrs ago
<sgp_> @boog900: I agree; the increase of the long term median is an example of something that seems unnecessary
<articmine> @sgp_: This has been discussed at length going back to 2020
<sgp_> but after only weeks and weeks of discussion, we convinced one person that multi-gig blocks within one year is not acceptable. yay
<jeffro256> sgp_: By the "increase long term median", do you mean without prior increased chain activity, b/c the median has always had the potential to grow given that the prior median block size has increased?
<spackle> I also find the design to be disagreeable and agree with boog900's perspective. That said, I hold myself to the statements I made above (at the start of this topic).
<articmine> @sgp_: I knew code rot
<sgp_> I mean the rate, sorry for the confusion
<ofrnxmr> @spackle: from 1.7x -> 2x
<ofrnxmr> Replied to wrong msg
<articmine> While ignoring that the short term median cap is dropped from 50 to 16
<articmine> ... and the maximum block weight is dropped from 100 to 16
<kayabanerve:matrix.org> When I disagreed with the input limit, I acknowledged I was the minority and while I kept my position clear, I understood consensus was for a much higher input limit than I'd personally reasonably support. I understand ArticMine's history not just within the project yet on this specific topic, but given the work on the stressn [... too long, see mrelay.p2pool.observer/e/uL2C984KRkhGUmpp ]
<kayabanerve:matrix.org> I do understand some of this is part of the discussion process, a process ongoing for months, and I'm happy this latest proposal has improved from a wider belief of critically-flawed to solely disagreed-with (after incorporating a design from tevador as a safety mechanism, tevador themselves a voice held in high regard) though.
<kayabanerve:matrix.org> It _feels to me_ like we're talking down to an agreeable design instead of working out an agreeable design from the start.
<kayabanerve:matrix.org> (If that's even possible, I understand in a room of such different opinions, there isn't an agreeable starting point)
<articmine> Then go back to eliminating the long term median entirely
<articmine> That is in Tevador's concept
<ofrnxmr> The limit is the lower of the LTM and the Sanity Cap
<jeffro256> I'm actually in support of some long term sanity cap on the long term median that doesn't require chainstate besides block height, for quicker failures in block handling, given that it is an ADDITIONAL restriction, not a loosening of some other parameter. I think an exponential factor around 1.4 is reasonable and gives us time to cap it further.
<articmine> Correct
<ofrnxmr> in eli5, isnt this "16mb max short term median, but really capped to 10.8 at current time. Max long term median of 2x 16mb, but really still capped to 10.8mb. So the actual median cant go above 10.8mb"
<jeffro256> What is the current proposed rate of change of this long-term cap? (just to be sure i'm not mixing numbers)
<articmine> The cap starts at 10
<datahoarder> @jeffro256: an attack is to submit a block with zero txs but with extremely large miner tx. That after FCMP++ is no longer possible. Given we are checking on height, maybe it'd be reasonable to limit this input as well
<jeffro256> 10 MB?
<ofrnxmr> @jeffro256: 38.8% per yr
<articmine> @jeffro256: ~1.39 x a year
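The timeline implied by these numbers can be sketched in a few lines; a minimal check, assuming the figures quoted in the discussion (10 MB starting cap, ~1.39x yearly growth, and the 90/100 MB limits raised later in the meeting):

```python
# Rough timeline for an exponentially growing sanity cap.
# Assumed inputs from the discussion: 10 MB start, ~1.39x per year.
import math

start_mb = 10.0
growth = 1.39  # ~38.8% per year

# Years until the cap crosses a given size: start * growth^t = target
years_to_90 = math.log(90 / start_mb) / math.log(growth)
years_to_100 = math.log(100 / start_mb) / math.log(growth)

print(f"cap reaches 90 MB after ~{years_to_90:.1f} years")
print(f"cap reaches 100 MB after ~{years_to_100:.1f} years")
```

This gives roughly 6.7 years to 90 MB and 7 years to the 100 MB packet size limit, which matches the "won't be hit for ~6 years" and "break after 7 years" figures cited in the meeting.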
<jeffro256> @datahoarder: Are coinbase txs not also limited to 1MB? Lemme check
<datahoarder> they are post FCMP++ afaik
<kayabanerve:matrix.org> I don't like how the net will break after 7 years/the net's fundamental limits will cause 'valid' blocks to be rejected after 7 years.
<datahoarder> extra data can bloat it before the limit.
<kayabanerve:matrix.org> Coinbase TXs have no limit other than block size until FCMP++ which enforces extra and output limits.
<boog900> I wont support a block size bomb which will require a future fork to change scaling vs just changing the long term median growth rate to an acceptable level
<kayabanerve:matrix.org> We may upgrade the net in seven years. We presumably will as the network hasn't ossified. I don't see why we shouldn't update the sanity limit five years from now via hard fork given we know this will definitively become incongruent with reality at time of deployment.
<kayabanerve:matrix.org> But that leads into the next agenda topic, so I'll leave my criticism there for now.
<articmine> @kayabanerve:matrix.org: At least people have notice
<boog900> @articmine:monero.social: would you support exponential to a hard limit of 90 MB?
<articmine> No
<ofrnxmr> @boog900: (under the packet size limit)
<rucknium> @boog900:monero.social: Some alternatives to exponential growth are: 1) Polynomial growth, 2) Logistic growth en.wikipedia.org/wiki/Logistic_func…sociology:_diffusion_of_innovations 3) Bitcoin Cash's block size algorithm with a control function gitlab.com/0353F40E/ebaa#technical-description
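The growth shapes being compared here can be contrasted with a small sketch; all parameters below are illustrative assumptions, not values from any of the proposals:

```python
# Illustrative cap-growth curves, normalized to a 10 MB starting cap.
import math

def exponential_cap(years, start=10.0, rate=1.39):
    # Compounds without bound (~40%/year).
    return start * rate ** years

def polynomial_cap(years, start=10.0, slope=10.0):
    # Degree-1 polynomial (linear): +10 MB per year, assumed slope.
    return start + slope * years

def logistic_cap(years, start=10.0, ceiling=90.0, rate=0.5):
    # S-curve: near-exponential early, flattens toward a hard ceiling.
    a = (ceiling - start) / start
    return ceiling / (1 + a * math.exp(-rate * years))

for t in (0, 5, 10):
    print(t, round(exponential_cap(t)), round(polynomial_cap(t)), round(logistic_cap(t)))
```

The point of the comparison: only the exponential curve eventually exceeds any fixed network limit; the linear curve delays that, and the logistic curve never crosses its ceiling.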
<boog900> @ofrnxmr: have to account for overhead
<articmine> Can we not fix this in 5 years
<kayabanerve:matrix.org> @boog900:monero.social: Isn't that the next agenda topic? Have the lines disappeared? :(
<ofrnxmr> @articmine: Will need a hard fork to remove the limit afaik (?)
<kayabanerve:matrix.org> @articmine: Can we not have the net blow up in five years if we don't fix it?
<ofrnxmr> So can cross that road at the same time
<kayabanerve:matrix.org> We need a HF, either to:
<kayabanerve:matrix.org> A) Stop the net from blowing up
<kayabanerve:matrix.org> B) Not have overly limited capacity
<kayabanerve:matrix.org> I'd rather be slower than incongruent and unstable
<articmine> @kayabanerve:matrix.org: What you are telling me is that someone put in a scaling bomb in the code
<kayabanerve:matrix.org> Even if a new relay protocol isn't a HF itself, it will force old nodes to update if they can't download new blocks unless they update to it.
<articmine> No wonder everyone is up in arms
<kayabanerve:matrix.org> @articmine:monero.social: Your proposal will force a hard fork in ~7 years if this growth occurs.
<boog900> @rucknium: I would much rather it take usage to increase the limit but slower growth would be better.
<kayabanerve:matrix.org> Or we'll have a netsplit
<articmine> @kayabanerve:matrix.org: I understand. This is a scaling bomb that needs a HF to fix
<kayabanerve:matrix.org> ... so if you acknowledge you're proposing putting a clock on blowing up the network, can you agree to not do that and accept a 90 MB hard cap?
<kayabanerve:matrix.org> Or no, you're acknowledging you're proposing a bomb and you refuse to not do so?
<rucknium> @boog900:monero.social: The BCH algorithm does require usage to increase the limit FWIW.
<jeffro256> To be fair, a 100 MB serialization limit does not necessarily limit the block size to 100 MB if the block is sent over multiple packets. I will need to check again, but I think that the syncing protocol already supports syncing individual txs from other nodes at a time, and only ~60 bytes per block is needed to validate PoW
<ofrnxmr> Caveat that removing the 90mb sanity cap can/will be done at the same time as the packet size removal/fix
<articmine> No I am not proposing a bomb. I am proposing fixing this in the next HF
<ofrnxmr> @jeffro256: 100mb packet size limit
<jeffro256> Sorry yes, packet size limit
<kayabanerve:matrix.org> @articmine:monero.social: That is a bomb though. You're saying this will blow up unless there is a next HF. That's the whole reason it's being called a bomb.
<kayabanerve:matrix.org> @jeffro256:monero.social: Yeah, I did wonder if we could shim such networking protocols.
<articmine> Which means not supporting ANY HF that doesn't fix this
<kayabanerve:matrix.org> Clarifying, I'm saying your proposal, if enacted with the FCMP++ HF, will require yet another HF after in order to not cause a net split (unless we create radically new networking proposals which are also backwards compatible).
<jeffro256> ArticMine is saying that it can blow up now, the bomb is already planted. Albeit, his proposed scaling changes would make it trigger faster, but we need to fix it even if we didn't HF to FCMP++
<articmine> Then defuse the bomb
<kayabanerve:matrix.org> Your proposal doesn't @articmine:monero.social: , not without the 90 MB limit @boog900:monero.social: asked you for and you declined
<articmine> Why is it so difficult to fix this?
<jeffro256> @kayabanerve:matrix.org: It's possible to defuse w/o the 90MB limit by a smart syncing protocol
<kayabanerve:matrix.org> I'm sorry, this is going in circles so I'm withdrawing until the next agenda item. Your proposal, which assumes and mandates yet another network upgrade later (though as @jeffro256 notes, one POTENTIALLY backwards compatible) is itself a bomb.
<ofrnxmr> Various other limits added here monero-project/monero 3c7eec1
<jberman> I think the added sanity cap following tevador's proposal makes sense, with a hard cap at 90mb that can be eliminated once the daemon is rearchitected to actually be able to support it
<jbabb:cypherstack.com> Please do not delay the FCMP HF for any reason.
<boog900> @jeffro256: we are betting on that happening before the 100 MB is hit. ngl I gave 90 MB as an example, I knew artic would not agree to it. The exponential growth will get out of hand eventually we will need to HF to decrease it.
<jbabb:cypherstack.com> It is the single most important change and controversial changes should not be paired with it if at all possible.
<ofrnxmr> 90mb wont be hit for 6 years
<ofrnxmr> (sanity cap)
<articmine> We need to separate the 90 MB hard cap from my proposal
<articmine> Is it acceptable otherwise
<boog900> @boog900: This is what I originally said was a bomb, the fact that in enough years the sanity cap grows so much it is no longer in play.
<ofrnxmr> @articmine: its just an "or" on top of your proposal
<boog900> @articmine: TO YOU.
<kayabanerve:matrix.org> @jeffro256:monero.social: Wallets, RPC also breaks
<ofrnxmr> Lower of 90mb vs exponential growth vs LTM vs STM
<jeffro256> @kayabanerve:matrix.org: Sure, but also those limits can be changed very easily in comparison to p2p limits
<jbabb:cypherstack.com> as spackle said earlier: essentially, what can the daemon handle today? what can stressnet actually prove as feasible?
<kayabanerve:matrix.org> Even if the nodes syncs the blocks, it can't serve them and an upgrade is mandated
<rucknium> AFAIK, stressnet hasn't hit any hard limits yet. Just hitting annoying snags.
<jberman> @jbabb:cypherstack.com: we're still not at the stage where we can answer definitively unfortunately. pool exceeding max default size triggered other issues that took time to investigate and deal with
<ofrnxmr> txpool is a fiasco :)
<datahoarder> @rucknium: It might be easier to test/prove limits on beta stressnet, but if it can't reach "hard" limits these soft limits are effectively the scaling limits
<articmine> As far as the 100 MB bomb goes, there are only two options:
<articmine> Fix it
<articmine> Put in a hard cap
<articmine> This has nothing to do with my proposal
<boog900> temporary hard cap that will be HF'd away from just as we will HF to lower the sanity growth rate
<articmine> My proposal does have a built in temporary hard cap
<boog900> that grows exponentially ... yes we know
<articmine> That is irrelevant
<articmine> It is temporary
<kayabanerve:matrix.org> I actually do think this agenda item is best served by discussion on the rest of the proposal since the next item is on such a hard cap.
<articmine> @kayabanerve:matrix.org: I agree
<kayabanerve:matrix.org> That encouragement follows AM's question here ^ > <@articmine> Is it acceptable otherwise
<rucknium> 6. Proposal: Limit blocks to 32 MB, regardless of context (monero-project/research-lab #154).
<kayabanerve:matrix.org> Oh, there we are
<datahoarder> ^ afaik it was withdrawn > <@kayabanerve:matrix.org> I'm sorry, this is going in circles so I'm withdrawing until the next agenda item. Your proposal, which assumes and mandates yet another network upgrade later (though as @jeffro256 notes, one POTENTIALLY backwards compatible) is itself a bomb.
<kayabanerve:matrix.org> I'm fine with 16-90 MB, I just support 32 MB.
<kayabanerve:matrix.org> I'd like to discuss this (some hard cap) independently to any/all other proposals. 32 MB is due to the stability of the current stressnet. 90 MB is an actual hard requirement of the P2P and RPC layers.
<articmine> I am not. Fix the bomb
<kayabanerve:matrix.org> Shall we get consensus on a 90 MB hard cap and fine grain from there?
<kayabanerve:matrix.org> At worst, the network is artificially limited and we have to issue a new HF in some years to remove the limit, when we upgrade the P2P and RPC.
<kayabanerve:matrix.org> At best, we stop a net split which will occur unless we upgrade the P2P layer.
<articmine> Like I said there are two choices
<articmine> Fix
<articmine> 90 MB hard cap
<sgp_> I'd rather have a hard cap until we know the network won't break, personally. Removing it would only be symbolic not functional (if the network would break anyway if reached)
<jbabb:cypherstack.com> @kayabanerve:matrix.org: I'd prefer a much much lower hard cap until a "scalenet" proves technical feasibility for larger blocks on a sustained basis
<kayabanerve:matrix.org> The fact this stops an inevitable net split if such larger blocks were to naturally occur should make 90 MB without disagreement IMO, even if I'd like to discuss from there a bit more moderation (32 or 64 MB).
<kayabanerve:matrix.org> @articmine:monero.social: The intent is a N MB hard cap _until_ it's fixed.
<jberman> @articmine: "Fix" = a hard fork since older daemons won't be compatible, so it's disingenuous to characterize it as strictly a fix
<kayabanerve:matrix.org> As right now, it isn't fixed and can break.
<kayabanerve:matrix.org> Now, large blocks aren't working but also won't break the net (under this proposal).
<articmine> Let me start with No to any hard cap below 90 MB
<ofrnxmr> tevadors proposal limits growth to below 90mb for another 5.5yrs. Adding a 90mb hard cap (kayaba) means that in 6yrs it will stop growing
<ofrnxmr> We dont need to demonstrate that TODAYS nodes can handle 90mb, as that test is 5yrs away
<sgp_> And if it's fixed by a fork in the meantime, it can be removed before the limit is ever reached
<jeffro256> If we go with a semi-permanent fixed hardcap, I also support 90MB
<kayabanerve:matrix.org> @jbabb:cypherstack.com: That's the 32 MB number, but it sounds like you agree with *a* cap, as does @sgp_:monero.social: :)
<sgp_> Add it until it's fixed imo
<kayabanerve:matrix.org> @jeffro256:monero.social: Heard on if. Do you support 90 MB with FCMP++ though?
<sgp_> When fixed, I don't think anyone here will be hard-line for a permanent cap ala Bitcoin style. So it's fine
<jbabb:cypherstack.com> @ofrnxmr: we should, lest an attacker demonstrate for us that we can't
<kayabanerve:matrix.org> Also, ping @boog900:monero.social: to specifically state their opinion on this so I don't assume their stance from the prior agenda item
<articmine> I cannot support any hard cap. On 90MB I ABSTAIN
<kayabanerve:matrix.org> I'd also love ofrnxmr and @jberman:monero.social: and @rucknium:monero.social: 's opinions
<sgp_> This gets us out of a potential emergency fork in the future, which hopefully we don't need. But no need to sign us up for one now
<articmine> Anything below that, NO
<ofrnxmr> Im in favor of a 90mb cap that wont be reached for over 5yrs due to a sanity cap of 1.4x yearly max growth
<kayabanerve:matrix.org> Abstain is much better than I thought we'd receive, and I truly appreciate you willing to not block this motion even if you don't support it @articmine:monero.social:
<boog900> I would be ok with a 90 MB cap, as a separate thing from artic's scaling proposal.
<kayabanerve:matrix.org> I also will note I do want to stop this from ever being relevant. I do want to improve the node to the point this can be removed and we can defer to the standard policy
<rucknium> How difficult is it to remove the 100MB packet size limit?
<ofrnxmr> Im not in favor of ballooning to 90mb within 2026
<kayabanerve:matrix.org> But as we are still deciding a standard policy, and as we already have such a hard limit (albeit poorly defined), I support this
<jeffro256> Sure, only to prevent mentioned aforementioned net splits > <@kayabanerve:matrix.org> @jeffro256:monero.social: Heard on if. Do you support 90 MB with FCMP++ though?
<articmine> @ofrnxmr: My proposal speaks for itself
<ofrnxmr> @rucknium: Its been there since genesis, and there are a bunch of other limits that have been added on top. Likely due to hackerone stuff
<kayabanerve:matrix.org> @rucknium:monero.social: The concern is the potential DoS effects from that, not the removal itself, of course
<boog900> @rucknium: it may require a new p2p block propagation and syncing protocol
<kayabanerve:matrix.org> @ofrnxmr:monero.social: Of course, this is in conjunction with scaling policies, not as the sole scaling policy
<sgp_> Yay, we agreed to a thing within one meeting 🎉
<rucknium> That goes back to what I said many meetings ago: Lots of technical debt from "temporary" limits that do not fix the core issues.
<kayabanerve:matrix.org> And RPC updates
<datahoarder> @kayabanerve:matrix.org: so the compromise is a hard limit, like your proposal, but 90 MB. seems the last piece that would vote no is abstaining
<jeffro256> @rucknium: We shouldn't remove the 100MB packet size limit IMO, we should just download chain data correctly.
<kayabanerve:matrix.org> @rucknium:monero.social: Promoting this to a well-defined item is fixing a core issue.
<kayabanerve:matrix.org> A core issue existed. We add spot checks to avoid resolving the underlying issue. Promoting those spot checks as to not conflict with consensus smooths this out.
<vtnerd> the other issue is that a new serialization system would need to be in place. We probably hit unpack lots on blocks well before the 100 mib lomit
-
br-m
<kayabanerve:matrix.org> The limitation itself can then be discussed as a suboptimality, but this turns from a house of cards into a proper building: just one which needs more floors built on top.
-
br-m
<articmine> @sgp_: 90 MB blocks is enough to destroy BS with no additional privacy
-
br-m
<rucknium> I would support a 90MB hard cap, then linear increases every year (+10MB/year, for example). That would prevent complete ossification in the event that another hard fork were infeasible.
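A minimal sketch of the escape-hatch schedule described above, using the illustrative parameters from the message (90 MB base, +10 MB/year after a 5-year grace period; these are one participant's suggestion, not agreed values):

```python
# Hypothetical hard-cap schedule per Rucknium's suggestion: flat 90 MB,
# then +10 MB per year after a 5-year grace period. Illustrative only.
def hard_cap_mb(years_after_hf: int, base_mb: int = 90,
                grace_years: int = 5, step_mb: int = 10) -> int:
    """Return the hard cap (MB) in force `years_after_hf` years post-fork."""
    if years_after_hf <= grace_years:
        return base_mb
    return base_mb + step_mb * (years_after_hf - grace_years)

print([hard_cap_mb(y) for y in (0, 5, 6, 10)])  # [90, 90, 100, 140]
```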
-
br-m
<ofrnxmr> @jeffro256: to add to this: monero bans peers that take too long to send packets. so just removing limit would just cause widespread banning if upload speeds arent fast enough
-
br-m
<kayabanerve:matrix.org> @vtnerd:monero.social: Do you feel 90 MB, leaving 10 MB of room, is sufficient for the overhead?
-
br-m
<articmine> @rucknium: That is my proposal
-
br-m
<articmine> Similar
-
br-m
<sgp_> monero will not ossify this second because post-quantum is a must anyway
-
br-m
<kayabanerve:matrix.org> @rucknium:monero.social: While I love the linearity for sanity, that reintroduces the fundamental problem: blocks would exceed the actually supported size, defeating the goal of promoting the actual limit into a well-defined one
-
br-m
<rucknium> IMHO, non-spam demand for p2p electronic cash is niche according to data that I've seen and analyzed. The limits are useful for defense against a malicious actor, including a malicious actor with large hashpower share.
-
br-m
<sgp_> anyone 5 years from now who says Monero ossification is more important than post quantum protection will be laughed out of the room. I'll make that my mission :p
-
br-m
<kayabanerve:matrix.org> I'll note 90 MB was a number @boog900:monero.social: mentioned as leaving room for overhead, yet @vtnerd:monero.social: is noting that overhead is non-trivial. We may technically end up on a number approximate to 90 MB but not exactly/so literally, as necessary for the intended space for overhead.
-
br-m
<ofrnxmr> @vtnerd: @kayaba i think he's referring to 8867 etc, because blocks ~30mb become hard or impossible to sync due to serialization unpacking
-
br-m
<kayabanerve:matrix.org> But I'm happy we appear to have large support, and no explicit rejections, for adopting an additional sanity limit of approximately 90 MB: the packet limit with clear space for the inherent overhead.
-
br-m
<ofrnxmr> Which is separate from the packet limit
-
br-m
<jberman> I think tevador's proposal + 90mb hard limit due to packet serialization limit is reasonable, and think we re-open discussion on it once stress testing gets further along in helping answer what the daemon can actually handle
-
br-m
<ofrnxmr> you can see serialization limits in play by syncing stressnet with --batch-max-weight=50 or --block-sync-size=20 etc
-
br-m
<kayabanerve:matrix.org> Ah, thanks for clarifying it's the performance aspect of it, not the static limits.
-
br-m
<vtnerd> The issue is the hard limits on objects and strings in the current serialization system. Otherwise 90 mib is likely sufficient
-
br-m
<ofrnxmr> @kayabanerve:matrix.org: Its actually a static limit 🥲
-
br-m
<articmine> @vtnerd: It is an absolute mess
-
br-m
<ofrnxmr> @ofrnxmr:
monero-project/monero #9433 pr to increase the limits to, iirc, roughly match 100mb
-
br-m
<boog900> @ofrnxmr: for current tx sizes, and for non-pruned blocks.
-
br-m
<vtnerd> Even better @ofrnxmr:monero.social: that eased the transition to bigger blocks
-
br-m
<kayabanerve:matrix.org> Got it. So this 90 MB limit also requires a PR such as
monero-project/monero #9433 to align the literal constants at this time, and ideally vtnerd's long-standing serialization PR.
-
br-m
<kayabanerve:matrix.org> Sounds like a clear/immediate path forward then, without any objections yet.
-
br-m
<kayabanerve:matrix.org> Unfortunately, the consensus seems to be 90 MB and not 32 MB (sorry to myself and @jbabb:cypherstack.com: :( )
-
br-m
<kayabanerve:matrix.org> But I'm happy we're planning suboptimality over collapse :)
-
br-m
<ofrnxmr> Without tevador's sanity median, id say 32mb. But with it, 90mb (5yrs away) is fine to me
-
br-m
<rucknium> I saw a few people, including me, suggest that the 90MB hard cap should still raise slowly
-
br-m
<articmine> It is not a median, it is a cap
-
br-m
<ofrnxmr> @rucknium: It does raise slowly with tevadors sanity cap. 10mb * 1.4x per yr
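A sketch of the growth ofrnxmr summarizes (10 MB base compounding 1.4x per year; this follows the chat's summary, not the proposal's exact rule, and the crossover year it yields differs slightly from the "5 years away" figure quoted here, since the actual proposal's base and start date may differ):

```python
# Illustrative compounding of a sanity cap: 10 MB base growing 1.4x per
# year, per ofrnxmr's summary in the chat (not tevador's exact rule).
def sanity_cap_mb(years: int, base_mb: float = 10.0, growth: float = 1.4) -> float:
    return base_mb * growth ** years

# First year the cap crosses the ~90 MB packet-derived limit:
years = 0
while sanity_cap_mb(years) < 90:
    years += 1
print(years)  # 7 (10 * 1.4^6 ≈ 75.3 MB, 10 * 1.4^7 ≈ 105.4 MB)
```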
-
br-m
<kayabanerve:matrix.org> We can write 90 MB for now and leave 32 MB to be done with the other proposal which will inevitably occur @ofrnxmr:monero.social:
-
br-m
<ofrnxmr> @articmine: Sorry, i mistyped
-
br-m
<kayabanerve:matrix.org> @rucknium:monero.social: I saw you do so without explicitly objecting to 90 MB, and I saw AM abstain.
-
br-m
<articmine> @kayabanerve:matrix.org: Why other proposal
-
br-m
<articmine> For what reason?
-
br-m
<rucknium> I object if it's an indefinite 90MB cap.
-
br-m
<kayabanerve:matrix.org> @articmine:monero.social: You have a scaling proposal. That's still being discussed, even if 90 MB sanity is agreed to today.
-
br-m
<kayabanerve:matrix.org> I'm saying I'm fine with 32 MB, a lower sanity limit, being discussed with other scaling proposals in the context of other proposals, such as yours.
-
br-m
<kayabanerve:matrix.org> Note how ofrnxmr said they'd support a lower sanity limit if a median such as tevador's isn't accepted.
-
br-m
<articmine> I understand that
-
br-m
<kayabanerve:matrix.org> I'm saying I'd try to corral agreement on the higher limit today, and leave lower limits _and other scaling discussions_ independent and open.
-
br-m
<kayabanerve:matrix.org> That's all, I wasn't proposing dropping it or saying that itself will be a proposal. Just a discussion item eligible as the rest is.
-
br-m
<kayabanerve:matrix.org> But I hear @rucknium:monero.social: is now objecting, the first explicit objection to a 90 MB sanity cap :/
-
br-m
<kayabanerve:matrix.org> @rucknium:monero.social: Care to say more?
-
br-m
<kayabanerve:matrix.org> I know I can repeat how Monero will break without this, and will continue with this, but that's been stated already.
-
br-m
<rucknium> Raise it linearly by 10MB/year, starting 5 years from now. That's an escape hatch.
-
br-m
<kayabanerve:matrix.org> ... An escape hatch into it breaking once again.
-
br-m
<sgp_> removing it is only symbolic if it would in practice break
-
br-m
<kayabanerve:matrix.org> I really don't know how else to say that the protocol will literally break with such large blocks.
-
br-m
<articmine> @rucknium: It does not help. The only other answer is to fix the underlying issues
-
br-m
<rucknium> Can't you just cut the block into pieces? How hard is the engineering, really? (I say as a non-engineer)
-
br-m
<kayabanerve:matrix.org> The intent is to remove it as soon as the protocol is improved. Hopefully, the protocol is improved before this ever is actually applied to any blocks.
-
br-m
<kayabanerve:matrix.org> @rucknium:monero.social: The P2P and RPC layers aren't designed for this. Yes, we can improve the layers. That has to be done.
-
br-m
<articmine> @kayabanerve:matrix.org: Now I see a path to consensus
-
br-m
<kayabanerve:matrix.org> Will this be done over the next five years, making this irrelevant? Probably and hopefully.
-
br-m
<kayabanerve:matrix.org> But if those improvements don't happen, will the network break? Yes
-
br-m
<kayabanerve:matrix.org> Will this sanity limit stop it from breaking? Yes
-
br-m
<kayabanerve:matrix.org> It just also sets an outrageous limit that, assuming a current scaling proposal is adopted (AM or tevador's), will become relevant in ~6 years
-
br-m
<articmine> My proposal includes Tevador's
-
br-m
<articmine> It is the lower of both
-
br-m
<kayabanerve:matrix.org> jeffro did suggest a backwards-compatible p2p improvement could happen but that doesn't resolve the RPC layer and wallet scanning protocol, which likely would need more invasive changes (we currently scan blocks and would likely want to scan outputs?)
-
br-m
<kayabanerve:matrix.org> @articmine:monero.social: I'm aware, but your proposal incorporating tevador's does not outright invalidate tevador's, which is why I still mentioned tevador's.
-
br-m
<kayabanerve:matrix.org> I do support the existence of medians over tevador's alone, personally.
-
br-m
<articmine> Of course. Tevador's proposal can be implemented without the long term median
-
br-m
<articmine> It is right in the proposal
-
br-m
<ofrnxmr> i'd prefer to keep the long term median as a sanity check on the short term median
-
br-m
<kayabanerve:matrix.org> So, with the intent to remove this, but with the comment our current stack _will not work_ past the limit defined here, this limit existing in order to stop the stack from entering this broken state of existence, do you still object @rucknium:monero.social: ?
-
br-m
<articmine> It is more than that. It provides fee stability see issue 70
-
br-m
<kayabanerve:matrix.org> Again, I agree we should split blocks into pieces and solve this.
-
br-m
<kayabanerve:matrix.org> That just hasn't been done and I don't believe is scheduled to be done before the next HF. It'd be some months of work.
-
br-m
<rucknium> Let's hire some engineers from big tech to fix it.
-
br-m
<ofrnxmr> @articmine: Yes, yes. To be more correct, i oppose removing it
-
br-m
<rucknium> Many of them have been laid off recently.
-
br-m
<articmine> It can be a CCS
-
br-m
<kayabanerve:matrix.org> @rucknium:monero.social: docker microservices on the cloud with serverless would fix this
-
br-m
<rucknium> The indefinite 90MB cap seems too similar to BTC's "temporary" 1MB cap burned into consensus.
-
br-m
<sgp_> how about we agree not to add the cap by the next hardfork if it's fixed by then? :)
-
br-m
<articmine> @rucknium: It is
-
br-m
<kayabanerve:matrix.org> Anyways. We're all assuming Monero will be upgraded to fix these limitations. This proposal just means we won't break if we don't upgrade, for whatever reason.
-
br-m
<articmine> @sgp_: I support this
-
br-m
<datahoarder> @rucknium: suggestion: the 90 MB cap would only exist for the next fork's 2 versions. While this could be adjusted at the *next* hardfork again, it's more symbolic that it's to be kept ONLY for the specific next one
-
br-m
<sgp_> it's different because it's set at the limit that prevents breakage, not a lower value
-
br-m
<datahoarder> (2 versions, initial transition + final)
-
br-m
<kayabanerve:matrix.org> If someone redoes the P2P layer and RPC by the FCMP++ HF, with appropriate testing and review, I'm fine not adding this limit with the FCMP++ HF.
-
br-m
<kayabanerve:matrix.org> I'm not fine delaying the HF for those milestones however.
-
br-m
<sgp_> a limit to prevent breakage is different philosophically than a limit to enforce a "value" of small blocks
-
br-m
<kayabanerve:matrix.org> And not just 'is what nodes can handle'. Nodes can only handle 32 MB. That's the current limit.
-
br-m
<kayabanerve:matrix.org> 90 MB is when things _definitively_ break without reworking multiple parts of Monero.
-
br-m
<kayabanerve:matrix.org> And, under current discussions, is five years away anyways.
-
br-m
<kayabanerve:matrix.org> Suggesting we break in five years if this isn't fixed is a gun to our head. This just removes the round from the chamber.
-
br-m
<articmine> This is going to come down to material progress. If so we can keep consensus
-
br-m
<kayabanerve:matrix.org> This limit is only proposed as a sanity limit due to fundamental limitations in the current stack. If the limitations are removed, this limit should be removed.
-
br-m
<kayabanerve:matrix.org> But we should never allow blocks so big they break the network, whether regarding live operation, syncing the blockchain, or running wallets.
-
br-m
<articmine> Still I have to say I commented in the original 1MB BitcoinTalk thread
-
br-m
<articmine> In 2013
-
br-m
<articmine> Before Monero even existed
-
br-m
<rucknium> Monero can't do it because why?
-
br-m
<ofrnxmr> @kayabanerve:matrix.org: tbf, the txpool breaks wallets when the pool limit is exceeded 😆
-
br-m
<rucknium> Just on the p2p layer
-
br-m
<kayabanerve:matrix.org> Because our stack assumes a 100 MB limit in several different places.
-
br-m
<ofrnxmr> Why cant we just s/100/256? No clue
-
br-m
<articmine> BCH has one problem: Otherwise they can be a brutal competitor
-
br-m
<kayabanerve:matrix.org> We can go through and correct each one. We should and will have to.
-
br-m
<articmine> The one problem No tail emission
-
br-m
<datahoarder> @rucknium: we break currently when setting checkpoints :D, so technical debt and specially old code inherited
-
br-m
<kayabanerve:matrix.org> But we can potentially break before we do so or we can acknowledge our reality.
-
br-m
<rucknium> You're saying that BCH programmers are better than Monero programmers? (Meant to provoke)
-
br-m
<boog900> we did inherit a scam coin
-
br-m
<ofrnxmr> @boog900: a crippled* scamcoin
-
br-m
<kayabanerve:matrix.org> @rucknium:monero.social: Would you be fine with a 90 MB sanity limit with the FCMP++ HF if these improvements are not made by the FCMP++ HF?
-
br-m
<articmine> @rucknium: I am saying that a lack of a tail emission is what is keeping BCH back
-
br-m
<jberman> BitcoinSV blocks got up to 3.8gb, they have the best of the best
-
br-m
<kayabanerve:matrix.org> Is that the compromise phrasing I can ask to get you to replace your objection with an abstain?
-
br-m
<articmine> Actually 4 GB
-
br-m
<rucknium> What if cuprate can do it? FWIW, Zcash is completely transitioning to their Rust zebra node.
-
br-m
<articmine> For BSV
-
br-m
<kayabanerve:matrix.org> BSV also hard forked without admitting it, had an explorer shut down due to all the fallout, and is moving towards just ~6 nodes with a legal framework to arbitrarily move coins before we start seriously discussing them
-
br-m
<boog900> @rucknium: its the p2p protocol - we don't have the serialization limits tho
-
br-m
<rucknium> The BSV example just drives home the point further.
-
br-m
<boog900> we could make our own p2p protocol
-
br-m
<boog900> then we might fork from monerod :(
-
br-m
<articmine> @kayabanerve:matrix.org: BSV is a very useful stressnet for Monero
-
br-m
<kayabanerve:matrix.org> Rucknium: Would you be fine with a 90 MB sanity limit with the FCMP++ HF if these improvements are not made by the FCMP++ HF, or unless monerod is deprecated for Cuprate (assuming Cuprate fixes this) by the FCMP++ HF?
-
br-m
<jbabb:cypherstack.com> BCH does this, BSV does that--anecdotally, I know no real-world users of either in real life. their block size over time seems to show that despite allowing a lot, they don't actually do a lot that's not spam or token spam
-
br-m
<ofrnxmr> These transparent coins don't have the verification requirements of private ones
-
br-m
<kayabanerve:matrix.org> Suggesting a project with fewer than ten nodes (or attempting to move in that direction), which is attempting to define a legal framework for the blockchain itself, is any kind of model for Monero is deeply unserious
-
br-m
<rucknium> @kayabanerve:matrix.org: @kayabanerve:matrix.org: No, I would not be fine with it. But my objection shouldn't stop it from being implemented in the HF release software.
-
br-m
<articmine> @ofrnxmr: They can bury BS in data
-
br-m
<kayabanerve:matrix.org> Then with a sole explicit objection from someone who says their objection shouldn't be a blocker, I'd like to thank you and say we appear to have consensus in favor.
-
br-m
<articmine> Even ETH
-
br-m
<ofrnxmr> @articmine: Eth funds are frozen all the time
-
br-m
<boog900> I think a sufficiently slow scaling algo, which requires years of spam before hitting 100 MB would be fine without the hard cap.
-
br-m
<kayabanerve:matrix.org> Even if not unanimous consensus.
-
br-m
<ofrnxmr> @boog900: I think this would relegate monero to not allowing scaling when needed
-
br-m
<kayabanerve:matrix.org> ofrnxmr: Ether itself has not been frozen at any time, and the only unilateral movement I'm aware of was the DAO HF.
-
br-m
<articmine> @boog900: Let us be reasonable. Like burning XMR by locking it for longer than the age of the universe
-
br-m
<sgp_> fwiw, I think the aversion to a hard limit is healthy. It is scary. It's good to really question why they are there. This is a good Monero community value
-
br-m
<ofrnxmr> @kayabanerve:matrix.org: Im referring to BS freezes
-
br-m
<ofrnxmr> I'd also be ok with doing an s/100/256 etc and using a higher limit than 90
-
br-m
<ofrnxmr> But in any event, we need to fix the p2p protocol before we hit those numbers, and that may or may not require a hard fork
-
br-m
<kayabanerve:matrix.org> I don't think we could do that without checking every line to ensure every silent limit hodge-podged over the years was considered, but heard that'd be another _acceptable_ option.
-
br-m
<boog900> @articmine: how much growth does the current scaling allow?
-
br-m
<boog900> in 1 year of spam
-
br-m
<boog900> a very reasonable amount I assume?
-
br-m
<datahoarder> @kayabanerve:matrix.org: a network fuzzer for these things would be lovely, and afaik RPC is currently in the books. for P2Pool I included fuzzers for most of the network as well on my code
-
br-m
<ofrnxmr> 16 mb * 2 * 2 * 1.24 = 79mb? I think?
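A quick check of ofrnxmr's back-of-envelope figure; the factors are copied straight from the message above, and their individual meanings (medians, boost, etc.) are not spelled out in the chat:

```python
# Back-of-envelope check of ofrnxmr's figure: 16 MB * 2 * 2 * 1.24.
# Factors are as written in the chat; their meanings aren't specified there.
estimate_mb = 16 * 2 * 2 * 1.24
print(round(estimate_mb, 2))  # 79.36, i.e. the "79mb? I think?" figure
```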
-
br-m
<articmine> One can push the short term median to 50x-100x for the blocksize
-
br-m
<sgp_> speaking of fuzzing, the MAGIC Monero Fund got another quote for that!
-
br-m
<sgp_> so expect a fundraise for that shortly
-
br-m
<boog900> @articmine: 425 MB
-
br-m
<boog900> so reasonable
-
br-m
<rucknium> Is @preland:monero.social here?
-
br-m
<articmine> My proposal cut this down to 16x for both the short term median and the blocks Less if the sanity cap is triggered
-
br-m
<preland> @rucknium: I am now lol
-
br-m
<rucknium> Did you want to discuss Post-FCMP scaling concepts?
-
br-m
<rucknium> at this meeting
-
br-m
<articmine> I have a proposal on the table
-
br-m
<boog900> @articmine: your first proposal allowed gigabyte blocks; for this one, you know the exponential growth makes the limit meaningless after some years
-
br-m
<preland> @rucknium: Unless anyone has anything else to add, I think we can discuss it next week
-
br-m
<articmine> @boog900: Never for the short term median
-
br-m
<kayabanerve:matrix.org> My proposal is for a solution now because, while the true solution is a P2P/RPC rework, that isn't being proposed now. Instead, that can is being kicked. My proposal just makes it safe to kick.
-
br-m
<kayabanerve:matrix.org> I'm fine not discussing the can itself at this meeting, or even before the FCMP++ HF.
-
br-m
<kayabanerve:matrix.org> If someone else picks up the can and responsibly disposes of it at a recycling center before the FCMP++ HF, I'm fine withdrawing my proposal.
-
br-m
<boog900> @articmine: yes eventually we have to rely on only the long term median, which allows gigabyte blocks in a year.
-
br-m
<articmine> @boog900: No the sanity cap is in place
-
br-m
<kayabanerve:matrix.org> But the time for these reworks is reasonably believable to be a year after the FCMP++ HF. While that's plenty before the 5 years of safety bought here, without trade-off, it still justifies safety now.
-
br-m
<gingeropolous> finally caught up. silly real work meeting. The way I see it, if we put in a hard, permanent cap, it has the chance to get stuck. Same as with the bitcorn. If we do this 90MB + 10MB a year starting +5 yrs from now, it gives us time to fix whatever in the daemon is capping it, and it can be rolled out without the coordination [... too long, see
mrelay.p2pool.observer/e/sq_w-c4KX0YtZU5p ]
-
br-m
<jbabb:cypherstack.com> do we not already have a hard technical cap that would allow someone to burn fees to incapacitate the network?
-
br-m
<kayabanerve:matrix.org> @gingeropolous:monero.social: 90+10*(y-5) breaks the network in 5 years unless we HF before then.
-
br-m
<articmine> @gingeropolous: If my proposal is approved the real test is will the sanity cap exceed 90 MB. If so we have a real problem
-
br-m
<kayabanerve:matrix.org> Unless we assume miners self-limit and update to remove the self-limit in a pseudo-network-upgrade as you note as a technicality.
-
br-m
<gingeropolous> @kayabanerve:matrix.org: , if monerod isn't fixed by then, right?
-
br-m
<rucknium> End of meeting is now. Feel free to continue discussing, of course.
-
br-m
<kayabanerve:matrix.org> Right, with the immediate fix being the limit, and when those layers are fixed, part of the fix being to remove this limit
-
br-m
<kayabanerve:matrix.org> But we either have to fix those layers, or have this cap, or have a bomb.
-
br-m
<articmine> @kayabanerve:matrix.org: Yes that would remove the ossification issue
-
br-m
<kayabanerve:matrix.org> Because no one is proposing the design, effort, and manpower to fix those layers in time, and because the bomb is unaccepted, the sanity limit is the option in front of us.
-
br-m
<kayabanerve:matrix.org> Monero isn't ossified, and it'd still require a coordinated upgrade by a majority of hash power while we trust people to forfeit including transactions which would pay them fees until then.
-
br-m
<sgp_> ty rucknium
-
br-m
<articmine> For my miner idea to work the cap has to be set at 45 MB
-
br-m
<gingeropolous> i mean we're talking about 63GB blocks so im fine with 90MB
-
br-m
<kayabanerve:matrix.org> By that argument, we can solely and entirely rely on the median and trusting the miners to do the right thing.
-
br-m
<gingeropolous> in a day. sorry.
-
br-m
<kayabanerve:matrix.org> Thank you for noting you're fine with it @gingeropolous:monero.social:
-
br-m
<gingeropolous> if we're filling 63GB in a day and can't find the resources to make it bigger with the serialization and the whatsits... then thats the real problem
-
br-m
<articmine> @kayabanerve:matrix.org: ...but yes it requires a majority of the hashrate to keep the cap
-
br-m
<gingeropolous> but i still don't like hard caps
-
br-m
<gingeropolous> and i guess none of us really do
-
br-m
<kayabanerve:matrix.org> By my count, and somewhat railroading of ensuring I collected options and trying to declare consensus, we have one explicit objection from Rucknium who said it shouldn't be blocking and even the grace of abstaining by ArticMine. I am happy with that as I believe a solution is needed, and this is the only solution available to [... too long, see
mrelay.p2pool.observer/e/l7aL-s4Kd1VRNk5i ]
-
br-m
<kayabanerve:matrix.org> Agreed. Do we prefer things not working?
-
br-m
<kayabanerve:matrix.org> This just codifies existing hard caps until things are reworked.
-
br-m
<gingeropolous> i prefer there being a reason to make things resilient. I worry that strange reasons will arise for keeping 90MB in place regardless of optimizations.
-
br-m
<gingeropolous> but these are nebulous fears that probably shouldn't be used to justify time bombs
-
br-m
<kayabanerve:matrix.org> I actually believe we should target net-0 blockchain growth and should reject any new transactions after the local database exceeds 300 GB /s
-
br-m
<datahoarder> @kayabanerve:matrix.org: ring blockchain, starts writing over old ones ;)
-
br-m
<kayabanerve:matrix.org> I think the practical argument for a static limit, if any, is if the extra space can only be considered beneficial for the purposes of spam. I don't believe we should allow unnecessary space for the hell of it, yet the medians and relation to the fee policy aim to resolve that without requiring a static limit.
-
br-m
<gingeropolous> right, because if we put in the +10 per year thing at year 5, then it has to be fixed. If we slap 90 on it, "it's fine". "its a feature, not a bug!"
-
br-m
<kayabanerve:matrix.org> @datahoarder:monero.social: @boog900:monero.social: Saved 70 GB in Cuprate simply by improving the schema :)
-
br-m
<datahoarder> don't doubt it :)
-
br-m
<kayabanerve:matrix.org> There's also a proposal to fold ZK proofs 😱 Imagine the space savings there 😱😱😱
-
br-m
<kayabanerve:matrix.org> And what's this? MRL issue for payment channels????
-
br-m
<gingeropolous> who knows in 5 years we may have star trek universe and not need money. Boom, problem solved.
-
br-m
<kayabanerve:matrix.org> @gingeropolous:monero.social: Then it's just the current behavior. This is already scheduled to break in six years with the proposals currently under discussion.
-
br-m
<datahoarder> @kayabanerve:matrix.org: yep! brought that up on lounge around future scaling hardforks. that'd allow a middle between pruned and full archival node, one that saves pruned txs but proofs per block (and still fully verified)
-
br-m
<kayabanerve:matrix.org> Having a limit for five years that breaks after six doesn't solve how the existing proposals break after six years. That's the sole and entire purpose to this proposal.
-
br-m
<ofrnxmr> @kayabanerve:matrix.org: or break in like a month under current conditions
-
br-m
<gingeropolous> indeed
-
br-m
<articmine> Must say I am sad.
-
br-m
<articmine> 😂
-
br-m
<ofrnxmr> i think +10mb / yr is underestimating and doesnt take into account that resources grow exponentially
-
br-m
<ofrnxmr> If we did this in 2014, we'd have added like 10kb per yr
-
br-m
<articmine> I will of course add the 90 MB cap to my proposal given that it did get at least loose consensus
-
br-m
<kayabanerve:matrix.org> To stop things from breaking in six years, unless we do this work, which we should do in the next couple of years.
-
br-m
<gingeropolous> me too. but i'd rather a functioning network with 63GB a day than a non-functioning network
-
br-m
<ofrnxmr> numbers should reflect reality, and im in favor of 1.4x yearly sanity cap growth
-
br-m
<ofrnxmr> Which has been grounded in fact based numbers thusfar (1.4x)
-
br-m
<ofrnxmr> If the world changes in 6yrs and growth slows to 1.1x, then we change the sanity cap to reflect that.. but to date, that has not been the case
-
br-m
<boog900> I don't think our capacity will grow at 1.4x forever, I don't think we will hit gigabyte block capacity in 15 years.
-
br-m
<datahoarder> Remember
monero-project/monero #8827 considered the 8MB/year overhead :)
-
br-m
<boog900> and I would much rather this growth only happen if our usage actually increases
-
br-m
<gingeropolous> well i think thats a given
-
br-m
<kayabanerve:matrix.org> Thank you ArticMine @articmine:monero.social: ♥
-
br-m
<ofrnxmr> @boog900: The stm and ltm should deal with this, not the sanity cap
-
br-m
<boog900> the limit increases each year no matter if there is any extra activity or not
-
br-m
<ofrnxmr> the stm and ltm dont increase w/o activity
-
br-m
<boog900> the stm and ltm allow gigabyte blocks in a year from 0 so are not exactly safe
-
br-m
<sgp_> yeah but not for several years is the point, which does make it less bad
-
br-m
<ofrnxmr> I dont understand how those work, but imo shouldnt allow greater than min(sanity, stm) * 2 (or 1.7 etc) per 100k blocks
-
br-m
<kayabanerve:matrix.org> I'd prefer if the 38% YoY sanity cap from AM's proposal was instead 38% of the last year or so
-
br-m
<sgp_> @ofrnxmr: there is also the short-term "boost" allowance
-
br-m
<kayabanerve:matrix.org> or even if it was 60% but still restricted by use, not accumulation regardless of use
-
br-m
<sgp_> @kayabanerve:matrix.org: I agree
-
br-m
<ofrnxmr> @sgp_: isnt that supposed to be limited as well?
-
br-m
<kayabanerve:matrix.org> (Not to commit to a larger number, to note I'd prefer a larger number in exchange for limited by actual use)
-
br-m
<ofrnxmr> @kayabanerve:matrix.org: But then why start at 10mb? Why not start at 110kb 😭
-
br-m
<sgp_> it was picked per tevador's suggestion
-
br-m
<ofrnxmr> The avg of the last yr is like 110kb
-
br-m
<ofrnxmr> @sgp_: Due to 1.4x YoY from genesis
-
br-m
<ofrnxmr> Tevadors #s are based on external factors, not volume
-
br-m
<ofrnxmr> So if were using volume moving fwd, we should be using volume backward too. Which is like 110kb
-
br-m
<gingeropolous> so whats the consensus? 90MB until its proven that the reference client can process x MB blocks something something ... ?
-
br-m
<sgp_> it's due to the estimated sync time at the median speed
-
br-m
<ofrnxmr> 90mb unless someone fixes p2p before hf
-
br-m
<gingeropolous> sure, but after that.
-
br-m
<ofrnxmr> @sgp_: Its based on download speeds of the blockchain
-
br-m
<gingeropolous> testing these large blocks is gonna be a pita
-
br-m
<ofrnxmr> actual sync speeds arent in the proposal, such as w/ or w/o checkpoints, or verification speeds
-
br-m
<sgp_> I'm just telling you where it comes from :)
-
br-m
<rucknium> @articmine:monero.social: fluffypony used to recruit developers AFAIK. To fix the 100MB packet limit you could do the same. Core has a general fund, too. Code changes would have to pass review, of course.
-
br-m
<ofrnxmr> i'd prefer that p2p, serialization, and packet limits be fixed before adding a limit, but cest la vie
-
br-m
<ofrnxmr> add torrenting to the node to split blocks into pieces /s not/s
-
br-m
<gingeropolous> so at 1.5kb / tx, 90MB puts us at 44 million txs/day.
-
br-m
<gingeropolous> how big are FCMP txs? wait there's an explorer somewhere
-
br-m
<ofrnxmr> 6-7kb for a 1-2in
-
br-m
<sgp_> I've been using 10 kB as an approximation
-
br-m
<ofrnxmr> the block sizes are wrong, as they use an old weighting calc
-
br-m
<gingeropolous> ok so only 6 million tx/day
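The throughput arithmetic behind gingeropolous's figures, assuming ~720 blocks per day (Monero's 2-minute block target) and the transaction sizes floated in the chat:

```python
# Daily throughput implied by a 90 MB block limit, assuming ~720
# blocks/day (2-minute target). Tx sizes are the ones floated above.
BLOCKS_PER_DAY = 24 * 60 // 2  # 720

def tx_per_day(block_mb: float, tx_kb: float) -> int:
    return int(block_mb * 1000 / tx_kb * BLOCKS_PER_DAY)

print(tx_per_day(90, 1.5))   # 43200000: the "44 million txs/day" figure
print(tx_per_day(90, 10.0))  # 6480000: the "only 6 million tx/day" figure
```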
-
br-m
<sgp_> the horror
-
br-m
<gingeropolous> yeah if we're at 6 million tx/day in 5 years i'll eat my hat
-
br-m
<gingeropolous> plus we getting that folding math up in here
-
br-m
<hinto> "simply improving the schema" by implementing a DB from scratch 😭 > <@kayabanerve:matrix.org> @datahoarder:monero.social: @boog900:monero.social: Saved 70 GB in Cuprate simply by improving the schema :)
-
br-m
<sgp_> that only gets us up to 8.4 billion transactions a year 10 years from now so it would mean monero is a failure per previous expectations
-
br-m
<datahoarder> @hinto: the best improvement is to throw the code away
-
br-m
<datahoarder> "oops I needed it" then make it anew with learned knowledge
-
br-m
<kayabanerve:matrix.org> @hinto:monero.social: Improving LMDB's schema by complete replacement, yep, mhm
-
br-m
<kayabanerve:matrix.org> I for one love our new rust db overlord
-
br-m
<hinto> We'll still have a data.mdb as well
-
br-m
<kayabanerve:matrix.org> (I know, I know)
-
br-m
<hinto> @boog900:monero.social: please make a rust LMDB while you're at it
-
br-m
<boog900> Don't tempt me, we could have fully atomic updates when adding a block then.
-
br-m
<boog900> I have thought about it lol
-
br-m
<ack-j:matrix.org> Wait until you see the size of a post quantum fcmp++++ transaction > <@boog900> I don't think our capacity will grow at 1.4x forever, I don't think we will hit gigabyte block capacity in 15 years.
-
br-m
<articmine> 1 GB blocks are like 100 Mbps bandwidth. Multi Gig Internet is available. The only reason why we can't support this is: > <@boog900> I don't think our capacity will grow at 1.4x forever, I don't think we will hit gigabyte block capacity in 15 years.
-
br-m
<articmine> Very serious code issues
-
br-m
<articmine> The willingness to pay for hardware. The latter is closely related to price of Monero
-
br-m
<articmine> They would very likely break BS even with no privacy at all
-
sech1
Even my 5G mobile internet is 930 Mbit down/91 Mbit up. It's not 1995 anymore.
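A sanity check of the bandwidth claim above: a 1 GB block every 2 minutes is roughly 67 Mbps sustained before relay and protocol overhead, which multiplies the real requirement (the 120-second block time and overhead-free assumption are mine):

```python
# Sustained bandwidth for 1 GB blocks at a 2-minute block time,
# ignoring relay/protocol overhead (which raises this in practice).
block_bytes = 1_000_000_000
block_time_s = 120
mbps = block_bytes * 8 / block_time_s / 1e6
print(round(mbps, 1))  # 66.7 Mbps sustained, consistent with "like 100 Mbps"
```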
-
br-m
<boog900> Sech1 you think the network would be fine with gigabyte blocks assuming no stupid code issues?
-
br-m
<articmine> I have 5Gbps symmetrical over fibre
-
sech1
If there is enough bandwidth, yes. The problem can be software that is not optimized enough, but that is fixable.
-
sech1
Cuprate is looking good already, stressnet also helps to fix monerod
-
br-m
<boog900> It will be interesting to see how high we can get cuprate, I doubt even we can process that many txs though
-
br-m
<sgp_> to be fair we are nerds who seek out good internet (selection bias). my dumb apartment provides only 40up/40down by default
-
br-m
<boog900> @boog900: Assuming no p2p packet limits
-
br-m
<articmine> @sgp_: Nerds that believe in privacy are a major demographic for running Monero nodes
-
br-m
<sgp_> Optimizing for the median is good for decentralization if possible
-
br-m
<articmine> This is the reason that I consider mid to high end Internet speeds appropriate for determining what a node can support
-
br-m
<articmine> @sgp_: I have turned down houses over this kind of thing
-
br-m
<sgp_> Yeah I think many of us here have, ha
-
sech1
sgp_ that's exactly why I mentioned mobile connection speed. Everyone has it in major cities now.
-
br-m
<ravfx:xmr.mx> Not long ago I could only access to like 35/2
-
br-m
<ravfx:xmr.mx> Now I have like 200/25
-
br-m
<sgp_> fwiw in the US at least, cell home internet providers deprioritize your service level after 1 TB or so of use a month
-
br-m
<sgp_> in any case I agree it's not 1995 anymore lol