15:14:19 MRL meeting in this room in about two hours.
17:00:17 Meeting time! https://github.com/monero-project/meta/issues/1307
17:00:36 1. Greetings
17:00:49 Hello
17:01:00 Hi
17:01:00 Hi
17:01:09 Hi
17:01:15 👋
17:02:03 waves
17:02:42 2. Updates. What is everyone working on?
17:04:17 me: Starting to use Markov Decision Process to analyze selfish mining countermeasures.
17:04:19 me: completed subaddress lookahead in lwsf, just needs a few tests. otherwise been tracking down bug reports in lws, several of which have been solved
17:05:09 stressnet, identified a cause of disconnected stressnet nodes when the pool exceeds max weight and submitted a PR for it (this was a bug in the fcmp++ integration code, not an existing issue), continuing to v1.5 stressnet release
17:05:30 me: refining the BP* CCS proposal to introduce potential application scenarios in Monero with the help of @kayabanerve:matrix.org
17:06:31 Hi
17:06:43 Sorry I am late
17:07:51 me: making adaptive blocksize simulation websites, gearing up to get working on monerosim again
17:08:23 I have updated my scaling proposal by incorporating Tevador's concept as a sanity cap
17:08:49 3. Bulletproofs* (more efficient membership and range proofs) (https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/626).
17:09:31 Hi and thank you for this time slot
17:09:41 I liked "BulletStar" more :)
17:09:41 Bulletproofs* will be harder to search for in a search engine.
17:10:23 Discussing with @kayabanerve, we've identified several application scenarios for folding in Monero, including (but not limited to) the following:
17:10:23 “Chain proving”. Suppose Alice and Bob each include their own membership proof in a single transaction without revealing their witnesses to each other: Alice creates her proof, passes it to Bob, and Bob adds his proof on top. In this case, we aggregate single-input transactions into one many-input transaction and create a single folded proof.
17:10:23 “Stream proving”.
This would enable zkRollups, where many statements are collected across transactions and a folded proof is generated for a batch of transactions.
17:10:23 “Transaction uniformity”. In this scenario, all transactions have a fixed number of inputs and outputs so that all transactions look the same, improving privacy. We currently lack such a feature because larger transactions are too expensive for it to be worth the benefit to privacy. Cheaper standalone multi-input transactions through folding would address this issue.
17:10:23 [... more lines follow, see https://mrelay.p2pool.observer/e/ob7D9c4KUm5QR0Qy ]
17:12:23 Any prediction of the computational cost of these applications?
17:12:35 There may be a case to defer FCMP++ until we obtain these proofs
17:12:52 No there is no such case
17:13:11 Especially given the concerns over the state of the code
17:13:12 these are optimizations right?
17:14:08 Ones requiring hard forks, such as MLSAG -> CLSAG.
17:16:38 That design goal sounds like it would apply to all 3 of those applications ya?
17:17:19 @rucknium: the most promising benefit in terms of computational cost is the reduction of memory consumption in membership proof verification. Theoretically and asymptotically this would reduce to log time
17:17:52 The obvious use-case is within a transaction, across inputs (or even within a single input), assuming the overhead from folding is sufficiently small.
17:17:52 That inherently leads to uniformity being worth the performance hit due to the performance being so improved.
17:18:23 Larger efforts, folding across transactions, are theoretically enabled by this but would require much more work on integration.
17:19:09 (presumably, block builders would need to form a meta-proof, but I did once propose a way for a pool of proofs which anyone could fold onto)
17:19:30 That's my view of it, at least
17:19:46 @jberman: Yes, and I believe independently of the top-level design.
17:22:32 That design goal sounds fine to me.
Imo "stream proving" is by far the most useful, followed by "tx uniformity", followed by "chain proving". I hear how implementing within a single tx would be much simpler to integrate. I think we can cross that bridge when we get to it though, once the foundational research is more firmly established
17:24:20 IMHO, this sounds like a worthy CCS. Good applications and a reasonable budget.
17:26:40 small nit: "stream proving" sounds like it's interactive at time of tx construction, but the goal is to be able to fold already constructed proofs into 1 proof AFAIU i.e. you can separate folder from provers
17:27:28 Not quite, AFAIK, @jberman:monero.social:
17:28:01 In general, folding requires creating the proof on top of prior proofs. Folding proven proofs would require proving a meta-proof and folding that.
17:28:26 (To my understanding, obviously @emsczkp:matrix.org: should be deferred to here for their work)
17:29:16 @jberman: If true, then this wouldn't be a nit, it would be a complete overhaul in the privacy model expected in Monero which turns the privacy battlefield to the network level. I don't think that it would ever be acceptable to require interaction from the folder IMO
17:29:21 While I'm unsure if the results will be applicable, and applicable in a timely fashion given other potential developments (quantum computers), I find the work reasonable and the concept of folding universally relevant to Monero's future. I also find the rate incredibly reasonable and see no reason we shouldn't be happy to have @emsczkp:matrix.org: working on Monero as a researcher.
17:30:37 Sorry, I responded to the wrong thing. I was thinking of "chain proving". Durr.
17:31:15 i outlined "stream proving" as a different case of "chain proving". The first should be done by a separate "folder" that aggregates statements (such as a sequencer in zkRollups) and it may have knowledge of the witness.
The chain proving does not
17:31:30 For "stream proving", yes, I believe the pitch would be for the block builder to perform the aggregation without a loss in privacy
17:32:13 🤔 I'll be quiet and leave it to @emsczkp:matrix.org:
17:33:31 @kayabanerve:matrix.org: this was my thought as well. My nit is just highlighting how the name "stream proving" sounds like it could require interaction from tx provers. "stream folding" may be better here
17:34:12 or just "rollups"
17:34:51 in any case, I'll reiterate I'm a strong +1 on the proposal too :)
17:36:07 @emsczkp:matrix.org: "Stream proving" may have knowledge of the witness or must have knowledge of the witness to do its job? For the record, I'm supportive of the CCS proposal either way.
17:37:27 I think the next question here is "what's the witness"? Is the witness the opening of the membership proofs, or is the witness the original proofs which were folded into this succinct proof?
17:38:40 Because the folder of proofs into a theoretical meta-proof, across the entire chain, produces a proof whose witness is the original proofs. Therefore, they know the witness to their proof (the meta-proof attesting such proofs originally existed) but there's no loss of privacy.
17:40:17 More discussion on this item?
17:42:07 @rucknium: jeffro and I asked for clarifications on theoretical applicability to folding proofs across transactions, but seems the CCS itself is well-liked :)
17:43:33 4. P2Pool consolidation fees after FCMP hard fork. Coinbase Consolidation Tx Type (https://github.com/monero-project/research-lab/issues/108).
17:43:49 @datahoarder:monero.social: More on this?
17:43:59 Thank you @emsczkp:matrix.org
17:44:12 Nothing currently.
17:44:30 As said last week, working on it and building a schema for it. No updates until then, I'll bring it up once it's ready
17:44:58 @datahoarder:monero.social: Do you want me to take it off the agenda until you say to put it back on?
17:45:26 Maybe a minor update: a different derivation for coinbase outputs is being considered, internal to P2Pool, that would be ephemeral (not need to be proved in a future turnstile) to allow efficient multisig per block ahead of time
17:45:29 @rucknium: Let's do that
17:45:34 @jeffro256: I believe it must, as the sequencer also has to be trusted, but I need further investigation on the current design of rollups
17:47:31 Regarding the next agenda item:
17:48:07 Others' views on decision making processes are welcome. IMHO, trying to get "loose consensus" in MRL is a good goal, in general for two reasons. And on this specific topic for one practical reason.
17:48:07 1. Compared to majority vote, seeking consensus can help prevent people from being entrenched in their positions. It can encourage creative compromise.
17:48:07 2. Defining a voting body is not easy in MRL. With majority voting, you would have to say who can and cannot vote.
17:48:07 3. ArticMine seems to have a small minority position, but he is in the Monero Core Team. I don't know if Core would approve a Monero node release with a scaling algorithm that ArticMine strongly opposes.
17:49:05 5. Transaction volume scaling parameters after FCMP hard fork (https://github.com/ArticMine/Monero-Documents/blob/master/MoneroScaling2025-12-01.pdf). Revisit FCMP++ transaction weight function (https://github.com/seraphis-migration/monero/issues/44).
17:49:40 Does this also include my proposal or not yet?
17:49:56 No
17:50:22 It includes Tevador's
17:50:47 @kayabanerve:matrix.org: No. Your proposal is next on the agenda.
17:50:56 @articmine:monero.social: I was asking @rucknium:monero.social: about the agenda item, not you about your proposal.
17:51:03 I don't agree with coding in exponential growth of the sanity cap with no extra usage.
17:51:07 Thank you for clarifying @rucknium:monero.social:
17:52:09 I think supporters of this proposal owe it to the community to confirm that the daemon can handle a steady stream of 10 to 16 MB blocks.
17:52:10 If supporters can make their case from evidence, are willing to pin their reputations to that claim, and there are no objections from others preventing consensus on the matter, then I have nothing to add.
17:52:53 I agree with @boog900:monero.social: in disliking how the sanity cap may become progressively insane
17:53:39 @emsczkp:matrix.org: this sounds like "stream proving" actually would be the more accurate term in that case, since it sounds like there would be some privacy loss if the sequencer is independent. I think the design goal of an untrusted sequencer would be ideal, but the applications described would still be useful even if not
17:53:43 fwiw, progress was made by ArticMine adding a ~40% max growth per year cap. This cap increases without any commensurate increase in block demand, but there may be more restrictive caps (e.g. the long term median) at any given moment. One dispute seems to be whether scaling should permit "catching up" to this cap with a time peri [... too long, see https://mrelay.p2pool.observer/e/9Z3i9s4KRzFnek1F ]
17:54:04 Polynomial growth instead of exponential?
17:54:06 From discussion in mrlounge, I'd agree with a kayaba-style "or" condition that limits the sanity cap to the higher of the exponential cap vs the packet size limit (or serialization limit)
17:54:57 Catching up is critical
17:55:28 This is now the essence of the disagreement
17:56:01 I also disagree with the rest of the changes to the scaling that have undergone very little discussion
17:56:05 as you can see ArticMine is in strong favor of it, whereas others (including me; make your voices heard everyone else) prefer not to have this catch-up
17:56:11 Since we can't actually sync blocks > 100mb under any circumstances.
40% would keep us under 100mb for 6 more years. The last hf was 3yrs ago
17:58:42 @boog900: I agree; the increase of the long term median is an example of something that seems unnecessary
17:59:26 @sgp_: This has been discussed at length going back to 2020
17:59:59 but after only weeks and weeks of discussion, we convinced one person that multi-gig blocks within one year is not acceptable. yay
18:00:17 sgp_: By the "increase long term median", do you mean without prior increased chain activity, b/c the median has always had the potential to grow given that the prior median block size has increased
18:00:22 I also find the design to be disagreeable and agree with boog900's perspective. That said, I hold myself to the statements I made above (at the start of this topic).
18:00:25 @sgp_: I knew code rot
18:00:31 I mean the rate, sorry for the confusion
18:00:58 @spackle: from 1.7x -> 2x
18:01:12 Replied to wrong msg
18:01:55 While ignoring that the short term median cap is dropped from 50 to 16
18:02:31 ... and the maximum block weight is dropped from 100 to 16
18:02:32 When I disagreed with the input limit, I acknowledged I was in the minority and, while I kept my position clear, I understood consensus was for a much higher input limit than I'd personally reasonably support. I understand ArticMine's history not just within the project yet on this specific topic, but given the work on the stressn [... too long, see https://mrelay.p2pool.observer/e/uL2C984KRkhGUmpp ]
18:02:43 I do understand some of this is part of the discussion process, a process ongoing for months, and I'm happy this latest proposal has improved from a wider belief of critically-flawed to solely disagreed-with (after incorporating a design from tevador as a safety mechanism, tevador themselves a voice held in high regard) though.
18:03:14 It _feels to me_ like we're talking down to an agreeable design instead of working out an agreeable design from the start.
18:03:45 (If that's even possible; I understand in a room of such different opinions, there isn't an agreeable starting point)
18:04:06 Then go back to eliminating the long term median entirely
18:04:22 That is in Tevador's concept
18:05:05 The limit is the lower of the LTM and the Sanity Cap
18:05:16 I'm actually in support of some long term sanity cap on the long term median that doesn't require chainstate besides block height, for quicker failures in block handling, given that it is an ADDITIONAL restriction, not a loosening of some other parameter. I think an exponential factor around 1.4 is reasonable and gives us time to cap it further.
18:05:24 Correct
18:06:49 in eli5, isn't this "16mb max short term median, but really capped to 10.8 at current time. Max long term median of 2x 16mb, but really still capped to 10.8mb. So the actual median can't go above 10.8mb"
18:07:17 What is the current proposed rate of change of this long-term cap? (just to be sure I'm not mixing numbers)
18:07:20 The cap starts at 10
18:07:29 @jeffro256: an attack is to submit a block with zero txs but with an extremely large miner tx. That after FCMP++ is no longer possible. Given we are checking on height, maybe it'd be reasonable to limit this input as well
18:07:31 10 MB?
18:07:48 @jeffro256: 38.8% per yr
18:08:17 @jeffro256: ~1.39x a year
18:08:20 @datahoarder: Are coinbase txs not also limited to 1MB? Lemme check
18:08:37 they are post FCMP++ afaik
18:09:10 I don't like how the net will break after 7 years / the net's fundamental limits will cause 'valid' blocks to be rejected after 7 years.
18:09:18 extra data can bloat it before the limit.
18:09:46 Coinbase TXs have no limit other than block size until FCMP++, which enforces extra and output limits.
18:09:51 I won't support a block size bomb which will require a future fork to change scaling vs just changing the long term median growth rate to an acceptable level
18:10:50 We may upgrade the net in seven years.
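[Editor's note] The "lower of the LTM and the Sanity Cap" rule and the eli5 figures above can be sketched as a one-line calculation. This is an illustrative sketch only, not Monero source code; the function and parameter names are hypothetical:

```python
# Illustrative sketch: the effective block-size limit under the proposals
# discussed in this meeting is simply the lowest of several independent caps.
def effective_limit_mb(short_term_median_cap_mb: float,
                       long_term_median_cap_mb: float,
                       sanity_cap_mb: float) -> float:
    """Return the binding limit: the lowest of all currently active caps."""
    return min(short_term_median_cap_mb,
               long_term_median_cap_mb,
               sanity_cap_mb)

# Using the eli5 figures: 16 MB short-term cap, 2x16 = 32 MB long-term cap,
# but a 10.8 MB sanity cap at the current time, so the actual median
# cannot go above 10.8 MB.
print(effective_limit_mb(16.0, 32.0, 10.8))  # 10.8
```

Whichever cap is lowest at a given moment is the one that binds; tevador's sanity cap being an ADDITIONAL restriction means it can only lower this minimum, never raise it.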
We presumably will, as the network hasn't ossified. I don't see why we shouldn't update the sanity limit five years from now via hard fork, given we know this will definitively become incongruent with reality at time of deployment.
18:11:23 But that leads into the next agenda topic, so I'll leave my criticism there for now.
18:11:39 @kayabanerve:matrix.org: At least people have notice
18:12:04 @articmine:monero.social: would you support exponential to a hard limit of 90 MB?
18:12:26 No
18:12:36 @boog900: (under the packet size limit)
18:12:41 @boog900:monero.social: Some alternatives to exponential growth are: 1) Polynomial growth, 2) Logistic growth https://en.wikipedia.org/wiki/Logistic_function#In_economics_and_sociology:_diffusion_of_innovations 3) Bitcoin Cash's block size algorithm with a control function https://gitlab.com/0353F40E/ebaa#technical-description
18:12:48 @ofrnxmr: have to account for overhead
18:12:49 Can we not fix this in 5 years
18:12:54 @boog900:monero.social: Isn't that the next agenda topic? Have the lines disappeared? :(
18:13:13 @articmine: Will need a hard fork to remove the limit afaik (?)
18:13:24 @articmine: Can we not have the net blow up in five years if we don't fix it?
18:13:40 So we can cross that road at the same time
18:13:49 We need a HF, either to:
18:13:49 A) Stop the net from blowing up
18:13:49 B) Not have overly limited capacity
18:13:56 I'd rather be slower than incongruent and unstable
18:14:24 @kayabanerve:matrix.org: What you are telling me is that someone put a scaling bomb in the code
18:14:48 Even if a new relay protocol isn't a HF itself, it will force old nodes to update if they can't download new blocks unless they update to it.
18:14:57 No wonder everyone is up in arms
18:15:10 @articmine:monero.social: Your proposal will force a hard fork in ~7 years if this growth occurs.
18:15:23 @rucknium: I would much rather it take usage to increase the limit, but slower growth would be better.
18:15:26 Or we'll have a netsplit
18:15:36 @kayabanerve:matrix.org: I understand. This is a scaling bomb that needs a HF to fix
18:16:15 ... so if you acknowledge you're proposing putting a clock on blowing up the network, can you agree to not do that and accept a 90 MB hard cap?
18:16:40 Or no, you're acknowledging you're proposing a bomb and you refuse to not do so?
18:16:48 @boog900:monero.social: The BCH algorithm does require usage to increase the limit FWIW.
18:17:19 To be fair, a 100 MB serialization limit does not necessarily limit the block size to 100 MB if the block is sent over multiple packets. I will need to check again, but I think that the syncing protocol already supports syncing individual txs from other nodes at a time, and only ~60 bytes per block is needed to validate PoW
18:17:20 Caveat that removing the 90mb sanity cap can/will be done at the same time as the packet size removal/fix
18:17:27 No, I am not proposing a bomb. I am proposing fixing this in the next HF
18:17:35 @jeffro256: 100mb packet size limit
18:17:54 Sorry yes, packet size limit
18:18:04 @articmine:monero.social: That is a bomb though. You're saying this will blow up unless there is a next HF. That's the whole reason it's being called a bomb.
18:18:39 @jeffro256:monero.social: Yeah, I did wonder if we could shim such networking protocols.
18:19:04 Which means not supporting ANY HF that doesn't fix this
18:19:27 Clarifying, I'm saying your proposal, if enacted with the FCMP++ HF, will require yet another HF after in order to not cause a net split (unless we create radically new networking proposals which are also backwards compatible).
18:19:38 ArticMine is saying that it can blow up now, the bomb is already planted.
Albeit, his proposed scaling changes would make it trigger faster, but we need to fix it even if we didn't HF to FCMP++
18:19:44 Then defuse the bomb
18:20:12 packet size limit has been there from the start fwiw: https://github.com/monero-project/monero/blob/1a8f5ce89a990e54ec757affff01f27d449640bc/contrib/epee/include/net/levin_base.h#L71
18:20:23 Your proposal doesn't, @articmine:monero.social:, not without the 90 MB limit @boog900:monero.social: asked you for and you declined
18:21:01 Why is it so difficult to fix this?
18:21:39 @kayabanerve:matrix.org: It's possible to defuse w/o the 90MB limit by a smart syncing protocol
18:22:44 I'm sorry, this is going in circles so I'm withdrawing until the next agenda item. Your proposal, which assumes and mandates yet another network upgrade later (though as @jeffro256 notes, one POTENTIALLY backwards compatible) is itself a bomb.
18:23:02 Various other limits added here: https://github.com/monero-project/monero/commit/3c7eec152cd5663c461f64699574943d3619f0b9
18:23:41 I think the added sanity cap following tevador's proposal makes sense, with a hard cap at 90mb that can be eliminated once the daemon is rearchitected to actually be able to support it
18:23:42 Please do not delay the FCMP HF for any reason.
18:23:51 @jeffro256: we are betting on that happening before the 100 MB is hit. ngl I gave 90 MB as an example, I knew artic would not agree to it. The exponential growth will get out of hand eventually; we will need to HF to decrease it.
18:23:57 It is the single most important change, and controversial changes should not be paired with it if at all possible.
18:24:45 90mb won't be hit for 6 years
18:24:52 (sanity cap)
18:25:03 We need to separate the 90 MB hard cap from my proposal
18:25:24 Is it acceptable otherwise
18:25:27 @boog900: This is what I originally said was a bomb, the fact that in enough years the sanity cap grows so much it is no longer in play.
18:25:35 @articmine: it's just an "or" on top of your proposal
18:25:35 @articmine: TO YOU.
18:25:54 @jeffro256:monero.social: Wallets, RPC also break
18:26:00 Lower of 90mb vs exponential growth vs LTM vs STM
18:26:29 @kayabanerve:matrix.org: Sure, but also those limits can be changed very easily in comparison to p2p limits
18:26:53 as spackle said earlier: essentially, what can the daemon handle today? what can stressnet actually prove as feasible?
18:27:50 Even if the node syncs the blocks, it can't serve them and an upgrade is mandated
18:28:12 AFAIK, stressnet hasn't hit any hard limits yet. Just hitting annoying snags.
18:28:21 @jbabb:cypherstack.com: we're still not at the stage where we can answer definitively, unfortunately. The pool exceeding max default size triggered other issues that took time to investigate and deal with
18:28:45 txpool is a fiasco :)
18:29:10 @rucknium: It might be easier to test/prove limits on beta stressnet, but if it can't reach "hard" limits these soft limits are effectively the scaling limits
18:29:45 As far as the 100 MB bomb goes, there are only two options:
18:29:45 Fix it
18:29:45 Put in a hard cap
18:30:08 This has nothing to do with my proposal
18:30:38 temporary hard cap that will be HF'd away from, just as we will HF to lower the sanity growth rate
18:31:29 My proposal does have a built-in temporary hard cap
18:32:20 that grows exponentially ... yes we know
18:32:42 That is irrelevant
18:32:51 It is temporary
18:33:34 I actually do think this agenda item is best served by discussion on the rest of the proposal since the next item is on such a hard cap.
18:33:55 @kayabanerve:matrix.org: I agree
18:33:57 That encouragement follows AM's question here ^ > <@articmine> Is it acceptable otherwise
18:34:21 6. Proposal: Limit blocks to 32 MB, regardless of context (https://github.com/monero-project/research-lab/issues/154).
18:34:34 Oh, there we are
18:34:38 ^ afaik it was withdrawn > <@kayabanerve:matrix.org> I'm sorry, this is going in circles so I'm withdrawing until the next agenda item. Your proposal, which assumes and mandates yet another network upgrade later (though as @jeffro256 notes, one POTENTIALLY backwards compatible) is itself a bomb.
18:34:52 I'm fine with 16-90 MB, I just support 32 MB.
18:35:37 I'd like to discuss this (some hard cap) independently of any/all other proposals. 32 MB is due to the stability of the current stressnet. 90 MB is an actual hard requirement of the P2P and RPC layers.
18:35:49 I am not. Fix the bomb
18:35:51 Shall we get consensus on a 90 MB hard cap and fine-grain from there?
18:36:16 At worst, the network is artificially limited and we have to issue a new HF in some years to remove the limit, when we upgrade the P2P and RPC.
18:36:33 At best, we stop a net split which will occur unless we upgrade the P2P layer.
18:36:43 Like I said there are two choices
18:36:43 Fix
18:36:43 90 MB hard cap
18:36:53 I'd rather have a hard cap until we know the network won't break, personally. Removing it would only be symbolic, not functional (if the network would break anyway if reached)
18:37:22 @kayabanerve:matrix.org: I'd prefer a much, much lower hard cap until a "scalenet" proves technical feasibility for larger blocks on a sustained basis
18:37:24 The fact this stops an inevitable net split if such larger blocks were to naturally occur should make 90 MB without disagreement IMO, even if I'd like to discuss from there a bit more moderation (32 or 64 MB).
18:38:04 @articmine:monero.social: The intent is an N MB hard cap _until_ it's fixed.
18:38:05 @articmine: "Fix" = a hard fork, since older daemons won't be compatible, so it's disingenuous to characterize it as strictly a fix
18:38:14 As right now, it isn't fixed and can break.
18:38:36 Now, large blocks aren't working but also won't break the net (under this proposal).
18:38:46 Let me start with No to any hard cap below 90 MB
18:38:53 tevador's proposal limits growth to below 90mb for another 5.5yrs. Adding a 90mb hard cap (kayaba) means that in 6yrs it will stop growing
18:39:23 We don't need to demonstrate that TODAY'S nodes can handle 90mb, as that test is 5yrs away
18:39:32 And if it's fixed by a fork in the meantime, it can be removed before the limit is ever reached
18:39:48 If we go with a semi-permanent fixed hardcap, I also support 90MB
18:39:51 @jbabb:cypherstack.com: That's the 32 MB number, but it sounds like you agree with *a* cap, as does @sgp_:monero.social: :)
18:39:54 Add it until it's fixed imo
18:40:20 @jeffro256:monero.social: Heard on "if". Do you support 90 MB with FCMP++ though?
18:40:22 When fixed, I don't think anyone here will be hard-line for a permanent cap ala Bitcoin style. So it's fine
18:40:56 @ofrnxmr: we should, lest an attacker demonstrate for us that we can't
18:40:57 Also, ping @boog900:monero.social: to specifically state their opinion on this so I don't assume their stance from the prior agenda item
18:41:26 I cannot support any hard cap. On 90MB I ABSTAIN
18:41:28 I'd also love ofrnxmr and @jberman:monero.social: and @rucknium:monero.social:'s opinions
18:41:50 This gets us out of a potential emergency fork in the future, which hopefully we don't need. But no need to sign us up for one now
18:42:14 Anything below, NO
18:42:16 I'm in favor of a 90mb cap that won't be reached for over 5yrs due to a sanity cap of 1.4x yearly max growth
18:42:25 Abstain is much better than I thought we'd receive, and I truly appreciate you being willing to not block this motion even if you don't support it @articmine:monero.social:
18:42:49 I would be ok with a 90 MB cap, as a separate thing from artic's scaling proposal.
18:42:53 I also will note I do want to stop this from ever being relevant.
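[Editor's note] The "5-6 years away" timelines quoted in this thread can be reproduced with a quick back-of-the-envelope calculation. This is a sketch under the figures mentioned above (a 10 MB starting cap and ~1.4x yearly growth, the quoted ~38.8%/yr rounded); the function name is hypothetical:

```python
# Sketch: years of uninterrupted maximum growth before the exponential
# sanity cap exceeds a given ceiling, assuming a 10 MB starting cap and
# 1.4x annual growth as discussed in this meeting.
def years_until(ceiling_mb: float, start_mb: float = 10.0,
                growth: float = 1.4) -> int:
    years, cap = 0, start_mb
    while cap < ceiling_mb:
        cap *= growth
        years += 1
    return years

# Under these assumptions the cap is ~75 MB after six years and first
# exceeds both 90 MB and the 100 MB packet limit in year seven.
print(years_until(90.0), years_until(100.0))  # 7 7
```

This matches the estimates above that a 90 MB hard cap would not bind for well over five years of maximal growth, and that the 100 MB packet limit would otherwise be hit on roughly the same horizon.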
I do want to improve the node to the point this can be removed and we can defer to the standard policy
18:42:57 How difficult is it to remove the 100MB packet size limit?
18:43:07 I'm not in favor of ballooning to 90mb within 2026
18:43:19 But as we are still deciding a standard policy, and as we already have such a hard limit (albeit poorly defined), I support this
18:43:24 Sure, only to prevent the aforementioned net splits > <@kayabanerve:matrix.org> @jeffro256:monero.social: Heard on if. Do you support 90 MB with FCMP++ though?
18:43:37 @ofrnxmr: My proposal speaks for itself
18:43:44 @rucknium: It's been there since genesis, and there are a bunch of other limits that have been added on top. Likely due to hackerone stuff
18:43:49 @rucknium:monero.social: The concern is the potential DoS effects from that, not the removal itself, of course
18:44:13 @rucknium: it may require a new p2p block propagation and syncing protocol
18:44:18 @ofrnxmr:monero.social: Of course, this is in conjunction with scaling policies, not as the sole scaling policy
18:44:31 Yay, we agreed to a thing within one meeting 🎉
18:44:38 That goes back to what I said many meetings ago: Lots of technical debt from "temporary" limits that do not fix the core issues.
18:44:46 And RPC updates
18:44:48 @kayabanerve:matrix.org: so the compromise is a hard limit, like your proposal, but 90 MB. seems the last piece that would vote no is abstaining
18:44:52 @rucknium: We shouldn't remove the 100MB packet size limit IMO, we should just download chain data correctly.
18:45:13 @rucknium:monero.social: Promoting this to a well-defined item is fixing a core issue.
18:45:44 A core issue existed. We add spot checks to avoid resolving the underlying issue. Promoting those spot checks so as to not conflict with consensus smooths this out.
18:45:47 the other issue is that a new serialization system would need to be in place.
We probably hit unpack limits on blocks well before the 100 MiB limit
18:46:23 The limitation itself can then be discussed as a suboptimality, but this turns from a house of cards into a proper building: just one which needs more floors built on top.
18:46:41 @sgp_: 90 MB blocks is enough to destroy BS with no additional privacy
18:46:44 I would support a 90MB hard cap, then linear increases every year (+10MB/year, for example). That would prevent complete ossification in the event that another hard fork were infeasible.
18:46:55 @jeffro256: to add to this: monero bans peers that take too long to send packets. so just removing the limit would just cause widespread banning if upload speeds aren't fast enough
18:47:10 @vtnerd:monero.social: Do you feel 90 MB, leaving 10 MB of room, is sufficient for the overhead?
18:47:17 @rucknium: That is my proposal
18:47:47 Similar
18:48:19 monero will not ossify this second because post-quantum is a must anyway
18:48:22 @rucknium:monero.social: While I love the linearity for sanity, that reintroduces the fundamental problem of blocks exceeding the actually supported size, and the goal of promoting the actual limit into a well-defined limit
18:49:16 IMHO, non-spam demand for p2p electronic cash is niche according to data that I've seen and analyzed. The limits are useful for defense against a malicious actor, including a malicious actor with large hashpower share.
18:49:39 anyone 5 years from now who says Monero ossification is more important than post quantum protection will be laughed out of the room. I'll make that my mission :p
18:49:59 I'll note 90 MB was a number @boog900:monero.social: mentioned as leaving room for overhead, yet @vtnerd:monero.social: is noting that overhead is non-trivial. We may technically end up on a number approximate to 90 MB but not exactly/so literally, as necessary for the intended space for overhead.
18:50:04 Data and analysis: https://github.com/Rucknium/presentations/blob/main/Rucknium-Monerotopia-2024-Banking-the-Unbanked.pdf
18:51:18 @vtnerd: @kayaba i think he's referring to 8867 etc, because blocks ~30mb become hard or impossible to sync due to serialization unpacking
18:51:38 But I'm happy we appear to have large support, and no explicit rejections, for adopting an additional sanity limit of approximately 90 MB: the packet limit with clear space for the inherent overhead.
18:51:46 Which is separate from the packet limit
18:52:28 I think tevador's proposal + 90mb hard limit due to the packet serialization limit is reasonable, and think we re-open discussion on it once stress testing gets further along in helping answer what the daemon can actually handle
18:52:35 you can see serialization limits in play by syncing stressnet with --batch-max-weight=50 or --block-sync-size=20 etc
18:52:44 Ah, thanks for clarifying it's the performance aspect of it, not the static limits.
18:53:12 The issue is the hard limits on objects and strings in the current serialization system. Otherwise 90 MiB is likely sufficient
18:53:16 @kayabanerve:matrix.org: It's actually a static limit 🥲
18:53:43 @vtnerd: It is an absolute mess
18:54:42 @ofrnxmr: https://github.com/monero-project/monero/pull/9433 pr to increase the limits to, iirc, roughly match 100mb
18:55:14 @ofrnxmr: for current tx sizes, and for non-pruned blocks.
18:56:06 Even better @ofrnxmr:monero.social: that eased the transition to bigger blocks
18:56:44 Got it. So this 90 MB limit also requires a PR such as https://github.com/monero-project/monero/pull/9433 to align the literal constants at this time, and ideally vtnerd's long-standing serialization PR.
18:57:22 Sounds like a clear/immediate path forward then, without any objections yet.
18:57:48 Unfortunately, the consensus seems to be 90 MB and not 32 MB (sorry to myself and @jbabb:cypherstack.com: :( ) 18:58:11 But I'm happy we're planning suboptimality over collapse :) 18:59:41 Without tevador's sanity median, id say 32mb. But with it, 90mb (5yrs away) is fine to me 19:00:04 I saw a few people, including me, suggest that the 90MB hard cap should still raise slowly 19:00:16 It is not a median, it is a cap 19:00:47 @rucknium: It does raise slowly with tevador's sanity cap. 10mb * 1.4x per yr 19:00:55 We can write 90 MB for now and leave 32 MB to be done with the other proposal which will inevitably occur @ofrnxmr:monero.social: 19:01:10 @articmine: Sorry, i mistyped 19:01:27 @rucknium:monero.social: I saw you do so without explicitly objecting to 90 MB, and I saw AM abstain. 19:01:30 @kayabanerve:matrix.org: Why other proposal 19:01:53 For what reason? 19:02:17 I object if it's an indefinite 90MB cap. 19:02:22 @articmine:monero.social: You have a scaling proposal. That's still being discussed, even if 90 MB sanity is agreed to today. 19:02:46 I'm saying I'm fine with 32 MB, a lower sanity limit, being discussed with other scaling proposals in the context of other proposals, such as yours. 19:03:23 Note how ofrnxmr said they'd support a lower sanity limit if a median such as tevador's isn't accepted. 19:03:41 I understand that 19:03:47 I'm saying I'd try to corral agreement on the higher limit today, and leave lower limits _and other scaling discussions_ independent and open. 19:04:11 That's all, I wasn't proposing dropping it or saying that itself will be a proposal. Just a discussion item eligible as the rest is. 19:04:32 But I hear @rucknium:monero.social: is now objecting, the first explicit objection to a 90 MB sanity cap :/ 19:04:59 @rucknium:monero.social: Care to say more? 19:05:38 I know I can repeat how Monero will break without this, and will continue with this, but that's been stated already. 
19:06:15 Raise it linearly by 10MB/year, starting 5 years from now. That's an escape hatch. 19:06:35 ... An escape hatch into it breaking once again. 19:06:38 removing it is only symbolic if it would in practice break 19:06:50 I really don't know how else to say the protocol will literally break with such large blocks. 19:07:06 @rucknium: It does not help. The only other answer is to fix the underlying issues 19:07:08 Can't you just cut the block into pieces? How hard is the engineering, really? (I say as a non-engineer) 19:07:16 The intent is to remove it as soon as the protocol is improved. Hopefully, the protocol is improved before this ever is actually applied to any blocks. 19:07:39 @rucknium:monero.social: The P2P and RPC layers aren't designed for this. Yes, we can improve the layers. That has to be done. 19:07:48 @kayabanerve:matrix.org: Now I see a path to consensus 19:07:53 Will this be done over the next five years, making this irrelevant? Probably and hopefully. 19:08:06 But if those improvements don't happen, will the network break? Yes 19:08:17 Will this sanity limit stop it from breaking? Yes 19:08:58 It just also sets an outrageous limit that, assuming a current scaling proposal is adopted (AM's or tevador's), will become relevant in ~6 years 19:09:23 My proposal includes Tevador's 19:09:35 It is the lower of both 19:09:53 jeffro did suggest a backwards-compatible p2p improvement could happen but that doesn't resolve the RPC layer and wallet scanning protocol, which likely would need more invasive changes (we currently scan blocks and would likely want to scan outputs?) 19:10:16 @articmine:monero.social: I'm aware, but your proposal incorporating tevador's does not outright invalidate tevador's, which is why I still mentioned tevador's. 19:10:56 I do support the existence of medians over tevador's alone, personally. 19:11:16 Of course. 
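The linear escape-hatch schedule discussed above can be sketched numerically. This is a minimal illustration, not part of any adopted proposal: the 90 MB base, the +10 MB/year ramp, and the 5-year delay are the parameters quoted in the chat.

```python
# Sketch of the discussed sanity-cap schedule (assumed parameters from
# the chat: 90 MB base, +10 MB/year starting 5 years after activation).
BASE_MB = 90
RAMP_MB_PER_YEAR = 10
RAMP_START_YEAR = 5

def sanity_cap_mb(years_after_activation: int) -> int:
    """Hard cap in MB at a given whole year after activation."""
    if years_after_activation < RAMP_START_YEAR:
        return BASE_MB
    return BASE_MB + RAMP_MB_PER_YEAR * (years_after_activation - RAMP_START_YEAR)

for year in (0, 5, 6, 10):
    print(year, sanity_cap_mb(year))  # 90, 90, 100, 140
```

Under these assumptions the schedule crosses the ~100 MB packet limit one year into the ramp, which is why the linear ramp is described in the discussion as delaying breakage rather than removing it.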
Tevador's proposal can be implemented without the long term median 19:11:46 It is right in the proposal 19:12:10 i'd prefer to keep the long term median as a sanity check on the short term median 19:12:50 So, with the intent to remove this, but with the comment our current stack _will not work_ past the limit defined here, this limit existing in order to stop the stack from entering this broken state of existence, do you still object @rucknium:monero.social: ? 19:13:01 It is more than that. It provides fee stability, see issue 70 19:13:22 Again, I agree we should split blocks into pieces and solve this. 19:13:22 That just hasn't been done and I don't believe is scheduled to be done before the next HF. It'd be some months of work. 19:13:25 Let's hire some engineers from big tech to fix it. 19:13:34 @articmine: Yes, yes. To be more correct, i oppose removing it 19:14:00 Many of them have been laid off recently. 19:14:26 It can be a CCS 19:14:30 @rucknium:monero.social: docker microservices on the cloud with serverless would fix this 19:14:46 The indefinite 90MB cap seems too similar to BTC's "temporary" 1MB cap burned into consensus. 19:14:59 how about we agree not to add the cap by the next hardfork if it's fixed by then? :) 19:15:00 @rucknium: It is 19:15:03 Anyways. We're all assuming Monero will be upgraded to fix these limitations. This proposal just means we won't break if we don't upgrade, for whatever reason. 19:15:34 @sgp_: I support this 19:15:49 @rucknium: suggestion: the 90 MB cap would only exist for the next fork's 2 versions. while this could be adjusted *next* hardfork again, it's more symbolic that it's to be kept ONLY for the specific next one 19:15:50 it's different because it's set at the limit that prevents breakage, not a lower value 19:15:59 (2 versions, initial transition + final) 19:16:07 If someone redoes the P2P layer and RPC by the FCMP++ HF, with appropriate testing and review, I'm fine not adding this limit with the FCMP++ HF. 
19:16:07 I'm not fine delaying the HF for those milestones however. 19:16:29 a limit to prevent breakage is different philosophically than a limit to enforce a "value" of small blocks 19:16:43 And not just 'is what nodes can handle'. Nodes can only handle 32 MB. That's the current limit. 19:17:00 90 MB is when things _definitively_ break without reworking multiple parts of Monero. 19:17:10 And, under current discussions, is five years away anyways. 19:17:47 Suggesting we break in five years if this isn't fixed is a gun to our head. This just removes the round from the chamber. 19:18:01 This is going to come down to material progress. If so we can keep consensus 19:19:24 This limit is only proposed as a sanity limit due to fundamental limitations in the current stack. If the limitations are removed, this limit should be removed. 19:20:23 But we should never allow blocks so big they break the network, whether regarding live operation, syncing the blockchain, or running wallets. 19:21:04 Still I have to say I commented in the original 1MB BitcoinTalk thread 19:21:14 In 2013 19:21:35 Before Monero even existed 19:22:14 BCH was testing 256MB blocks in 2022: https://bitcoincashresearch.org/t/assessing-the-scaling-performance-of-several-categories-of-bch-network-software/754 19:22:42 Monero can't do it because why? 19:22:51 @kayabanerve:matrix.org: tbf, the txpool breaks wallets when the pool limit is exceeded 😆 19:22:52 Just on the p2p layer 19:23:05 Because our stack assumes a 100 MB limit in several different places. 19:23:20 Why can't we just s/100/256? No clue 19:23:21 BCH has one problem: Otherwise they can be a brutal competitor 19:23:31 We can go through and correct each one. We should and will have to. 19:23:38 The one problem: No tail emission 19:23:45 @rucknium: we break currently when setting checkpoints :D, so technical debt and especially old code inherited 19:23:46 But we can potentially break before we do so or we can acknowledge our reality. 
19:23:59 You're saying that BCH programmers are better than Monero programmers? (Meant to provoke) 19:24:30 we did inherit a scam coin 19:24:41 @boog900: a crippled* scamcoin 19:24:42 @rucknium:monero.social: Would you be fine with a 90 MB sanity limit with the FCMP++ HF if these improvements are not made by the FCMP++ HF? 19:25:01 @rucknium: I am saying that a lack of a tail emission is what is keeping BCH back 19:25:09 BitcoinSV blocks got up to 3.8gb, they have the best of the best 19:25:23 Is that the compromise phrasing I can ask to get you to replace your objection with an abstain? 19:25:28 Actually 4 GB 19:25:35 What if cuprate can do it? FWIW, Zcash is completely transitioning to their Rust zebra node. 19:25:40 For BSV 19:26:08 BSV also hard forked without admitting it, had an explorer shut down due to all the fallout, and is moving towards just ~6 nodes with a legal framework to arbitrarily move coins before we start seriously discussing them 19:26:14 @rucknium: its the p2p protocol - we don't have the serialization limits tho 19:26:23 The BSV example just drives home the point further. 19:26:26 we could make our own p2p protocol 19:26:37 then we might fork from monerod :( 19:26:47 @kayabanerve:matrix.org: BSV is a very useful stressnet for Monero 19:26:50 Rucknium: Would you be fine with a 90 MB sanity limit with the FCMP++ HF if these improvements are not made by the FCMP++ HF, or unless monerod is deprecated for Cuprate (assuming Cuprate fixes this) by the FCMP++ HF? 19:27:13 BCH does this, BSV does that--anecdotally, I know no real-world users of either in real life. 
their block size over time seems to show that despite allowing a lot, they don't actually do a lot that's not spam or token spam 19:27:20 These transparent coins don't have the verification requirements of private ones 19:27:43 Suggesting that a project with less than ten nodes (or attempting to move in that direction), which is attempting to define a legal framework for the blockchain itself, is any kind of model for Monero is deeply unserious 19:27:52 @kayabanerve:matrix.org: No, I would not be fine with it. But my objection shouldn't stop it from being implemented in the HF release software. 19:28:17 @ofrnxmr: They can bury BS in data 19:28:31 Then with a sole explicit objection from someone who says their objection shouldn't be a blocker, I'd like to thank you and say we appear to have consensus in favor. 19:28:32 Even ETH 19:28:54 @articmine: Eth funds are frozen all the time 19:28:59 I think a sufficiently slow scaling algo, which requires years of spam before hitting 100 MB, would be fine without the hard cap. 19:29:08 Even if not unanimous consensus. 19:29:45 @boog900: I think this would relegate monero to not allowing scaling when needed 19:29:53 ofrnxmr: Ether itself has not been frozen at any time, and the only unilateral movement I'm aware of was the DAO HF. 19:30:17 @boog900: Let us be reasonable. Like burning XMR by locking it for longer than the age of the universe 19:30:26 fwiw, I think the aversion to a hard limit is healthy. It is scary. It's good to really question why they are there. 
This is a good Monero community value 19:30:32 @kayabanerve:matrix.org: I'm referring to BS freezes 19:31:11 I'd also be ok with doing an s/100/256 etc and using a higher limit than 90 19:32:00 But in any event, we need to fix the p2p protocol before we hit those numbers, and that may or may not require a hard fork 19:32:01 I don't think we could do that without checking every line to ensure every silent limit hodge-podged over the years was considered, but heard that'd be another _acceptable_ option. 19:32:18 @articmine: how much growth does the current scaling allow? 19:32:26 in 1 year of spam 19:33:02 a very reasonable amount I assume? 19:33:08 @kayabanerve:matrix.org: a network fuzzer for these things would be lovely, and afaik RPC is currently in the books. for P2Pool I included fuzzers for most of the network as well in my code 19:33:43 16 mb * 2 * 2 * 1.24 = 79mb? I think? 19:33:45 One can push the short term median to 50x-100x for the blocksize 19:33:46 speaking of fuzzing, the MAGIC Monero Fund got another quote for that! 19:34:12 so expect a fundraise for that shortly 19:34:22 @articmine: 425 MB 19:34:25 so reasonable 19:35:54 Is @preland:monero.social here? 19:36:12 My proposal cuts this down to 16x for both the short term median and the blocks. Less if the sanity cap is triggered 19:36:20 @rucknium: I am now lol 19:36:37 Did you want to discuss Post-FCMP scaling concepts? 19:36:56 at this meeting 19:37:09 I have a proposal on the table 19:37:29 @articmine: your first proposal allowed gigabyte blocks; for this one, you know the exponential growth makes the limit meaningless after some years 19:37:30 @rucknium: Unless anyone has anything else to add, I think we can discuss it next week 19:38:42 @boog900: Never for the short term median 19:38:51 My proposal is for a solution now because while the true solution is a P2P/RPC rework, that isn't being proposed now. Instead, that can is being kicked. My proposal just makes it safe to kick. 
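The one-year-of-spam growth guess quoted above ("16 mb * 2 * 2 * 1.24 = 79mb?") can be checked arithmetically. The starting size and the factors are taken from the chat message itself, not derived from the consensus rules, so this only verifies the multiplication, not the model.

```python
# Back-of-the-envelope check of the quoted one-year-of-spam estimate.
# All factors (16 MB start, two 2x doublings, 1.24x) come from the chat
# message above; whether they match the actual scaling rules is exactly
# what was being debated.
start_mb = 16
estimate_mb = start_mb * 2 * 2 * 1.24
print(estimate_mb)  # 79.36, matching the "~79mb" guess
```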
19:39:09 I'm fine not discussing the can itself at this meeting, or even before the FCMP++ HF. 19:39:31 If someone else picks up the can and responsibly disposes of it at a recycling center before the FCMP++ HF, I'm fine withdrawing my proposal. 19:40:01 @articmine: yes eventually we have to rely on only the long term median, which allows gigabyte blocks in a year. 19:40:37 @boog900: No the sanity cap is in place 19:41:00 But the time for these reworks is reasonably believable to be a year after the FCMP++ HF. While that's plenty before the 5 years of safety bought here, without trade-off, it still justifies safety now. 19:42:26 finally caught up. silly real work meeting. The way I see it, if we put in a hard, permanent cap, it has the chance to get stuck. Same as with the bitcorn. If we do this 90MB + 10MB a year starting +5 yrs from now, it gives us time to fix whatever in the daemon is capping it, and it can be rolled out without the coordination [... too long, see https://mrelay.p2pool.observer/e/sq_w-c4KX0YtZU5p ] 19:43:29 do we not already have a hard technical cap that would allow someone to burn fees to incapacitate the network? 19:44:01 @gingeropolous:monero.social: 90+10*(y-5) breaks the network in 5 years unless we HF before then. 19:44:02 @gingeropolous: If my proposal is approved the real test is will the sanity cap exceed 90 MB. If so we have a real problem 19:44:38 Unless we assume miners self-limit and update to remove the self-limit in a pseudo-network-upgrade as you note as a technicality. 19:44:49 @kayabanerve:matrix.org: , if monerod isn't fixed by then, right? 19:45:04 End of meeting is now. Feel free to continue discussing, of course. 19:45:47 Right, with the immediate fix being the limit, and when those layers are fixed, part of the fix being to remove this limit 19:46:11 But we either have to fix those layers, or have this cap, or have a bomb. 
19:46:34 @kayabanerve:matrix.org: Yes that would remove the ossification issue 19:46:41 Because no one is proposing the design, effort, and manpower to fix those layers in time, and because the bomb is unaccepted, the sanity limit is the option in front of us. 19:47:30 Monero isn't ossified, and it'd still require a coordinated upgrade by a majority of hash power while we trust people to forfeit including transactions which would pay them fees until then. 19:47:31 ty rucknium 19:47:35 For my miner idea to work the cap has to be set at 45 MB 19:47:38 i mean we're talking about 63GB blocks so im fine with 90MB 19:47:45 By that argument, we can solely and entirely rely on the median and trusting the miners to do the right thing. 19:47:46 in a day. sorry. 19:48:19 Thank you for noting you're fine with it @gingeropolous:monero.social: 19:48:44 if we're filling 63GB in a day and can't find the resources to make it bigger with the serialization and the whatsits... then thats the real problem 19:49:30 @kayabanerve:matrix.org: ...but yes it requires a majority of the hashrate to keep the cap 19:49:31 but i still don't like hard caps 19:49:40 and i guess none of us really do 19:49:49 By my count, and somewhat railroading of ensuring I collected options and trying to declare consensus, we have one explicit objection from Rucknium who said it shouldn't be blocking and even the grace of abstaining by ArticMine. I am happy with that as I believe a solution is needed, and this is the only solution available to [... too long, see https://mrelay.p2pool.observer/e/l7aL-s4Kd1VRNk5i ] 19:50:01 Agreed. Do we prefer things not working? 19:50:19 This just codifies existing hard caps until things are reworked. 19:52:20 i prefer there being a reason to make things resilient. I worry that strange reasons will arise for keeping 90MB in place regardless of optimizations. 
19:53:41 but these are nebulous fears that probably shouldn't be used to justify time bombs 19:56:02 I actually believe we should target net-0 blockchain growth and should reject any new transactions after the local database exceeds 300 GB /s 19:56:42 @kayabanerve:matrix.org: ring blockchain, starts writing over old ones ;) 19:56:52 I think the practical argument for a static limit, if any, is if the extra space can only be considered beneficial for the purposes of spam. I don't believe we should allow unnecessary space for the hell of it, yet the medians and relation to the fee policy aim to resolve that without requiring a static limit. 19:57:08 right, because if we put in the +10 per year thing at year 5, then it has to be fixed. If we slap 90 on it, "it's fine". "its a feature, not a bug!" 19:57:11 @datahoarder:monero.social: @boog900:monero.social: Saved 70 GB in Cuprate simply by improving the schema :) 19:57:26 don't doubt it :) 19:57:27 There's also a proposal to fold ZK proofs 😱 Imagine the space savings there 😱😱😱 19:57:39 And what's this? MRL issue for payment channels???? 19:58:00 who knows in 5 years we may have star trek universe and not need money. Boom, problem solved. 19:58:11 @gingeropolous:monero.social: Then it's just the current behavior. This is already scheduled to break in six years with the proposals currently under discussion. 19:58:13 @kayabanerve:matrix.org: yep! brought that up in lounge around future scaling hardforks. that'd allow a middle between pruned and full archival node, one that saves pruned txs but proofs per block (and still fully verified) 19:58:37 Having a limit for five years that breaks after six doesn't solve how the existing proposals break after six years. That's the sole and entire purpose of this proposal. 19:58:40 @kayabanerve:matrix.org: or break in like a month under current conditions 19:58:46 indeed 19:58:54 Must say I am sad. 
19:58:55 😂 20:00:06 i think +10mb / yr is underestimating and doesn't take into account that resources grow exponentially 20:00:31 If we did this in 2014, we'd have added like 10kb per yr 20:00:35 I will of course add the 90 MB cap to my proposal given that it did get at least loose consensus 20:00:45 To stop things from breaking in six years, unless we do this work, which we should do in the next couple of years. 20:00:49 me too. but i'd rather a functioning network with 63GB a day than a non-functioning network 20:01:12 numbers should reflect reality, and im in favor of 1.4x yearly sanity cap growth 20:02:15 Which has been grounded in fact-based numbers thus far (1.4x) 20:03:09 If the world changes in 6yrs and growth slows to 1.1x, then we change the sanity cap to reflect that.. but to date, that has not been the case 20:03:12 I don't think our capacity will grow at 1.4x forever, I don't think we will hit gigabyte block capacity in 15 years. 20:03:20 Remember https://github.com/monero-project/monero/issues/8827 considered the 8MB/year overhead :) 20:03:41 and I would much rather this growth only happen if our usage actually increases 20:03:54 well i think thats a given 20:04:38 Thank you ArticMine @articmine:monero.social: ♥ 20:05:15 @boog900: The stm and ltm should deal with this, not the sanity cap 20:06:04 the limit increases each year no matter if there is any extra activity or not 20:06:22 the stm and ltm don't increase w/o activity 20:06:31 the stm and ltm allow gigabyte blocks in a year from 0 so are not exactly safe 20:08:08 yeah but not for several years is the point, which does make it less bad 20:08:26 I don't understand how those work, but imo shouldn't allow greater than min(sanity, stm) * 2 (or 1.7 etc) per 100k blocks 20:08:55 I'd prefer if the 38% YoY sanity cap from AM's proposal was instead 38% of the last year or so 20:09:04 @ofrnxmr: there is also the short-term "boost" allowance 20:09:11 or even if it was 60% but still restricted by use, not 
accumulation regardless of use 20:09:14 @kayabanerve:matrix.org: I agree 20:09:26 @sgp_: isn't that supposed to be limited as well? 20:09:38 (Not to commit to a larger number, to note I'd prefer a larger number in exchange for limited by actual use) 20:09:53 @kayabanerve:matrix.org: But then why start at 10mb? Why not start at 110kb 😭 20:10:21 it was picked per tevador's suggestion 20:10:33 The avg of the last yr is like 110kb 20:10:37 @sgp_: Due to 1.4x YoY from genesis 20:10:55 Tevador's #s are based on external factors, not volume 20:11:30 So if we're using volume moving fwd, we should be using volume backward too. Which is like 110kb 20:13:26 so what's the consensus? 90MB until it's proven that the reference client can process x MB blocks something something ... ? 20:13:45 it's due to the estimated sync time at the median speed 20:13:51 90mb unless someone fixes p2p before hf 20:14:23 sure, but after that. 20:14:35 @sgp_: It's based on download speeds of the blockchain 20:15:01 testing these large blocks is gonna be a pita 20:15:07 actual sync speeds aren't in the proposal, such as w/ or w/o checkpoints, or verification speeds 20:16:07 I'm just telling you where it comes from :) 20:16:13 @articmine:monero.social: fluffypony used to recruit developers AFAIK. To fix the 100MB packet limit you could do the same. Core has a general fund, too. Code changes would have to pass review, of course. 20:17:19 i'd prefer that p2p, serialization, and packet limits be fixed before adding a limit, but c'est la vie 20:18:19 add torrenting to the node to split blocks into pieces /s not/s 20:18:49 so at 1.5kb / tx, 90MB puts us at 44 million txs/day. 20:19:09 how big are FCMP txs? 
wait there's an explorer somewhere 20:20:02 6-7kb for a 1-2in 20:20:12 http://stressgguj7ugyxtqe7czeoelobeb3cnyhltooueuae2t3avd5ynepid.onion 20:20:18 I've been using 10 kB as an approximation 20:20:41 the block sizes are wrong, as they use an old weighting calc 20:20:46 ok so only 6 million tx/day 20:20:53 the horror 20:21:13 yeah if we're at 6 million tx/day in 5 years i'll eat my hat 20:21:43 plus we getting that folding math up in here 20:27:06 "simply improving the schema" by implementing a DB from scratch 😭 > <@kayabanerve:matrix.org> @datahoarder:monero.social: @boog900:monero.social: Saved 70 GB in Cuprate simply by improving the schema :) 20:27:50 that only gets us up to 8.4 billion transactions a year 10 years from now so it would mean monero is a failure per previous expectations 20:27:58 @hinto: the best improvement is to throw the code away 20:28:14 "oops I needed it" then make it anew with learned knowledge 20:29:01 @hinto:monero.social: Improving LMDB's schema by complete replacement, yep, mhm 20:29:11 I for one love our new rust db overlord 20:30:49 We'll still have a data.mdb as well 20:31:20 (I know, I know) 20:31:22 @boog900:monero.social: please make a rust LMDB while you're at it 20:33:50 Don't tempt me, we could have fully atomic updates when adding a block then. 20:33:58 I have thought about it lol 21:31:06 Wait until you see the size of a post quantum fcmp++++ transaction > <@boog900> I don't think our capacity will grow at 1.4x forever, I don't think we will hit gigabyte block capacity in 15 years. 21:39:56 1 GB blocks are like 100 Mbps bandwidth. Multi Gig Internet is available. The only reason why we can't support this is: > <@boog900> I don't think our capacity will grow at 1.4x forever, I don't think we will hit gigabyte block capacity in 15 years. 21:39:56 Very serious code issues 21:39:56 The willingness to pay for hardware. 
The latter is closely related to the price of Monero 21:41:02 They would very likely break BS even with no privacy at all 21:42:38 Even my 5G mobile internet is 930 Mbit down/91 Mbit up. It's not 1995 anymore. 21:44:06 Sech1 you think the network would be fine with gigabyte blocks assuming no stupid code issues? 21:44:22 I have 5Gbps symmetrical over fibre 21:45:25 If there is enough bandwidth, yes. The problem can be software that is not optimized enough, but that is fixable. 21:45:49 Cuprate is looking good already, stressnet also helps to fix monerod 21:47:04 It will be interesting to see how high we can get cuprate, I doubt even we can process that many txs though 21:47:07 to be fair we are nerds who seek out good internet (selection bias). my dumb apartment provides only 40up/40down by default 21:47:37 @boog900: Assuming no p2p packet limits 21:48:06 @sgp_: Nerds that believe in privacy are a major demographic for running Monero nodes 21:51:16 Optimizing for the median is good for decentralization if possible 21:59:18 This is the reason that I consider mid to high end Internet speeds appropriate for determining what a node can support 22:00:13 @sgp_: I have turned down houses over this kind of thing 22:00:59 Yeah I think many of us here have, ha 22:11:19 sgp_ that's exactly why I mentioned mobile connection speed. Everyone has it in major cities now. 22:13:04 Not long ago I only had access to like 35/2 22:13:04 Now I have like 200/25 22:13:17 fwiw in the US at least, cell home internet providers deprioritize your service level after 1 TB or so of use a month 22:13:30 in any case I agree it's not 1995 anymore lol
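The throughput and bandwidth figures traded in the discussion can be sanity-checked with simple arithmetic. This sketch assumes Monero's 2-minute block target and the per-transaction sizes quoted in the chat (1.5 kB, and ~10 kB as the FCMP++ approximation); it is an illustration, not a measurement.

```python
# Sanity-checking the throughput/bandwidth figures from the discussion.
# Assumptions: 2-minute block target; tx sizes as quoted in the chat.
BLOCK_SECONDS = 120
BLOCKS_PER_DAY = 24 * 60 * 60 // BLOCK_SECONDS  # 720 blocks/day

def txs_per_day(block_bytes: int, tx_bytes: int) -> int:
    """Transactions per day if every block is full of same-sized txs."""
    return (block_bytes // tx_bytes) * BLOCKS_PER_DAY

print(txs_per_day(90_000_000, 1_500))   # 43,200,000: the "44 million" figure
print(txs_per_day(90_000_000, 10_000))  # 6,480,000: the "6 million" figure

# A 1 GB block every 2 minutes as a sustained download rate, in Mbps:
gb_block_mbps = 1_000_000_000 * 8 / BLOCK_SECONDS / 1_000_000
print(gb_block_mbps)  # ~66.7 Mbps raw payload
```

The raw figure of ~67 Mbps for gigabyte blocks is consistent with the "1 GB blocks are like 100 Mbps bandwidth" remark once relay and protocol overhead on top of the raw block payload is accounted for.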