16:37:59 I have a question: I did some research and number crunching yesterday. Based on my research, it appears that the current (not FCMP) total sync time of Monero blocks (when joining the network) is 243 times larger than that of Bitcoin (UTXO, so less expensive proofs etc.) blocks if we hold the number of inputs constant (the same) o [... too long, see https://mrelay.p2pool.observer/e/q_fa4eUKMkE0N2dP ]
16:46:53 If my current understanding is correct, then in Proof of Work chains like Monero and Bitcoin, the main issue that knowledgeable (smart) people have when it comes to scaling is not the actual block-size and storage requirements per se, but rather the point that as the initial block download and verification time rises, then it [... too long, see https://mrelay.p2pool.observer/e/7sf74eUKOU1UY1Ew ]
16:48:06 You didn't take into account the computing and network bandwidth advancements
16:48:21 The CPUs will get faster, and the internet connections will get faster too
17:02:28 sech1, yes, what you say is true; there are / will probably be some marginal advancements in those areas. I'm re-reading through the MRL now and trying to get a grasp of what sort of block-sizes are going to be the upper bound after FCMP++. My desire is to get a grasp of what the projected Monero "end-game" is going to look lik [... too long, see https://mrelay.p2pool.observer/e/79S04uUKeTJ1ODVY ]
17:03:35 I feel like we are entering the Monero version of the block-size debate with the looming FCMP++ upgrade, because it has us re-assessing what is desirable in terms of restricting / allowing throughput, weighed against our tolerance for initial sync time.
17:06:01 I don't have very strong opinions, but am trying to learn and understand the positions of people with greater technical knowledge. I feel that the scaling tradeoff will be the biggest "sacrifice" or cost of the upcoming fork. But people on the normie side like X / Twitter aren't talking about it haha.
Instead they're getting worked up about improved view keys, which is a total nothing burger.
17:06:53 > initial verification and sync time
17:06:53 That's the reason why checkpoints exist: they allow new nodes to safely skip verification for old blocks. So it's mostly internet connection speed.
17:06:53 However, keep in mind that AAA games routinely push beyond 100GB these days, so internet speed has to increase for normies as well
17:09:53 Your point about checkpoints is somewhat valid, but isn't it desirable for all (or at least most) new peers joining to perform full verification of all historical blocks? Checkpoints increase the speed, but require trusting what you are being fed without verifying it oneself on the device. Seems like a bit of a compr [... too long, see https://mrelay.p2pool.observer/e/g-jP4uUKNEFOeV9r ]
17:13:31 Checkpoints are part of monerod that you download and verify against provided public keys. If the checkpoint data is compromised, so is monerod, and we have bigger problems.
17:13:31 I agree that it's not ideal, and all blocks since the last checkpoint still have to be verified.
17:17:11 I think we agree. Checkpoints are nice to have, but I think they change the trustless nature of the initial verification and download of the chain. When relying on a checkpoint, you are trusting that the public keys for the checkpoint data are correct / not compromised, and that the data offered by those checkpoints is genuine and valid. Still, [... too long, see https://mrelay.p2pool.observer/e/kcDq4uUKT0UzY1c3 ]
17:32:32 I don't read MRL, but I have kept a close eye on hardware requirements for the past few years. The key question is: what grows faster, hardware performance per dollar, or the hardware requirements to run monerod?
17:32:32 I still run my node on a mini PC with a 6500T CPU. The CPU load is in the single-digit percent, on a single core. Power consumption is 4W, so maybe a few euros per year.
Similar boxes are regularly available for 100€ shipped, but if this PC runs it, it means any 10-year-old hardware can run it.
17:32:32 I think there is a lot of runway left, and the focus should be on technical excellence and usage growth first
17:38:04 aye
17:38:20 As long as cheap SBCs (like the Raspberry Pi) can run it, we're fine
18:42:51 Looks like a 90MB upper bound on block-size. So it seems consensus is for high throughput, and running a node will likely become pretty specialized.
18:46:50 90 MB Monero blocks is huge compared to what I had in mind. 90 MB x 5 (Monero blocks per Bitcoin block) = a 450 MB Bitcoin block, and the verification of all those blocks is 243 times slower than Bitcoin haha. That's gonna be a lotta bloat. I'm not complaining, but this does answer my question about consensus around the end-game. It s [... too long, see https://mrelay.p2pool.observer/e/lumy5eUKVXdaMGh5 ]
18:47:55 > <@fr33_yourself> Your point about checkpoints is somewhat valid, but isn't it desirable for all (or at least most) new peers joining to perform full verification of all historical blocks? Checkpoints increase the speed, but require trusting what you are being fed without verifying it oneself on the device. Seem [... too long, see https://mrelay.p2pool.observer/e/tte25eUKQWJoS3Vm ]
18:47:55 I think it was mentioned a few weeks back that in the future it could be possible to simply provide a succinct validity proof for the whole block, and nodes would only have to verify the proof instead of validating the entire block itself
18:48:00 The Monero end-game is more toward big-block specialization than lots of amateurs and hobbyists mining and running nodes, based on the degree of "looseness" of the block-size algo I've observed now that I've reread the MRL meetings
18:49:24 @redsh4de:matrix.org: How would the new node know that the data in the succinct validity proof was correct, and wasn't swapped with alternate data?
Is this really that air-tight of a solution to the initial download and verification time for new peers?
19:00:22 https://github.com/monero-project/research-lab/issues/155
19:02:10 My question: if Monero is 243 times slower to verify than Bitcoin, how does a 10MB block cap yield less than a 7-day initial verification / sync time over an extended period of time? I would think that a 10MB cap would lead to like a 1-month sync time at least over the span of years haha
19:03:26 I mean, my initial research about the degree of slowness of Monero block verification could be way off, but I don't see how 10MB blocks over the span of a few years wouldn't take longer than a week to sync
19:11:48 It will take years to reach 90MB
19:20:17 @ofrnxmr:xmr.mx: I think I am following the discussion pretty well so far. But I'm skeptical about 10MB blocks yielding less than 7 days of initial sync time. Are Tevador's calculations on that really true? Like if we had full 10MB blocks for the next 3 years, you're actually telling me it would be possible at the end of t [... too long, see https://mrelay.p2pool.observer/e/sout5uUKTHRoRTVp ]
19:24:40 I like Tevador's proposal and his line of reasoning. It seems reasonable to have a rough 1-week sync target. But I don't think that will be the practical outcome if Artic's updated scaling proposal were instantly implemented and we had full blocks. I could be wrong, but I would think verifying everything from genesis after a f [... too long, see https://mrelay.p2pool.observer/e/hJO95uUKUDAtZlp6 ]
19:26:41 We're not using the sanity cap
19:26:43 (tev's proposal)
19:28:08 Re checkpoints: probably 1 in 10,000 nodes was synced w/o them
19:31:28 It takes like 24hrs to sync an 8GB testnet. It likely takes well, well over a week to sync mainnet w/o checkpoints currently. That's with like 40KB blocks.
19:31:28 Bitcoin etc. use checkpoints too.
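To make the checkpoint tradeoff in the discussion above concrete, here is a minimal sketch of the idea: at a checkpoint height the block hash must match a value shipped with the node, and everything below the highest checkpoint can skip the expensive consensus checks because each header commits to its parent. The heights, hashes, and function names here are hypothetical, and SHA-256 stands in for Monero's real block hashing; this is an illustration of the concept, not monerod's implementation:

```python
import hashlib

def block_hash(block_bytes: bytes) -> str:
    # Stand-in for Monero's real block hashing; SHA-256 is used only for illustration.
    return hashlib.sha256(block_bytes).hexdigest()

# Hypothetical hardcoded checkpoints shipped with the node: height -> expected hash.
CHECKPOINTS = {
    1_000: block_hash(b"block-1000"),
    2_000: block_hash(b"block-2000"),
}
TOP_CHECKPOINT = max(CHECKPOINTS)

def full_verify(block_bytes: bytes) -> bool:
    # Placeholder for full consensus verification (signatures, range proofs, ...).
    return True

def verify_during_ibd(height: int, block_bytes: bytes) -> bool:
    """Sketch of initial-block-download verification with checkpoints.

    Because every block header commits to its parent, a matching checkpoint
    hash pins the entire chain below it, which is what justifies skipping
    the expensive checks for those blocks.
    """
    if height in CHECKPOINTS:
        return block_hash(block_bytes) == CHECKPOINTS[height]
    if height < TOP_CHECKPOINT:
        return True  # covered by a later checkpoint: skip expensive checks
    return full_verify(block_bytes)  # past the last checkpoint: full checks still run
```

The trust assumption debated above lives entirely in `CHECKPOINTS`: a node that accepts that table without recomputing history is trusting whoever shipped (and signed) it.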
21:35:31 @ofrnxmr:xmr.mx: Can you clearly describe the tradeoff associated with the vast majority of new peers using checkpoints to sync, as opposed to verifying all block data themselves during initial sync? Wasn't my prior statement correct that it introduces the following trust assumptions: (1) The signing keys aren't compromised a [... too long, see https://mrelay.p2pool.observer/e/nMqc6uUKejZ5ejRh ]
21:57:04 Not saying I disagree, but what are the reasons for not using Tevador's 1-week sync target? > <@ofrnxmr:xmr.mx> We're not using the sanity cap
23:06:58 > <@fr33_yourself> I think I am following the discussion pretty well so far. But I'm skeptical about 10MB blocks yielding less than 7 days of initial sync time. Are Tevador's calculations on that really true? Like if we had full 10MB blocks for the next 3 years, you're actually telling me it would be possible at the end of [... too long, see https://mrelay.p2pool.observer/e/07Pr7OUKN2ZPUEYw ]
23:06:58 it takes 3 days right now lol, no way it will take 7 with 10 MB blocks
23:07:43 small blocks are the only way to keep users running nodes instead of using spy nodes
23:14:54 A lot can be done by just improving the code; Cuprate can do a full-verification sync in under a day. Fast sync can be done in an hour if you have the bandwidth.
23:20:13 Speaking solely as a user here, shouldn't optimization be done before the chain bloats to a state that might need a month to verify? The spam from a year or so ago basically doubled the verification time of those blocks, down to around 1.5 blocks/s on my machine; even a 3 MB limit will be 0.2 blocks/s = 24 chain-seconds verified per wall-clock second -> 15 straight days of sync time per year
23:27:54 Speaking solely as a user here, shouldn't optimization be done before the chain bloats to a state that might need a month to verify?
the spam from a year or two ago basically doubled the sync time of those blocks (still max 300 KB), down to around 1.5 blocks/s on my machine; getting hit with 10 MB blocks will be 0.045 blocks [... too long, see https://mrelay.p2pool.observer/e/jIu47eUKWk5OSmcw ]
23:29:42 RCT Prunable _already_ allows removing legacy proofs. That just requires a distinct trust assumption _or_ a replacement proof. A replacement proof can happen at any point in time, though.
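The back-of-envelope figures in this thread (the "24s/s", the 15 days of sync per year of chain, the 0.045 blocks/s projection for 10 MB blocks) all follow from the same arithmetic. A sketch, assuming Monero's 2-minute block time and treating verification throughput as the only bottleneck; the example rates are the ones quoted above, not independent measurements:

```python
BLOCK_TIME_S = 120  # Monero's 2-minute target block time

def chain_seconds_per_second(verify_blocks_per_s: float) -> float:
    """Seconds of chain history verified per wall-clock second of syncing."""
    return verify_blocks_per_s * BLOCK_TIME_S

def days_to_sync(years_of_chain: float, verify_blocks_per_s: float) -> float:
    """Wall-clock days needed to verify the given span of chain history."""
    wall_seconds = years_of_chain * 365 * 24 * 3600 / chain_seconds_per_second(verify_blocks_per_s)
    return wall_seconds / 86_400

# 0.2 blocks/s (the 3 MB estimate quoted above): 24 chain-seconds per second,
# i.e. roughly 15 days of syncing per year of chain.
print(chain_seconds_per_second(0.2))    # 24.0
print(round(days_to_sync(1, 0.2), 1))   # 15.2
# 0.045 blocks/s (the 10 MB estimate quoted above): about 68 days per year of chain.
print(round(days_to_sync(1, 0.045), 1)) # 67.6
```

The shape of the estimate is the point: sync time scales inversely with verification rate, so a 10x increase in per-block cost at a fixed block time turns a tolerable initial sync into a multi-month one unless verification code (or hardware) gets correspondingly faster.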