17:00:26 Meeting time! https://github.com/monero-project/meta/issues/1030
17:00:32 1) Greetings
17:00:40 Hello
17:00:53 hello
17:00:56 Hello
17:01:08 hi
17:01:08 hi
17:01:19 hello
17:01:26 👋
17:01:28 hi
17:01:50 *waves*
17:03:06 2) Updates. What is everyone working on?
17:04:11 me: Helping with stressnet. And I have the visualization of the two new solution concepts for best fee and ring size for defense against black marble attacks.
17:05:43 me: misc. stressnet tasks
17:06:16 a number of bug fixes to LWS and ZMQ. Re-started "frontend" lib work for LWS (after hearing the wallet2 update)
17:06:29 me: fcmp grow_tree and trim_tree algorithms
17:07:07 <0xfffc:monero.social> Hi
17:07:42 Soliciting quotes to review the divisor proof, one already acquired. Both should be here by Monday. We're also trying to move forward with the divisor R1CS specification review.
17:08:16 kayaba, if you need auditor intros/recommendations, i can help with that
17:09:01 3) Stress testing `monerod` https://github.com/monero-project/monero/issues/9348
17:09:13 The main thing to share is that we experienced serious issues with larger blocks. Consistent ~1.5MB blocks required lowering `--block-sync-size` for block propagation and network synchronization to function.
17:09:24 Each miner was isolated to a separate chain, and other nodes could not sync until the change was made.
17:09:28 spackle, do you want to discuss stressnet?
17:09:50 Hi
17:10:24 That is the brief overview. Rucknium performed an investigation into the limits of the current software, and I imagine they would like to speak on it: https://gist.github.com/Rucknium/f092b0ad5870f6038226c39af529152c
17:11:44 Right. By default, monerod has `--block-sync-size` set to 20 for recent (last several years) blocks on mainnet. A chunk of 20 blocks was too much once block size reached 1.5MB.
17:12:55 I lowered it one unit at a time. I was able to sync once I set it to 14. Roughly, I think that monerod can sync block chunks when they are 20-25MB and lower.
17:13:35 If this happened on mainnet, probably there would be a netsplit and chainsplit until a lot of the network restarted their nodes with a lower `--block-sync-size`.
17:13:43 And it seems the daemon isn't "aware" that there is a problem: it does not error out, does not warn, just does not progress anymore?
17:14:35 It just says "Sync data returned a new top block candidate: 2521517 -> 2524276 [Your node is 2759 blocks (3.8 days) behind]" again and again. And then starts banning peers.
17:14:48 Splendid :)
17:15:18 Do we know if this limitation is imposed by the amount of data or the amount of computation? Similar behavior was observed with ~300 kB mainnet blocks with 150-in/few-out transactions.
17:15:22 I didn't try to turn on deeper log levels. The problem is reliably reproducible on stressnet if you pop blocks down to before the block size "hill". Developers can turn on the appropriate log levels and categories.
17:15:31 log-level 2 would yield useful info here, and obviously it would be nice to have someone focus on this asap
17:17:00 In the newest release on stressnet, I just set the value to `1` to make sure it syncs ok: https://github.com/spackle-xmr/monero/pull/8
17:17:32 You can see where in the code the defaults are controlled. It is sort of a very basic control based on hard-coded block heights.
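The arithmetic behind the chunk-size observation above is simple enough to sketch. A rough back-of-the-envelope check in Python, using only the numbers quoted in the discussion; the ~20-25MB ceiling is an empirical observation from stressnet, not a constant taken from monerod's source:

```python
# Back-of-the-envelope check of the --block-sync-size finding above.
BLOCK_SIZE_MB = 1.5       # observed stressnet block size
CHUNK_CEILING_MB = 25.0   # rough empirical bound where syncing still worked

def chunk_size_mb(block_sync_size: int) -> float:
    """Approximate size of one sync chunk when all blocks are this large."""
    return block_sync_size * BLOCK_SIZE_MB

for n in (20, 14, 1):
    size = chunk_size_mb(n)
    verdict = "syncs" if size <= CHUNK_CEILING_MB else "stalls"
    print(f"--block-sync-size {n:2d} -> ~{size:4.1f} MB per chunk ({verdict})")

# 20 blocks -> ~30 MB per chunk, above the ceiling (stalls)
# 14 blocks -> ~21 MB per chunk, below the ceiling (syncs)
```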
17:17:54 Thankfully it would be quite expensive to push block size up to that "hill" on mainnet.
17:18:36 (and on mainnet, `--block-sync-size 1` also got rid of the problem)
17:18:52 spackle, do you have an estimate of how much it would take in fees with the minimum fee (20 nanoneros/byte)?
17:19:38 Certainly expensive to do quickly.
17:19:42 Not on hand, one moment...
17:19:46 what are this machine's specs?
17:20:06 I pushed block size up to 1.5MB quickly by spamming priority 4 (highest) fees on stressnet.
17:20:41 Here's a plot of the block sizes during the stress testing: https://github.com/spackle-xmr/chaindata_graphics/blob/main/stressnet_block_size_26_JUN.png
17:21:07 And here is spackle's one-week report: https://reddit.com/r/Monero/comments/1doyde9/stressnet_first_week_report/
17:21:36 I don't quite understand the axes on that plot ...
17:21:47 jberman: The machine that I used for the testing here https://gist.github.com/Rucknium/f092b0ad5870f6038226c39af529152c ?
17:22:14 yes
17:23:29 It's one of the Monero Research Computing Cluster's machines: 3900X (24 threads), 32GB RAM, 1TB NVMe.
17:23:41 It only had outgoing connections.
17:24:13 got it
17:25:23 A quick simulation shows min fees would expand the block size to 1.66MB after 8500 blocks at a cost of ~107 XMR.
17:25:46 rbrunner: AFAIK, the vertical axis is the number of bytes in a block and the horizontal is just the number of blocks since the stressnet testing started last Wednesday.
17:26:19 Ok, thanks
17:26:21 spackle: Thanks a ton for all your work on that simulation code that can give us an answer so quickly :)
17:26:44 I'll be the first to say it is not perfect, but I do think it paints a generally accurate picture.
17:27:26 So in less than 2 weeks and for less than USD 20,000 you can try to bring the Monero network to a partial standstill.
17:27:35 That's about 12 days. I think what we are doing on stressnet now is just spamming minimum fees and seeing the growth rate of the blocks.
17:28:16 Pretty sobering.
17:28:43 Here is my webapp that shows live stressnet data: https://monitor.stressnet.net/
17:30:01 rbrunner: I agree. Stressnet has shown the issue. Now software engineers can decide what to do about it :)
17:30:28 You could have a quick fix by lowering the default values, or look deeper into what causes the problem.
17:30:53 It may well be that the correction will be quite easy, once the problem is on the table.
17:34:06 Some other things: Node startup with a large txpool (600MB) takes about an hour. 0xfffc wrote a patch that decreased the time by 2-5x. Thank you! Still, that's slow, and it can be worked on more AFAIK. Node<->wallet connections are hard to establish when the txpool is very large, too.
17:35:01 Anything else about stressnet? Stressnet conversation happens in #monero-stressnet:monero.social and ##monero-stressnet on IRC.
17:35:44 C'mon guys, let's reach 1m pending in the mempool 🔥
17:36:29 (Btw, gigabytes and bytes look the same on the monitor)
17:36:43 if another dev isn't on this by next week's MRL meeting, I'll shift priority from fcmps to this, barring objection
17:36:45 <0xfffc:monero.social> Parallelization of startup txpool load is indirectly related to rwlock. So I went back to rwlock work.
17:36:46 <0xfffc:monero.social> If we merge rwlock, we can write parallelization of txpool load, which would speed it up substantially.
17:36:57 preland: The default txpool size is about 600MB. That gets about 250,000 txs. We can't get higher without increasing the default (which can be changed with a flag at startup).
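The ~107 XMR figure above can be cross-checked against the 20 nanonero/byte minimum fee with quick arithmetic. A sketch using only the numbers from the discussion, ignoring the dynamic block size algorithm entirely:

```python
# Consistency check on spackle's simulated spam cost.
MIN_FEE_XMR_PER_BYTE = 20e-9   # 20 nanoneros per byte (minimum fee)
TOTAL_COST_XMR = 107.0         # simulated cost to reach 1.66MB blocks
BLOCKS = 8500                  # at one block every 2 minutes

spam_bytes = TOTAL_COST_XMR / MIN_FEE_XMR_PER_BYTE
print(f"total spam paid for: {spam_bytes / 1e9:.2f} GB")           # ~5.35 GB
print(f"average per block:   {spam_bytes / BLOCKS / 1e3:.0f} kB")  # ~630 kB

# Duration matches the "about 12 days" estimate in the meeting:
print(f"duration: {BLOCKS * 2 / 60 / 24:.1f} days")                # ~11.8 days
```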
17:37:32 0xfffc: what causes txpool loading to take that long? Curious about the details.
17:38:20 preland: Thanks for the input. The monitor is a rough draft. I haven't done anything with the units yet. B = "billion".
17:38:22 <0xfffc:monero.social> we are validating every tx
17:40:11 and it is done sequentially?
17:40:59 AFAIK, single-threaded 😎 (the sunglasses are ironic)
17:41:20 The CPU load is on one thread at startup.
17:41:39 You could probably do batch verification, too, AFAIK.
17:42:45 The startup time is inconvenient, but it's potentially a bigger problem if nodes shut down during high tx volumes; then they need a long time to restart.
17:43:15 The stressnet node release is running 0xfffc's patch already.
17:44:14 We had about 40 nodes on stressnet at the start. Maybe 25-30 now.
17:45:22 4) Potential measures against a black marble attack. https://github.com/monero-project/research-lab/issues/119
17:45:23 <0xfffc:monero.social> yes, of course.
17:46:23 <0xfffc:monero.social> https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/cryptonote_core/tx_pool.cpp#L1793
17:47:00 I updated https://black-marble-defense-params.redteam.cash/ with visualizations for the two new solution concepts. The first is the best fee and ring size at a specific effective ring size. The second is the best fee and ring size at a specific "budget" for Alice, i.e. the total cost of aggregate tx fees plus the cost of storage to node operators.
17:47:16 I think sgp_ was interested in this.
17:48:09 These solution concepts align with the expectation that as node storage costs get higher (i.e. adjusting the `m` parameter up), it is more attractive to defeat black marbles by raising the fee instead of raising the ring size.
17:48:57 We don't have much time left in the usual hour, so we can move to FCMP.
17:49:13 5) Research Pre-Seraphis Full-Chain Membership Proofs. https://www.getmonero.org/2024/04/27/fcmps.html
17:49:29 kayabanerve: ^
17:50:06 👋
17:50:08 Aaron also has news beyond my own.
17:50:20 Cypher Stack has completed its FCMP++ report and provided a draft to kayabanerve.
17:50:40 Great!
17:50:47 I only saw it after the meeting started, hence my lack of an announcement in the intro; apologies there.
17:50:51 Once any issues are addressed and inevitable typos fixed, we'll get it posted to GitHub (along with TeX source).
17:50:55 Delivered this morning though :)
17:51:02 good news :)
17:51:28 The gist is that the technique should be suitable for its intended use case, given some conditions on how the proving systems are instantiated.
17:51:45 We also proved an optimization secure.
17:52:44 I'm still reading through, so I apologize that I can't immediately provide my own summary.
17:53:45 Almost suspicious how clear the FCMP sailing has been so far :)
17:54:00 My notation has been thoroughly critiqued.
17:54:46 Yet we now have GBP proofs (which I'm soliciting review for), divisor proofs (also soliciting review for), and the composition proofs (I say as I read through the document supposedly containing them).
17:55:16 Once we get the necessary secondary reviews for GBPs/divisors, we should be able to move forward with audits on each.
17:55:57 And with the divisor R1CS spec being reviewed, we can then request Veridise to do formal verification of the rest of our spec, or do it ourselves, before moving to auditing there.
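On the txpool startup point above: a minimal sketch of the kind of parallel validation 0xfffc describes. monerod's actual loading code is C++ and `validate_tx` here is a hypothetical stand-in, so this only illustrates the shape of the change, not the real implementation:

```python
from concurrent.futures import ProcessPoolExecutor

def validate_tx(tx_blob: bytes) -> bool:
    # Hypothetical stand-in: the real per-tx check validates inputs,
    # signatures, range proofs, key images, etc.
    return len(tx_blob) > 0

def load_txpool_parallel(tx_blobs: list[bytes], workers: int = 8) -> list[bytes]:
    """Validate pooled txs concurrently instead of one at a time.

    With ~250,000 txs in a 600MB pool, embarrassingly parallel
    validation should cut startup time by roughly the worker count,
    assuming validation dominates (as the meeting suggests it does).
    """
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(validate_tx, tx_blobs, chunksize=1024)
    return [tx for tx, ok in zip(tx_blobs, results) if ok]

if __name__ == "__main__":
    pool_contents = [b"tx%d" % i for i in range(10_000)]
    print(len(load_txpool_parallel(pool_contents)), "txs validated")
```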
17:56:18 I don't have much more to say on this end. jberman may on the integration side?
17:57:47 hoping to have a documented spec of the grow_tree and trim_tree implementations within the next 2 weeks. they haven't been particularly simple to implement
17:59:26 Mind if I ping you for your thoughts on the Rust FFI side of things 👀 Has it been fine, been a leading cause of issues...?
17:59:49 cool stuff! Thanks for adding that dot to the graph
18:00:30 hah, of course. The FFI stuff has been mostly fine; it's more about capturing the edge cases on the algo side, unrelated to the FFI.
18:01:43 I want it on the record that Rust is mostly fine :p
18:01:47 sgp_: If you sweep the two lines through the plot area, you get similar optima because they are both downward-sloping lines. The main difference between the two solutions is that the effective ring size line is more convex (higher second derivative) than the budget line.
18:06:09 We can end the meeting here. Thanks everyone.
18:06:55 Excited to share the FCMP++ report once it's reviewed :D
18:22:11 Hi everyone, sorry I'm late (chat and logs are not working). A few updates on my side: I worked on multi-exp; the compressed sigma IPA effectively provides the multi-exp now. I'll test my solution on the kayabaNerve fcmp GitHub repo, also with batching, and compare it. I'll keep you updated ...
18:25:19 I haven't asked for funds yet. We could discuss it when I see improvements in verification times of 5-10% (as kayabanerve said), right?
18:26:37 https://libera.monerologs.net/ is giving a 504 error for me
18:27:27 I'd be interested in moving forward with it if it benefited performance by 5-10%. That'd be non-trivial, and due to it being a drop-in replacement (not redoing all the GBP work), it'd potentially be feasible to review within our time span. I'd have to see that time for myself before asking the community's thoughts on the effort, and then we'd be discussing paying for review of the proof to ensure its security holds.
18:28:14 I don't want to trouble you with the dev work on your end, if it is something you're unfamiliar with. I'm truly happy to take the responsibility there :) I just have to sit down and do it 😅
18:28:25 You mean this is a drop-in replacement of the BP IPA?
18:28:42 And therefore could be reviewed independently?
18:28:43 Yep, which I impl'd as a dedicated proof already. Should be feasible within just 100 lines or so.
18:28:50 Yep, GBPs would remain.
18:29:34 I believe it already has security proofs for the same properties as the BP IPA; emsczkp is obviously the better person to confirm with.
18:29:44 I would certainly be interested in conducting such a review :D
18:29:50 Yeah, I understood that, though it did make me scratch my head for a second when the total blockchain size was 10.3 bytes
18:29:57 So we'd be doing proof review _if_ the performance justifies it as more than a point of interest not worth the political capital and effort.
18:30:24 (not to be rude to the theory. I actually quite like a design done without the inversions. I just already have had "scope creep" discussions, and a distinct IPA is right on that fence :p )
18:31:00 But yeah, 5-10% off the FCMP++ verification would be non-negligible and very worth discussing.
18:37:14 The security proof given for that protocol is not nearly as detailed as the original BP IPA's.
18:39:52 thanks kayabaNerve. Yes, the security proofs are there and I have also extended them; I would be happy if anyone wants to discuss it. I would like to personally test the solution on fcmp, also to avoid committing your effort before being sure of the 5-10% verification improvement.
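For the 5-10% verification target above, a minimal timing harness along these lines would do for a first measurement. `verify_current` and `verify_candidate` are hypothetical stand-ins for the two verifier entry points, not anything in the fcmp repo:

```python
import timeit

def verify_current():
    # Hypothetical stand-in for the existing IPA-based verifier.
    sum(i * i for i in range(50_000))

def verify_candidate():
    # Hypothetical stand-in for the compressed-sigma IPA variant.
    sum(i * i for i in range(45_000))

def bench(fn, label, number=200):
    t = timeit.timeit(fn, number=number) / number
    print(f"{label}: {t * 1e3:.3f} ms/verification")
    return t

baseline = bench(verify_current, "current IPA")
candidate = bench(verify_candidate, "compressed-sigma IPA")
print(f"speedup: {(1 - candidate / baseline) * 100:.1f}%")
```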
18:40:28 Oh, you expanded on the proofs to provide better detail?
18:42:02 Yes!
18:43:51 Nice
18:44:04 Are you one of the original authors as well?
18:44:06 (if you care to say)
18:44:25 I'd be interested to see the updated proofs, for sure
18:46:14 I'm the author
18:49:27 Apologies if this was discussed earlier, but is there a reason why you'd expect to see practical efficiency benefits, given that inversions are batched?
18:49:48 It mainly has benefits for the prover, who has reduced MSMs while proving.
18:50:03 Ah, I see
18:50:08 I was only considering the verifier
18:50:17 For the verifier, it removes... 24 inversions at our scale?
18:50:41 If you're batching, you only have one actual inversion
18:50:55 Sure, yet that's still 256 scalar muls knocked out.
18:51:02 (although you do more muls)
18:53:03 So the 5-10% informal target was for proving, ya?
18:53:06 Not verifying?
18:53:16 the multi-exp on the verifier should save several MSMs too
18:54:02 Even when combining challenges into a single MSM?
18:54:07 my target would be the verifier, but the prover should also benefit if I'm not mistaken
18:54:57 Yes, I see many MSMs saved
18:55:11 I'm thinking in terms of Equation 105 in the BP preprint (page 29)
18:55:34 Where would the savings come from in that case?
18:56:13 this is in the multi-exp, but I want to look at batching too; I will work on this in the next few days
18:56:21 Just from the `L`- and `R`-type terms?
18:57:04 Surely you wouldn't get anything from the generators if you're doing challenge combining?
18:58:12 I think the most helpful thing for investigating verifier performance would be something akin to Equation 105
18:58:52 (of course, it would be for the AC protocol)
19:10:02 I should be able to calculate the challenges more cheaply; this comes from the multi-exp, if I remember Equation 105 correctly
19:13:32 My target was the verifier. I'll also bench the prover, yet I'd need a much higher prover perf gain to justify that. Most of our proving time isn't IPA-dominated AFAIK, but rather the divisor polys.
19:14:06 The verifier saving MSMs is notational, not practical.
19:15:21 I'd be surprised by a 5-10% verifier perf increase. I wouldn't be surprised by that much on the prover. I'd want the prover gain to be 20+% though.
19:18:59 (And again, that may take 50% off the IPA, but I can only care about the entire FCMP++ membership proof)
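For reference, the "one actual inversion" mentioned above is Montgomery's batch inversion trick: invert n field elements with a single modular inversion plus about 3n multiplications. A generic sketch over a prime field; the modulus below is the Ed25519 scalar field order, chosen only as a familiar example (the FCMP++ curves differ):

```python
# Montgomery's batch inversion: one real inversion, ~3n multiplications.
# Requires Python 3.8+ for pow(x, -1, P).
P = 2**252 + 27742317777372353535851937790883648493  # example prime modulus

def batch_invert(xs: list[int]) -> list[int]:
    """Return [x^-1 mod P for x in xs] using a single inversion total."""
    n = len(xs)
    prefix = [1] * (n + 1)
    for i, x in enumerate(xs):        # prefix[i+1] = x_0 * ... * x_i
        prefix[i + 1] = prefix[i] * x % P
    inv = pow(prefix[n], -1, P)       # the single real inversion
    out = [0] * n
    for i in range(n - 1, -1, -1):    # peel off one factor at a time
        out[i] = prefix[i] * inv % P  # = (x_0...x_{i-1}) / (x_0...x_i)
        inv = inv * xs[i] % P         # drop x_i from the running inverse
    return out

xs = [3, 7, 11, 13]
assert all(x * y % P == 1 for x, y in zip(xs, batch_invert(xs)))
```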