06:06:44 Rucknium[m] it would be interesting to see how the 10-block decoy selection bug interacted with pools not picking up transactions immediately in the past. Maybe these two bugs canceled each other out?
07:18:08 The reason I ask is that the wallet doesn't let the user send a transaction unless the output is 10 blocks old, and mining pools didn't mine it for one more block until their update frequency was fixed
07:18:30 so 10-block-old decoys should've been very rare until recently
07:18:58 because the wallet didn't pick them as decoys, and real spends waited for one more block
07:20:18 Unless an observer was actively monitoring/logging the mempool
07:20:28 yes
07:21:08 but still, the blockchain doesn't have this data. So future attackers will not have it.
07:21:38 Right
07:34:04 sech1: I guess the p2pool sidechain did collect mempool transactions in the shares provided
07:34:31 as those get baked in, tho pruned by normal nodes
08:45:00 oh my https://github.com/monero-project/monero/issues/7807
08:45:12 This issue just keeps coming back to haunt us :D
09:01:08 yea, I was definitely in physical pain when I realized I missed this. I thought it was one of the first things I checked, but evidently still missed it
14:24:50 sech1: Maybe. My thought process was:
14:24:51 1) Same as yours.
14:24:51 2) Wait, that just causes a one-block delay in confirmation. So instead of the youngest real spend being 10 blocks old and the youngest decoy being 11 blocks old, it would be 11 and 12 blocks of age, respectively. An adversary could just adjust their analysis by one block.
14:24:59 3) Wait, different blocks are mined by different pools and solo miners. So some transactions could be confirmed "on time" (with a 10/11 youngest real/decoy ring member) and some with a one-block delay (11/12 real/decoy). That introduces uncertainty about the youngest possible age of the real spend and decoy output.
14:25:08 4) Wait, mining pools declare which blocks they mine. And the block template update config of each pool could be known. That would mostly eliminate the uncertainty suggested in point (3).
14:25:56 I agree with endor00's point that mempool logging would probably remove most of the uncertainty anyway.
14:28:44 If you look at jeffro256's line chart: https://github.com/monero-project/monero/issues/8872 the green line increases a lot at about the time that most of the major mining pools changed their configurations (Jan/Feb of this year).
14:30:39 Yes, the mining pool delay config "hid" the 10/11 split, but there would probably be an 11/12 split that an adversary could use. jeffro256 could rerun the chart (or I could do it, too)
14:40:18 Like jeffro256 said, nonstandard decoy selection algorithms may not have had the same off-by-one error in decoy selection. That introduces uncertainty for an adversary. If those decoy selection algorithms are _very_ different from the wallet2 one, however, maybe an adversary could tell that the ring was not constructed by wallet2, which would eliminate the uncertainty again.
14:42:19 How different would it have to be (at 11 and 16 ring size)? Not an easy question to answer without putting in a lot of research time. I want to look into ways to classify individual on-chain rings into different decoy selection algorithms eventually.
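A minimal sketch of the age heuristic discussed above, in Python. The 10-block spendable age is the standard Monero rule; the function names, the "flag rings whose youngest member sits exactly at the minimum spendable age" heuristic, and the pool_delay parameter are illustrative assumptions, not wallet2 or actual analysis code.

```python
# Sketch (not wallet2 code): if the buggy decoy selector never picked outputs
# exactly DEFAULT_SPENDABLE_AGE blocks old, a ring whose youngest member sits
# exactly at that age likely exposes the real spend. A one-block mining-pool
# template delay just shifts the telltale age by one (the 10/11 vs 11/12 split).
DEFAULT_SPENDABLE_AGE = 10  # outputs are spendable after 10 confirmations

def youngest_member_age(ring_member_heights, confirm_height):
    """Age (in blocks) of the youngest ring member at confirmation time."""
    return confirm_height - max(ring_member_heights)

def likely_real_is_youngest(ring_member_heights, confirm_height, pool_delay=0):
    """Heuristic: True if the youngest member is exactly at the minimum
    spendable age plus an assumed pool block-template delay."""
    return youngest_member_age(ring_member_heights, confirm_height) \
        == DEFAULT_SPENDABLE_AGE + pool_delay

# Hypothetical example: tx confirmed at height 3_000_000, youngest ring member
# created exactly 10 blocks earlier -> flagged under the no-delay case.
print(likely_real_is_youngest([2_999_950, 2_999_990], 3_000_000))                 # True
print(likely_real_is_youngest([2_999_950, 2_999_989], 3_000_000, pool_delay=1))   # True (11/12 split)
```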
15:49:07 Looks like relevant research for a possible Seraphis curve change for global membership proofs: https://magicgrants.org/Aram-Jivanyan-to-Research-Firo-Curves/
15:49:46 ^ UkoeHB , kayabaNerve, tevador
16:39:56 Meeting 20mins
17:01:09 meeting time https://github.com/monero-project/meta/issues/841
17:01:09 1. greetings
17:01:09 hello
17:01:12 hi
17:01:20 Hello
17:02:09 Hi
17:03:35 2. updates, what's everyone working on?
17:03:51 Hi
17:04:07 me: slowly working on escrowed multisig design for monerokon.
17:05:12 I like that - I'm slowly working on bulletproof++ - https://github.com/ElementsProject/secp256k1-zkp/files/10130246/BP_PP_proofs.pdf
17:05:30 Experimented with cluster computing software on the Monero Research Computing server and settled on an R-native solution for now. Developing and testing some R code at scale on the cluster to make sure there is no RAM blowup.
17:05:46 I previously thought the linked pdf was just a copy of the eprint, but it's actually just the math equations for the protocol
17:06:35 I have no idea how to do a batched proof like we had previously, so I will focus on just getting the single case
17:06:54 I also got the new serialization pushed out, but that is less of an MRL task
17:07:25 batching should be possible wherever generators can be shared
17:08:18 ah ok, yeah that makes sense, but I feel like I will still botch it lol
17:08:47 we previously only batched per tx ?
17:08:58 nvm, I can find that in the code easily enough
17:09:24 Sorry for my ignorance about bulletproofs, but how does batching work there? Is it clever maths and/or not recomputing stuff or using simd instructions (or all together because the first part can be precomputed before the parallel part)?
17:09:24 Anyway, probably tomorrow I'll start to do some wallet work
17:10:01 ah there's batched verification and aggregated proving which are different; batched verification is easy but idk how the aggregated proving works
17:14:53 Did I stump the meeting?...
17:15:12 ghostway[m]: batched verification is possible because proofs are verified by summing EC points and checking they equal the identity. You can combine multiple proof verifications in one large multiexponentiation (adding all proofs together at the same time) by multiplying the elements of each proof with a random verifier-selected number (ensuring no proof cross-talk is possible). You get perf gains from the multiexponentiation algorithm and from sharing generators between proofs.
17:15:12 s/stump/disrupt/
17:16:38 Matrix<>IRC bridge is having problems again.
17:17:05 I think it's working fine
17:17:47 Matrix users can see all IRC messages here: https://libera.monerologs.net/monero-research-lab/20230524
17:18:30 "ah there's batched verification and aggregated proving which are different; batched verification is easy but idk how the aggregated proving works" This message didn't come through to Matrix AFAIK
17:18:34 I am using matrix. Liberalogs are no different so far.
17:19:01 It is for me, thanks!
17:20:40 https://github.com/monero-project/monero/issues/8757
17:20:40 Any chance we can allow fast N-of-N multisig setup?
17:21:00 I think I should just come to irc. I have to find my tor config lol
17:21:43 The post-kex round is unnecessary because all keys are involved in the signing.
17:22:47 This will make the 2/2 multisig setup a single round-trip long, simplifying it greatly in a stateless API context.
17:24:05 Alex|LocalMonero: that's kind of a dev question
17:24:17 Got it, apologies.
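Returning to the batched-verification explanation above: the sketch below illustrates the random-weighting idea only. Plain integers modulo a prime stand in for EC points, so this is a toy model of the technique, not Bulletproofs code; a real verifier would combine curve points in one multiexponentiation with shared generators.

```python
# Toy model of batched verification: each "proof" check is "these group
# elements sum to the identity". Instead of checking each proof separately,
# the verifier weights every proof by a random nonzero scalar and does one
# combined check, so a bad proof cannot cancel against a good one except
# with negligible probability.
import secrets

P = 2**61 - 1  # prime modulus of the toy additive group (stand-in for an EC group)

def verify_single(terms):
    """Check one 'proof': its terms must sum to the identity (0 mod P)."""
    return sum(terms) % P == 0

def verify_batch(proofs):
    """Single combined check over randomly weighted proofs."""
    total = 0
    for terms in proofs:
        w = secrets.randbelow(P - 1) + 1  # random nonzero weight per proof
        total += w * sum(terms)
    return total % P == 0

good = [5, P - 5]   # sums to 0 mod P -> valid
bad  = [7, P - 5]   # sums to 2 mod P -> invalid
print(verify_single(good), verify_single(bad))   # True False
print(verify_batch([good, good]))                # True
print(verify_batch([good, bad]))                 # False (with overwhelming probability)
```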
17:24:41 I thought it was relevant to research in case there are security risks related to bypassing the post-kex round.
17:24:59 Just wanted to make sure it's clear.
17:27:16 On this recent off-by-one error in decoy selection: IMHO, treating Monero's statistics like its cryptography is treated would be a good idea. For cryptography, formal specifications are written. Then code is independently audited to make sure that the specification is executed properly.
17:27:56 This would apply at least to the decoy selection algorithm and the Dandelion++ implementation.
17:30:32 I have heard people, e.g. kayabaNerve, say that wallet2's decoy selection algorithm is difficult to port to another language or implementation. If there were a formal specification, maybe it would be less so.
17:31:37 Hmm, but then, if you implement a spec, code in other languages would maybe audits as well, at least in principle
17:31:48 *maybe need audits
17:31:53 writing a spec does imply some cost
17:32:20 not a bad thing to have though, of course
17:32:27 That would be a good thing IMHO.
17:33:07 We could start by reverse-engineering a spec from the current code.
17:33:52 Or, we look forward and start a spec-based approach with Seraphis.
17:34:14 I'd agree, I don't think it would require audits, it would only lessen the impact of bad coding
17:36:17 A crude, simple test is to take some alternative code and generate thousands or millions of (unused) rings against the mainnet blockchain and test statistical equivalence.
17:37:39 That's what I did (except no actual blockchain, just the output indices) to validate the Python and R ports of parts of wallet2's decoy selection algorithm here: https://github.com/mj-xmr/monero-mrl-mj/tree/master/decoy
17:38:47 Maybe a stupid question, but why didn't this unearth some hints about the recent off-by-one bug?
17:39:08 I guess you didn't have that in your alternative implementations?
17:39:31 it's a small hard-to-see edge condition
17:39:50 (the statistical test code isn't in the repo, but the implementations are there)
17:40:01 Not enough cases randomly generated that hit it, then?
17:40:19 So no difference sticks out
17:40:28 Not a stupid question. I think this piece of the code was isolated from the part that decided which outputs were eligible for decoy selection
17:41:42 There are two more things: a KS test, which is what I did, has "low power". That means that it can be hard for it to detect very small differences. A KS test is very general, which is why it's used. And:
17:43:16 R is indexed from 1. C++ is indexed from zero. Like I said, I think this code that I worked on was isolated from the main problem, so I would not have caught it anyway. But even if I saw it, I may have assumed it was an off-by-one error in my own code because of the R port. Or maybe I would have realized it.
17:44:10 Interesting, thanks
17:44:34 This is why with an audit or more complete test you would want to test using the actual mainnet blockchain to make sure there is "100% code coverage" of the test
17:46:43 A good place to start would be writing an MRL research issue outlining the algorithm that we have.
17:47:06 If someone writes that issue: Rucknium[m] jberman[m] jeffro256[m] then I will review it.
17:48:25 Reverse-engineering a spec is on my to-do list. But it's much easier to work with someone who can read the original C++, e.g. jeffro256
17:49:04 UkoeHB: Thanks.
17:50:32 Are there any other topics we should touch on before the end of the meeting?
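A minimal sketch of the statistical equivalence test described above, assuming numpy and scipy. The gamma draws are only placeholders for decoy-selection output (wallet2 samples decoys from a gamma distribution over output age, with parameters approximately the published shape 19.28 and rate 1.61); in a real test you would feed in output indices produced by the two implementations against the same chain data. The "+ 1.0" shift simulates a hypothetical off-by-one in a port.

```python
# Compare a large sample of decoy picks from a reference implementation and a
# port with a two-sample Kolmogorov-Smirnov test. A very small p-value means
# the two implementations' distributions differ.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
n = 1_000_000  # large n, since the KS test has low power against small differences

reference_picks = rng.gamma(shape=19.28, scale=1 / 1.61, size=n)
# Simulate an off-by-one in the port: every pick shifted by one unit of "age".
ported_picks = rng.gamma(shape=19.28, scale=1 / 1.61, size=n) + 1.0

stat, pvalue = ks_2samp(reference_picks, ported_picks)
print(f"KS statistic = {stat:.4f}, p-value = {pvalue:.3g}")
```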
17:54:07 ok I think that wraps it up, thanks for attending everyone
17:55:39 Ty ukoehb
18:02:16 My memory has been jogged a bit. mj and I worked on that repo about a year ago. We did have an off-by-one issue in one of the first versions that my KS statistical test caught. One of my messages to mj was: "Preliminary findings: I think there may be an off-by-one error in the Python implementation."
18:03:43 We eventually fixed it. I don't know if it was related to this recent finding.
18:11:01 I remember that I put the number of random samples into the millions since I knew the KS test had low power. A larger sample size can fix the low-power issue. The power of any valid (consistent) hypothesis test converges to 1 as the sample size increases to infinity.
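A quick simulation of that last point, assuming numpy and scipy: the estimated power of the two-sample KS test (the probability of detecting a real but small difference) rises toward 1 as the sample size grows. The 5% location shift and the normal draws are arbitrary stand-ins for a subtle implementation difference, not anything about the actual decoy distributions.

```python
# Estimate KS-test power by Monte Carlo: fraction of trials in which the test
# rejects at alpha = 0.05 when the two samples really do differ slightly.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def estimated_power(n, trials=200, shift=0.05, alpha=0.05):
    rejections = 0
    for _ in range(trials):
        a = rng.normal(size=n)
        b = rng.normal(loc=shift, size=n)  # small, real difference
        if ks_2samp(a, b).pvalue < alpha:
            rejections += 1
    return rejections / trials

for n in (1_000, 10_000, 100_000):
    print(f"n = {n:>7}: estimated power = {estimated_power(n):.2f}")
# Power climbs from near alpha at small n toward 1 as n increases.
```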