15:10:04 MRL meeting in this room in about two hours. 16:52:20 Is this meeting open to external attendees? 16:53:13 sagewilder: Yes, always :) 17:00:40 Meeting time! https://github.com/monero-project/meta/issues/1142 17:00:49 1) Greetings 17:00:52 Hi 17:00:54 first 17:00:55 hi 17:00:57 Howdy 17:01:02 Hello 17:01:17 *waves* 17:02:27 2) Updates. What is everyone working on? 17:03:36 Carrot integration & benchmarking, FCMP++ benchmarking, reviewing FCMP++ integration 17:03:46 me: constructing and verifying FCMP++ proofs over the FFI is working, cleaning it up and organizing functions now, aiming to roll up wallet sync + FCMP++ prove/verify into the WIP PR 17:04:07 bug fixes for monerod 17:05:01 me: Submitted Milestone 2 of OSPEAD to the review panel. That's the initial improved decoy selection algorithm with all the code to statistically estimate it. Probably I will post it publicly in about a month. In the meantime, if any devs or researchers want a copy, let me know. I may discuss preliminary results next meeting. 17:06:05 3) Generalized Bulletproofs implementation audit. https://github.com/cypherstack/generalized-bulletproofs-code 17:06:25 Anything to say about the audit? 17:08:16 it was a Christmas present to MRL. 17:08:25 The audit highlighted areas that could have more tests, I figure it would be nice if someone wanted to pick that up 17:11:40 Sounds good :) 17:11:48 4) FCMP++ tx size and compute cost. On MAX_INPUTS and MAX_OUTPUTS. Monero FCMP MAX_INPUTS/MAX_OUTPUTS empirical analysis. LATEST BENCHMARKS: https://github.com/jeffro256/clsag_vs_fcmppp_bench 17:14:35 From the results that I obtained on an AMD CPU and Intel CPU that I own, plus the results that other people have sent me, the speed for an 8-input FCMP++ transaction is the same as a 47-52 input CLSAG. 
Likewise, a 16-input FCMP++ transaction is the same as a 94-101 input CLSAG 17:14:39 From my read of the benchmarks, FCMP input proofs with a larger number of inputs are actually more efficient to verify than those with a smaller number of inputs. Am I missing something? 17:14:57 More efficient per input I mean 17:16:13 Yes, but the effect seems to start tapering off after 8 inputs and they don't get much more efficient per-input after that 17:16:21 So the only issue is the initial verification for txpool and relay, which can be large for a tx with many inputs. The verification of a block with these txs would actually be more efficient with a large number of inputs instead of breaking up the enote consolidation 17:17:02 And the main issue with txpool/relay is a malicious actor trying to DDoS a node with bad proofs, right? 17:17:58 I'd have to check multi-tx batch times. I don't know if it improves the per-tx verification time or not. 17:18:35 But yes the main goal with this whole affair of limiting input counts is to prevent DoS attacks with bad proofs 17:19:39 Does sech1 want to say something about this? 17:20:17 If 100-input CLSAGs are acceptable now, I'd say we should allow 16-input FCMP++s 17:20:27 How limiting is that really? If I want to attack, and I am only allowed 4 inputs instead of 8, can't I simply send twice the number of transactions for the same effect? 17:20:37 I can only repeat that comparing (supposedly) unoptimized FCMP++ code with optimized production CLSAG code can be misleading 17:20:42 I expect FCMP++ will get faster 17:20:43 Probably. Maybe it could go even higher, to 32.
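[Editor's note: the per-input tapering described above can be read directly off the quoted equivalence figures. This is a minimal sketch using only the numbers quoted in this discussion (no fresh measurements); the function name is ours, not from the benchmark repo.]

```python
# Rough cost comparison from the figures quoted in the discussion:
# an 8-input FCMP++ tx verifies like a 47-52 input CLSAG tx, and a
# 16-input FCMP++ tx like a 94-101 input CLSAG tx.

def clsag_equivalents_per_input(fcmp_inputs, clsag_range):
    """Return (low, high) CLSAG input counts whose verification cost
    matches one FCMP++ input, given a measured equivalence range."""
    lo, hi = clsag_range
    return (lo / fcmp_inputs, hi / fcmp_inputs)

print(clsag_equivalents_per_input(8, (47, 52)))    # (5.875, 6.5)
print(clsag_equivalents_per_input(16, (94, 101)))  # (5.875, 6.3125)
```

The two per-input ranges are nearly identical, consistent with the observation that the per-input gains flatten beyond 8 inputs.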
17:21:15 rbrunner: it does increase the latency between the time you sent the first one, and the time that we decide that we can ban you for X amount of time 17:21:26 rbrunner: I guess your node would get banned after the first bad tx 17:21:47 Yes, I only expect FCMP++ to get faster, especially with high-performance field arithmetic 17:22:15 If FCMP verification gets faster, it should be worked on soon because the audits should audit optimized code. 17:22:41 Hmmm, but if you block me possibly after only a few bad apples, it again does not matter much how long that takes? Or do you think of attackers with hundreds of nodes at hand? 17:22:53 I don't like the idea of auditing an unoptimized implementation and deploying an optimized, unaudited one. But maybe others disagree. 17:24:41 I agree, auditing first and then tinkering with the code further does not sound like the best of ideas ... 17:25:55 what about releasing unoptimized and then deploying the optimized + audited one at the next hard fork 17:26:00 IMHO that could erode the margin of safety, as well as miss the opportunity to gradually decrease the number of possible tx shapes as part of the longer-term goal of increasing tx uniformity. 17:27:27 It needs to be reasonably optimized before the audit. Reasonable = no crazy optimizations that hurt readability/maintainability, and no assembly code 17:28:08 The "unoptimized" part of the code expected to be sped up (via a contest, ideally) is sectioned off from the other sections slated for immediate auditing, I'm pretty sure. I'm generally content with putting more pressure on optimizing sooner rather than later though; the contest has been on the back of my mind 17:29:30 Should the contest be set up? Any issues holding it up? 17:29:49 Well, it was supposed to be KayabaNerve organizing it, before he stepped back 17:30:16 I was interested. But no news since. 17:30:55 Was the idea to get the "award money" through a CCS? Or out of some already existing fund?
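[Editor's note: on why an input cap forces consolidation to be broken up: with at most k inputs per tx, each sweep turns k enotes into 1, a net reduction of k - 1, so consolidating n enotes into one takes roughly ceil((n-1)/(k-1)) transactions. A minimal sketch of that arithmetic (our own, not taken from Rucknium's gist):]

```python
import math

def consolidation_txs(n_enotes, max_inputs):
    """Transactions needed to sweep n_enotes into a single output when
    each tx may spend at most max_inputs enotes: each sweep replaces
    max_inputs enotes with 1, a net reduction of max_inputs - 1."""
    if n_enotes <= 1:
        return 0
    return math.ceil((n_enotes - 1) / (max_inputs - 1))

print(consolidation_txs(1000, 8))   # 143
print(consolidation_txs(1000, 16))  # 67
```

Doubling the cap roughly halves the number of consolidation transactions, which is the trade-off being weighed against DoS risk.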
17:31:14 I can come back next week with more fully fleshed-out proposed next steps 17:31:30 jberman: Great. Thanks! 17:31:32 0xfffc got competition 17:32:23 There were others known to be interested? 17:33:14 Yes 17:33:38 I guess where we are in the MAX_INPUTS discussion is being thankful that jeffro256 set up the benchmark repo; the code optimizations can be easily put into the repo when they are ready. Then a decision can be made based on the new benchmarks. 17:34:32 I could re-run my empirical analysis https://gist.github.com/Rucknium/784b243d75184333144a92b3258788f6 with the new jeffro256 benchmarks, too. 17:36:09 Thanks for doing those calculations, having optimal consolidation times laid out like this is helpful 17:37:32 I wonder how far wallet app implementers will go in supporting automatic consolidations, at least in early FCMP++-supporting versions 17:38:22 Probably "DAEMON: Transaction rejected" :P 17:38:37 As in, some of them won't realize there is a limit 17:39:05 Any more comments on this issue? 17:39:06 Lol 17:40:08 none that contribute 17:41:10 5) Discussion: Post-quantum security and ethical considerations over elliptic curve cryptography https://github.com/monero-project/research-lab/issues/131 17:41:30 It's maths time! 17:43:00 Seems we wait for the next company to announce some step forward towards QC, for this to get to the top of people's minds again ... 17:43:27 rbrunner im already haunted, no need to add more 17:43:49 Kinda have to admit this section of the meeting is a little empty without Jeffro and Kayaba brainstorming 17:43:56 god bless cryptography nerds 17:44:23 lol 17:45:08 I wanted to throw out a point for discussion as it relates to how the generate-address wallet tier and quantum migrations interact 17:45:21 feel free 17:45:23 rbrunner: I think we're rather waiting for better/optimized signature algorithms to be invented. 17:46:02 chaser: kayabanerve just told me a big NO.
But I'm interested in whether PQ exchange can be done with a KEM, because they generally have smaller sizes 17:46:52 Basically, if a quantum computer sees any one of your Monero addresses, then they can 1) see all incoming enotes to that account, and 2) see where those incoming enotes were spent, if 2 enotes addressed to the same subaddress were spent more than once 17:46:54 "big NO" as to what? 17:47:51 The generate-address tier with a quantum computer would be able to trim that down to seeing where incoming enotes are spent, if they're spent at all 17:48:15 "if 2 enotes addressed to the same subaddress were spent more than once" They do not necessarily have to be spent in the same tx, right? 17:48:18 jeffro256: They can see all incoming enotes from the moment they break your address. Or the entirety of the incoming history? 17:48:37 Nope, they can be spent anywhere on the chain 17:49:21 The entirety of the incoming history, not including change and other self-sends 17:50:04 Does this mean....PQ churning defense!?!? 17:50:07 I feel like until we get some miracle PQ DSA, wallets will need to self-send automatically on receive then 17:50:17 So there's a question of whether we should officially support delegated subaddress generation, since they get a more privileged look into the wallet history as compared to a normal external observer 17:50:24 YES 17:50:40 Mind explaining for the mortals? 17:51:00 ...which is not so different from tevador's Monero Checks 😔 17:51:33 Churning was hypothesized to be a defense against ring signature analysis. And it may have a role even after FCMP activation, as a defense against quantum computers 17:51:34 This is a good mitigation that hides the flow of funds out of the account.
A QC would still know which transactions you received XMR in from others, and the amounts, but then the trail would go cold 17:52:43 IIRC, there was a discussion on the Zcash forums about churning to defend against quantum computers, even for their shielded protocol 17:52:48 I didn't know that. Has this been discussed before? I would like to check more details 17:52:56 ah ok yes 17:53:07 With so much larger tx sizes, and so much longer verification times, making wallets churn anything incoming automatically? Sounds like a hard sell to me 17:53:40 rbrunner i agree, i feel the concern 17:53:42 Imagine receiving something on a non-high-end smartphone taking a minute 17:53:58 It's mentioned in the Carrot document, but I suck at writing so people probably didn't pick it up even if they read it 17:54:08 Monero, the money for rich people, because the poor cannot compute 17:54:32 jeffro: I picked it up, I think it was tangible 17:55:15 To answer your original question jeffro256: Yes, I think we should officially support it. I don't mind supporting something that relies on trust, as long as people are aware of it. 17:55:35 These posts: 17:55:35 https://forum.zcashcommunity.com/t/is-zcash-actually-quantum-private/40706 17:55:37 https://forum.zcashcommunity.com/t/churning-zcash-for-maximum-anonymity-and-privacy/40705 17:55:43 thx u so much 17:57:20 IMHO if removing delegated address generation hides a greater section of the tx graph from a QC, it's a worthwhile trade-off. 17:57:31 I need a reminder. How is the address generation delegation wallet tier different from current view keys? 17:57:57 Isn't delegated subaddress generation just about generating new subaddresses and that's all? 17:58:08 Would that take away useful features from merchants? 17:58:11 rbrunner: regarding churning costs, if you are receiving lots of inputs, you can do batch consolidation like how Rucknium laid out.
You wouldn't really lose any privacy this way either, since a quantum computer with your public address already knows you own all your incoming enotes 17:59:14 For a classical computer, giving them the generate-address secret lets them generate subaddresses for you and gain *NO* on-chain information about transaction history 17:59:42 Versus today, if you can generate addresses for them, you can also view-scan their wallet 18:00:30 Isn't it better to keep the generate-address tier because it protects merchants even more from a classical computer? And quantum computer risk is still speculative? 18:00:58 That's a good point 18:01:31 The goal for the generate-address tier was to make PoS systems or invoicing systems that are more secure, not needing private key info 18:01:42 And using subaddresses instead of integrated addresses 18:02:00 Which is a solid win, right? 18:02:01 Integrated addresses already fill this role, but they have their own issues 18:04:13 Is there loose consensus for keeping the generate-address tier then? 18:04:53 Handling Monero is complex. Everything that simplifies it is very welcome IMHO. Giving that up again for a spectre that may come to haunt us in 10 or even 20 years - or maybe never? 18:05:37 it's a win for the merchant, a loss for current-era Monero users if/when quantum adversaries look at the blockchain. 18:05:52 and the blockchain doesn't forget 18:05:59 a loss only if they make use of it 18:06:07 which most users won't 18:06:28 hmm 18:06:39 Isn't it a loss only for people who use that tier _and_ the key information in that tier falls into the hands of an adversary?
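[Editor's note: the linkability claim above can be illustrated with a toy model. This is purely illustrative, with no real Monero cryptography and made-up enote ids: a DLog-breaking adversary who knows a public address watches every enote received at that address and sees when those enotes are spent, but a churn (self-send) output is not attributable to the address, so the trail goes cold one hop later.]

```python
def visible_spends(watched, spend_events):
    """watched: enote ids received at the adversary-known address.
    spend_events: enote ids in the order they are spent on-chain.
    The adversary attributes only spends of watched enotes; a churn
    output has a fresh id it cannot add to the watched set."""
    return [e for e in spend_events if e in watched]

watched = {"e1", "e2"}
# e1 pays a merchant directly: that spend is linkable.
# e2 is churned into self-send output e3; when e3 later pays the
# merchant, e3 was never attributable, so that payment is not linkable.
print(visible_spends(watched, ["e1", "e2", "e3"]))  # ['e1', 'e2']
```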
18:06:48 Rucknium yes 18:07:09 Put it in a footnote in the docs and keep it :D 18:07:14 exactly 18:09:03 the only real users are merchants, and they are already vulnerable to this since they provide public addresses to pay them 18:09:07 ok, so it's basically a choice for the merchant as to which kind of risk to go with, and not a risk for the customer 18:11:37 It would be a new risk to the customer if the adversary knew the customer's and the merchant's public addresses, and the customer used the generate-address tier and that tier fell into the hands of the adversary, and the customer spent their money received from somewhere else into an address with only one spend into that merchant's address 18:12:16 Then the adversary could track the flow of funds from the customer to the merchant, whereas they wouldn't have been able to if the customer wasn't using the generate-address tier 18:12:23 Let's assume independence of events and multiply the probabilities of each to see how likely that is :D 18:12:40 lmao 18:13:02 Thanks for the attention to detail, jeffro 18:13:07 yup 18:13:21 But they could do that anyway with the status quo view key tier, right? 18:13:38 It does get a bit extreme now with the scenarios :) 18:13:56 I mean, they will see the txs come in without a QC if they had the view key 18:14:19 Well before FCMP++, all transactions will be traceable even without knowledge of any addresses 18:14:32 (if you have a QC) 18:14:46 (((DLog solver under parentheses))) 18:14:57 haha 18:15:20 Yes, and also they can mainly see outgoing with ring signatures 18:15:35 Because of probability and such 18:16:18 I mean, with a hypothetical FCMP view key without the address-generation tier. Merchants have to generate addresses somehow, unless they use integrated addresses 18:16:42 Or have a static address and track payments some other way. 18:17:11 Yes.
No QC, FCMP++, with private view-incoming key, one would be able to see all incoming enotes 18:17:53 We should get a QC just to automatically generate the gf transparency report 18:18:17 What a lot of merchant software does to avoid integrated addresses and giving up that view key is to pregenerate thousands of addresses beforehand, and then load all those onto the frontend 18:19:08 Then an adversary could exhaust them by creating bogus invoices. But maybe adversaries have better things to attack 18:19:21 Exactly 18:19:52 Depends on how cheap it is to spam create invoices and how much they dislike this merchant 18:20:50 Let's end the meeting here. Feel free to continue discussing. 18:21:23 delicious meeting 18:51:58 late as per usual, was reading over the carrot spec again. I’m still in favor of the generate-address tier, despite the drawbacks mentioned 18:54:23 the majority of users won’t really run into an issue where it will matter either 18:54:25 Apologies if this isn't the right channel, but do you have an ETA for the FCMP++ in testnet? 18:54:27 in between 1 to 2 months iirc 18:54:29 if not i'll cry 18:54:31 I think we can get to *a* testnet in 1 to 2 months, but not pushing for the official testnet, because I figure we'd want code review / audits completed before deploying to testnet 18:54:33 When I say *a* testnet, I mean something like the stressnet 18:54:35 seems fair 18:54:37 I would have expected testnet to be more stressnet and stagenet to be what you call official testnet tho 18:54:44 It won't be a long wait, then. 18:56:50 From these docs (https://docs.getmonero.org/infrastructure/networks/), the testnet is for experimental release before mainnet (ideally I figure we want 1 testnet release, with code lined up ready to go), stagenet is supposed to be for app devs using monero (so stagenet expects mainnet parity) 18:57:12 "in between 1 to 2 months iirc" Is that a jberman estimate? Or jberman + jeffro256? Sounds a bit optimistic to me, frankly. 
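[Editor's note: the address-pregeneration pattern described above can be sketched as follows. This is a hypothetical frontend helper, not real merchant software; `derive` stands in for offline subaddress derivation and is NOT real Monero key math.]

```python
from collections import deque

class SubaddressPool:
    """Pool of pre-generated payment addresses for a merchant frontend:
    addresses are derived offline in bulk, then handed out one per
    invoice without any key material on the frontend."""

    def __init__(self, derive, size):
        self.pool = deque(derive(i) for i in range(size))

    def next_invoice_address(self):
        if not self.pool:
            # The pool is exhaustible, e.g. by an adversary spamming
            # bogus invoices, as noted in the discussion.
            raise RuntimeError("subaddress pool exhausted; refill offline")
        return self.pool.popleft()

pool = SubaddressPool(lambda i: f"subaddr-{i}", size=3)
print(pool.next_invoice_address())  # subaddr-0
```

The exhaustion branch is exactly the bogus-invoice DoS concern raised above; rate-limiting invoice creation or refilling the pool offline are the obvious mitigations.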
18:57:32 rbrunner you are pessimistic. 18:57:40 Right. 18:57:58 it's because of people like you that we stopped music in elevators 18:58:08 ? 18:58:41 idk something something not happy. I missed the joke sry 18:58:56 Ok :) 18:59:35 yeah ok makes sense, another detached testnet would make sense 18:59:50 No, I think it's ok to set ambitious goals among devs, e.g. as a motivation, but they can leak out, e.g. to Reddit, and raise unreasonable expectations there 19:00:02 Unfortunately 19:00:09 that's a me estimate, I'm not sure how long carrot integration would take but it sounds mostly ready 19:00:17 As a redditor in denial, I disagree with you (I'm in denial) 19:11:30 https://www.youtube.com/watch?v=u8Kt7fRa2Wc rbrunner 19:24:44 Carrot integration into `wallet2`, without Carrot key hierarchy migration, is definitely achievable within the next month 19:44:53 ofrnxmr: Didn't know those "expert" videos yet, they are cool